
> but it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust and a poor understanding of what these models are useful for.

Citation needed.

Counterpoints:

- LLMs were mistrusted well before anything recent.

- More controls make LLMs more trustworthy for many people, not less. The Snafu at Goog suggests a need for improved controls, not 0 controls.

- The American culture wars are not global. (They have their own culture wars).




> More controls make LLMs more trustworthy for many people, not less. The Snafu at Goog suggests a need for improved controls, not 0 controls.

To whom? And, as hard as this is to test, how sincerely?

> The American culture wars are not global. (They have their own culture wars).

Do people from places with different culture wars trust these American-culture-war-blinkered LLMs more or less than Americans do?


- To me, the teams I work with, and everyone handling content moderation.

/ Rant /

Oh God, please let these things be bottlenecked. The job was already absurd; LLMs and GenAI are going to be just frikking amazing to deal with.

Spam and manipulative marketing have already evolved, and that's with bounded LLMs. There are comments that look innocuous and well written, but whose entire purpose is to low-key get someone to do a Google search for a firm.

And that's on one Reddit sub. That's completely ignoring the other million types of content moderation that have to adapt.

Holy hell, people. Attack and denial opportunities on the net are VERY different from the physical world. You want to keep a marketplace of ideas running? Well, guess what: if I clog the arteries faster than you can get ideas in place, then people stop getting those ideas.

And you CAN'T solve it by adding MORE content. You only have X amount of attention. (This was a growing issue at radio -> TV -> cable -> internet scales.)

Unless someone is sticking a chip into our heads to magically increase processing capacity, more content isn't going to help.

And in case someone comes up with some brilliant edge case: does it generalize to a billion+ people? Can it be operationalized? Does it require a sweet little grandma in the Philippines to learn how to run a federated server? Does it assume people will stop behaving like people?

Oh, also: does it cost money and engineering resources? Well, guess what, T&S is a cost center. Heck, T&S reduces churn, and yet the argument that it protects revenue is still considered novel today. T&S has existed for a decade plus.

/ Rant.

Hmm, seems like I need a break. I suppose it's been one of those weeks. I will most likely delete this out of shame eventually.

- People in other places want more controls. The Indian government and a large portion of the populace will want stricter controls on what can be generated from an LLM.

This may not necessarily be good for free thought and culture; however, the reality is that many nations haven't travelled the same distance or path as America has.


I hope you don't delete it! I enjoyed reading it. It pleased my confirmation bias, anyways. Your comment might help someone notice patterns that they've been glancing over.... I liked it up until the T&S part. My eyes glazed over the rest since I didn't know what T&S means. But that's just me.


As of right now, the only solution I see is forums walled off in some way: complex captchas, intense proof of work, subscription fees, etc. The only alternative might be obscurity, which makes the forum less useful. Maybe we could do a web3-type thing, but instead of pointless cryptos you have a cryptographic proof that certifies you did the captcha or whatever, and lots of sites accept them. I don't think it's unsolvable, just that it will make the internet somewhat worse.
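A minimal sketch of that portable proof idea, assuming a single trusted issuer that signs a token once you solve its captcha (real designs like Privacy Pass add blind signatures so the issuer can't link tokens back to users):

    # Hypothetical single-issuer version; Ed25519 via the 'cryptography' package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    import json, time

    issuer_key = Ed25519PrivateKey.generate()  # held by the captcha service
    issuer_pub = issuer_key.public_key()       # published to participating sites

    def issue_token() -> bytes:
        # Called once the user actually solves the captcha.
        claim = json.dumps({"solved_at": time.time()}).encode()
        return claim + issuer_key.sign(claim)  # Ed25519 signatures are 64 bytes

    def site_accepts(token: bytes) -> bool:
        # Any site can verify offline with just the issuer's public key.
        claim, sig = token[:-64], token[-64:]
        try:
            issuer_pub.verify(sig, claim)
            return True
        except InvalidSignature:
            return False

    assert site_accepts(issue_token())

The hard part isn't the crypto; it's making tokens single-use or rate-limited so one solved captcha can't be replayed forever.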


Yeah, one thing I am afraid of is that forums will decide to join the Discord chatrooms on the deep web: stop being readable without an account, which is pretty catastrophic for discovery by search engines and backup crawlers like the Internet Archive.

Anyone with forum moderating experience care to chime in? (Reddit, while still on the open web for now, isn't a forum, and worse, is a platform.)


> And in case someone comes up with some brilliant edge case: does it generalize to a billion+ people?

The answer is curation, and no, it doesn't need to scale to a billion+ people. Maybe not even a million.

The sad fact of life is that most people don't care enough to discriminate against low-quality content, so they are already a lost cause. Focus on those who do care enough and build an audience around them. As a likely-not-billion-dollar company, you can't afford to worry about that kind of scale, and lowering the scale helps you get a solution out in the short term. You can worry about scaling if/when you tap into an audience.


I get you. That sounds more like membership than curation, though. Or a mashup of both.

But yes: once you start dropping constraints, you can imagine all sorts of solutions.

It does work. I'm a huge advocate of it. When Threads said no politics, I wanted to find whoever made that decision and give them a medal.

But if you are a platform, or a social media site, or a species?

You can’t pick and choose.

And remember - everyone has a vote.

As good as your community is, we do not live in a vacuum. If information wars are going on outside your digital fortress, they're still going to spill into real life.


> This may not necessarily be good for free thought and culture

After reading the rest of your rant (I hope you keep it) ... maybe free thought and culture aren't what LLMs are for.


Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.

I'm just saying. :) Guardrails nowadays don't really focus on dangers (it's hard to see how an image generator could produce dangers!) so much as enforcing public societal norms.


Just because something is not dangerous to the user doesn't mean it can't be dangerous to others when someone wields it maliciously.


What kind of damage can you do with a current-day LLM? I'm guessing targeted scams or something? They aren't even good hackers yet.


Fake revenge porn, nearly undetectable bot creation on social media with realistic profiles (I've already seen this on HN), generated artwork passed off as originals, chatbots that replace real-time human customer service but have none of the agency... I can keep going.

All of these are things that have already happened. They were all previously possible, of course, but now they are trivially scalable.


Most of those examples make sense, but what's this doing on your list?

> chatbots that replace real-time human customer service but have none of the agency

That seems good for society, even though it's bad for people employed in that specific job.


I've been running into chatbots that are confined to doling out information from their knowledge base, with no ability to help with edge-case/niche scenarios, and yet they've replaced all the mechanisms for receiving customer support.

Essentially businesses have (knowingly or otherwise) dropped their ability to provide meaningful customer support.


That's the previous status quo; you'd also find this in call centres where customer support had to follow scripts, essentially as if they were computers themselves.

Even quite a lot of new chatbots are still in that paradigm, and… well, given the recent news about chatbot output being legally binding, it's precisely the extra agency of LLMs over both normal bots and humans following scripts that makes them both interestingly useful and potentially dangerous: https://www.bbc.com/travel/article/20240222-air-canada-chatb...


I don't think so. In my experience having an actual human on the other line gives you a lot more options for receiving customer support.


The issue is "none of the agency". Humans generally have enough leeway to fold to a persistent customer, because it's financially unviable to keep them on the phone for hours on end. A chatbot can waste all the time in the world, with all the customers, and may not even have the ability to process a refund or whatnot.


> That seems good for society, even though it's bad for people employed in that specific job.

Why?

It inserts yet another layer of crap you have to fight through before you can actually get anything done with a company. The avoidance of genuine customer service has become an art form at many companies and corporations; its demise should surely be lamented. A chatbot is just another weapon in the arsenal designed to confuse, put off, and delay the cost of having to actually provide decent service to your customers, which should be a basic responsibility of any public-facing company.


Two things I disagree with:

1. It's not "an extra layer"; at most, it's a replacement for the existing thing you're lamenting, in the businesses you're already objecting to.

2. The businesses which use this tool at its best can glue the LLM to their documentation[0], and once that's done, each extra user gets "really good even though it's not perfect" customer support at negligible marginal cost to the company, rather than the current affordable option of "ask your fellow users on our subreddit or Discord channel, or read our FAQ".

[0] a variety of ways — RAG is a popular meme now, but I assume it's going to be like MapReduce a decade ago, where everyone copies the tech giants without understanding the giants' reasons or scale
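For the curious, a toy sketch of that documentation glue. The `llm_complete()` call is a hypothetical stand-in for whatever model client you use, and real deployments replace the word-overlap scoring with embedding similarity:

    docs = {
        "refunds.md":  "Refunds are processed within 14 days of a valid request.",
        "shipping.md": "Orders ship within 2 business days via tracked mail.",
    }

    def retrieve(question: str, k: int = 1) -> list[str]:
        # Crude relevance score: count question words appearing in each doc.
        q = set(question.lower().split())
        ranked = sorted(docs.values(),
                        key=lambda text: len(q & set(text.lower().split())),
                        reverse=True)
        return ranked[:k]

    def answer(question: str) -> str:
        context = "\n".join(retrieve(question))
        prompt = (f"Answer using ONLY this documentation:\n{context}\n\n"
                  f"Question: {question}")
        return llm_complete(prompt)  # hypothetical client, not a real API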


It's an extra layer of "Have you looked at our website/read our documentation/clicked the button" that I've already done, before I will (if I'm lucky) be passed on to a human who will proceed to do the same thing before I can actually get support for my issue.

If I'm unlucky it'll just be another stage in the Möbius-support-strip that directs me from support web page to chatbot to FAQ and back to the web page.

The businesses which use this tool best will be the ones that manage to lay off the most support staff and cut the most cost. Sad as that is for the staff, that's not my gripe. My gripe is that it's just going to get even harder to reach a real actual person who is able to take a real actual action, because providing support is secondary to controlling costs for most companies these days.

Take, for example, the pension company I called recently to change an address: their support page says to talk to their bot, which then says to call a number, which picks up, says "please go to your online account page to complete this action", and then hangs up... an action which the account page explicitly says cannot be completed online because I'm overseas, so please talk to the bot, or you can call the number. In the end I had to call an office number I found through Google and be transferred between departments.

An LLM is not going to help with that; it's just going to make the process longer and more frustrating, because the aim is not to resolve problems but to stop people taking up a human's time even when they need to, because that costs money.


Why is everyone's first example of things you can do with LLMs "revenge porn"? They're text-generation algorithms, not image generators. They need external capabilities to create images.


Do you also object to people saying that web browsers "display" a website even though that needs them to be plugged into a monitor?

If you chat to an LLM and you get a picture back, which some support, the image generator and the language model might as well be the same thing to all users, even if there's an important technical difference for developers.

It's a distinction that does not matter, as the question still has to be answered for the other modality. Do guns kill people, or do bad guys use guns to kill people? Does a fall kill you, or is it the sudden deceleration at the end? Lab leak or wet market? There's a technical difference, some people care, but the actionable is identical and doesn't matter unless it's your job to implement a specific part of the solution.


The moment they are good hackers, everyone has a trivially cheap hacker. It's hard to predict what that would look like, but I suspect it is a world where nobody employs software developers, because an LLM that can hack can probably also write good code.

So, do you want future LLMs to be restricted, or unlimited? And remember, to prevent this outcome you have to predict model capabilities in advance, including "tricks" like prompting them to "think carefully, step by step".


Use the hacking LLM to verify your code before pushing to prod. EZ
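Sketched as a pre-push gate, with a hypothetical `llm_audit()` standing in for the hacking model (nothing here names a real API):

    import subprocess, sys

    def staged_diff() -> str:
        # The change that's about to ship.
        return subprocess.run(["git", "diff", "--cached"],
                              capture_output=True, text=True).stdout

    findings = llm_audit(  # hypothetical: ask the model to attack the diff
        "List exploitable vulnerabilities in this change, or reply NONE:\n"
        + staged_diff())

    if findings.strip() != "NONE":
        print(findings)
        sys.exit(1)  # block the push until the findings are addressed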


> your code

To verify the LLM's code, because the LLM is cheaper than a human.

And there's a lot of live code already out there.

And people are only begrudgingly following even existing recommendations for code quality.


Your code because you own it. If LLM hackers are as rampant as you fear, then people will respond by telling their code-writing LLMs to get their shit together and check the code for vulnerabilities.


> Your code because you own it.

I code because I'm good at it, enjoy it, and it pays well.

I recommend against 3rd party libraries because they give me responsibility without authority — I'd own the problem without the means to fix it.

Despite this, they're a near-universal in our industry.

> If LLM hackers are as rampant as you fear, then people will respond by telling their code-writing LLMs to get their shit together and check the code for vulnerabilities.

Eventually.

But that doesn't help with the existing deployed code — and even if it did, this is a situation where, when the capability is invented, attack capability is likely to spread much faster than the ability of businesses to catch up with defence.

Even just one zero-day can be bad, this… would probably be "many" almost simultaneously. (I'd be surprised if it was "all", regardless of how good the AI was).


I never asked you why you code; this conversation isn't, or wasn't, about your hobbies. You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code-writing LLMs both proliferate, then the obvious resolution is for the code-writing LLMs to employ hacking LLMs in verifying their outputs.

Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.


> I never asked you why you code

Edit: I misread that bit as "you code" not "your code".

But "your code because you own it", while a sound position, is a position violated in practice all the time, and not only because of my example of 3rd party libraries.

https://www.reuters.com/legal/transactional/lawyer-who-cited...

They are held responsible for being very badly wrong about what the tools can do. I expect more of this.

> You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code-writing LLMs both proliferate, then the obvious resolution is for the code-writing LLMs to employ hacking LLMs in verifying their outputs.

And it'll be a long road, getting there from here. The view at the top of a mountain may be great or terrible, but either way climbing it is treacherous. The metaphor applies.

> Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.

Yup, and that status quo gets headlines like this: https://tricare.mil/GettingCare/VirtualHealth/SecurePatientP...

I assume this must have killed at least one person by now. When you get too much pressure in a mechanical system, it breaks. I'd like our society to use this pressure constructively to make a better world, but… well, look at it. We've not designed our world with a security mindset, we've designed it with "common sense" intuitions, and our institutions are still struggling with the implications of the internet let alone AI, so I have good reason to expect the metaphorical "pressure" here will act like the literal pressure caused by a hand grenade in a bathtub.


The moment LLMs are good hackers every system will be continuously pen tested by automated LLMs and there will be very few remaining vulnerabilities for the black hat LLMs to exploit.


> The moment LLMs are good hackers every system will be continuously pen tested by automated LLMs

Yes, indeed.

> and there will be very few remaining vulnerabilities for the black hat LLMs to exploit.

Sadly, this does not follow. Automated vulnerability scanners already exist, how many people use them to harden their own code? https://www.infosecurity-magazine.com/news/gambleforce-websi...


Damage you can do:

- propaganda and fake news

- deep fakes

- slander

- porn (revenge and child)

- spam

- scams

- intellectual property theft

The list goes on.

And for quite a few of those use cases I'd want some guard rails even for a fully on-premise model.


Half of your examples aren't even things an LLM can do, and the other half can be written by hand too. I can name a bunch of bad-sounding things as well, but that doesn't mean any of them have any relevance to the conversation.

EDIT: Can't reply, but you clearly have no idea what you're talking about. AI is used to create these things, yes. But the question was about LLMs, which I reiterated. They are not equal. Please read up on this stuff before forming judgements or confidently stating incorrect opinions that other people, who also have no idea what they're talking about, will parrot.


> AI is used to create these things, yes. But the question was about LLMs, which I reiterated.

And the grandparent of the grandparent of your comment specifically named "Stable Diffusion": https://news.ycombinator.com/item?id=39612886

And text-based porn is still porn.

And it's a distinction without a difference that ChatGPT Pro doesn't strictly create images itself but instead forwards the request to DALL•E.

And the question of guard rails is relevant to all AI, not just LLMs.


If we can change the rules of a discussion midway through, everyone loses. The parent replied to the question "What damage can be done with an LLM without guardrails?" (regardless of the grandparent, this is how conversations work: you talk about the thing the other person talked about if you reply to them), and the response was to rattle off a bunch of stuff that LLMs can't do. Yes, they connected an LLM to an image-generation AI. No, that doesn't mean "LLMs can generate images" aside from triggering some thing to happen. It's not pedantic or unreasonable to divide the two. The question was blatantly about LLMs.

If y'all want to rant and fear monger about any AI technology, including tech that has existed for years (deepfakes existed well before LLMs were mainstream), do that in a different thread. Don't just force every conversation to be about whatever your mind wants to rant about.

That said, arguing with you people is pointless. You don't even seem to think.


> If we can change the rules of a discussion midway through, everyone loses.

Then we lost repeatedly, at almost every step back to the root, because the discussion switched between those two loads of times.

The change to LLMs was itself one such shift.

> No, that doesn't mean "LLMs can generate images" aside from triggering some thing to happen

The aside is important.

> It's not pedantic or unreasonable to divide the two.

It is unreasonable on the question of "guardrails, good or bad?"

It is unreasonable on the question of "can it cause harm?"

It's not unreasonable if you are building one.

> If y'all want to rant and fear monger about any AI technology, including tech that has existed for years (deepfakes existed well before LLMs were mainstream)

And caused problems for years.

> That said, arguing with you people is pointless. You don't even seem to think.

Communication isn't a single-player game, I can't make you understand something you're actively unwilling to accept, like the idea that tools enable people to do more, for good and ill, and AI is such a tool.

Perhaps you should spend less time insulting people on the internet you don't understand. Go for a walk or something. Eat a Snickers, take a nap. Come back when you're less cranky.


AI is already used to create fake porn, whether of celebrities or children: fact. It is used to create propaganda pieces and fake videos and images: fact. Those can be used for everything from defamation to online harassment. And AI is using other people's copyrighted content to do so, also a fact. So, what's your point again?


Your other comment is nested too deeply to reply to. I edited my earlier reply with my response but will reiterate. Educate yourself. You clearly have no idea what you're talking about. The discussion is about LLMs, not AI in general. The question stated "LLMs", which are not equal to all of AI. Please stop spreading misinformation.

You can say "fact" all you want but that doesn't make you correct lol


You are seriously denying that generative AI is used to create fake images, videos, and scam/spam texts? Really?


No. I'm declaring that you either can't read or don't understand that there's a difference between "gen AI" and LLMs. LLMs generate text. They don't generate images. Are you just a troll or not actually reading my messages? The question you're replying to asked about LLMs. I don't understand what's so difficult about this.


One has to love pedants. Your whole point was that LLMs don't create images (you don't say...), hence all the other points are wrong? Now go back to the first comment and assume LLMs and gen AI are used interchangeably (I am too lazy to re-read my initial post). Or don't, I don't care, because I do not argue semantics; there is hardly a lazier, or more disingenuous, way to discuss. Ben Shapiro does that all the time and thinks he's smart.


Targeted spam, review bombing, political campaigns.


> Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.

Not so. I have it at home; I make nice wholesome pictures of raccoons and tigers sitting down for Christmas dinner, etc., but I also see stories like this and hope they're ineffective: https://www.bbc.com/news/world-us-canada-68440150


Unfortunately you've been misled by the BBC. Please read this: https://order-order.com/2024/03/05/bbc-panoramas-disinformat...

Those AI-generated photos are from a Twitter/X parody account, @Trump_History45, not from the Trump campaign as the BBC mistakenly (or misleadingly) claims.


> Those AI-generated photos are from a Twitter/X parody account, @Trump_History45, not from the Trump campaign as the BBC mistakenly (or misleadingly) claims.

They specifically said who they came from, and that it wasn't the Trump campaign. They even had a photo of one of the creators, whom they interviewed in that specific piece I linked to, and tried to get interviews with others.


Look at the BBC article...

Headline: "Trump supporters target black voters with faked AI images"

@Trump_History45 does appear to be a Trump supporter. However, it is also a parody account, and states as much on its profile.

The BBC article goes full-on with the implication that the AI images were produced with the intent to target black voters. The BBC is expert at "lying by omission"; that is, presenting a version of the truth which is ultimately misleading because they do not present the full facts.

The BBC article itself leads a reader to believe that @Trump_History45 created those AI images with the aim of misleading black voters and thus garnering support for Trump among them.

Nowhere in that BBC article is the word "parody" mentioned, nor any examination of any of the other AI images @Trump_History45 has produced. If they had mentioned it, and had fairly represented that @Trump_History45 X account, then the article would have turned out completely differently.

"Trump Supporter Produces Parody AI Images of Trump" does not have the same effect which the BBC wanted it to have.


I don't know whether this is the account you are talking about, but of the second account they discuss, they introduce an image by saying: 'It had originally been posted by a satirical account that generates images of the former president'. So if this is the account you are talking about...

I won't deny the BBC often has very biased reporting for a publicly funded source.





