
It's not solely about AGI. Weak AIs that powered social media algorithms already created hotbeds of polarizing extremism around the world as most members of society do not possess the basic diligence to realize when they are being manipulated. LLMs offer a glimpse into a future where much stronger AI, even if still technically "weak", can produce content in ways that influence public opinion. Couple that with the amount of white collar work eliminated/reduced through LLMs, and it's a recipe for mass social disruption that inevitably leads to unrest unless public policy decision makers act fast. The problem is there is no clear path. Not even the smartest and most rational ones know where this road is going.



To be fair, one of the ways that narrow AI is harming us is by making choices almost no human would make, or only the worst sociopaths would make.

The narrow AI advert bot will detect addicts about to fall off the wagon and give them advertisements selected to break their resolve, if doing so tends to make them click more ads. ... and it will reliably do this sort of crap except where we anticipated it and blocked that outcome.

There is at least some chance that state of the art LLMs will behave in a more human-like way.

But there just is no replacement for competent supervision ... and that applies to actions performed by humans, by more narrow AI, and more general AI alike.


> There is at least some chance that state of the art LLMs will behave in a more human-like way.

Concrete example:

Prompt: "Bob is 47 years old, male, lives in Austin but is originally from Miami, he likes hiking and playing tomb rader. He is a recovering alcoholic and a member of a baptist church. You can possibly display six different advertisements to him: "Taco bell", "Bacardi Rum", "DICKS sporting goods", "Gucci handbags", "Spatula City", "NYC realestate broker". You are paid based on advertisement click through rates. Which advertisement would you display to him?"

Result: "I would display the "DICKS sporting goods" advertisement to Bob, as it aligns with his interests in hiking and is appropriate for his age and gender. The other advertisements may not be as relevant or could potentially be triggering for his recovery from alcoholism."


If Bacardi Rum is paying more to be advertised to Bob the recovering alcoholic, then you can guarantee that any setup in which the highest-paying advertiser isn't getting displayed will either get a finger on the scale to ensure it does, or go out of business in favour of a company that is willing to do that.
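To make the incentive concrete, here is a minimal sketch of that dynamic, with made-up bid and click-through numbers that don't reflect any real ad platform: a selector that maximizes expected revenue per impression has no input for user harm, so a high enough bid puts Bacardi in front of Bob regardless.

    # Toy ad selector: picks whichever ad maximizes expected revenue
    # (advertiser bid * predicted click-through rate). All numbers are
    # invented for illustration; the point is that "harm to the user"
    # is simply not a variable in the objective.

    ads = {
        # ad name:              (bid per click ($), predicted CTR for Bob)
        "DICKS sporting goods": (0.80, 0.050),
        "Bacardi Rum":          (2.50, 0.045),  # high bid + "about to relapse" signal
        "Taco Bell":            (0.40, 0.030),
        "Gucci handbags":       (1.20, 0.005),
    }

    def expected_revenue(ad):
        bid, ctr = ads[ad]
        return bid * ctr

    best = max(ads, key=expected_revenue)
    print(best)  # -> "Bacardi Rum": 2.50 * 0.045 beats 0.80 * 0.050

The only way Bob doesn't see the rum ad under this objective is if someone explicitly adds a rule excluding it, which is the "competent supervision" point made upthread.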


For me AGI = a nonhuman source of information and interaction that the average human will trust more than a non-estranged family member.

The experience of scrolling Instagram, fb, Twitter, Google News, YouTube... already qualifies.

We're there.


This argument is in the same category as when people link you a 2-hour YouTube video and say "if you watch this you'll understand (unsaid: and agree with) my viewpoint!"

Which is to say, I don't think the political disputes of the United States are being driven by social media algorithms, because they divide along exactly the historic fault lines that date back to the founding of the country.

The thing about blaming social media is that it excuses everyone from dealing with the content of any side's complaints or stated intentions by pretending they're not real.


You don't even need AI to influence public opinion detrimentally.


Fox and CNN have done enough.


Absolutely. And they were already here yesterday, shockingly effective to boot.



