Like how California's cancer-warning law is useless because it makes it look like everything is known to the state of California to cause cancer, which in turn makes people ignore and tune out the warnings because they deliver no signal, only noise. This in turn harms people when they think, "How bad can tobacco be? Even my aloe vera plant has a warning label".
Keep it to generated news articles, and people might pay more attention to them.
Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad (somehow this is contaminated, lol), then it'll become a useless warning.
> Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad
People have been writing articles without the help of an LLM for decades.
You don't need an LLM for grammar and spell checking; arguably an LLM is less efficient and currently worse at it anyway.
The biggest help an LLM can provide is with research, but that's only because search engines have been artificially enshittified these days. Even there the usefulness is very limited because of hallucinations, so you might be better off without one.
There is no proof that LLMs can significantly improve the workflow of a professional journalist when it comes to creating high quality content.
So no, don't believe the hype. There will still be enough journalists not using LLMs at all.
It is worse: even less than useless. In the California case, there is very little to gain by lying and not putting a sticker on items that should have one. With AI-generated content, as the models get to the point where we can't tell anymore whether something is fake, there are plenty of reasons to pass off a fake as real, and conditioning people to expect an AI warning will make them more likely to fall for content that ignores this law and doesn't label itself.
Spell check, autocomplete, grammar editing, A-B tests for bylines and photo use, related stories, viewers also read, tag generation
I guess you'd have to disclose every single item on your news site that does anything like this. Any byte that touches a stochastic process is tainted forever.
Please no. I don’t want that kind of future. It’s going to be California cancer warnings all over again.
I don’t like AI slop, but this kind of legislation does nothing. Look at the low-quality garbage that already exists; do we really need another step in the flow to catch whether it’s AI?
AI-written articles tend to be far more regurgitative, lower in value, and easier to ghostwrite with intent to manipulate the narrative.
Economic value or not, AI-generated content should be labeled, and trying to pass it off as human-written should be illegal, regardless of how accustomed to AI content people do or don't become.
That describes low-quality articles in general. Have you never seen how hundreds of news sites regurgitate the same story from one another? This was happening long before AI. High-quality AI-written articles will still be high value.
Did you go on Grokipedia at release? I still sometimes lose myself reading stuff on Wikipedia; I guarantee you that can't happen on Grokipedia, there's so much noise between the facts that it's hard to enjoy.
This is by far the biggest failing of capitalism. The entire world contributes to economic growth, but almost all of this value is captured by a tiny group of the rich elite.
> Such situations usually correct themselves violently.
Historically, they did because everyone's capacity for violence was equal.
What about now, when the best the average person can do is a firearm against a coordinated, organized military with armoured vehicles, advanced weaponry, drones, and, sooner rather than later, full access to mass surveillance?
Also, how will a revolution happen once somebody trains an ML model end-to-end to identify the desire for revolution from your search and chat history and your communication with like-minded others?
Assuming the intent isn't prevented altogether through algorithmic feeds showing only content that makes it seem less attractive.
> However, these threats are outweighed by the benefits that AI can eventually bring. Medical advances, power generation, manufacturing capability. Our systems for running society have a lot of problems, economically, politically, epistemologically. These can also be improved with AI assistance.
Benefits come to those who have the means to access them, and wealth is a measure of the ability to direct and influence human effort and society.
How exactly do you propose that AI will serve the wellbeing of the worker/middle classes after they've been made obsolete by it?
Goodwill of the corporations working on them? Of their shareholders, well-known to always put welfare first and profit second? Government action increasingly directed by their lobbying?
> What we need is to embrace AI and find a way to make sure that the transition and benefits of AI are distributed instead of concentrated.
Sure. How? We've not done it with any other technological advances so far, and I don't see how shifting the power balance further away from the worker/middle class will help matters.
There's a reason why the era of techno-optimism has already faded as quickly as it's begun.
Of course, and distribution and ownership of benefits is the real issue here, but I think I’ve addressed that.
Let me be clearer: I said “companies must commit to” where the stronger phrasing is “companies are forced to by legislation”. But to begin with, this might be done voluntarily by some number of companies.
Also, in this vision of society the AI companies (OpenAI, Anthropic, Google, etc.) are taxed heavily. The tax revenue is redistributed; there is UBI for some fraction of the population, maybe the majority. Others still work in companies mandated to keep employees, as I outlined above.
Importantly, we as a society specifically aim to bring about these benefits of AI by using the redistributed funds in part to invest in them.
Part of this is the free market, part is planned government investment. If one fails, maybe the other succeeds. Either way, we try to spread the benefits and importantly to ensure the benefits are actually there in the first place.
If you raise the bar for being allowed to speak about a very real concern that high, nobody will be left to spread and debate the idea in the first place.
geohot's not a regular joe; he's founded multiple companies and is a leader in our community. This is like a general saying "why are you letting the enemy win?" while he sits comfortably in his study managing his cigar collection.
I still don't think that precludes him from having this opinion. Could he be doing more? Sure, but having found success in this system doesn't make his criticisms of it invalid.
I agree with OP, who to my mind hasn't said the so-called critique is invalid or that he's not allowed to have an opinion. Isn't the comment along the lines of "well, what have you actually tried to do? You have resources, standing, kairos, etc." Seems one of the more perceptive critiques on here.
I'd just like to add a few things to the conversation:
1) Instead of LLMs, imagine large models trained end-to-end on ALL online content, and the impact that has on public opinion and discourse. What about when everything is an algorithmic feed controlled by such a model under the control of the elite? You might be resistant (but probably aren't), but in aggregate this will be effective mind control over society.
2) Money directs human effort. Every quantum of bargaining power the worker/middle class loses by being less needed is a reduction in our ability to have a say in who society should serve and how.
3) Don't forget regulatory capture is a thing. Not just a thing but happening as we speak. Are you still optimistic?
4) Tech is already addictive and ads are already everywhere, even without technology that has a theory of mind.
5) Do not forget that humans are social creatures, power over others is not just an accidental byproduct of wealth. Once you're unnecessary for labor, what's left? Fulfilling sexual/emotional/social whims of the wealthy elite? Hunger games? Being a pet in a billionaire's human zoo city so he can brag about his contributions to humanity?
Inevitably? Maybe not, but the situation isn't gonna get better by saying "oh I'm sure the tech industry will do a 180 and stop making everything worse"
> seems to have no issue with Google hosting their email.
There's this meme where person A says "we should improve society somewhat", and B replies "yet you participate in society! curious". Very similar argument.
You enjoy the individual benefits and completely disregard the fact that electronics addiction and loneliness get worse year by year. You could already Google anything and chat with anyone back in 2010; all we've achieved since is making the average person spend 4-5 hours a day mindlessly doomscrolling on their phone and watching YouTube instead of having meaningful social interaction.
Also, we've got an entire generation growing up on ads, algorithmic brainrot, and now AI slop.
You're also forgetting algorithmic price fixing, algorithmic pricing, the billions in R&D into making internet platforms and services more addicting and effective at siphoning out your money, etc.
> It is the input that continuously reconstitutes itself around whatever remains scarce, valuable and socially demanded as productivity rises.
Even if that's true, AI is going to tank the bargaining power of the working class even harder than it's already been tanked.
It's already the reality for many that they're working for minimum wage, in toxic environments, with no benefits, and with more overtime than is legal in places that regulate it, solely because they have few better choices.
Furthermore, this power inequality directly translates to influence over the economic output of our civilization. By the time the value of human menial or cognitive labor drops low enough to delete jobs, all that will be left will be various equivalents of being a sugar baby for the rich: fulfilling their emotional, sexual, and social needs. Not even art, because that's among the things gen AI is displacing most effectively.
> Middle class status anxiety manifesting as a rhetoric about neofeudalism.
The middle class is a tiny, tiny fraction of the population nowadays. Even among those working high-earning jobs in tech/healthcare/finance, most are just upper working class.
The answer is obvious: abandon civilisation, join the Amish. Big Tech cannot ruin your life if you simply don't participate in the capitalistic market. I mean sure, it's difficult to live completely off-grid, but in the modern world it's a long way before you starve to death.
What I'm saying is, I have a cozy bullshit job that gives me the prospect of someday not being in the working class anymore. But if that weren't the case, I'd 100% say fuck that and look for alternative lifestyles.
> Big Tech cannot ruin your life if you simply don't participate in the capitalistic market. I mean sure, it's difficult to live completely off-grid, but in the modern world it's a long way before you starve to death.
You still have to pay taxes, and doing this means giving up on a lot of your existing human connection, joining a community in no small part comprised of crackpots, tanking your quality of life, and losing your influence over the future of broader society.
Furthermore, it's a personal solution to a systemic problem. It'll work for a few people, but it's not a fix.
"Bro I can't wait to go back to being an UberEats delivery driver without any insurance living below minimum wage working all day every single day just to afford to rent half a room and small instant ramen"
You think the ridiculously high wages that companies like Mercor are paying for data generation are "tanking" bargaining power? It's the complete opposite: there is now a massive sector of highly skilled, specialized labor that produces the very data which trains these models, a task that will not end as long as there is demand for newer, better, and more specially trained models. That is a massive amount of bargaining power. It would take far more severe shocks to the system to kill the possibility of revolution, and whatever caused those shocks would be bad for everyone.
Ridiculously high wages for jobs whose explicit purpose is to make human workers (including those partaking in those jobs) obsolete. THE reason they pay so high is that their end goal is never having to pay anyone ever again (or at least, only paying a comparatively tiny number of people to produce the data).
> Ridiculously high wages for jobs whose explicit purpose is to make human workers (including those partaking in those jobs) obsolete
I would say they are making the current cost of one's labor "obsolete". Most jobs are like this. If you work for yourself, you're trying to make the cost of your labor obsolete in place of cheaper work. If you work for a company, the company will be trying to make the cost of your labor obsolete in place of cheaper work. Any value captured in the arbitrage of product price and employee pay is seen as the value of management.
> I would say they are making the current cost of one's labor "obsolete".
That's how you build up to making the market cost obsolete. Especially when the explicit goal is the removal of nearly 100% of human labor.
It's like fishing: do it slowly enough (as in the dozens of millennia of humanity before the industrial revolution) and the fish will repopulate just fine. Do it too fast and you drive the fish to extinction. They want to make labor extinct.
> Not even art, because that's among the things gen AI is displacing the most effectively.
Kinda, but also no.
Yes, there are a lot of people (including me) who genuinely enjoy the output of these models; but art isn't only aesthetics. I observe it also functioning as a peacock's tail, where the cost is the entire point.
Why are originals more valuable than reproductions? Nobody who understands the tech can seriously claim that a robot with a suitable brush and paints is incapable of perfectly reproducing any old masterwork down to the individual brush strokes. Of course a robot can do that; the hardest part is compiling the list of requisite brush strokes, and even that can be automated.
But such a copy, even if the paints were chemically perfect and some blend of plant and petroleum derivatives so as to fool even a carbon-dating test, would never command as much money as the original, unless someone deliberately mixed them up so that nobody would even know which was which.
However, I don't know that this would ever help the masses. Perhaps a quadrillionaire in a space mansion would like to buy all of Earth and all the people on it, but that doesn't mean we'd get anything better than being forced to LARP whatever folly they chose for us.
People boycott AI art in indie games, music, etc. but it's still finding massive use everywhere across most industries, and by and large companies are getting away with it.
> would never command as much money as the original unless someone deliberately mixed them up so that nobody would even know which was which.
That is based on current sensibilities which are capable of change. Give it 20 years of PR campaigns and who knows where we end up.
> People boycott AI art in indie games, music, etc. but it's still finding massive use everywhere across most industries, and by and large companies are getting away with it.
Use, yes. I find it on packaging, on billboards. "Massive" use? Not so clear, e.g. how much is this just a substitute for clip art and stock photography? I.e. stuff that was already low-value.
But even there, to the extent they get away with it, that's due to people like me who care about aesthetics.
To illustrate the difference:
Know what I don't care for? Der Kuss[0], the Mona Lisa[1], everything Van Gogh is famous for[2], likewise Pollock, Frida Kahlo, Monet, and Gauguin.
Know what's expensive? All of those things I don't care for.
> That is based on current sensibilities which are capable of change. Give it 20 years of PR campaigns and who knows where we end up.
When there are a thousand indistinguishable replicas of the Mona Lisa, the original becomes worthless; the replicas don't gain value.
The effort is the point, it's the reason for the price. When an expensive thing becomes cheap, it stops being an expensive signal (and vice-versa).
[0] the woman's head looks like it's been severed, rotated 90°, her ear placed where the stump of her neck ought to be. Most of Klimt's other stuff is better.
[1] her smile is not "mysterious", it's just the resting face of someone who had to keep still for long enough to get painted
[2] the stuff he's not famous for is, IMO, generally better than the stuff he is famous for.
The originals of physical art pieces operate off the same logic that drives NFTs. As stupid as people think NFTs are, they do show a modern drive for ownership of something with provenance.