Yeah, asking a programming question without some bitter old coder tut-tutting you is very much a selling point for AI chatbots, regardless of my reservations about the overall trend.
True, but the LLM is always polite and in problem-solving mode, while SO is in curating mode. That makes SO a great knowledge base and LLM training set, but not a great place to get your questions answered.
I think a big part of why people prefer asking an online forum over using the search function is the human interaction aspect, but that requires two people, including a mentor who is patient and helpful - and unfortunately, that's difficult to find. An LLM is patient, helpful, and problem-solving, and it responds pretty much immediately.
Sure, but so will SO. On most questions, it seems at least a third of the answers are just wrong in one way or another. At least with an LLM, you can just tell it it's wrong and have a new, hopefully better, answer 30 seconds later, all without hurting its feelings or getting into a massive online flame war.
It was bootstrapped from SO. Now there are third-party data companies like Scale AI that pay gig workers to write code examples for LLM training. It's died down, but I saw lots of spam for this (i.e. hiring) earlier in the year.
SOTA LLMs didn't get that way by scraping the internet; it's all custom labeled datasets.
Unless you pay decent rates, the code will be shite; otherwise, outsourcing to underpriced developers would have worked during the last push to rid businesses of the expense of skilled developers.
Plus, they're getting real-world training data from everyone who either hasn't opted out or doesn't have the ability to opt out of their stuff being used.
For my personal stuff, I don't opt out of training for this very reason. What's more, I resent Stack Overflow and Reddit etc. trying to gate-keep the content that I wanted to give to the community and charge rent for it.
I used to intentionally post question-answer style posts where I would ask the question, wait for a while, and then answer it myself, on both Reddit and Stack Overflow. I don't do that anymore, because I'm not giving them free money if they're not passing some of the benefit on to the community.
"... resent SO and Reddit trying to gatekeep": Am curious why you felt they were gatekeeping your content. They are free websites, and anybody who wants/needs to read your content, can.
"... not giving them free money if they're notnpassing some of the benefits ..." - Could you expand on the specific benefits you wanted them to pass on to the community? As a user, being able to find other people's content that is relevant to my current need is already a pretty solid benefit.
> For my personal stuff, I don't opt out of training for this very reason. What's more, I resent Stack Overflow and Reddit etc. trying to gate-keep the content that I wanted to give to the community and charge rent for it.
And AI companies don't gate-keep their stuff and charge rent for it?
Too bad library documentation has taken a nosedive for a lot of modern tools. I blame all those automagic documentation/packaging tools that take some two-line docstring and your args for each function and turn it into a boilerplate documentation page. From pages generated like this, you truly learn nothing you wouldn't learn by just looking at the source of the function. The author spends no additional time adding potentially useful information. It's a far cry from what you get when you run man bash, for example, where they even teach you how to file a bug report.
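To make that concrete, here's a hypothetical sketch (the connect function and both docstrings are invented for illustration) of what those generators typically get fed versus what a man-page-style entry would actually tell you:

    # What the doc generator is typically fed: a stub restating the signature.
    def connect(host, port, timeout=30):
        """Connect to host on port."""
        ...

    # What a man-page-style entry would add: behaviour, failure modes, an example.
    def connect(host, port, timeout=30):
        """Open a TCP connection to host:port.

        No retries are performed: a refused connection raises ConnectionError
        immediately, and an unreachable host raises TimeoutError after
        `timeout` seconds. Pass timeout=None to block indefinitely.

        Example:
            conn = connect("db.internal", 5432, timeout=5)
        """
        ...

The generated page for the first version just repeats the signature back at you; the second actually saves the reader a trip into the source.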
I'm afraid programming is going to be frozen at 2020s tech for the foreseeable future. New frameworks, libraries, and languages will suffer from a chicken-and-egg problem: no one uses them because LLMs can't answer questions about them, and LLMs can't learn the new stuff because programmers aren't generating new samples for them to ingest.
This is why I've had to spend a huge amount of my free coding time this year documenting my canvas library[1][2] in a way that can (potentially[3]) be used as LLM training data instead of, well, developing my library with new and exciting (to me) features.
On the silver lining side, it's work that I should have been doing anyway. It turns out that documenting the features of the library in a way that makes sense to LLMs also helps potential users of the library. So, win:win.
[3] - Nothing is guaranteed, of course. Training data has to be curated so documentation needs to have some rigour to it. Also, the LLMs tell me it can take 6-12 months for such documentation to be picked up and applied to future LLM model iterations so I won't know if my efforts have been successful before mid-2026.
Yeah, I think that too. Same with non-programming domains. Since your blog and whatnot won't be seen, just ingested by LLMs, there will be even less motivation to write them. And they were already dying due to the need for SEO - without it you don't exist.
So, that stuff will just cease to exist in its previous amounts, and we will all move on.
Small models aren't large enough to have knowledge of every single framework or library from pre-training, and yet if you give them a man page or API reference, they easily figure out how to use the new code.
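A minimal sketch of that in-context approach, in Python (new_lib, its fetch signature, and the prompt format are all invented for illustration; any chat-style model endpoint would do):

    # Hypothetical sketch: ship the library's reference docs inside the
    # prompt so a small model can use an API it never saw in pre-training.
    API_REFERENCE = """
    new_lib.fetch(url: str, retries: int = 3) -> bytes
        Download url, retrying on transient errors.
    """

    def build_prompt(task: str) -> str:
        # The reference travels with the question, so the model needs no
        # pre-trained knowledge of new_lib.
        return (
            "You are given this API reference:\n"
            f"{API_REFERENCE}\n"
            f"Task: {task}\n"
            "Answer with code only."
        )

    prompt = build_prompt("Download https://example.com and print its size.")
    # Send `prompt` to whatever small model you run locally; the interesting
    # part is the context, not the endpoint.

The point is that the framework's docs substitute for pre-training: the knowledge rides along in the prompt instead of the weights.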
Or developers will have more free time to solve novel problems instead of wasting hours digging through Google results and Stack Overflow threads to find answers to already-solved problems.
They will be writing the answers into codebases that AI will be ingesting, but the AI will lack any context about the question the code is answering, so it won't know how it relates to anything else.
I've been wondering the same thing. uv has completely transformed the Python workflow, and I really hope future documentation and knowledge bases incorporate it, but time will tell.
A lot of the questions asked on Stack Overflow can be answered by reading the source code and documentation of the libraries and frameworks. An LLM can do the same thing. It doesn't need Stack Overflow for knowledge or content; it needs it for the question->answer format.
Sure, but the technology peels off the aggravation and delivers the content without the asshats.
If someone stuck an LLM between me and Facebook, so I got all my Facebook content without the flat earthers, moon landing deniers, and Tartarians, Meta would never see me again.
That's not good enough. The AI has to give me updates on important events of my friends and family without showing me everything they ate at restaurants, funny cat videos they liked, or what movies they planned to watch.
Proving that AI is not just parroting back what it reads on the web, ChatGPT manages to correct my programming mistakes without making me feel bad. If it learned from Substack, I'm glad it learned selectively!
I'd rather be treated nicely by a bot than be abused by a human. Make of this what you will.
Though I know the bot isn't sentient, I'd rather chat with it than with some human who doesn't talk well.
I'm guessing the future of relationships works the same way. Best of luck to any spouse/partner competing with a bot that makes you feel nice.
A hard era is coming for people who misbehave. The tolerance for that sort of stuff is going to go away entirely.
I see what you mean as in no tolerance for abusive people who can currently get away with overtly treating people like shit, but I do immediately think "that generally captures the more trivial abusers." The most destructive ones tend to be covert, and more often will appear as friendly and polite as LLMs when setting up their targets. That makes me wonder if it will create a harder or easier era for them. Not something I'd thought about enough to have formed an opinion, but it makes me wonder.
Yeah sorry, but I think you massively misread that take. It's not about conformity. It's about keeping yourself liked so that people will voluntarily interact with you. You can no longer get away with being a narcissistic arsehole.
I'm not sure I agree that's gonna happen, I'm just trying to paraphrase what I think the GP meant.
Which is good to some extent. We have people off the deep end sharing "unlock/jailbreak prompts" that turn LLMs into schizophrenia machines affirming any psychosis, while a real person would push back and try to get you help.
ChatGPT can't tell the difference between being given a harmless instruction or role-play prompt and someone who is going insane. That probably explains why many of the most vocal AI users seem detached from reality: it's the first time they've had someone who blindly affirms everything they think while telling them they're so smart and completely correct all the time.
When I look at how far tech has come in my own life - I'm in my mid-50s - I don't think the singularity is out of the question in my kids' lifetime, or even my own if I'm lucky. When I was born there was no such thing as a PC or the internet.
As far as I'm aware, the only missing step is for LLMs to be able to roll the results of a test back into their training set. They can then start proposing hypotheses and testing them. Then they can do logic.
I don't understand the skepticism. LLMs are already a lot smarter than me; all they need is the ability to learn.
** Wikipedia's definition of the singularity: "an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing a rapid increase ("explosion") in intelligence that culminates in a powerful superintelligence, far surpassing all human intelligence."
That's highly doubtful - unless your definition of intelligence is about the volume of information regurgitated rather than contextualizing and building on that knowledge. LLMs are "smart" in the same way a person who gets 1600 on the SAT* is "smart". If you spend your time min-maxing toward a specific task, you get very good at it. That skill can even get you as far in life as being a subject matter expert. But that's not why humans are "intelligent" in my eyes.
*yes, there is a correlation, because people who take the time to study and memorize for a test tend to have better work habits than those who don't. But there's a reason why some of those students can end up completely lost in college despite their diligence.
> I don't understand the skepticism.
To be frank, we're in a time where grifts are running wild and grifters are getting away red-handed, inside and outside of tech. I am very skeptical by default in 2025 of anyone who talks in terms of "what can happen" rather than what is actually practical or possible.
I'm no Math Olympian, and I have a terrible memory.
I don't know what the definition of smart is, but you don't have to listen to any of the grifters to know that current LLMs can do a lot of things better than the average person.
It's a machine. AI or no AI, we've known for decades what machines excel at and where they fall far short. LLMs only amplify those strengths and weaknesses.
My definition of logical intelligence involves the following factors:
- the ability to recognize patterns in data, even complex ones (machines have always excelled at this, with the aid of a human)
- the ability to break down complex concepts into fundamentals, i.e. deductive reasoning (LLMs as of now don't really do this)
- conversely, the ability to learn new concepts and apply them to more complex situations (LLMs especially cannot do this without human assistance)
- the ability to synthesize a set of data and come to a conclusion given an environmental context, i.e. critical reasoning (AI as of now is poor at this, partially by design, as critique is avoided in their behavior)
Those are a few criteria. Artificial "intelligence" as of now is simply leveraging its superior pattern matching to give the illusion of reasoning, and its mimicry is close enough for non-subject-matter experts to choose to believe its output.
Until now, computing ran on a completely different model of implied reliability. The base hardware is supposed to be as reliable as possible; software is supposed to mostly work, and bugs are tolerated because they're hard to fix. No one is suggesting they're a good thing.
LLMs are more like something that looks like a text-only web browser, but you have no idea if it's producing genius or gibberish. "Just ignore the mistakes, if you can be bothered to check whether they're there" is quite the marketing pitch.
The biggest development in tech has been the change in culture - from utopian libertarian "Give everyone a bicycle for the mind and watch the joy" to the corporate cynicism of "Collect as much personal information as you can get away with, and use it to modify behaviour, beliefs, and especially spending and voting, to maximise corporate profits and extreme wealth."
While the technology has developed, the values have run headlong in the opposite direction.
It's questionable if a culture with these values is even capable of creating a singularity without destroying itself first.
Is the conclusion here not that you are asking questions when you should instead have been looking for an existing resource?
The ability to search across the massive accumulation of knowledge we have already built up is a primary skill for software development, and the tut-tutting is a way of letting you know that you failed in that endeavor, which should be valuable feedback in itself.