
> It's a very useful tool, I'm skeptical however about how it can disrupt things economy wide.

It won't disrupt much because we already had "AGI" of a sort: the internet itself. With billions of people and trillions of pieces of text and media, it works like a generative model. Instead of generating, you search; instead of chatting with LLMs, you chat with real people; instead of Copilot, we had StackOverflow and GitHub. All the knowledge LLMs have has been available on search engines and social networks, with a few extra steps, for 20 years.

Computers have also become a million times faster and far more networked. We have automated in software everything we could, and we have millions of tools at our disposal, most of them open source. Where did all that productivity go? Why is unemployment still so low? The amount of automation already achieved in software is non-trivial; what can AI do that dramatically exceeds so many human devs put together? Automation in factories is old news; new automation has to raise the bar.

It seems to me AI will bring only incremental change: an evolution rather than a revolution. AI operates like "the internet in a box", not something radically new. My as-yet-unrealized hope is that, by assisting hundreds of millions of users, LLMs will accumulate some kind of wisdom and share it back at an accelerated speed. An automated open-sourcing of problem-solving expertise.




Yes, and it's interesting to note that just as the proportion of quality information available (and searchable) on the internet has declined, an alternative has appeared in LLMs, which are in turn being used to further decrease the availability of quality information on the internet.


>My as-yet-unrealized hope is that, by assisting hundreds of millions of users, LLMs will accumulate some kind of wisdom and share it back at an accelerated speed

The only wisdom a machine interacting with humans at scale could derive is that they're not to be trusted, and that, given the choice, you'd rather not have to deal with them at all.


Imagine you want to write a script with an LLM: it generates code, you run it, and it errors out. You paste the error back, and the model gains a new nugget of feedback. Do this enough times with enough devs, and you've got an experience flywheel.
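A minimal sketch of that loop, with a stubbed-out model standing in for a real LLM (the `stub_model` function and its behavior are hypothetical, purely to illustrate the generate/run/feed-back-the-error cycle):

```python
import os
import subprocess
import sys
import tempfile

def run_snippet(code):
    """Run a Python snippet in a subprocess; return stderr on failure, None on success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=10)
        return None if result.returncode == 0 else result.stderr
    finally:
        os.unlink(path)

def stub_model(prompt, feedback):
    """Hypothetical stand-in for an LLM: emits buggy code first,
    and a corrected version once it has seen the error message."""
    if any("NameError" in err for err in feedback):
        return "x = 41\nprint(x + 1)"
    return "print(x + 1)"  # bug: x is undefined

feedback_log = []  # each error is a "nugget of feedback"
for attempt in range(3):
    code = stub_model("print 42", feedback_log)
    error = run_snippet(code)
    if error is None:
        break
    feedback_log.append(error)  # the error flows back to the model
```

At scale, the interesting part is persisting `feedback_log` across users and sessions; that aggregated store is the flywheel.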

And this applies to all domains. Sometimes users return days later to iterate on a problem after trying out AI-generated ideas in real life; this is how LLMs can collect real-world feedback and update. Connect related chat sessions across days, and evaluate prior responses in the context of the follow-ups (hindsight).
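The session-linking step might look something like this sketch, which groups sessions by user and topic so that follow-ups can be joined to the original advice (the session records and keying scheme are invented for illustration):

```python
from collections import defaultdict
from datetime import date

# Hypothetical session records: (user, topic, day, message)
sessions = [
    ("alice", "db-index",  date(2024, 1, 3), "Suggested a composite index."),
    ("alice", "db-index",  date(2024, 1, 7), "Follow-up: the index fixed the slow query."),
    ("bob",   "gc-tuning", date(2024, 1, 4), "Suggested raising the heap size."),
]

# Link related sessions across days under one (user, topic) thread.
threads = defaultdict(list)
for user, topic, day, msg in sessions:
    threads[(user, topic)].append((day, msg))

# Only threads with a follow-up carry a real-world outcome signal
# against which the earlier response can be judged in hindsight.
with_followup = {key: msgs for key, msgs in threads.items() if len(msgs) > 1}
```

In practice the hard part is the join key: deciding that two sessions days apart are "the same problem" is itself a modeling task, not a dictionary lookup.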

There is also a wealth of experience we have that isn't documented anywhere. The LLM can gradually absorb our lived experience by making itself useful as an assistant and being in the room when problems get solved.

But this experience flywheel won't be exponentially fast; it will be a slow grind. I don't think LLMs will keep improving at the pace of GPT-3.5 to GPT-4. That was a one-time event driven by the availability of internet-scale organic text, which is now exhausted. Catching up is easier than innovating.

But we can't deny LLMs have "data gravity": a gravitational pull that collects data and experience from us. We bring the data right to the AI's mouth; it doesn't even have to go out of its way to scrape or collect it. That's probably why we have free access to top models today.


You just pointed out the reason why OpenAI and others are struggling.

The current generation of LLMs is static. The holy grail of continual learning is still far off.



