The rip-off wasn’t just the pricing. It was the whole model of scale for scale’s sake: bigger context windows, bigger GPUs, more tokens, with very little introspection into whether the system is actually learning or just regurgitating at greater cost.

Most people still treat language models like glorified autocomplete. But what happens when the model starts to improve itself? When it gets feedback, logs outcomes, and refines its own process, all locally, without calling home to some GPU farm?
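To make that concrete, here's a minimal sketch of what such a local loop might look like: generate, score, log the outcome, refine, repeat. Everything here is a hypothetical stand-in: `run_model`, `score_output`, and the refinement rule are placeholders for whatever local model, evaluator, and update strategy you'd actually use, not any real API.

    # Sketch of a local generate -> score -> log -> refine loop.
    # All function bodies are toy stand-ins; swap in a real local model
    # (e.g. llama.cpp bindings) and a real evaluator.
    import json
    from pathlib import Path

    LOG = Path("outcomes.jsonl")

    def run_model(prompt: str) -> str:
        """Stand-in for a locally hosted model call."""
        return f"draft answer for: {prompt}"

    def score_output(output: str) -> float:
        """Stand-in evaluator: any local check (tests, heuristics, a judge model)."""
        return min(1.0, len(output) / 100)

    def refine(prompt: str, score: float) -> str:
        """Toy refinement rule: push for more detail when the score is low."""
        return prompt + " (be more specific)" if score < 0.8 else prompt

    def feedback_loop(prompt: str, rounds: int = 3) -> str:
        output = ""
        for i in range(rounds):
            output = run_model(prompt)
            score = score_output(output)
            # Log every outcome locally -- the introspection step.
            with LOG.open("a") as f:
                f.write(json.dumps({"round": i, "prompt": prompt, "score": score}) + "\n")
            prompt = refine(prompt, score)
        return output

    print(feedback_loop("Explain KV-cache eviction"))

The point isn't the toy scoring function; it's that the entire loop, including the outcome log the system learns from, fits on one machine with no round trip to a datacenter.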

At that point, the moat is gone. The stack collapses inward. The $100M infernos get outpaced by something that learns faster, reasons better, and runs on a laptop.


