
But the thing is, LLMs are already incredibly cheap to operate compared to the alternatives, both for trivial tasks and for complex ones.


Well, Cursor recently got heat for raising prices and for opaque usage limits, while Anthropic's Claude was reported to have gotten worse due to optimization. IMO current LLM pricing is not sustainable, and prices can be expected to increase sooner or later.

Personally, until models comparable to Sonnet 3.5 can be run locally on a mid-range setup, people need to be wary that the price of LLMs can skyrocket.


You can already run a large LLM (comparable to Sonnet 3.5) locally on CPU with 128 GB of RAM, which costs under 300 USD and can be partly offset with swap space. Obviously, response speed will be slower, but I can't imagine people paying much more than 20 USD just to avoid waiting an extra 30-60 seconds per response.
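
For anyone curious, here's a minimal sketch of what that looks like in practice, using llama-cpp-python with a quantized GGUF model (the model file, context size, and thread count are placeholders, adjust them for your own setup):

    # minimal sketch: run a quantized model on CPU with llama-cpp-python
    # (model path and thread count are placeholders for your own setup)
    from llama_cpp import Llama

    llm = Llama(
        model_path="Qwen2.5-72B-Instruct-Q4_K_M.gguf",  # example 72B quant, ~45 GB, fits in 128 GB RAM
        n_ctx=8192,     # context window
        n_threads=16,   # match your physical core count
    )

    out = llm("Explain swap space in one sentence.", max_tokens=128)
    print(out["choices"][0]["text"])

The 4-bit quant keeps the whole model in RAM; anything spilling to swap is where most of the extra 30-60 seconds of latency comes from.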

And obviously, consumer hardware is increasingly being optimized for running models locally.



