
Well, I presented the statement wrongly. What I mean is that the use cases for LLMs are trivial things, so they shouldn't be expensive to operate.

And the $1 cost in your case is heavily subsidized; that price won't hold up for long, assuming computing power stays the same.



Cheaper models might be around $0.01 per request, and that's not subsidized: many different providers offer open-source models with quality similar to proprietary ones. On-device generation is also an option now.
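As a rough illustration of how a request can land around a cent or less (the per-token prices and token counts below are assumed placeholders, not quoted provider rates):

  # Back-of-envelope request cost: tokens * price per token.
  # All numbers here are illustrative assumptions, not actual rates.
  def request_cost(input_tokens, output_tokens,
                   price_in_per_mtok, price_out_per_mtok):
      """Cost in dollars for one request, given per-million-token prices."""
      return (input_tokens * price_in_per_mtok +
              output_tokens * price_out_per_mtok) / 1_000_000

  # A modest prompt and reply on a hypothetical cheap open model:
  print(request_cost(2_000, 1_000, 0.5, 1.5))  # 0.0025 -> a fraction of a cent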

The $1 figure refers to Claude Opus 4. I doubt it's subsidized; it's already much more expensive than the open models.




