I find these LLM doomer takes as silly as LLM maximalist takes.

LLMs are literally performing useful functions today and they're not going away. Are they AGI? No, but so what?

There is way too much projecting and philosophizing going on in these comments, and not enough engineering-minded analysis from objective observers.

Is AI hyped? Sure. Are LLMs overshadowing other approaches? Sure. Are LLMs inefficient? Somewhat. Do they have problems like hallucinations? Yes. Do they produce useful output? Yes.
