Hacker News

Please, for the love of God, stop your models from always answering with essays or littering code with tutorial-style comments. Almost every task devolves into "now get rid of the comments". It seems impossible to prevent this.

And thinking is stupid. "Show me how to generate a random number in python"... 15s later you get an answer.
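For context, the question the comment complains about is answerable with a stdlib one-liner, no reasoning required:

```python
import random

# A uniform random integer between 1 and 10 inclusive.
n = random.randint(1, 10)
print(n)
```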



Take some time to understand how the technology works, and how you can configure it yourself when it comes to thinking budget. None of these problems sound familiar to me as a frequent user of LLMs.
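For what it's worth, the Gemini API does expose this knob. A minimal sketch using the `google-genai` Python SDK (assuming a `GEMINI_API_KEY` is set; `thinking_budget=0` disables thinking on models that support it, and a small positive value caps the reasoning tokens instead):

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Show me how to generate a random number in Python.",
    config=types.GenerateContentConfig(
        # 0 = no thinking; raise this to allow a bounded reasoning budget.
        thinking_config=types.ThinkingConfig(thinking_budget=0),
        # System instructions can also rein in comment verbosity.
        system_instruction="Answer concisely. Do not add tutorial comments.",
    ),
)
print(response.text)
```

This is a configuration sketch, not a claim about which models honor which budgets; check the model's documentation for supported ranges.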


Take some time to compare the output of Gemini vs other models instead of patronising people.


I basically only use Gemini, sorry.


They have to do that; it's how they think. If they were trained not to, they'd produce lower-quality code.


So why don't Claude and OpenAI models do this?


o3 does, no? 2.5 Pro is a thinking model. Try Flash if you want faster responses.


No. We're not talking about a few useful comments, but verbosity where the number of comment lines typically exceeds the actual code written. It must think we're all stupid, or it's writing a tutorial. Telling it not to has no effect.


Maybe you hit a specific use case where the LLM reverts to patterns from its training data?

I had a somewhat similar problem with Claude 3.7: I had a class named "Workflow" and it went nuts, producing code and comments I didn't ask for, all related to some "workflow" concept it was trying to replicate rather than my code. It was strange.



