
It's possible you were a victim of bugs in the router, and your test prompts were being sent to the less capable non-thinking variants.

From Sam's tweet: https://x.com/sama/status/1953893841381273969

> GPT-5 will seem smarter starting today. Yesterday, the autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber. Also, we are making some interventions to how the decision boundary works that should help you get the right model more often.
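For context, the "autoswitcher" is presumably a per-prompt classifier that routes easy prompts to a cheap non-thinking variant and hard ones to the expensive reasoning variant, with the "decision boundary" being the threshold between the two. A minimal sketch of the idea (every name and heuristic here is hypothetical, not OpenAI's actual implementation):

    def estimate_difficulty(prompt: str) -> float:
        # Stand-in heuristic: longer, question-dense prompts score higher.
        # A production router would use a learned classifier instead.
        return min(1.0, len(prompt) / 500 + prompt.count("?") * 0.1)

    def route(prompt: str, boundary: float = 0.5) -> str:
        # The "decision boundary" is this threshold: lowering it sends more
        # traffic to the expensive reasoning model; raising it saves money.
        if estimate_difficulty(prompt) >= boundary:
            return "gpt-5-thinking"  # slow, reasoning variant
        return "gpt-5-main"         # fast, non-thinking variant

    print(route("What is 2+2?"))  # -> gpt-5-main

Under that reading, "interventions to how the decision boundary works" just means moving the threshold so more prompts land on the thinking model.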





Altman is not trustworthy IMHO. So I have a really hard time taking that tweet at face value.

It seems equally possible that they had tweaked the router to save money (pushing more queries towards the lower-power models) and, after the backlash, are tweaking it back and calling it a bug.

I guess it’s possible they aren’t being misleading, but again, Altman/OpenAI haven’t earned my trust.


I don’t buy it. I don’t trust much of what he says, especially when it’s damage control.

(Not that it really matters whether the auto router was broken, the quantization was too aggressive, the system prompt changed, or the model just sucked, so they had to increase the thinking budget across the board to get a marginal improvement.)



