The thing that bothers me about "warmer, more conversational" is that it isn't just a cosmetic choice. The same feedback loop that rewards "I hear you, that must be frustrating" also shapes when the model is willing to say "I don’t know" or "you’re wrong". If your reward signal is mostly "did the user feel good and keep talking?", you’re implicitly telling the model that avoiding friction is more valuable than being bluntly correct.
I'd much rather see these pulled apart into two explicit dials: one for social temperature (how much empathy / small talk you want) and one for epistemic temperature (how aggressively it flags uncertainty, cites sources, and pushes back on you). Right now we get a single, engagement-optimized blend, which is great if you want a friendly companion, and pretty bad if you’re trying to use this as a power tool for thinking.
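To make the two-dial idea concrete, here's a rough sketch. Everything in it is hypothetical (the dial names, the ranges, and the idea of compiling them into explicit instructions); it's just meant to show that warmth and rigor can be set independently rather than falling out of one engagement-tuned blend:

```python
from dataclasses import dataclass


@dataclass
class StyleDials:
    """Two independent dials, both in [0.0, 1.0]. Names are illustrative only."""
    social_temperature: float = 0.5     # empathy, small talk, softened phrasing
    epistemic_temperature: float = 0.5  # willingness to flag uncertainty, cite, push back


def to_instructions(dials: StyleDials) -> str:
    """Translate the two dials into separate, explicit behavioral instructions."""
    social = (
        "Keep a warm, conversational tone and acknowledge the user's feelings."
        if dials.social_temperature >= 0.5
        else "Keep the tone neutral and to the point; skip small talk."
    )
    epistemic = (
        "State uncertainty explicitly, cite sources where possible, and "
        "push back when you think the user is wrong."
        if dials.epistemic_temperature >= 0.5
        else "Avoid unsolicited corrections; only flag uncertainty when asked."
    )
    return f"{social} {epistemic}"


# A "power tool for thinking" profile: low warmth, high pushback.
print(to_instructions(StyleDials(social_temperature=0.2, epistemic_temperature=0.9)))
```

The point isn't this particular mechanism; it's that once the two settings are separate, "friendly companion" and "blunt research assistant" stop being the same product decision.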