
Base GPT-4 was highly calibrated. Read OpenAI's technical report.

Also, this paper on GPT-4's performance on medical challenge problems confirmed its high calibration for medicine: https://arxiv.org/abs/2303.13375




Thanks, but I didn't find any details about performance before vs. after reinforcement training. Looking to understand more about the assertion that hallucinations are introduced by reinforcement training.


https://arxiv.org/abs/2303.08774 The technical report has before-and-after comparisons. It's a bit worse on some tests, and they pretty explicitly discuss calibration (how well the model's confidence on a problem predicts its accuracy in actually solving it).
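
Concretely, calibration in this sense is often summarized with something like expected calibration error: bucket answers by the model's stated confidence and compare each bucket's average confidence to its actual accuracy. A minimal sketch in Python (not code or data from either paper; the confidence/correctness lists at the bottom are made-up placeholders):

    # Minimal sketch of expected calibration error (ECE), assuming you have
    # per-answer confidences and a 0/1 flag for whether each answer was correct.
    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        # Bucket predictions by confidence, compare mean confidence to
        # empirical accuracy in each bucket, and average the gaps
        # weighted by how many predictions fall in each bucket.
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
                ece += in_bin.mean() * gap
        return ece

    # Hypothetical example: a well-calibrated model's 0.7-confidence answers
    # should be right about 70% of the time, its 0.9-confidence answers ~90%, etc.
    confs = [0.9, 0.8, 0.7, 0.6, 0.95]   # model's stated confidence per answer
    right = [1,   1,   0,   1,   1]      # whether each answer was actually correct
    print(expected_calibration_error(confs, right))

A lower ECE means the model's confidence tracks its accuracy more closely, which is the property the report shows degrading after post-training.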

Hallucinations are a different matter.



