o3 is definitely usable; as I said, it solved about half of the coding tasks I tried. My problem with your original comment was the claim that "bottlenecks in genAI is not the knowledge or accuracy". Knowledge and accuracy are absolutely the main bottlenecks for LLMs today. Hallucination rates for the o3 and o4-mini models have doubled compared to o1, and OpenAI does not understand why. If my AI model is not accurate, and if it makes up fake knowledge, I don't care how fast it is: I will spend more time double-checking its output than I saved by getting that output faster.