Hacker News

I've been trying LLM providers other than OpenAI over the past few weeks: Claude, DeepSeek, Mistral, local Ollama ...

While Mistral might not have the best LLM performance, their UX is IMO the best, or at least a tie with OpenAI's:

- I've never hit a UI bug, while these were common with Claude or OpenAI (e.g. a conversation disappearing, the LLM crashing mid-answer, long-context errors on Claude ...);

- They support most of the features I liked from OpenAI, such as libraries and projects;

- Their app is by far the fastest, thanks to their fast reply feature;

- They allow you to disable web-search.



It is painful, but I have done the same thing: dropping any paid use of OpenAI. For years, basically since I retired from managing a deep learning team at Capital One, I have spent a ton of time experimenting with all LLM options.

Enough! I just paid for a year of Gemini Pro. I use gemini-cli on the free tier for small sessions and switch to my API key for longer sessions to avoid timeouts. Most importantly, for API use I mostly stick with Gemini 2.5 Flash, sometimes Pro, plus Moonshot's Kimi K2. I also use local models on Ollama when they are sufficient (which is surprisingly often).
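For what it's worth, switching gemini-cli between the free tier and API-key billing comes down to the GEMINI_API_KEY environment variable (a minimal sketch of my setup, not official docs; the key value is a placeholder):

```shell
# Small sessions: leave the key unset so gemini-cli falls back to the
# free OAuth tier (sign in with a Google account when prompted).
unset GEMINI_API_KEY
gemini

# Longer sessions: export an API key so requests bill against your own
# quota instead of hitting the free tier's limits.
export GEMINI_API_KEY="your-key-here"   # placeholder, not a real key
gemini -m gemini-2.5-flash
```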

I simply decided that I no longer wanted the hobby of always trying everything. I did look at Mistral again a few weeks ago, and it's a good option, but Google fit my needs better.



