It was based on GPT-3.5 Turbo, but it just switched to a new model, GPT4All-J, which is an improvement over the 3.5-Turbo-based version while still being small enough to run locally. It's still a small model compared to GPT-4, so I can't imagine it competing head to head with the best.
It's fine-tuned on ChatGPT answers, but the actual underlying model is GPT-J, a 6 billion parameter model. That's roughly one thirtieth the size of GPT-3, and its capabilities are about what you'd expect from that number. I installed it, asked a few dozen questions, then uninstalled it when I got garbled nonsense in response. (Not just wrong answers, but incoherent noise, chunks of training prompts repeated back to me unrelated to the question, random Python code, etc.)
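For anyone curious about what the base model is, GPT-J is a public checkpoint you can poke at directly with Hugging Face transformers. A rough sketch below; the model ID and generation settings are my own assumptions, and GPT4All-J itself ships its own fine-tuned, quantized weights rather than this exact checkpoint:

```python
# Sketch: run the base GPT-J 6B checkpoint locally with Hugging Face transformers.
# The model ID "EleutherAI/gpt-j-6b" and the generation settings are assumptions on my part;
# GPT4All-J uses its own fine-tuned weights, not this exact checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 keeps the ~6B parameters around 12 GB of memory; full precision needs roughly double.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "Explain what a context window is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That size gap is really the whole story: a 6B base model, however it's fine-tuned, is going to produce output in a different league from the 175B-class models behind ChatGPT.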
Did you try the new version released today? The initial version had a bad bug that has now been fixed, with drastically better performance. Many users are reporting quite high-quality responses on Discord. Check it out! Link on the homepage…
I've tried it; I remember it being more akin to Cleverbot than GPT. One thing I love about GPT, context recognition, seems somewhat broken here: the model replied with random nonsense unrelated to my prompt each time.
I really hope it manages to catch up eventually. In terms of privacy, a local LLM is light-years ahead of a SaaSS LLM controlled by Microsoft.