The vast majority of the time, especially with code, I'll point out a specific mistake, say something is wrong, and just get the typical "Sorry, you're right!" followed by the exact same code back verbatim.
The same thing happens in any language or platform without billions of lines of OSS code to train on. In some ways I think LLMs are creating a "convergent API": they seem to assume any API available in one of their common languages is available in ALL of them. Which would be cool, if it existed.
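A hypothetical sketch of that cross-language bleed, using a well-known case: a model that has seen far more Python than TypeScript reaching for Python's `list.append` on a JS array, where no such method exists. The variable names here are just for illustration.

```typescript
// "Convergent API" effect: Python's list.append leaking into TypeScript.
const ids: number[] = [];

// What a Python-heavy model may emit (commented out so this file compiles):
// ids.append(42);   // error: Property 'append' does not exist on type 'number[]'

// The actual JavaScript/TypeScript array API:
ids.push(42);

console.log(ids); // [42]
```

In plain JavaScript the same call only fails at runtime with a TypeError, which is exactly when these hallucinated methods tend to surface.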
It doesn't even provide the right method names for an API in my own codebase, even though it has access to the codebase via GitHub Copilot. It just shows how artificially unintelligent it really is.
Yes, that is my experience as well. But the previous comment seems to be asking whether the LLM would be capable of identifying the mistakes and fixing them itself. So, would that work?