
The vast majority of the time, especially with code, I'll point out a specific mistake, say something is wrong, and just get the typical "Sorry, you're right!" then the exact same thing back verbatim.


I've been getting this a lot. Especially with Rust, where it will use functions that don't exist. It's maddening.
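To illustrate the kind of thing the parent describes: a model will often suggest a plausible-sounding but nonexistent method, where the real standard-library call is spelled differently. The hallucinated name below is hypothetical; the working line uses the actual `Iterator::position` API.

```rust
fn main() {
    let names = vec!["alice", "bob"];

    // An LLM might confidently emit something like
    //     names.find_index("bob")
    // but no such method exists on Vec (hypothetical hallucination).
    // The real std API goes through the iterator:
    let idx = names.iter().position(|&n| n == "bob");
    assert_eq!(idx, Some(1));

    println!("{:?}", idx);
}
```

The compiler catches the fake method immediately, which is why Rust users tend to notice this failure mode more than users of dynamic languages.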


Same thing happens in any language or platform with less than billions of lines of OSS code to train on… in some ways I think LLMs are creating a "convergent API", in that they seem to assume any API available in any of their common languages is available in ALL of them. Which would be cool, if it existed.
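A concrete case of this "convergent API" effect: Python strings have a built-in `.title()` method, and a model may assume Rust's `str` has one too (it does not). A hand-rolled Rust equivalent looks like this; the helper name `title_case` is just illustrative.

```rust
// Python offers `"hello world".title()`. Rust's `str` has no such
// method, so an LLM borrowing the Python API produces code that
// won't compile. A sketch of what you'd actually write instead:
fn title_case(s: &str) -> String {
    s.split_whitespace()
        .map(|word| {
            let mut chars = word.chars();
            match chars.next() {
                // Uppercase the first char, keep the rest as-is.
                Some(first) => first.to_uppercase().collect::<String>() + chars.as_str(),
                None => String::new(),
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    assert_eq!(title_case("hello world"), "Hello World");
    println!("{}", title_case("hello world"));
}
```

The gap between "this method exists somewhere" and "this method exists here" is exactly what the training data blurs together.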


It doesn't even provide the right method names for an API in my own codebase when it has access to the codebase via GitHub Copilot. It just shows how artificially unintelligent it really is.


Agreed. I've taken to uploading all relevant documentation as a text file along with my prompt. Even that doesn't always work.


I get this except it tells me to do what I already did, and repeats my own code back to me.


Yes, that is my experience as well. But the previous comment seems to be asking whether the LLM would be capable of identifying the mistakes and fixing them itself. So, would that work?



