
I'm pretty confused trying to reconcile all the reports online with my own experience. From what I've tried, ChatGPT does not _understand_ code at all, and there are many inconsistencies in what it says. The "confidently giving a wrong answer" problem is very real, even if the answer might look very correct at first sight. This holds across all the topics I've tried.

When people say they implement complex tasks with ChatGPT, I have to assume that it's a highly iterative process and/or that they are doing part of the design and problem solving themselves, because even for a simple task I could not rely only on the bot's reasoning. (Maybe it gets things right in one shot sometimes - but my sense is that "on average" that's not the case at all.)

All that said - the progress here is really impressive, and I'm still having a hard time wrapping my head around what this can mean for the future.



Confirmation bias - people want it to be a silver bullet so that they can make a blog post about how ChatGPT is amazing.


Exactly. As the OP of the blog post, I had to do a lot of handholding to get it to understand the syntax of an extremely tiny language. On the other hand, I’ve messed around with Codex and other models before, and explaining things in normal English, as though I were having a conversation rather than just listing commands, made it much easier. I’m excited not because of what exists right now, but because this shows so much promise even just 1 or 2 papers down the line :D


What a time to be alive!



