This is outright false. I have used ChatGPT many times over the last couple of months, and I have caught it giving me non-working code, unfinished code, and terribly buggy code. When you point this out, it will say, "Oh, sorry about that, here is an updated version," and I've caught another bug in that version, and another after that. If you are telling me the quality of code ChatGPT gives you is high, then it pains me to say it, but you must not produce high-quality code yourself.
When you used Google before ChatGPT, did you restrict yourself to the "I'm Feeling Lucky" button and then use that single result as your unmodified production code? Did you never adapt the code you came across?
Of course not, that's ridiculous. You probably searched, read a few Stack Overflow comments, found a relevant GitHub repo, a library for Python or your language of choice, and probably also a SaaS offering focused solely on the 3 lines of code you needed. You quickly parsed all that and decided to modify some code from one of the SO comments for your needs. Next time, you looked past half the junk, went straight to the first SO result, and were able to tweak and use it. The time after that, it didn't help directly, but it did inspire some custom code for the problem; at the very least you knew what not to try.
My point being: AI is useful. It's not meant to be a first-result-is-final-answer type of solution; if that's how you use it, you will have issues.
How can you say that something is outright false if there is no fact or claim you can disprove? You're responding to someone you don't know, and you have no idea what they are working on.
I’m (not OP!) a cloud engineer but also work on a lot of FE (React) code for internal tools. ChatGPT has saved me countless hours (literally tens a month) writing super simple code that I could easily write myself but that just takes time to type out. After months of using it, I still find myself quite excited whenever cGPT saves me another hour. We also use Retool, but I find myself writing code ‘myself’ more often since cGPT launched.
No, I wouldn’t just copy paste production code handling PII, but prototyping or developing simple tools is sooooo much faster, for me.
Sure, it doesn't nail it 100% on the first prompt 100% of the time. Sometimes it takes a few prompts. It's no big deal. If you can't get it to write effective code, either you're working in a very niche area, or you haven't figured out how to use it properly.
Another reason someone can’t get it to write effective code is if they don’t know how to code or aren’t a very good programmer.
I use it a ton. Most of the time it’s very helpful; sometimes I can’t get it to write effective code. If the code it outputs doesn’t meet my standards, I just don’t use it. But I know what I’m looking for, and when ChatGPT generates it, it not only saves me a shitload of time, but more importantly it saves me a ton of mental energy that I can spend elsewhere. The biggest thing for me is that using ChatGPT helps my brain do fewer “context switches” between high-level business logic and low-level implementation details. By staying “high level” I’m able to accomplish more each day because I don’t get lost in the sauce as often.
I often have to “upgrade” the code myself: adding tests, writing better comments, or modifying the data structures a bit. Sometimes I tell ChatGPT to do this, sometimes I do it myself. But it’s been very helpful overall.
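That “upgrade” pass might look something like this. This is a hypothetical sketch, not actual ChatGPT output: `group_by_key` stands in for a plausible generated helper, and the test below it is the kind of thing you'd add by hand before committing.

```python
# Hypothetical generated helper: group a list of dicts by one key.
def group_by_key(records, key):
    grouped = {}
    for record in records:
        # setdefault creates the list on first sight of each key value
        grouped.setdefault(record[key], []).append(record)
    return grouped


# Hand-written "upgrade": a test added after reviewing the generated code.
def test_group_by_key():
    records = [
        {"team": "a", "name": "x"},
        {"team": "b", "name": "y"},
        {"team": "a", "name": "z"},
    ]
    grouped = group_by_key(records, "team")
    assert set(grouped) == {"a", "b"}
    # Original ordering within each group is preserved.
    assert [r["name"] for r in grouped["a"]] == ["x", "z"]


test_group_by_key()
```

The generated part is often fine as-is; the test is what turns "looks right" into something you can actually trust in a repo.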
The big takeaway is that your output will only be as good as your own programming skill, regardless of whether you use ChatGPT or write it yourself.
I concur. It's just like any other tool: it's only as good as the person using it. I just can't understand the resistance of people in this field. I was a naysayer on a number of things, like Docker when it first came out, because it didn't solve any of my problems at the time. Then k8s came out, Docker was a pivotal part of that solution, and k8s solves many problems.
ChatGPT writing code so you don't have to, I just can't conceptualize how that's not an instant win for just about everyone.
Is it 'outright false'? The code it creates can only be as good as the prompt. It's just GIGO all over again...
I got it to write exactly the test I wanted for a snippet of code on the third prompt attempt, by specifying exactly the two technologies I wanted it to use and one keyword describing an idiom I needed. It was slightly faster than doing it myself would have been.
Technically it was test code, not production code, but had it been my code rather than just some code I was looking at I would have committed the test code it wrote to the repo with zero reservations.
No, because junior devs usually improve over time.
I've tried Copilot and a few other AI codegen tools. Aside from producing overall low quality/nonworking code, the only times they seem to get better long-term are when a new update to the model comes out.
I should have been clear but ChatGPT was one of the "other AI codegen tools" I mentioned, especially as it's the one I used most recently. I tried it for a month or so but then canceled my subscription. I got some use out of it for answering questions for friends who were learning CS for the first time in languages I didn't know, but I didn't get much else from it which felt like it was high enough quality that it really saved me time or effort.
Edit:
And to contrast with junior developers: pairing with them not only helps me figure out the requirements of the things we're working on--which admittedly ChatGPT does do, though I think that's mostly by virtue of rubber ducking--but it also helps me find approaches I wouldn't have thought of, and encourages me to write more maintainable code when I see another person's eyes start glazing over.
Trying to claim that someone else’s personal experience is factually wrong? The internet teaches everyone great arguing quips, sure. But “outright false” actually MEANS something. Your comment is all emotion.