I'll give you an example. I took advantage of some free time these days to finally implement a few small services on my home server. ChatGPT (3.5 in my case) has read the documentation of every language, framework, and API out there. I asked it to start with Python3 http.server (because I know it's already on my little server) and write some code that would respond to a couple of HTTP calls and do this and that. It created an example that customized the do_GET and do_POST methods of http.server, which I didn't even know existed. It also did well when I asked it to write a simple web form. It didn't do so well when things got more complicated, but at that point I already knew how to proceed. I finished everything in three hours.
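Just to give an idea of the shape of what it produced, here is a minimal sketch along the same lines: a handler that subclasses http.server's BaseHTTPRequestHandler and overrides do_GET and do_POST. The /status path and the echo behaviour are made-up placeholders, not my actual services.

```python
# Minimal sketch: customizing do_GET and do_POST on http.server.
# The /status endpoint and the POST echo are placeholder behaviour.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to a simple status check.
        if self.path == "/status":
            body = json.dumps({"ok": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def do_POST(self):
        # Echo back whatever was posted.
        length = int(self.headers.get("Content-Length", 0))
        data = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```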
What did it save me?
First of all, the time to discover the do_GET and do_POST methods. I know I should have read the docs, but it's like asking a colleague "how do I do that in Python?" and getting the correct answer. It happens all the time; sometimes I'm the one asking, sometimes I'm the one answering.
Second, the time to write the first working code. It was by no means complete, but it worked and was good enough to be a first prototype. It's easier to build on that code.
What didn't it save me? All the years spent learning to recognize what the code written by ChatGPT did and how to go on from there. Without those years of my own I would have been lost anyway, and maybe I wouldn't have been able to ask it the right questions to get the code.
I've been learning boring old SQL over the last few months, and I've found the AIs quite helpful at pointing out some things that are perhaps too obvious for the tutorials to call out.
I don't mind taking suggestions about code from an AI, because I can immediately verify the AI's suggestion by running the code, making small edits, and testing it.
But this is my pet peeve when people claim this: it only holds because the AI code is small and constrained in scope. Otherwise the very claim that humans can easily verify AI code quickly would, like, violate Rice's Theorem.
That's an underutilization of complexity theory. Since not every problem has to be framed as an undecidable question about arbitrary Turing machines, there are better theorems to apply than Rice's Theorem, such as:
* IP = PSPACE (you can verify the correctness of any PSPACE computation with a polynomial-time verifier)
* MIP = NEXPTIME (you can verify the correctness of any NEXPTIME computation with two non-communicating provers)
* NP = PCP(log n, 1) (you can verify the correctness of any NP statement using O(log n) bits of randomness and sampling just O(1) bits from a proof)
What these mean is that a human is indeed able to verify the correctness of the output of a machine with stronger computational abilities than the human has.
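As a down-to-earth illustration of that asymmetry (my own sketch, not taken from any of the theorems above): checking a proposed satisfying assignment for a CNF formula takes time linear in the size of the formula, even though finding one may take exponential time. The verifier is much weaker than whatever produced the answer, yet it can still confirm that the answer is correct.

```python
# Verify an NP witness: a satisfying assignment for a CNF formula.
# clauses: list of clauses, each a list of ints
#          (positive int = variable must be True, negative = False)
# assignment: dict mapping variable number -> bool
def verify_sat(clauses, assignment):
    for clause in clauses:
        # A clause is satisfied if at least one literal evaluates to True.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied
    return True

# (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: False, 3: True}))    # True
print(verify_sat(clauses, {1: False, 2: False, 3: False}))  # False
```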