Having played a bit with AI coding, I like this view. It's powerful, but context will always be an issue; turning real-world problems into working code is a skill in its own right, and that won't change. Superpowered, yes. For me it's all the 'dull' parts of typing stuff out that I look forward to skipping. An early study of workplace impact, a two-year study of a call centre, found a large improvement for less-skilled staff, as knowledge of best practice from those really excelling at customer service was rapidly propagated to juniors; seniors saw a much smaller increase in performance metrics. Now there's no excuse for me not to get AI to write my comments, a bunch of tests, some APIs and database interconnects, etc. I've always taken a modular, iterative approach: create a working basic model with a good foundation, then extend it, keeping it working as I build up to the final deliverable. Tempting just to go and sit in a cave for a couple of months and come back when the tools are a little more refined :)
Jesus, this just triggered another nightmare scenario I should have thought of earlier.
People are going to have it write and/or comment a function whose purpose is non-trivial and not straight-forward.
Whether the comment actually says what it should, and whether the functionality matches it, will be checked imperfectly or not at all.
There will be no indication of what was written by AI, so the only option is to assume both were written competently and with the same purpose. But when they don't actually match, there's no way to know which one is wrong except to check all the other code that interacts with the function and work out what has been relying on the originally intended behaviour, what breaks subtly, and what has quietly come to depend on the incorrect implementation.
This is absolutely something that already happens with fully human developers, but it seems likely to be much more frequent and not caught as soon with AI assistance.
This also seems like a failure mode that could go pathologically wrong on a regular basis for TDD types.
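A minimal sketch of this failure mode in Python (the names `price_with_tax` and `display_net` are made up for illustration): the docstring promises one tax rate, the body applies another, and a caller plus a generated test have quietly come to depend on the wrong behaviour, so "fixing" either half in isolation breaks the other.

```python
def price_with_tax(cents: int) -> int:
    """Return the price in cents including 10% sales tax."""
    # The docstring says 10%, but the body actually applies 15%.
    return cents * 115 // 100


def display_net(gross_cents: int) -> int:
    """Recover the pre-tax price from a gross total."""
    # This caller was written (or test-driven) against the buggy
    # behaviour, so it silently depends on the 15% rate.
    return gross_cents * 100 // 115


# A TDD-style test locks in whatever the code did when the test was
# generated, not what the docstring promised:
assert price_with_tax(1000) == 1150   # the docstring implies 1100
assert display_net(price_with_tax(1000)) == 1000
```

Correcting `price_with_tax` to match its docstring makes its own test fail and breaks `display_net`; correcting the docstring instead changes the function's contract for every other caller. Without knowing which half was the intent, you have to audit both.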
> function whose purpose is non-trivial and not straight-forward
Nah. It will just be tons and tons of trivial, straightforward code, calling other trivial code, that calls more trivial code.
Yes, some of it will be created manually, but if you go with "the AI created it" you will be right 99% of the time. And obviously, the more code, the more it will fail; but people always try to fix this with more code.
And then you'll be paying for the AI assistance to help navigate all the code that it wrote. That's in the Enterprise tier. Set up a sales call to learn more.
An AI may be more likely to hallucinate the wrong comment, but also much less likely to forget to update the comment when the code changes. The net result could be better comments.