In my experience, it’s the opposite: asking GPT-4 for help is most useful when I don’t know how to do something. Once I know what I’m doing, the mistakes become more obvious and more annoying. I’ve learned something, but the chatbot makes the same mistakes as before, and it will keep making them.
Ironically, that’s because people can learn and chatbots can’t. (In the short term, that is; new releases will be better.)
The very article you're commenting on shows an example where Copilot teaches you the wrong thing. That's why you shouldn't learn with AI. AI is very useful as autocompletion, a search engine, a bug detector... Imagine, very pessimistically, that the AI hates you and wants to sabotage you: are there still use cases where such an entity could help you? If so, those are the workflows where you can safely use an incompetent (rather than malevolent) AI as well.
Let's say there's some not very readable code and you don't know what it does. You ask the AI what it does, and it tells you. Now you reread the code and it all makes sense, and based on your own coding experience you can be certain the AI hasn't misled you into misinterpreting it. That's one example of safe AI usage.
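To make that concrete, here's a minimal hypothetical sketch in Python (the function f and the quoted AI answer are invented for illustration). The point is that the AI's explanation is cheap to verify once you have it, and your own reading is what catches any imprecision:

    # Hypothetical "not very readable" code you might paste into a chatbot:
    def f(xs):
        return sorted(xs)[len(xs) // 2]

    # Suppose the AI answers: "f returns the median of xs."
    # Rereading the code with that hint, you can check the claim yourself:
    # sorted(xs) orders the list, and index len(xs) // 2 is the middle
    # element when len(xs) is odd. For even-length lists it returns the
    # upper of the two middle values, not the true median; exactly the
    # kind of imprecision your own reading catches after the AI's summary.
    assert f([3, 1, 2]) == 2        # odd length: the true median
    assert f([4, 1, 3, 2]) == 3     # even length: upper-middle, not 2.5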