This argument warrants introspection for "crusty devs", but it also has holes. A compiler is tightly engineered and dependable. I have never had to write assembly because I know that my compiled code 100% represents my abstract code, and that any functional problems live in that abstract code. That is not true of AI coding. Additionally, AI coding is not just an abstraction over code, but an abstraction over understanding. When my code compiles, I don't need to worry that the compiler misunderstood my intention.
I'm not saying AI is not a useful abstraction, but I am saying that it is not a trustworthy one.
I do still write assembly sometimes, and it's a valued skill precisely because it remains necessary and not everyone can do it. Compilers haven't obsoleted writing assembly by hand for some use cases, and LLMs will never obsolete actually writing code either. I would be incredibly cautious about putting all your eggs in the AI basket and letting a skill atrophy that fewer and fewer people will have.
How are a compiler and an LLM equivalent abstractions? I'm also seriously doubtful of the 10x claim any time someone brings it up when AI is being discussed. I'm sure they can be 10x for some problems, but they can also be -10x. They're not as consistently predictable (and good) as compilers are.
The "learn to master it or become obsolete" sentiment also doesn't make a lot of sense to me. Isn't the whole point of AI as a technology that people shouldn't need to spend years mastering a craft to do something well? It's literally trying to automate intelligence.