The answer can, and certainly will, fill many books, dissertations, PhD theses, etc.
Without getting too philosophical, although one would not be unjustified in going there, and just focusing on my own small professional corner (software engineering): these LLM developments mostly kill an important part of thinking and might ultimately make me dumber. For example, I know what a B-tree is and can (could) painstakingly implement one when and if I needed to, a process that would be long, full of mistakes, and full of learning. Now, just having a rough idea will be enough, and most people will never get the chance to do it themselves.
Now, the B-tree is an intentionally artificial example, but you can extrapolate it to more practical or realistic ones.
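To make the point concrete, here is a minimal, hypothetical sketch (not anything from the thread; the names BTreeNode and search are made up for illustration). The skeleton is easy to hold in your head; the painstaking, instructive part is everything this sketch leaves out: insertion with node splits, deletion with merges, keeping every node within its key bounds.

```python
class BTreeNode:
    def __init__(self, leaf=True):
        self.keys = []       # sorted keys stored in this node
        self.children = []   # len(children) == len(keys) + 1 for internal nodes
        self.leaf = leaf

def search(node, key):
    """Return True if key is present in the subtree rooted at node."""
    i = 0
    # find the first key >= the one we're looking for
    while i < len(node.keys) and key > node.keys[i]:
        i += 1
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if node.leaf:
        return False
    # descend into the child between keys[i-1] and keys[i]
    return search(node.children[i], key)

# Tiny hand-built example: root [10, 20] with three leaf children.
root = BTreeNode(leaf=False)
root.keys = [10, 20]
for ks in ([2, 5], [12, 17], [25, 30]):
    leaf = BTreeNode(leaf=True)
    leaf.keys = ks
    root.children.append(leaf)

print(search(root, 17))  # True
print(search(root, 21))  # False
```

Writing the lookup is a pleasant afternoon; writing the balancing logic, and debugging it, is where the actual learning the parent comment describes would happen.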
On a more immediate front, there's also the matter of the threat to my livelihood. I have significant expenses for the foreseeable future, and if my line of work gets a 100x, or even 10x, average productivity boost, there just might be fewer jobs going around. Like a farm ox watching the first internal combustion tractors.
I can think of many other reasons, but those are the most pressing and personal to me.
Not the GP, but we're descending into a world where we just recycle the same "content" over and over. Nothing will be special; there'll be nothing to be proud of. Just constant dopamine hits administered by our overlords. Read Brave New World if you haven't.
I have, and I don't see the connection with AI-assisted coding.
If your comment was about "generative AI in general", then I think this is the problem with trying to discuss AI on the internet at the moment. It quickly turns into "defend all aspects of AI or else you've lost". I can't predict all aspects of AI. I don't like all aspects of AI, and I can't weigh up the pros and cons of a vast number of distinct topics all at once. (And neither, I suspect, can anyone else.)