imo, it's the same reason Grace Hopper designed COBOL so programs could be written in English-like statements instead of math notation.
What natural language processing gives us is just a much smarter (and, in many ways, dumber) parser that can attempt to infer intent and can be told how to recover from its mistakes.
Personally I'm a skeptic, since I've seen some hilariously bad hallucinations in generated code (and unlike a human engineer, who will say "idk but I think this might work", the LLM says "yessir, this is the solution!"). If you have to double-check every output manually, it's not that much better than learning the material yourself. However, at least with programming tasks, LLMs are fantastic at giving wrong answers with the right vocabulary, which makes it possible to verify and find a solution through authoritative sources and references instead of blindly analyzing a problem or paying a human a lot of money to answer your query.
For example, I don't use LLMs to give me answers. I use them to help explore a design space, particularly by giving me the vocabulary to ask better questions. And that's the real value of a conversational model today.
I think you've nailed a subtlety (and a major doubt) I've been trying to articulate about LLM code helpers from day one: the difficulty in programming is reducing a natural language problem to (essentially) a proof. I suspect LLMs are great at transferring style between two sentences, but I don't think that's the same as proof generation! I know work is being done in this area, but the results I've seen have been weird. Maybe transferring style won't work for math as easily as it does for spoken language.