First it was accessible programming languages (BASIC, Pascal, etc., ~1970s/80s), then standard software (spreadsheets, ERP packages, etc., 1990s), then low code/no code (early 2000s), then model-/requirement-driven development (Rational Rose etc., late 90s/early 2000s), with visual "programming" thrown in every now and again, and now it's back to low code/no code again.
As long as these mechanical systems cannot check requirements for contradictions, cannot handle non-functional requirements such as performance or security, and produce results without correctness proofs that from time to time fail even to compile or are syntactically incorrect, I have no fears.
The current situation may well be a local optimum in which NLP/ML stays stuck for quite a while. It's really hard to tell, but I don't see any reason to start panicking just yet.