It will never happen outside of a few regulated industries because it would be perceived as a loss of "freedom". I think the current situation creates an illusory, anarchic freedom of informality that in practice leads to proprietary lock-in, vulnerabilities, bugs, incompatibility churn, poorly prioritized feature development, and a tyranny of chaos and tech debt.
There are too many languages, too many tools, too many (conflicting) conventions (especially ones designed by committee), and too many options.
Systems, tools, and components that rarely break compatibility and are formally verified far beyond the rigor of seL4, to the point of being (essentially) free of implementation error, would be far more valuable than tools that lack even basic testing or self-tests and that ship without digital signatures proving chain-of-custody. Being able to model a program or library so completely that its behavior can be checked for correctness by both "whitebox" and "blackbox" methods would let some code genuinely stand the test of time. Choosing a smaller number of standard languages, tools, and components makes that effort cheaper and more feasible.
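To make the chain-of-custody point concrete, here is a minimal sketch of what even the baseline could look like: verifying a release artifact's signature against a pinned publisher key before trusting it. This is illustrative only; the function and parameter names are hypothetical, though the calls are from Python's widely used `cryptography` package.

    # Minimal sketch: refuse to trust an artifact unless it carries a valid
    # Ed25519 signature from a publisher key we already pinned.
    # (Hypothetical names; real API from the `cryptography` package.)
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.exceptions import InvalidSignature

    def verify_artifact(artifact: bytes, signature: bytes, publisher_key: bytes) -> bool:
        """Return True only if `artifact` was signed by the pinned publisher key."""
        key = Ed25519PublicKey.from_public_bytes(publisher_key)  # 32-byte raw key
        try:
            key.verify(signature, artifact)  # raises InvalidSignature on mismatch
            return True
        except InvalidSignature:
            return False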
Maybe in 100 years, out of necessity, there will be essentially one programming language that dominates all others (a power-law distribution) for humans, and it will be some sort of formal behavioral-model specification language against which an LLM will generate the tests and the machine code that implement, manage, and satisfy it.
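As a rough sketch of that idea using today's tooling as a stand-in: the "spec" is a set of behavioral properties, and the test cases are generated mechanically (here by the Hypothesis library rather than an LLM). The `dedupe` function is a hypothetical example subject, not anything from the discussion above.

    # Sketch: a behavioral spec expressed as properties, with test inputs
    # generated by Hypothesis. `dedupe` is a hypothetical implementation
    # that the spec is checked against.
    from hypothesis import given, strategies as st

    def dedupe(xs: list[int]) -> list[int]:
        """Remove duplicates while preserving first-occurrence order."""
        seen: set[int] = set()
        return [x for x in xs if not (x in seen or seen.add(x))]

    @given(st.lists(st.integers()))
    def test_no_duplicates(xs):
        out = dedupe(xs)
        assert len(out) == len(set(out))   # spec: output contains no repeats

    @given(st.lists(st.integers()))
    def test_preserves_membership(xs):
        assert set(dedupe(xs)) == set(xs)  # spec: no elements added or lost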
I disagree slightly here. There may be one (1) dominant formal language used as the glue that actually runs on machines and gets verified, but it will have numerous front-end languages that compile into it, for ease of writing and for abstraction/domain fit.