You are the security engineer, so you certainly know better than me, but aren't those runtime mitigations aimed at malicious programs? Which is to say, even if a better-C was written that didn't allow people to write a program that would bump into those mitigations, the bad guys could still write their programs in assembly or C or whatever, right?
But the presence of a better language to write innocent programs in wouldn't protect the innocent programs from malicious programs written in C and assembly...
Think about web browsers. We're not really that worried about people running malicious web browsers. It's potentially a problem, but as long as people know not to run software that some random spammer emails them, it's easy to avoid.
On the other hand what computer security people worry about a lot is that the web browsers made by reputable organizations and teams of competent programmers nevertheless contain security flaws that can be exploited by a maliciously-written website to cause those browsers to do unexpected and dangerous things.
Many of those security flaws in otherwise well-regarded software are due to memory management errors (use-after-free, out-of-bounds reads and writes) that just aren't present in safer languages, or to type errors that wouldn't be present in more type-safe languages.
There are some implementation bugs that could be present in any language no matter how many safety features it has, but many security bugs aren't due to, say, an incorrectly specified algorithm. They're due to the programmer asking the computer to do something that's literally nonsense, like asking for the fourth element of a list that only has three elements, or recording that someone's age is apricot. Programming languages with powerful nonsense filters can remove a lot of those kinds of security bugs. (And powerful type systems often give programmers mechanisms to tell the compiler more about the program, so that it can filter out more kinds of nonsense than it otherwise could.)
I don't understand this response. Nothing really stops that, regardless of implementation language. It's why we have an entire bodged and mostly ineffective AV industry, as well as a slightly less bodged and partially effective endpoint monitoring/detection industry.
Runtime mitigations exist to mitigate some of the latent risk associated with programming in unsafe programming languages. We use them because they're our best known approach to continuing to use those languages without letting script kiddies own us like it's 1993.
You don't seem to understand that those malicious programs have no way of running on someone's computer if they can't exploit some other program to get installed on the machine in the first place. If the system software on the target machine is written in a better language and has no exploits, then it doesn't matter what language the attacker uses for their software.
It would prevent or at least mitigate some classes of exploitation. Buffer overflows are very common attack vectors: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=buffer+over... (386 results, 11 from prior years, so 375 from this year). 23 of those are in kernels, many leading to privilege escalation.
Yes, a better language would protect innocent programs. Take stack canaries: they exist to detect stack corruption caused by application bugs, e.g. unsafe input handling. Input handling is perfectly safe in many other languages, but C has a lot of footguns.
The language the innocent programs are written in can remove attack vectors that the malicious programs use; I don’t think it matters what the malicious programs are written in?
Sandboxing techniques can also be used for dealing with malicious programs, but most of the mitigations listed exist for programs to protect themselves, not to stop them from intentionally doing bad things.