But using a language like Ruby or PHP, for instance, doesn't really lead to fewer bugs, or more advanced software, or cheaper software, or fewer security concerns in practice.
What it often actually leads to is untalented developers creating a lot of bug-ridden and vulnerable code extremely quickly. It's efficiency in all the wrong ways.
Do you remember that Diaspora social networking project that received a lot of hype a couple of years ago? It was a Ruby on Rails app, and the early releases were absolutely filled with some particularly nasty bugs and security flaws. The only reason they were eventually fixed is because the code was made public, and people pointed out these flaws. There is a lot of Ruby code out there, for instance, that isn't public, yet is still riddled with the same types of problems.
That's not to say that the same isn't true for Python, or Java, or C#, or C++, or any other language. But we shouldn't be claiming that using a language like Ruby or Python somehow leads to more secure code. It doesn't, and it's dangerous to think that it does.
> But using a language like Ruby or PHP, for instance, doesn't really lead to fewer bugs, or more advanced software, or cheaper software, or fewer security concerns in practice.
Which domain are we talking about? Of course web-based applications have their own problems, but imagine if idiomatic Ruby or PHP code were vulnerable to buffer overflows, double-free/use-after-free, or format string vulnerabilities on top of the current problems. Would you still say that languages don't matter when it comes to software development problems and issues? Essentially, what you're saying is that modern programming languages aren't any better in practice in these regards than C. Honestly?
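For anyone who hasn't run into that last class of bug, here's a minimal C sketch of a format string vulnerability (the function name and the idea that `user_input` is attacker-controlled are made up for illustration):

```c
#include <stdio.h>

void log_message(const char *user_input) {
    /* Vulnerable: the input itself is used as the format string, so
       input like "%x %x %n" reads stack memory or even writes to it. */
    printf(user_input);

    /* Safe: the input is treated purely as data, never as a format. */
    printf("%s", user_input);
}

int main(void) {
    log_message("%x %x %x\n"); /* in a real attack this arrives over the network */
    return 0;
}
```

No idiomatic Ruby or PHP code can make this mistake, because those languages don't hand raw format strings a path to memory.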
Of course no language can prevent outright bad code, but a language can, by its design, eliminate whole classes of issues related to, for example, type safety and memory safety. Consider Rust: its type system and borrow checker rule out these bugs at compile time. Code is less prone to bugs and has no security concerns tied to those classes of error, which leaves more time for validating correct behavior and fixing misbehaving parts.
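As a rough sketch of what "eliminated by design" means in practice, the following use-after-free compiles in C without complaint; the equivalent program in safe Rust, reading a reference after its owner has been freed, is rejected at compile time with a borrow-checker error:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *token = malloc(32);
    if (!token) return 1;
    strcpy(token, "secret-session-token");
    free(token);

    /* Use-after-free: `token` now points at freed memory that the
       allocator may have reused. This read is undefined behavior and
       a classic exploit primitive, yet the C compiler accepts it.
       Safe Rust refuses to compile the equivalent program. */
    printf("%c\n", token[0]);
    return 0;
}
```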
> But we shouldn't be claiming that using a language like Ruby or Python somehow leads to more secure code. It doesn't, and it's dangerous to think that it does.
What on earth am I reading? What are the equivalents to buffer overflow or format string vulnerability in Python or Ruby? How do I execute arbitrary machine code with Python or Ruby if malicious input is given to the vulnerable program?
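To make the comparison concrete, here is the kind of bug in question, sketched in C (buffer size and input are invented for illustration). The closest analogue in Python or Ruby, indexing past the end of a list or array, raises an exception; it cannot overwrite a return address or hand control to attacker-supplied machine code:

```c
#include <string.h>

int main(void) {
    char name[8];
    /* Unchecked copy: anything longer than 7 bytes plus the NUL
       terminator spills past `name` into adjacent stack memory,
       including the saved return address. With crafted input this
       becomes arbitrary code execution. In Python or Ruby, the same
       out-of-bounds write raises IndexError instead. */
    strcpy(name, "definitely longer than eight bytes");
    return 0;
}
```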
Still, there is a whole class of memory errors and exploits that you can stop caring about once you have managed code. The tradeoff is obviously performance, although, as Java and .NET show us, not necessarily too much of it.