Hacker News

> It's not. The presence of CVEs in software is a sign of maturity, not insecurity.

It really depends, both on the complexity and scope of use of the project, and on whether the rate of CVEs is decreasing.

Flash had CVEs regularly for years. One could say it was getting more mature, but the rate at which they kept coming out indicated to me not that the project was maturing (at least not at a rate I was comfortable with), but that either the programmers were inept or, more likely, that earlier design decisions led to security properties that were very hard to reason about and made the code extremely hard to harden before exploits were discovered. Neither conclusion made me want to use that software.

OpenSSL fell into a similar situation a couple of years back. Seeing all the CVEs that were coming out, one could reason "this is just a mature project", but the truth (exposed by numerous people doing audits) was that the code base and development process were a dumpster fire.

> The key element is to realize that security issues are just bugs, especially in the context of C software.

They are, but not all people and projects produce the same quality and quantity of bugs.

All this is probably how you already see it, just not quite spelled out in your prior comment. I just thought it was important to make that distinction obvious. :)

P.S. This is somewhat divorced from the context of whether systemd is a good code base or not. I didn't intend for this to be an implicit assessment of systemd, I'm not making any judgements on it here.




Not all security issues are just bugs.

Design is not a bug. Some things just aren't designed to meet security goals. Telnet is plaintext; in most environments that's a pretty big security issue. That's not a bug in the code, it's just not designed to protect the data from tampering, eavesdropping, and hijacking. It simply can't operate any other way.

Configuration errors are security issues, but they are not bugs. Users can set things up insecurely.

Human beings present their own security issues, and they are definitely not bugs you can code away.

The biggest myth about software security is that it's all just bugs. That leads to after-the-fact thinking ("we'll just patch it") and a huge blind spot: security isn't something you can just build, it's an entire process that goes way beyond the code.


I think this is mostly an issue of overloaded terms. There are security design considerations, and there are security issues. Telnet being plaintext is not a security issue for telnet; it's a security issue for those using telnet for something it's unsuited for. HTTP being unencrypted is not a security issue for the HTTP protocol, or for an application that deliberately supports it (a browser), but it may be for an application that makes requests over HTTP instead of HTTPS when those requests require some level of privacy.

If an application has a design goal to be secure in some aspect, but the design they chose doesn't accomplish that, then the design itself is a bug and needs to be fixed (or they need to change their design goals). Buggy designs exist, they're the designs that don't fulfill the desired purpose.

All security issues in the context of a project which intends to provide security in that aspect are bugs.



