Also note that s2n links with OpenSSL (or LibreSSL, BoringSSL) for the ciphers and ASN.1 functionality.
At first I was really surprised/impressed/worried that they managed to pull off an ASN.1 parser in C along with TLS in just 6,000 lines of code. Alas, they did not.
So, when they mention the 500,000 lines of OpenSSL, they are probably actually using a good 20,000+ lines of it for ASN.1 and all of the ciphers.
I think you're being unfair, for they say OpenSSL "contains more than 500,000 lines of code with at least 70,000 of those involved in processing TLS." And the next and last LOC reference is to their s2n, and it's entirely fair to say 6,000 LOC is qualitatively better than "at least 70,000", especially with all the focus, which they cite, on SSL/TLS protocol and implementation bugs.
Still, using it for ASN.1 is quite sad, given that ASN.1 is where a fair few security bugs have been. If I'm not mistaken, this includes CVE-2015-0286, CVE-2015-0287, CVE-2012-2110, CVE-2009-0590, CVE-2009-0789, and CVE-2006-2937 from the last decade (and more if you go further back). The ciphers are pretty damned solid — the ASN.1 code… not so much. I'd argue that the ASN.1 parsing and the like is one of the areas that sorely needs replacing in OpenSSL, precisely because it has had so many vulnerabilities found in it over the years.
On the other hand, the attack surface for OpenSSL is the TLS implementation itself, and not the ciphers. Linking against openssl/whatever for ciphers makes more sense than implementing them yourself, and now we can replace OpenSSL's huge, complete TLS implementation with a small, incomplete one that provides just what most people use in a small, auditable package.
Sadly, we're not so flush with cash that we can significantly up the prize, which was itself a donation from the user community. It was quite amusing when some kind users donated Bitcoins into the piñata though :-)
We really like the idea of continuing the self-service security bounties, irrespective of their size. One of the nice things about unikernels is that they make it easy to link in logic like this -- in a conventional OS, it would mean faffing around with kernel modules in order to safely seal the Bitcoins away, whereas here it's just normal high-level language code.
Incidentally, we're working on exposing a C interface to the OCaml TLS stack so that it can be used as a normal shared library as well. The approach is to use the OCaml Ctypes library (which is normally used to bind to C libraries from OCaml), but deploy it in inverted mode. This means that we expose a C ABI from OCaml code instead.
See https://github.com/yallop/ocaml-ctypes-inverted-stubs-exampl... for an example that exposes a C parsing interface to the OCaml XMLM library. The TLS stack isn't much more complex, but is pending us looking into libtls-style APIs that are easier to expose than OpenSSL's. The s2n release here is thus nice and timely...
Self-service security bounties seem like a very smart idea.
Would self-service security bounties enable a distributed bounty, where each site developer puts a relatively small bounty on his site and that bounty gives him a certain standing in the eyes of customers, but from the hacker's standpoint, if you hacked one, you hacked them all, and hence you can collect multiple bounties?
Isn't that pretty much what we have already with things like OpenSSL? Find an exploit and suddenly you've exposed everyone. I don't think public bounties would change any of the dynamics around this situation.
My point is that I don't believe any of the dynamics would actually change. White hats would still report issues (they're not necessarily doing it for the money) and nefarious types will still trade/sell exploits.
Agreed. I think they offered the bounty with that expectation. A quote from their blog:
"[...] security bounties can be a very effective way to show the presence of vulnerabilities, but they are hopelessly inadequate for showing their absence."
"Yo sn2, I'm really happy for you, Imma let you finish but Ocaml had one of the best TLS stacks of all time ... one of the best TLS stacks of all time!"
Note that this library is currently only providing server functionality, and doesn't do certificate validation (in fact it appears to not do any of the X.509 parts of SSL/TLS). It's certainly interesting, but one of the reasons it's so small is that it's missing critical functionality for many use cases.
There's nothing wrong with client certs (other than insane complexity). However ultimately s2n is likely to need to support operation as a client too at which point things like certificate validation etc. will be needed and the amount of code will increase.
Certainly true if you don't need them. However since code using the library as a TLS client is already partially present, support for certificate verification is definitely going to need to be added.
At the moment, I'm not that impressed with the testing since even making it build actually requires patching it! This is just due to one of the examples, but from the git history it's been broken since January.
Other than the government, nobody I've seen trying to do client certs actually runs the CA that issues them. Instead, they trust some random set of commercial CAs, ignorant of the fact that openssl s_client -connect will dump out that list to any passer-by. I've even seen them trusting the "domain control validated only" certs, without any indication of "maybe this is a bad idea, because anyone who can buy a cert can auth to us because we don't even check."
So for every case I've seen, they'd have been ahead to issue credentials themselves and just skip client certs.
Thank you for the vote of confidence, but you're wrong. The question of "real world usage of TLS" is one where I would immediately defer to tptacek without question... and you've been around long enough to know that's something I don't do very often.
Eh, just dumping X.509 for something easier to parse would cut out a huge chunk of code. Being compatible is hard, but we stick with what we have because of switching cost, not because it's the best we can do.
Don't throw solutions around with confidence like they are silver bullets. spiped is unmaintained, doesn't support revocation, isn't easy to debug in a big infrastructure, will require a lot more work compared to HTTPS + client cert authentication, and doesn't provide multi-platform support.
PKI is used to build a chain of trust. Whether it involves third parties or not is not the point.
I wonder if this could motivate others to try making even simpler implementations of TLS - essentially, an effort toward the bare minimum necessary to be secure.
The fact that they are focusing on the TLS protocol itself and not the actual encryption implementation is a good way to start; the "extraneous complexity" is not really in algorithms like RSA/ECDSA/AES since those are specified mathematically, but in the handling of the protocol messages and states. That is also where most of the bugs tend to be.
It reminds me of this Hoare quote: "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies."
Looks like currently you must set a file descriptor (though the docs mention the possibility of using a pipe). Once an FD is set, you do control pulling/pushing data to s2n.
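Roughly, the server-side flow then looks like the sketch below; it leans on the calls in s2n.h (s2n_connection_set_fd, s2n_negotiate, s2n_send) and omits config setup and error handling, so treat it as illustrative rather than a working example.

    #include <s2n.h>

    /* Sketch: serve one already-accepted TCP socket over TLS via s2n.
       "config" is assumed to have had a cert chain/key added elsewhere. */
    int serve_one(struct s2n_config *config, int fd)
    {
        s2n_blocked_status blocked;
        struct s2n_connection *conn = s2n_connection_new(S2N_SERVER);
        if (conn == NULL)
            return -1;

        s2n_connection_set_config(conn, config);
        s2n_connection_set_fd(conn, fd);   /* all I/O goes through this fd */

        if (s2n_negotiate(conn, &blocked) < 0) {
            s2n_connection_free(conn);
            return -1;
        }

        /* Application data is pushed/pulled via s2n_send/s2n_recv. */
        s2n_send(conn, "hello\n", 6, &blocked);

        s2n_shutdown(conn, &blocked);
        s2n_connection_free(conn);
        return 0;
    }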
Can you elaborate more on "not too well-performing"?
At least with OpenSSL you can implement your own BIO objects and do the I/O yourself if you want/need to. It's not the cleanest or best-documented interface in the world, but it's certainly usable.
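The simplest variant of "do the I/O yourself" is a pair of memory BIOs rather than a full custom BIO_METHOD; a minimal sketch (error handling omitted):

    #include <openssl/bio.h>
    #include <openssl/ssl.h>

    /* Give the SSL object a pair of memory BIOs; ciphertext then moves
       between them and whatever transport you actually use. */
    void attach_memory_bios(SSL *ssl, BIO **rbio, BIO **wbio)
    {
        *rbio = BIO_new(BIO_s_mem());    /* network -> OpenSSL */
        *wbio = BIO_new(BIO_s_mem());    /* OpenSSL -> network */
        SSL_set_bio(ssl, *rbio, *wbio);  /* SSL takes ownership of both */
    }

    /* Feed bytes received from the wire into OpenSSL... */
    int feed_ciphertext(BIO *rbio, const void *buf, int len)
    {
        return BIO_write(rbio, buf, len);
    }

    /* ...and drain the bytes OpenSSL wants written out to the wire. */
    int drain_ciphertext(BIO *wbio, void *buf, int len)
    {
        return BIO_read(wbio, buf, len);
    }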
It is usable. But I think you would be hard-pressed to find a more widely used piece of software that has absolutely terrible documentation. The only real way to figure it out is to read the code or read the examples.
Oh man, that code is just horrible. No comments on some of the functions, no comments on the input parameters and return values pretty much throughout.
I really thought OpenSSL was in a much better shape.
Depends who "they" are. Bugs in the crypto code will compromise the cryptographic strength of the connection, revealing data or keys. Bugs in the protocol code will compromise the host which is running it.
Both are bad, but I'd say that "remote root" trumps "side channel attack".
Glad to see that some of the big players are starting to get into the habit of giving back to the communities building the bricks their success was built upon. Facebook, Google, Apple, Amazon, Twitter, all of them have contributed major pieces of the web fabric in the past few years. The power and money they can divert to such operations is a key factor in producing mature tools which will help foster the web ecosystem in the end.
> As a result of this, we’ve found that it is easier to review s2n; we have already completed three external security evaluations and penetration tests on s2n, a practice we will be continuing.
Corresponding marketing names and slogans for vulns:
Woodchipper, Bad-scan, NSA has seen your secrets, Hands sewn together, Shin kicker, shunned2death, Sign fail, Skin flogger, Too soon, SownPwn, Span Wham, Spinning in place, Spun out of control, Stunned, Birdcatcher
I think another implementation is not necessarily a bad thing, but it's a shame that this effort couldn't be combined with LibreSSL and BoringSSL or even OpenSSL under a single project. Having more eyes on one thing would be better, it would seem.
No! Let's have many implementations all with completely separate code bases. Then when the next security bug is found it won't affect the whole internet.
Sounds great. I hope that Windows support without an additional crypto library dependency is added soon, e.g. using Windows CryptoAPI or Windows Cryptography API: Next Generation (CNG).
For a library that wants to be ubiquitous (and therefore wants to be cross-platform) it's a bug. Can I expect this to build cleanly on GNU/kFreeBSD, or Windows SUA? If it were autotools I would.
That's a nice ideal, but reality is that it's really easy to need POSIX APIs and then you're firmly in autotools land. That said, I don't like autotools because it encourages #ifdef style nonsense, but not using it requires a lot more thought and more than a little rolling-your-own code and techniques.
> reality is that it's really easy to need POSIX APIs
Sure.
> and then you're firmly in autotools land
Not at all. If you need the send(2) POSIX system call, what are you going to do if autotools detects that you're on a platform which doesn't have the send(2) system call?
If you're on a platform that doesn't have send(2), you most likely don't have TCP/IP support either and there's no point trying to run an internet server on it. However it is often stuffed away in an oddball platform-specific header and library, and that's where autotools come in - they detect the appropriate flavour of #include statement and the correct library to use.
Touché! (I think only -lsocket is required for send(2) though -- unless I'm misremembering, -lnsl is just for DNS lookups.)
Still, there are relatively few portability issues these days:
1. Some platforms don't define MSG_NOSIGNAL. (You can work around this via setsockopt; see the sketch below.)
2. Some platforms don't define CLOCK_REALTIME. (On platforms which don't provide that, you should be able to use gettimeofday.)
3. Some platforms don't understand -lrt or -lxnet. (This one is awkward since some platforms require those. You have to either taste the compiler or detect the OS.)
4. Solaris needs -lsocket and/or -lnsl for networking related functions. (Either detect Solaris or decide that it's rare enough that you don't really care about supporting it. In spiped I opt for a note in BUILDING telling people how to work around the Solaris standards-compliance bug.)
Compare this to the situation when autotools originated, and we're about 99% better.
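For reference, the workarounds for the first two items above are just a couple of #ifdefs rather than anything autotools-shaped; a minimal sketch:

    #include <sys/socket.h>
    #include <sys/time.h>
    #include <time.h>

    /* Item 1: platforms lacking MSG_NOSIGNAL (e.g. OS X) offer the
       per-socket SO_NOSIGPIPE option instead. */
    #ifndef MSG_NOSIGNAL
    #define MSG_NOSIGNAL 0
    #endif

    static int suppress_sigpipe(int fd)
    {
    #ifdef SO_NOSIGPIPE
        int one = 1;
        return setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &one, sizeof(one));
    #else
        (void)fd;
        return 0;   /* MSG_NOSIGNAL on send() covers it */
    #endif
    }

    /* Item 2: fall back to gettimeofday() where CLOCK_REALTIME is absent. */
    static int realtime_now(struct timespec *ts)
    {
    #ifdef CLOCK_REALTIME
        return clock_gettime(CLOCK_REALTIME, ts);
    #else
        struct timeval tv;
        if (gettimeofday(&tv, NULL) == -1)
            return -1;
        ts->tv_sec = tv.tv_sec;
        ts->tv_nsec = (long)tv.tv_usec * 1000;
        return 0;
    #endif
    }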
Autotools for these pretty simple things is an okay approach, but then people tend to go overboard, and that's when it gets horrible (or autoconf itself breaks libxnet detection because it gets some quoting or who knows what wrong in some release). My personal pet peeve is people detecting endianness in obtuse ways that totally fail during cross-compiling, when there's the perfectly fine ntoh family.
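To be concrete about the ntoh point: if you serialize through the hton/ntoh family you never need to know the host byte order at all, at configure time or anywhere else. A minimal sketch:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    /* Writing and reading a 32-bit value in network (big-endian) order,
       with no configure-time endianness probing anywhere. */
    static void put_u32_be(unsigned char *out, uint32_t v)
    {
        uint32_t be = htonl(v);          /* host -> big-endian */
        memcpy(out, &be, sizeof(be));
    }

    static uint32_t get_u32_be(const unsigned char *in)
    {
        uint32_t be;
        memcpy(&be, in, sizeof(be));
        return ntohl(be);                /* big-endian -> host */
    }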
You don't need autotools, but autotools is the consistent way to do it; the "installed base" is large enough that any sysadmin knows how to deal with autotools (and will have to for many years). Writing your own custom check, even if that check is much more concise in and of itself, makes it harder for a sysadmin to understand your project build than just doing the same thing as every other project.
https://mirage.io/blog/why-ocaml-tls https://mirage.io/blog/announcing-mirage-25-release