Hacker News
Introducing s2n, a New Open-Source TLS Implementation (amazon.com)
367 points by ukj on June 30, 2015 | hide | past | favorite | 97 comments



If I counted right:

  OCaml TLS: ~4400 LoC
  OCaml X509: ~1550 LoC
  OCaml ASN1: ~1400 LoC
  OCaml nocrypto: ~5250 LoC
Total ~12,600 LoC, but you get a fully self-contained implementation, with only some crypto code in C and the rest in pure OCaml:

https://mirage.io/blog/why-ocaml-tls

https://mirage.io/blog/announcing-mirage-25-release


Also note that s2n links with OpenSSL (or LibreSSL, BoringSSL) for the ciphers and ASN.1 functionality.

At first I was really surprised/impressed/worried that they managed to pull off an ASN.1 parser in C, along with TLS, in just 6,000 lines of code. Alas, they did not.

So, when they mention the 500,000 lines of OpenSSL, they are probably actually using a good 20,000+ lines of it for ASN.1 and all of the ciphers.

Yay marketing!


I think you're being unfair, for they say OpenSSL "contains more than 500,000 lines of code with at least 70,000 of those involved in processing TLS." And the next and last LOC reference is to their s2n, and it's entirely fair to say 6,000 LOC is qualitatively better than "at least 70,000", especially with all the focus, which they cite, on SSL/TLS protocol and implementation bugs.


Still, using it for ASN.1 is quite sad, given that ASN.1 is where a fair few security bugs have been. If I'm not mistaken, this includes CVE-2015-0286, CVE-2015-0287, CVE-2012-2110, CVE-2009-0590, CVE-2009-0789, and CVE-2006-2937 from the last decade (and more if you go further back). The ciphers are pretty damned solid — the ASN.1 code… not so much. I'd argue that the ASN.1 parsing and the like is one of the areas that sorely needs replacing in OpenSSL, precisely because it has had so many vulnerabilities found in it over the years.


Incidentally, Fabrice Bellard has written a small ASN.1 compiler:

http://bellard.org/ffasn1/

However, he does not want to give it away.

ASN.1 is a rather hairy standard overall, but AFAIK only a part of it is needed for TLS.
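To illustrate why only a sliver of the standard matters here: certificates use DER, which at bottom is a tag-length-value encoding. A minimal, hypothetical header parser (not from any of the libraries discussed) shows how small the core is — and how many length checks even that core needs:

```c
#include <stddef.h>
#include <stdint.h>

/* Parse a DER tag-length-value header from buf.
 * On success stores the tag, content length, and header size and
 * returns 0; returns -1 on malformed or truncated input.
 * Handles single-byte tags and definite lengths only, which is all
 * DER permits. */
static int der_read_header(const uint8_t *buf, size_t len,
                           uint8_t *tag, size_t *content_len,
                           size_t *header_len)
{
    if (len < 2)
        return -1;                 /* not even tag + length byte */
    *tag = buf[0];
    uint8_t first = buf[1];
    if (first < 0x80) {            /* short form: length fits in 7 bits */
        *content_len = first;
        *header_len = 2;
    } else {                       /* long form: low bits count length octets */
        size_t n = first & 0x7f;
        if (n == 0 || n > sizeof(size_t) || len < 2 + n)
            return -1;             /* 0x80 (indefinite form) is illegal in DER */
        size_t v = 0;
        for (size_t i = 0; i < n; i++)
            v = (v << 8) | buf[2 + i];
        if (v < 0x80)
            return -1;             /* DER requires minimal-length encoding */
        *content_len = v;
        *header_len = 2 + n;
    }
    if (*content_len > len - *header_len)
        return -1;                 /* content would run past the buffer */
    return 0;
}
```

Note that three of the four failure paths are length-validation checks; skipping any one of them is exactly the class of bug the CVEs above came from.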


On the other hand, the attack area for OpenSSL is the TLS implementation itself, and not the ciphers. Linking against openssl/whatever for ciphers makes more sense than implementing them yourself, and now we can replace OpenSSL's huge, complete TLS implementation with a small, incomplete one that provides just what most people use in a small, auditable package.


The bitcoin piñata recently ended, and they wrote a blog post about it: https://mirage.io/blog/bitcoin-pinata-results


It's interesting, but isn't 10BTC($2500) prize too low to tell us anything about how secure is this ?


Sadly, we're not so flush with cash that we can significantly up the prize, which was itself a donation from the user community. It was quite amusing when some kind users donated Bitcoins into the piñata though :-)

We really like the idea of continuing the self-service security bounties, irrespective of their size. One of the nice things about unikernels is that they make it easy to link in logic like this -- in a conventional OS, it would mean faffing around with kernel modules in order to safely seal the Bitcoins away, whereas here it's just normal high-level language code.

Incidentally, we're working on exposing a C interface to the OCaml TLS stack so that it can be used as a normal shared library as well. The approach is to use the OCaml Ctypes library (which is normally used to bind to C libraries from OCaml), but deploy it in inverted mode. This means that we expose a C ABI from OCaml code instead.

See https://github.com/yallop/ocaml-ctypes-inverted-stubs-exampl... for an example that exposes a C parsing interface to the OCaml XMLM library. The TLS stack isn't much more complex, but is pending us looking into libtls-style interfaces that are easier to expose than OpenSSL's. The s2n release here is thus nice and timely...


Self-service security bounties seem like a very smart idea.

Would self-service security bounties enable a distributed bounty, where each site developer puts a relatively small bounty on his site? His bounty earns him a certain qualification in the eyes of customers, but from the hacker's standpoint, if you hacked one, you hacked them all, and hence you can collect multiple bounties.


Isn't that pretty much what we have already with things like OpenSSL? Find an exploit and suddenly you've exposed everyone. I don't think public bounties would change any of the dynamics around this situation.


With the current system, if you're using an exploit (especially for gain), you're a criminal. Not so when using the piñata.

Also, the piñata exposes all hacks in public, unlike today.


My point is that I don't believe any of the dynamics would actually change. White hats would still report issues (they're not necessarily doing it for the money) and nefarious types will still trade/sell exploits.


Agreed. I think they offered the bounty with that expectation. A quote from their blog:

"[...] security bounties can be a very effective way to show the presence of vulnerabilities, but they are hopelessly inadequate for showing their absence."


I don't know the answer to that question, but that's about 2/3rds the price of a single billable day for cryptographic pentesting.


> It's interesting, but isn't 10BTC($2500) prize too low to tell us anything about how secure is this ?

No amount of prize money can ever really tell you how secure something is. We knew this before we announced it (see background at [1]).

[1] http://amirchaudhry.com/bitcoin-pinata/


A small aside: Haskell has a native TLS implementation as well http://hackage.haskell.org/package/tls

I think the dream is there for many but as other comments have pointed out, getting to the battle tested level of OpenSSL is really really hard.


Github says: C 98.2%, Makefile 1.8% https://github.com/awslabs/s2n

Where is the OCaml source / repo?


edwintorok is comparing s2n with the OCaml-TLS stack. See the links at the end of his comment (and the one below)

https://github.com/mirleft


It's a bizarre comment to be the top response.

"Yo s2n, I'm really happy for you, Imma let you finish, but OCaml had one of the best TLS stacks of all time ... one of the best TLS stacks of all time!"


"Have you heard of s2n?"

"I love s2n!"

[edit: oh, c'mon with the downvotes, it was a follow-up kanye ref about beck!]


Note that this library is currently only providing server functionality, and doesn't do certificate validation (in fact it appears to not do any of the X.509 parts of SSL/TLS). It's certainly interesting, but one of the reasons it's so small is that it's missing critical functionality for many use cases.


I think that's kind of the point. If your web server's TLS stack is trying to validate client certificates, you're doing it wrong.


There's nothing wrong with client certs (other than insane complexity). However ultimately s2n is likely to need to support operation as a client too at which point things like certificate validation etc. will be needed and the amount of code will increase.


Insane complexity is exactly why supporting client certs is a bad idea.


Certainly true if you don't need them. However since code using the library as a TLS client is already partially present, support for certificate verification is definitely going to need to be added.

At the moment, I'm not that impressed with the testing since even making it build actually requires patching it! This is just due to one of the examples, but from the git history it's been broken since January.


No, sorry. The insane complexity is on the requirements. If you need client certs, anything you do to satisfy the need will be at least as complex.


Yes, but 99.999% of web servers don't need client certs.


Of course. But they'll end up implementing it anyway, because of the 0.001%.


Source?


Other than the government, nobody I've seen trying to do client certs actually runs the CA that issues them. Instead, they trust some random set of commercial CAs, ignorant of the fact that openssl s_client -connect will dump out that list to any passer-by. I've even seen them trusting the "domain control validated only" certs, without any indication of "maybe this is a bad idea, because anyone who can buy a cert can auth to us, since we don't even check."

So for every case I've seen, they'd have been ahead to issue credentials themselves and just skip client certs.


99.999% of web servers.


I think cperciva's opinion is a sufficiently valid source on this sort of issue, which is probably why you're being downvoted.


Thank you for the vote of confidence, but you're wrong. The question of "real world usage of TLS" is one where I would immediately defer to tptacek without question... and you've been around long enough to know that's something I don't do very often.


Eh, just dumping X509 for something easier-to-parse would cut out a huge chunk of code. Being compatible is hard, but we stick with what we have because of switching cost, not because it's the best we can do.


Practical use of client certs usually implies support for renegotiation too.


Just because you haven't come across a client cert validation requirement, it doesn't mean it's wrong.

If you have a bunch of micro services that need to communicate with each other securely, client cert validation solves a big problem.


If you have a bunch of small services which need to communicate securely, you should be using something like spiped, not TLS.

TLS is the right solution iff you need to communicate with third parties whom you can't securely share code or keys with in advance.


Don't throw out solutions with confidence like they are silver bullets. spiped is unmaintained, doesn't support revocation, isn't easy to debug in a big infrastructure, will require a lot more work compared to HTTPS + client cert authentication, and doesn't provide multi-platform support.

PKI is used to build a chain of trust. Whether it involves third parties or not is not the point.


you're replying to the author, and maintainer, of spiped.


I wonder if this could motivate others to try making even simpler implementations of TLS - essentially, an effort toward the bare minimum necessary to be secure.

The fact that they are focusing on the TLS protocol itself and not the actual encryption implementation is a good way to start; the "extraneous complexity" is not really in algorithms like RSA/ECDSA/AES since those are specified mathematically, but in the handling of the protocol messages and states. That is also where most of the bugs tend to be.
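For example, the record layer alone shows where such bugs live. Here is a minimal, hypothetical sketch (not s2n's actual code) of parsing a TLS record header with the length validation whose absence is behind bugs like Heartbleed:

```c
#include <stddef.h>
#include <stdint.h>

/* TLS record header: 1-byte type, 2-byte version, 2-byte length (RFC 5246). */
#define TLS_RECORD_HEADER_LEN 5
#define TLS_MAX_FRAGMENT_LEN  16384   /* 2^14, the protocol maximum */

struct tls_record {
    uint8_t  type;
    uint16_t version;
    uint16_t length;
    const uint8_t *fragment;
};

/* Parse one record out of buf; returns 0 on success, -1 if the input is
 * truncated or the claimed length is out of spec. The point is that every
 * length field gets checked against what was actually received, never
 * trusted on its own. */
static int tls_parse_record(const uint8_t *buf, size_t len, struct tls_record *rec)
{
    if (len < TLS_RECORD_HEADER_LEN)
        return -1;                               /* truncated header */
    rec->type    = buf[0];
    rec->version = (uint16_t)((buf[1] << 8) | buf[2]);
    rec->length  = (uint16_t)((buf[3] << 8) | buf[4]);
    if (rec->length > TLS_MAX_FRAGMENT_LEN)
        return -1;                               /* over protocol maximum */
    if ((size_t)rec->length > len - TLS_RECORD_HEADER_LEN)
        return -1;                               /* claims more than we have */
    rec->fragment = buf + TLS_RECORD_HEADER_LEN;
    return 0;
}
```

The arithmetic is trivial; the bug class comes from believing `rec->length` without the last two checks, and a small codebase makes it much easier to audit that every such field is checked.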

It reminds me of this Hoare quote: "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies."


Can you use this library without having to use its IO capabilities?

My biggest issue with OpenSSL is that it also tries to do IO, but does so in a way that is neither well-performing nor cross-platform.


https://github.com/awslabs/s2n/blob/master/docs/USAGE-GUIDE....

Looks like currently you must set a file descriptor (though the docs mention the possibility of using a pipe). Once an FD is set, you do control pulling/pushing data to s2n.

Can you elaborate more on "not too well-performing"?


You can use OpenSSL without IO APIs. Just use SSL_set_bio ;)


At least with OpenSSL you can implement your own BIO objects and do the I/O yourself if you want/need to. It's not the cleanest or best-documented interface in the world, but it's certainly usable.


It is usable. But I think you would be hard pressed to find a more widely used piece of software that has absolutely terrible documentation. The only real way to figure it out is to read the code or read the examples.


Unfortunately this still uses libcrypto from OpenSSL. This isn't a fully self-contained implementation of TLS.


Nobody is particularly worried about libcrypto. There would be little point in reimplementing its functionality.


libcrypto includes the OpenSSL ASN.1 code, which is worrying as all hell, e.g.: https://git.openssl.org/?p=openssl.git;a=blob;f=crypto/asn1/...

Or any file in that directory.


Oh man, that code is just horrible. No comments on some of the functions, no comments on the input parameters and return values pretty much throughout.

I really thought OpenSSL was in a much better shape.


Code that implements standards should be read with the standard open next to it. This code: https://git.openssl.org/?p=openssl.git;a=blob;f=crypto/md5/m... looks like awful garbage, until you compare it to https://www.ietf.org/rfc/rfc1321.txt , and then you realize you don't really want comments or anything else cluttering up the implementation.
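To make the point concrete, RFC 1321 section 3.4 opens by defining four auxiliary functions, and the implementation is supposed to be a line-for-line transcription of them. A standalone rendition (hypothetical, not OpenSSL's actual macros):

```c
#include <stdint.h>

/* The four auxiliary functions from RFC 1321, section 3.4:
 *   F(X,Y,Z) = XY v not(X) Z
 *   G(X,Y,Z) = XZ v Y not(Z)
 *   H(X,Y,Z) = X xor Y xor Z
 *   I(X,Y,Z) = Y xor (X v not(Z))
 * Read next to the RFC, each line is self-explanatory; read as
 * standalone code with no spec in hand, it looks like magic. */
static uint32_t md5_f(uint32_t x, uint32_t y, uint32_t z) { return (x & y) | (~x & z); }
static uint32_t md5_g(uint32_t x, uint32_t y, uint32_t z) { return (x & z) | (y & ~z); }
static uint32_t md5_h(uint32_t x, uint32_t y, uint32_t z) { return x ^ y ^ z; }
static uint32_t md5_i(uint32_t x, uint32_t y, uint32_t z) { return y ^ (x | ~z); }
```

No comment you could add here would tell a reader more than the one reference to the RFC section does.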


> I really thought OpenSSL was in a much better shape.

Why?


I overlooked that, I was thinking of the crypto primitives there.


They aren't?


Depends who "they" are. Bugs in the crypto code will compromise the cryptographic strength of the connection, revealing data or keys. Bugs in the protocol code will compromise the host which is running it.

Both are bad, but I'd say that "remote root" trumps "side channel attack".


It also supports other libraries. From the README.md:

"Today s2n supports OpenSSL, LibreSSL, BoringSSL".


Glad to see that some of the big players are starting to get into the habit of giving back to the communities building the bricks their success was built upon. Facebook, Google, Apple, Amazon, Twitter, all of them have contributed major pieces of the web fabric in the past few years. The power and money they can divert to such operations is a key factor in producing mature tools which will help foster the web ecosystem in the end.


I wonder why they implemented SSLv3 in the new product, while others are already deprecating and removing it?


Amazon uses this library on all their AWS APIs. They probably still need to support SSLv3.


Amazon disabled SSLv3 on S3 very recently (May 20th), probably as part of moving to s2n.


Plus it is disabled by default, along with RC4 and DHE.


> As a result of this, we’ve found that it is easier to review s2n; we have already completed three external security evaluations and penetration tests on s2n, a practice we will be continuing.

"Our pill has been clinically tested."

What were the results?


Very impressive, thanks Amazon!


> s2n is short for “signal to noise”

Anyone else think this was a contraction of the a11y, i18n, a16z or f6s variety?


TIL that the technical term for those is "numeronyms"[1].

[1] https://en.wikipedia.org/wiki/Numeronym


Here are your options:

sawn scan seen sewn shin shun sign skin soon sown span spin spun stun swan

"Yeh, we're not vulnerable, because we've been using the swan library"


> "Yeh, we're not vulnerable, because we've been using the swan library"

Yeah, who would name a crypto implementation something stupid like "swan". Oh wait... https://en.wikipedia.org/wiki/Openswan


"Sign" was the first one that came to my mind.


Corresponding marketing names and slogans for vulns:

Woodchipper, Bad-scan, NSA has seen your secrets, Hands sewn together, Shin kicker, shunned2death, Sign fail, Skin flogger, Too soon, SownPwn, Span Wham, Spinning in place, Spun out of control, Stunned, Birdcatcher


I thought s2n stood for "secure end to end (two ends)".


I think another implementation is not necessarily a bad thing, but it's a shame that this effort couldn't be combined with LibreSSL and BoringSSL or even OpenSSL under a single project. Having more eyes on one thing would be better, it would seem.


No! Let's have many implementations all with completely separate code bases. Then when the next security bug is found it won't affect the whole internet.


I can see that side of it, but when N >> M I don't really see how that helps things significantly.

  N = number of HTTPS sites
  M = number of TLS implementations


You don't think N(M-1)/M sites not being affected by the next Heartbleed would be significant?
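Put hypothetical figures on it (a sketch; the numbers are made up for illustration):

```c
/* Sites left unaffected when the next Heartbleed hits one of m
 * equally-used TLS implementations serving n sites in total:
 * n * (m-1) / m. */
static long unaffected(long n, long m)
{
    return n * (m - 1) / m;
}
```

With a million HTTPS sites spread evenly over four stacks, that's 750,000 sites dodging the bug; with M = 1 (a monoculture), zero.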


Is it me, or is the Amazon s2n (signal to noise) logo very similar to our company's (signal2meaning) logo?

https://signal2meaning.com/


Sounds great. I hope that Windows support without an additional crypto library dependency is added soon, e.g. using Windows CryptoAPI or Windows Cryptography API: Next Generation (CNG).


I wonder if they will include it in Amazon Linux at some point.


How does this compare to miTLS[1]?

[1] - http://www.mitls.org/wsgi/home


So I bet there will be a rust implementation before long :)


There are some Mozilla security folks testing the waters as we speak.


I hope libcurl supports this soon.


s2n is server-only


I don't think that this is true. See https://github.com/awslabs/s2n/blob/master/bin/s2nc.c


[deleted]


Just in case you're serious: Not using autotools is a feature, not a bug.


Could not agree more. I hope somebody writes a sane toolchain for open source someday.


man, I hope he's not serious


For a library that wants to be ubiquitous (and therefore wants to be cross-platform), it's a bug. Can I expect this to build cleanly on GNU/kFreeBSD or Windows SUA? If it were autotools, I would.


Standards-compliant C is more portable than autotools. Autotools is a workaround for non-portable code.


That's a nice ideal, but reality is that it's really easy to need POSIX APIs and then you're firmly in autotools land. That said, I don't like autotools because it encourages #ifdef style nonsense, but not using it requires a lot more thought and more than a little rolling-your-own code and techniques.


reality is that it's really easy to need POSIX APIs

Sure.

and then you're firmly in autotools land

Not at all. If you need the send(2) POSIX system call, what are you going to do if autotools detects that you're on a platform which doesn't have the send(2) system call?


If you're on a platform that doesn't have send(2), you most likely don't have TCP/IP support either and there's no point trying to run an internet server on it. However it is often stuffed away in an oddball platform-specific header and library, and that's where autotools come in - they detect the appropriate flavour of #include statement and the correct library to use.


No - send(2) was often stuffed away in oddball places, and that's why autotools was important 20 years ago; but it isn't any more.


Sadly I think you still need -lnsl -lsocket on solaris :(


Touché! (I think only -lsocket is required for send(2) though -- unless I'm misremembering, -lnsl is just for DNS lookups.)

Still, there are relatively few portability issues these days:

1. Some platforms don't define MSG_NOSIGNAL. (You can work around this via setsockopt.)

2. Some platforms don't define CLOCK_REALTIME. (On platforms which don't provide that, you should be able to use gettimeofday.)

3. Some platforms don't understand -lrt or -lxnet. (This one is awkward since some platforms require those. You have to either test the compiler or detect the OS.)

4. Solaris needs -lsocket and/or -lnsl for networking related functions. (Either detect Solaris or decide that it's rare enough that you don't really care about supporting it. In spiped I opt for a note in BUILDING telling people how to work around the Solaris standards-compliance bug.)
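Point 1 in particular needs nothing more than the plain C preprocessor. A hypothetical sketch (the SO_NOSIGPIPE fallback is the standard workaround on BSD-family systems; no autoconf involved):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

/* On platforms without MSG_NOSIGNAL (e.g. OS X), fall back to the
 * SO_NOSIGPIPE socket option, which likewise suppresses SIGPIPE on
 * writes to a closed peer. Compile-time #ifdef, no configure script. */
ssize_t send_nosigpipe(int fd, const void *buf, size_t len)
{
#ifdef MSG_NOSIGNAL
    return send(fd, buf, len, MSG_NOSIGNAL);
#else
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &one, sizeof(one));
    return send(fd, buf, len, 0);
#endif
}
```

The cost of skipping autotools is exactly this: you carry a handful of small, well-understood shims yourself instead of a generated 300 KB configure script.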

Compare this to the situation when autotools originated, and we're about 99% better.


Autotools for these pretty simple things is an okay approach, but then people tend to go overboard, and that's when it gets horrible (or autoconf itself breaks libxnet detection because it gets some quoting or who knows what wrong in some release). My personal pet peeve is people detecting endianness in obtuse ways that totally fail during cross-compiling, when there's the perfectly fine ntoh* family.


You don't even need autotools for those simple things though.


You don't need autotools, but autotools is the consistent way to do it; the "installed base" is large enough that any sysadmin knows how to deal with autotools (and will have to for many years). Writing your own custom check, even if that check is much more concise in and of itself, makes it harder for a sysadmin to understand your project build than just doing the same thing as every other project.



