Let's Encrypt root certificate trusted by Mozilla (bugzilla.mozilla.org)
459 points by _jomo on Aug 5, 2016 | hide | past | favorite | 160 comments


The one thing stopping adoption for a lot of people is the lack of wildcard support.

https://community.letsencrypt.org/t/please-support-wildcard-...


The reason they don't support wildcards has been discussed in detail here: "doing domain validation for wildcard certificates is not currently in the ACME spec because it's a hard problem."[1]

The LetsEncrypt CA allows Subject Alternative Names (SAN); given Certbot's automation capability, the real need for an unlimited-subdomain TLS cert over a SAN TLS cert is minimal.

[1]: https://github.com/certbot/certbot/issues/66#issuecomment-16...


SAN isn't a practical solution for cases where you don't want to expose which subdomains exist, or where you allocate them dynamically.


Unless all your subdomains are unique (e.g. coming out of a PRNG) AND there are no public DNS entries for them, subdomain enumeration by DNS or IP space is super easy. Not using SAN because of info disclosure concerns is security through obscurity.


Well yes, that's the idea: Say you have a wildcard DNS entry and you generate cryptographically random hostnames under it, as an added layer of defense against CSRF bugs in the applications running on those hosts.

https://docs.sandstorm.io/en/latest/administering/wildcard/#...
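A minimal sketch of that kind of hostname generation (the zone name here is a placeholder; Sandstorm's actual scheme may differ):

```python
import secrets

def random_hostname(zone: str) -> str:
    """Generate an unguessable hostname under a wildcard DNS zone.

    128 bits of entropy, rendered as 32 lowercase hex characters,
    makes the name infeasible to enumerate even with full knowledge
    of the zone.
    """
    label = secrets.token_hex(16)
    return f"{label}.{zone}"

# e.g. random_hostname("sandstorm.example.com")
```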


That doesn't prevent domain enumeration for your application, though. Once you publish an application, anyone using it can find the address it's hosted behind.


> publish

Sandstorm is a platform for personal computing; each person runs their own applications, much like in a PC. Also, applications don't get hostnames; each document (or equivalent) in the application gets its own hostname.


It's even finer-grained than that. Each session gets a hostname -- if you open the same document twice, it's at a different hostname each time. This also implies that different users get different hostnames, so you can't discover the host another user sees even if you have access to the same document.


Isn't that insecure because hostnames are sent in the clear? (both in TLS handshake and DNS lookup?)


The hostnames aren't authentication, just another layer of defense, mostly against JavaScript attacks (e.g. from hacked applications): https://docs.sandstorm.io/en/latest/administering/wildcard/#...


Right, this is about mitigating the damage when apps have a bug -- risk management. Instead of being exploitable from anywhere on the internet, the bug becomes exploitable only by attackers who have a passive network MITM, which, while possible, is a very high barrier.


For such applications, there are plenty of relatively inexpensive paid options for wildcard certs. I don't think this is something Let's Encrypt should solve. I'd rather see them invest more time into supporting dynamic dns provider domains better, which imho is a much larger issue for small/hobby/free projects.


I'm ready to pay a reasonable amount for a wildcard cert to use with some hobby projects. Is there a trustworthy cheap wildcard cert provider which is not Comodo?


Have you checked StartSSL? They've worked fine for me.


What's the problem with dynamic dns provider domains?


I mean domains on no-ip, freedns, dyndns and the like. Since they're subdomains of a shared domain, they're more likely to hit the default limits with Let's Encrypt. I'd like to see some auto-whitelisting for some of the more popular ones.


I thought sandstorm was a self-hosted thing. Why do the certs need to be signed by a public CA at all? A self-signed certificate is fine (and in some ways better than a public CA) when you can verify the source yourself because you generated it.


Sure, you can use a self-signed cert, if you don't mind going through the process of installing the cert into every browser that you'll use to access the server.

But Sandstorm is designed for sharing and collaboration. For example, you might write a document in Etherpad which you want other people to comment on. It may be tough to get the right certificate into all your friends' and family's browsers.

(Note that Sandstorm actually provides free wildcard certificates if you are OK with using a subdomain of sandcats.io.)


At which point we're back to my original point of why does the information disclosure matter?


Defense-in-depth against CSRF attacks, which are still way too common. Sandstorm can't security-review the apps for you but it can mitigate most vulnerabilities.


I would like to add that security through obscurity is not bad when it's used as a layer in the security cake. Delicious red velvet cake with cool fluffy chocolate frosting and a side of ice cream.


If it doesn't count as secure unless there's perfect information transparency, most of cryptography is based on "security through obscurity."


Putting aside the debate over the real benefits of "not exposing subdomains", wildcard certificates have significant constraints (they match only a single level of subdomain) and security risks[1].

[1]: https://tools.ietf.org/html/rfc6125#section-7.2
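The "single level" matching constraint the RFC describes can be sketched in a few lines. This is a simplified model, not a complete implementation (real ones also handle IDNA, partial-label wildcards, and more):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified RFC 6125-style check: '*' may only replace one
    whole left-most label."""
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    p_rest = pattern[2:].lower().split(".")
    h_labels = hostname.lower().split(".")
    # The wildcard covers exactly one label, so label counts must match
    # and everything to the right of the first label must be identical.
    return len(h_labels) == len(p_rest) + 1 and h_labels[1:] == p_rest

assert wildcard_matches("*.cs.mit.edu", "foo.cs.mit.edu")
assert not wildcard_matches("*.cs.mit.edu", "a.b.cs.mit.edu")  # two levels deep
assert not wildcard_matches("*.cs.mit.edu", "cs.mit.edu")      # bare domain
```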


A good example is the Sandstorm platform, which fits both cases.


Any idea what makes it a hard problem? I see nothing that actually explains the issue.


It's because once someone has a wildcard cert they can use it anywhere, in malicious ways: phishing emails that send you to bankofamerica.banking.io, or other nefarious things.


If I operated banking.io, I could get a bankofamerica.banking.io cert from LetsEncrypt today.

Unless something has changed, I could also get bankofamerica.com.banking.io, from LE or many other CAs. The wildcard issue has no effect on this situation.


The problem is that if you can prove you have access to someuser.github.com, it does not mean you have access to *.github.com. Conversely, if you have access to *.cs.mit.edu, you might want a cert for it. So you can't grant a wildcard based on a test of one subdomain, nor can you simply test every possible subdomain. That's why they are looking at DNS verification, which presumes you have access to DNS. But that also has problems, as shown above.

tl;dr: one verification method limits legitimate use, while another might expose domains that shouldn't be exposed.


That's not how domain-validated wildcard certificates from any CA have ever worked, and it's not how any proposed wildcard-supporting future LetsEncrypt would work.

I can, today, get a valid certificate for myname.github.io (getting GitHub to serve it is a different issue, but from an MITM, proof of ownership, and CA/Browser Forum standpoint, it's properly assigned.) That's not surprising or weird, and it's exactly how it's supposed to work.

A certificate for myname.github.io does not give me a valid certificate for someone-else.github.io or github.io. Similarly, a certificate for *.myname.github.io, which can also be procured today from dozens of CAs, doesn't cause panic or change the situation at all. I just can't get it from LetsEncrypt.

If I can only prove control of 'cs.mit.edu', I can get a certificate for *.cs.mit.edu, but not all of 'mit.edu'. If I can prove ownership of 'mit.edu', I can get a certificate that covers *.mit.edu. This is how it should and has always worked. It just doesn't work via LetsEncrypt.

Basically, heavier authority moves right, never left, in wildcard certificates.



Are you saying you're OK with co.uk getting a cert for foo.co.uk, even though they're under different administrative control? I'm not OK with that.

I understand that other CAs provide wildcard certs, but frankly I see them as a giant problem. It's bad enough that I only need something listening on port 80 to prove that I control a domain. Let's not make it more attractive for people to start fooling Let's Encrypt into getting certs for domains they don't control.


CAs are forbidden from issuing a cert for *.co.uk. The Baseline Requirements say:

> The CA MUST establish and follow a documented procedure that determines if the wildcard character occurs in the first label position to the left of a "registry-controlled" label or "public suffix" (e.g. "*.com", "*.co.uk", see RFC 6454 Section 8.2 for further explanation).

This basically means that the CA should check the Public Suffix List before they issue a wildcard.

As a 'just in case' measure, most modern browsers also reject certs where the wildcard is directly below something on the PSL.
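That check can be sketched as follows. The suffix set here is a tiny hardcoded placeholder; a real CA would consume the full list from publicsuffix.org:

```python
# A tiny illustrative subset of the Public Suffix List. A real
# implementation would load the full, regularly updated list.
PUBLIC_SUFFIXES = {"com", "org", "co.uk", "github.io"}

def wildcard_is_issuable(name: str) -> bool:
    """Reject wildcards whose label sits directly on a public suffix,
    per the Baseline Requirements. Simplified sketch only."""
    if not name.startswith("*."):
        return True  # not a wildcard; other validation rules apply
    base = name[2:].lower()
    return base not in PUBLIC_SUFFIXES

assert not wildcard_is_issuable("*.co.uk")      # registry-controlled: forbidden
assert wildcard_is_issuable("*.example.co.uk")  # under a registrant's domain: fine
```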



So pick a different DNS zone that isn't in the PSL but that is not an administrative boundary. They exist.

The Public Suffix List is an imperfectly maintained list. You're relying on Mozilla to maintain it. You also never know if someone is selling names below their own zone. What if Mozilla decides to no longer maintain it? What if the volunteers stop maintaining it?

Maybe the PSL is a good start, but I would rather not rely on it. It's a convenience vs. security balance issue, and I'm leaning towards security.


> Are you saying you're OK with co.uk getting a cert for foo.co.uk, even though they're under different administrative control?

This argument hasn't been made, or hinted at, by anyone, anywhere in this thread. And $50 isn't the difference between that being possible or not.


You can require that the validation DNS record exists on the root domain (mit.edu) to prove you control the entire domain. I think there are ways to fix this.
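This is roughly the shape of ACME's DNS-01 challenge: the CA hands you a key authorization, and you publish a derived TXT record at _acme-challenge under the domain being validated. A minimal sketch of the record value per RFC 8555 section 8.4 (the key-authorization string below is a made-up placeholder):

```python
import base64
import hashlib

def dns01_txt_value(key_authorization: str) -> str:
    """TXT record value for ACME DNS-01: unpadded base64url of the
    SHA-256 of the key authorization (RFC 8555, section 8.4)."""
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Hypothetical input; real values come from the CA and the account key.
record = dns01_txt_value("some-token.some-account-thumbprint")
# Publish `record` at _acme-challenge.mit.edu to prove control of mit.edu.
```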


But you can get these at other providers already. And I haven't seen many phishing pages with SSL; it's almost always plain HTTP.


Wildcards are in the version of the ACME spec at https://letsencrypt.github.io/acme-spec/:

> A server MAY consider a client authorized for a wildcard domain if it is authorized for the underlying domain name (without the “*” label).

Although this seems to be gone from https://ietf-wg-acme.github.io/acme/, which I think is the later version.


I spoke about this earlier this week at a meetup. The consensus in the group was that wildcard certificates are desirable because they're easier to manage. If your tooling is good, the automation afforded by ACME can obviate the need for wildcard certificates.


The problem is that Let's Encrypt enforces rate limits on how frequently you can request new certificates. According to the documentation[1], the limit averages out to one certificate (which can include up to 100 hostnames) per roughly 8 hours, per domain.

That's probably good enough for almost everyone who uses hostnames to represent physical machines or services. But it's totally unusable if you want to create certificates on the fly in response to user signups.

[1]: https://letsencrypt.org/docs/rate-limits/
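The arithmetic behind that "roughly 8 hours" figure, assuming the limits described above (20 certificates per registered domain per week, up to 100 hostnames per certificate):

```python
# Back-of-the-envelope math for the rate limits as described.
CERTS_PER_WEEK = 20
HOSTNAMES_PER_CERT = 100
HOURS_PER_WEEK = 7 * 24  # 168

# One new certificate every 8.4 hours on average.
hours_per_cert = HOURS_PER_WEEK / CERTS_PER_WEEK

# Batching into SAN certs helps, but only for names known up front.
max_new_hostnames_per_week = CERTS_PER_WEEK * HOSTNAMES_PER_CERT
```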


This is the real issue with not allowing wildcard certs. If the rate limits were something that just cut in at abuse-levels, there'd be no need for them. Sites with per-user subdomains (to get security features like cookie isolation, break same-origin effects, etc) run into the limits really quickly, and must currently rely on a wildcard.

I've been calling this "the tumblr scenario", but people seem to think there's a way around it without just using another CA.


I deploy all the different little code projects I make to different domains and I've hit the Let's Encrypt rate limit. If (like me) you waited a couple months to secure these little side projects, you'll hit the rate limit too. That being said, I just waited a week -- now everything is secure.


I'd imagine you might be able to reach out to them in that case.


Even with good tooling, the current restrictions on Let's Encrypt make it impossible. You can batch many subdomains into one request, but if you're requesting subdomains by customer username or similar, you can't exactly wait around to batch them. And you can only make a maximum of 5 requests per week, so unless you're incredibly tiny it's non-viable.

I don't understand why Let's Encrypt can't consider validation of the root domain good enough to produce a wildcard. Email at the root domain is what most providers use, not exactly much worse.

EDIT: It's now 20 per domain per week, better but still not viable for even a mid scale operation. A single wildcard is a much nicer and easier to maintain solution in any case.


This 100% blocked me. I tried to work around it by using lets encrypt to provision me certificates on the fly, but I got rate limited.

Then I started speccing out a way to get single certs for many subdomains in one request using SAN, and the whole thing looked like it would require more development time compared to just buying a wildcard cert. Very frustrating.


I mean it kind of makes sense doesn't it? If you need enough certs (> 20 per week) to hit the rate limit, you're probably running some sort of business -- in which case you probably shouldn't be depending on a free service and can likely afford the cost of wildcard certs.


I'm not running a business. I'm writing an open source library that needs certs because browsers are restricting certain features to pages served over HTTPS.

http://docs.happyfuntimes.net

One cert per game * one hit game = need 10s of thousands of certs. But even without a hit game a single game jam would hit the limits

For reference here is an example of a similar problem and solution but it required $$$$$$

https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...

PS: I know this is not a problem with Lets Encrypt. They are not trying to solve this problem.

It is a problem that needs a cheaper solution at least for open source projects.


    > It is a problem that needs a cheaper solution at
    > least for open source projects.
Maybe games.example.com/foobarbaz instead of foobarbaz.example.com?


How would this help? Like Plex, each game is running a local webserver on a user's home computer/Apple TV/Android TV/etc. It's those devices that need the certs in order to serve content to browsers on players' smartphones.


While there are reasons for wildcard support, I find the support for SAN and just using many certs to be just fine in a lot of ways. I've purchased wildcard certificates in the past mostly because it was more cost-effective than buying individual certs for the mail, testing, www, and so on subdomains, especially if there's a cost associated with adding more subdomains later.

Having a scriptable renewal process and no real cost associated with getting additional subdomains on the cert (besides operator time) covers some significant portion of the generalized need for wildcard certificates. I know there's other reasons, sure. But it's definitely not stopping me.


I feel like this is one of those "faster horse" moments.

You don't need a wildcard cert! Just get certs for each one of your subdomains, even internally.


I get where you're coming from but I depend on a subdomain for each one of my user signups and we're talking thousands. LE limits us to 100 subdomains on a single cert and would require a restart for each signup. I'll definitely be using LE for other projects but many use cases require wildcards.


And most of those wildcard use cases are business sites/applications that can/should just buy a wildcard cert. That said, I wouldn't mind seeing better support for the freedns/dyndns subdomains.


Some people are using dynamic dns records, so the list of subdomains is not static. This is where wildcard ssl is perfect. And what if you have hundreds or thousands of subdomains? It unnecessarily bulks up your configuration as you have to do a cert per site instead of a single one.


Check out GlobalSign's Cloud SSL product if you want a cert that can mix wildcards with top-level SANs... not free, but: https://www.globalsign.com/en/cloud/


Or AWS Certificate Manager for free if you use AWS.



Just to be clear, this is important because eventually Let's Encrypt wants to no longer have to cross-sign their certificates for them to be considered valid.

For that to happen they have to be added as a trusted CA in most major platforms (and Firefox which has their own CA store for some reason).


> (and Firefox which has their own CA store for some reason).

Firefox has its own CA store because it's built for all 3 major (desktop) platforms. OSX and Windows have their own, but Linux does not and uses Mozilla's.


More importantly, Firefox has its own CA store because it was derived from Netscape, which had its own because they came up with the whole SSL thing way back. The subsystem is still called NSS, even though there have been backend changes (pkix).


Does Chrome provide its own CA store on Linux? It's also built for all 3 major desktop platforms but uses the OSX and Windows stores.


They just use a copy of Mozilla's one on Linux. Of course, distro packages of Chromium if they use the system NSS library may well use some system CA store.


Would be interesting if there's a roadmap for which root programs are left to be added.

I saw their aim is to migrate to their own root CA by the end of the year, but I wonder if this is realistically feasible?


We have also applied to the Apple, Microsoft, Oracle, Blackberry, and Google root programs.

We had hoped to be accepted by all major programs by the end of this year, but even if that happened it would still take years for our root to sufficiently propagate.


Excellent news. The more trust the better. It's still no good using LE for API endpoints, as many client libs (Java, etc.) don't trust it or its cross-signer.


The most recent Java update added the DST (IdenTrust) root certificate. Of course it's going to take a while until that version is widely deployed, but this gives vendors the option to tell API consumers to just update Java (as opposed to manually modifying the key store), so that should help with adoption.


Oh, that's excellent news!


It is a good practice to use key / certificate pinning for your API endpoints anyway, and there you can provide any trusted KeyStore you want...
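What pinning boils down to, independent of Java's KeyStore machinery, is comparing the peer's certificate against a fingerprint you ship with the client. A sketch of the core check (illustrative only, not a drop-in TLS hook; in practice you'd feed it the DER bytes from something like ssl.SSLSocket.getpeercert(binary_form=True)):

```python
import hashlib

def matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Compare a server certificate (DER-encoded bytes) against a
    pinned SHA-256 fingerprint, tolerating colon-separated or
    upper-case fingerprint formats."""
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    return fingerprint == pinned_sha256_hex.replace(":", "").lower()
```

A client that pins this way trusts exactly one certificate, regardless of which roots the platform's store ships with.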


Java 8u101 added trust for their cross signer, IdenTrust.

https://bugs.openjdk.java.net/browse/JDK-8154757


Hacker News should switch from Comodo to Let's Encrypt. Scumbags attempted to trademark Let's Encrypt. https://letsencrypt.org/2016/06/23/defending-our-brand.html


HN uses ycombinator's wildcard certificate, and it doesn't expire until August 2019. It's likely that they don't want to go through the trouble until it's really needed.


HN, of all places, may take the trouble to actually make a small statement however. People would actually notice here.


> may take the trouble to actually make a small statement however

Another move in the direction of political correctness and making decisions according to optics. Have you considered valid business purposes for a company doing a particular act or are you just deciding that everyone thinks this was a "scumbag" action?


Supporting a non-profit organization working for a more secure Internet could easily fall under a "valid business purpose" for a site like HN.

HN's entire business mandate, such as it is, seems to be to encourage technological growth in the interest of profiting tangentially off the growing pie.

Edit: as for whether Comodo had a "valid business purpose" for the action, sure, in the same sense that patent trolls have a valid business purpose. Read this statement by their CEO: https://forums.comodo.com/general-discussion-off-topic-anyth...

It's surreal. He tries to shift the blame to Let's Encrypt for choosing 90 days as their default certificate expiration date, somehow implying that 90 days is Comodo IP.


Have you considered valid personal purposes for shoplifting, or are you just deciding that everyone thinks this was a "scumbag" action?


With Let's Encrypt, the trouble became "Whoaaa I just ran a command and everything works like magic!"


Sometimes magic isn't a good thing, especially when you're operating a service used by as many people as hn daily. Magic means things happened that I didn't explicitly instruct.


That's called automation and it's a good thing.


Automation is not a universal good, it can be useful and it can be detrimental. Like all tools it should be used with care.


Exactly. Value is a vector: automation increases the magnitude, but the direction depends on exactly what is being automated (and how reliable it is).


"Magic" is automation that is hidden from you. That one is not universally good. The one thing you don't want on the configurations of a server farm is "magic". It is simply impossible to manage.

But it's a good thing LE is available without the magic too.


> The one thing you don't want on the configurations of a server farm is "magic". It is simply impossible to manage.

Are you managing all your TCP connections manually then? It's a miracle that "magic" works without you having to manage it.


Yeah until two hours later when you notice it messed with random shit it wasn't even supposed to touch. At least, that was my experience; I suppose it depends on how common your setup happens to be.

I still love Let's Encrypt for its principle, but I don't dare running it in full auto mode anymore. A few custom shell scripts get the job done easily enough.


The auto mode just confused me. Every setup is different. Some use Apache, nginx, or both, possibly proxied behind HAProxy or Varnish. Then there's stuff like cPanel or Virtualmin. So you have to expect any combination of those. Their scripts would have to accommodate so many different things. How could I anticipate what it would do?

Am I missing something that would make this magically work?

Installing an SSL certificate is relatively easy anyhow. It's one of the most common things you do with an HTTP server.


Depends on where you want SSL termination, and if you want it federated out... The default Let's Encrypt project(s) integration tooling afaik isn't used by many people, but there have been a lot of tools to do more simple ACME integration into various web servers, reverse proxies and other configurations. It's pretty cool.

I'm overall, very happy that it works at all... Some things I'd like to see...

Namely, automatically allowing higher thresholds for domains used/provided by dynamic DNS providers such as freedns, which have more subdomains that may want/need to register than the limits allow.

Have a more transparent interface for requesting higher thresholds, or for submission of virtual tlds for those domains that offer subdomains to others.


Dynamic DNS providers that are on the Public Suffix List are essentially treated like TLDs in terms of rate limiting, meaning each client subdomain has a separate counter. They should probably be on the PSL anyway; browsers rely on it for cookie scoping.


True enough, but would be nice if it detected that the SOA IP corresponds to a public suffix dns provider.

Also, not sure where to put public suffix list additions for such a provider... I was going to add bbs.io, as well as say the top 25 domains for freedns.afraid.org, but wasn't sure where to add them.


The process is described here[1]. It needs to be performed by the domain owner.

[1]: https://publicsuffix.org/submit/


Use lego. It works brilliantly in DNS mode.


There's always the option of running certbot in certonly mode - in fact, that's what I do for the majority of my setups (mostly because certbot doesn't support nginx).


Well, last time I tried, the compilation failed. I didn't understand why it had to do compilation.

Instead of magic, I would rather prefer simple and clear steps.


If you're running in a standard config. If your config is nonstandard (you have multiple servers that need a cert to stay in sync, you're not running a common web server, you're on a private network, you're pinning a client to a public key etc.), it's still easy but not a single command.

For sufficiently nonstandard setups, it's often easier to do the commonplace email-based verification than make Let's Encrypt work. I'd love to switch all of our internal services at $dayjob over to LE, but emails to webmaster@ already go somewhere useful, and setting up externally-visible DNS and fake servers is much more involved. (Either we write some code, or we do it by hand each time, and if we're doing it by hand, it's easier to just handle an email.)


Why not terminate public TLS/SSL at the proxy level, then use internal PKI from the proxy to backing servers? It'd be easy enough to have a single ACME server that handles all ACME requests forwarded from the firewall(s), then federate that configuration out as needed.


The configs I was describing in my first paragraph don't involve proxies. Adding proxies doesn't really solve the problems, and even if it did, writing an in-house ACME server is a lot more work than "run this magic command".

The config I'm describing in my second paragraph is for internal web services within a corporate network, that aren't public-internet-facing at all. I don't want to have all my clients (including people's phones) add an internal PKI because that's just bad security practice.


Not for wildcards. Also, if you don't want to take your website down during the process, the command line becomes slightly more convoluted.


There are multiple ways of using Let's Encrypt that don't have to take the website down.


Yes, which are more convoluted than the one-liner to generate one automatically.


True, that was really ugly.

However, I think some nuance is important here. Comodo has some really good people (like Rob Stradling) that are doing a lot of good work for PKI.

E.g., the https://crt.sh tool is fantastic.


Have any other browsers announced their intent to do the same?


Other browsers do not have their own certificate stores but use those provided by the OS. Next interesting things are whether Windows and Mac OS X add the root certificate. I think Linux distributions tend to follow Mozilla's trust.


RE: Linux, You're correct, they're provided through the ca-certificates package[0]:

"It includes, among others, certificate authorities used by the Debian infrastructure and those shipped with Mozilla's browsers. "

RE: OSX - If you can get into Mozilla's trust stores, it's the same steps (and pro forma, more or less) [1]

[0] https://packages.debian.org/wheezy/ca-certificates

[1] https://www.apple.com/certificateauthority/ca_program.html


Historically ca-certificates has included certificates that Mozilla had declined to include due to lack of audits, most notably the SPI root certificate because some Debian infrastructure relied upon it. They've also generally been relatively slow in removing root certificates after they're removed upstream, despite any removal essentially being a security issue. I think as of six months or so ago there is no longer any difference between ca-certificates and upstream.


Since Google is already cracking down on OEMs modifying the Android root store for various countries, I think it would make sense for Chrome to have the same root store as Android does (especially in light of Lenovo's Superfish, Dell's eDellroot and so on).


Chrome already has the ability to de-trust things trusted at the OS level, FWIW. It might be interesting to see how many OEM-installed certificates they could de-trust by default.


Question: is there any possible case of bad apples that makes Let's Encrypt suddenly lose their trust? E.g., because it's free, it's used by "bad guys", just like the .info TLD.


The purpose of Let's Encrypt, and the SSL certificate infrastructure in general, isn't to prevent "bad guys" from getting certificates. It's to ensure that if you own the box, the certificate verifies that a web client is speaking directly to that box with nothing in between. (Or more generally, directly to an authorized end point by the owner of that DNS entry. Authority can be delegated.)

In other words, it keeps bad guys out of the middle, not the end point. That's all SSL can do, even if it works perfectly. Bad guys will still own end points, in both the conventional sense of the word own and the pwning sense of the word own. SSL can not (directly) do much about that. If you speak SSL to a bad actor, well, there aren't any other actors between you and the bad actor, but you're still speaking on an encrypted, authenticated channel to a bad actor.

This is in contrast to the DNS infrastructure in which it is sensible for a TLD owner to attempt to prevent "people they don't want on their TLD" (more generally than "bad guys" since a lot of the restrictions enforced are far beyond that).


Source?

I remember reports in the past decrying CAs for issuing certificates for phishing sites in the style of "gooogle.com" etc.


Some in the industry, including some CAs (Certificate Authorities), believe that issuing certificates to "malicious" websites should be against the rules of the CA/B Forum, the industry body that sets guidelines for CA behavior.

You are right that some news articles and reports continue to chastise CAs who issue to sites in the style of "gooogle.com". Do not let them trick you - that is only their opinion on the matter. It is NOT against the industry rules to issue those certificates.[1]

What IS against the rules is to issue a certificate for "domain.com" to someone who has not proven ownership of "domain.com". That is the BIG no-no that leads to consequences such as being un-trusted. There are standardized methods for meeting the burden of proof, and every CA uses more or less the same mechanisms to do so.

Let's Encrypt, or any CA, may issue a certificate to "paaypal.com". Even if that site was a Paypal phishing site, a CA is under no obligation to revoke the certificate or prevent that user from getting another certificate.

Some CAs CHOOSE to do this. To some extent, I think it is sensible to try to thwart malicious use. However, the case is often made that CAs and SSL certificates are not meant to "police content", and furthermore, that they are not very effective at doing so.

Flagging a malicious site through a tool like Google's SafeBrowsing is significantly more effective than revoking their SSL certificate.

[1] Except for a more recent stipulation that Microsoft added to their root program. If they request the revocation of a certificate they believe is malicious, the CA is expected to comply. If they don't, they are only at risk of being punished by Microsoft.


That's properly understood as a variant of getting a certificate of a domain you don't own, for practical purposes. And the point there is still that the "bad guys" shouldn't be able to get a cert that appears to identify them as Google, not that the bad guys can't get a cert. It's two different things. It is not a bug for Let's Encrypt to hand out certs to "bad" people.


That's fair and makes sense. Still, do you have a source for that type of CA policy? With all due respect, I can't tell if this is just your opinion or a codified threat model.


If Comodo and Symantec are able to retain their trust status, then Let's Encrypt most certainly can.


Do you mean .tk? AFAIK .info costs money.


Yes, but it's usually cheaper.


what's wrong with the .info tld?


People distrust it because it's 99% spam?


.info is really not all that bad, in the grand scheme of things. .biz is much worse (and has been since its launch in 2001!), and some of the new TLDs like .top and .xyz have been abused pretty heavily as well.


I actually like the .info tld... perfect domain for "informational" websites, wikis etc. Though there may be better ones with the recent goldrush of new tlds available.

Though I also liked .io, and felt the higher pricing and harder registration kept a lot of the squatters away.


I think there are 2 factors. 1) Do LE do a good job of ensuring that they only grant certificates to domain holders. 2) Do they do a good job of representing what they're signing (authentication versus identification http://imgur.com/a/fAaYH)

The certificate is a kind of "encryption only" certificate; it's treated as a second-class citizen (you might get a grey lock, for example), so it encrypts the communication but isn't very useful for convincing you that you're talking to your bank when you aren't.

Of course, if LE don't do a good job of (1) then we're f*ed, because they'll issue certificates to bad actors. And LE have a hard job: now that they're trusted, they're a good target for DNS cache poisoning etc.


I find it a little hilarious that the cert for the Test URL, https://helloworld.letsencrypt.org, is a 90-day certificate that expired a long time ago.

https://i.imgur.com/1bQLHuF.png


If you read through the bug linked inside the bug, I think they had to do that as part of the review process. They had to wait 90 days after the initial cert was issued to have it reviewed. Since LE only issues 90-day certs, this means they'd have to review an expired one.

The whole Mozilla CA review process is a little crazy and the thread talks about ways they could reform it in the future. (overview: https://wiki.mozilla.org/CA )


It's funny because LE's stance has been that 90-day certs are cool because it is easy to automate renewal. However, no one bothered to set up renewal on their example server.


You're missing the point here. helloworld.letsencrypt.org was used as the Test URL in the root inclusion ticket[1]. The certificate was renewed multiple times[2], but the expired certificate had to be restored at one point in order to satisfy some of the requirements of the root inclusion process.

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1204656

[2]: https://crt.sh/?q=helloworld.letsencrypt.org


The use of an expired certificate on that domain is deliberate due to the circumstances around the procedures required to get trusted. Here's the full explanation:

https://www.reddit.com/r/programming/comments/4wb7c2/lets_en...


If you read the poster's comment, he explains why: they had to leave the original cert with the hash in place while the CA was being validated, despite it being expired.


I think Comodo had the idea to be added as a root CA by Mozilla first. They should sue.


Classic Let's Encrypt, stealing all of Comodo's "innovations."


/s


Comodo also had the idea to trademark "Let's Encrypt" [0].

[0] https://letsencrypt.org/2016/06/23/defending-our-brand.html


That's the joke


More specifically, Comodo's defense of that included "We did a 30 day free SSL certificate first! Let's Encrypt is copying our business model!"

The free certificate they were referring to was a time-limited free trial that you could use once and then start paying for.


They are truly amazing. Every time I think they've scraped the bottom of either the incompetence or the sleaze barrels, Comodo manages to get even worse.

One of their sales droids hassled me a while back with some deeply slimy tactics, so I started grilling him about this and the various hacks they've had. Flat out lied about ever having had unauthorized certs made, and claimed he'd never heard of LE, but he just knew they'd never do that, and I must have bad information. (The first part of the second part I can believe.)

Who knows, maybe Comodo could come back after some strategic executive-ectomies. Microsoft seems to be trying hard to rejoin the ranks of the not-outstandingly-terrible. But as of now, I have serious doubts I'd ever choose their services over someone more trustworthy, like, say, Bernie Madoff.


It's about time that HN switches to Let's Encrypt.


The process for handling LE cert renewals is quite different from that for traditional certs with manual trusted-operator procedures.

It's a lot more work to switch an existing cert environment to LE than to start from zero.


Why? Is there something wrong with HN's current cert?


1. Funds Comodo's shady behaviour, and arguably increases their brand recognition

2. Ridiculous expiry time


You know that Comodo won't issue a refund if they decide not to use them any more, right? They've paid the money.

I can understand moving things to show support for better alternatives but let's not kid ourselves - moving now or moving when the cert expires, doesn't hurt Comodo any differently.


> Ridiculous expiry time

Looks like a cert is similar to a President. Should be around for 5 years.


Suffice it to say that it has been issued by Comodo.


It appears that HN's cert is a wildcard, so they can't switch yet anyway.


Does it need to stay that way? I'd assume there are only a couple of subdomains (www, news, maybe a handful more?), so replacing them with SANs or separate certificates shouldn't be too much work.
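
For what it's worth, covering a handful of hostnames with SANs is easy to verify locally. A minimal sketch using OpenSSL (1.1.1+ for `-addext`), with made-up example hostnames:

```shell
# Create a throwaway self-signed cert covering two hypothetical hostnames
# via subjectAltName, then print the SAN list to confirm coverage.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/san.key -out /tmp/san.crt -days 1 \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:news.example.com,DNS:www.example.com"

# Show which names the cert actually covers
openssl x509 -in /tmp/san.crt -noout -ext subjectAltName
```

With Certbot, the equivalent is passing multiple `-d` flags (e.g. `certbot -d news.example.com -d www.example.com`), which yields a single cert listing every name as a SAN.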


Let's Encrypt is pretty great, but if you have the money get a paid SSL. Not all SSL certs are created equal.


> Let's Encrypt is pretty great, but if you have the money get a paid SSL. Not all SSL certs are created equal.

Say what? Besides the faux security of the green bar for an EV cert, what's the difference between a LetsEncrypt and a paid one? (non-EV)


Heads up: I work for a company that speeds up the background checks used for EV.

Tying real world identities to public keys is very much a part of crypto. Windows does it with package signing and EV, Debian does it with people holding up their passports at Linux events, and web sites do it with EV HTTPS.

And yes, we (CertSimple) are looking at Certbot support for EV.


FYI: Your server's vulnerable to CVE-2016-2107 and is getting an F from ssllabs.

https://www.ssllabs.com/ssltest/analyze.html?d=certsimple.co...


Now back to our previous A+. Thanks for the heads up.

I had been logging in to update manually, but the rate of new, severe OpenSSL vulnerabilities (and the risk of missing one, fairly obviously) is high enough that I'd rather just apply them immediately. yum-cron is now enabled to apply OpenSSL updates as soon as they're issued.
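
For reference, the relevant knobs on a CentOS/RHEL 7 box live in yum-cron's config. A sketch of the security-updates-only setup described above (file path and values assumed; check your distro's defaults):

```ini
# /etc/yum/yum-cron.conf (excerpt)
update_cmd = security      ; only pull security updates
download_updates = yes
apply_updates = yes        ; install automatically, don't just download
```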


Lol


> Heads up: I work for a company that speeds up the background checks used for EV.

How much faster? The one time I've gotten an EV cert it took a couple hours to get verified. Didn't seem too long at all and compared to the time to plan the swap out of the cert in production, the wait was a non issue.

The verification itself was a joke though. It was basically just a phone call asking "Are you X? Ok great! Here's your cert!"

> Tying real world identities to public keys is very much a part of crypto.

Joe User isn't going to look at the details and validation chain of a certificate. The whole idea of the green bar for "more trusted" is a scamola by the cert providers as they saw the writing on the wall for their margins going to zero for domain validated ones (granted they saw it early enough to get traction on it!).


> How much faster?

https://certsimple.com/about

> The verification was basically just a phone call asking "Are you X? Ok great! Here's your cert!"

Congratulations, you have an active registered company that was already well known to qualified third parties. Before that phone call happens, the CA has to verify your existence and status by government records and a qualified third party. There are additional steps for certain company structures. They don't just call you and you get the cert - and people are often rejected.

> Joe User isn't going to look at the details and validation chain of a certificate.

Nobody is expecting users to look at the cert details or verification chain. Just the name in middle of the address bar.

From the front page of HN right now:

https://hackernoon.com/this-is-what-apple-should-tell-you-wh...

https://d262ilb51hltx0.cloudfront.net/max/800/1*DzlpfS4cesC6...

"The green text on the address bar shows the site really belongs to Apple Inc."


> Tying real world identities to public keys is very much a part of crypto. Windows does it with package signing and EV, Debian does it with people holding up their passports at Linux events, and web sites do it with EV HTTPS.

This would be a legit argument if EV HTTPS actually achieved that goal. They don't, though: the identity verification around EV HTTPS is a joke.


Can you elaborate?


The identity verification is essentially a phone call for most CAs, which verifies nothing. Some CAs do better, but it only takes a few bad apples, and in this case it's not a few bad apples--it's mostly bad apples.


Before that phone call happens, a bunch of other work has to happen first - see the other answer for details.

All CAs are audited against the same guidelines: they should be requiring the same levels of proof. From what I've seen (our tech works with different EV providers) that's generally the case.

While EV is certainly more than a phone call, there are still flaws. The EV guidelines change over time, and I'd like to see them tightened with additional requirements in particular circumstances.


One of them is tied to a root CA and works on all devices. The other is not, and does not.


Certificates issued by Let's Encrypt are cross-signed by IdenTrust and are trusted by all major browsers[1]. This is just about their own root certificate. Being cross-signed by an existing, trusted CA is a common practice for new CAs, as it would take years for the CA to become usable in practice otherwise.

[1]: https://community.letsencrypt.org/t/which-browsers-and-opera...
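
Cross-signing just means the same intermediate key and subject get certified by more than one issuer. A rough illustration with OpenSSL, using entirely made-up names:

```shell
# Two independent self-signed roots
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/rootA.key \
  -out /tmp/rootA.crt -days 2 -subj "/CN=Root A"
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/rootB.key \
  -out /tmp/rootB.crt -days 2 -subj "/CN=Root B"

# One intermediate key/CSR...
openssl req -newkey rsa:2048 -nodes -keyout /tmp/int.key \
  -out /tmp/int.csr -subj "/CN=Example Intermediate"

# ...signed by both roots: same subject, two different issuers
openssl x509 -req -in /tmp/int.csr -CA /tmp/rootA.crt -CAkey /tmp/rootA.key \
  -CAcreateserial -out /tmp/int-a.crt -days 1
openssl x509 -req -in /tmp/int.csr -CA /tmp/rootB.crt -CAkey /tmp/rootB.key \
  -CAcreateserial -out /tmp/int-b.crt -days 1

openssl x509 -in /tmp/int-a.crt -noout -issuer -subject
openssl x509 -in /tmp/int-b.crt -noout -issuer -subject
```

A client that trusts either root can then build a valid chain, which is why LE certs already work even where ISRG's own root isn't shipped yet.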


LE is tied to a root CA (IdenTrust's). The support is almost universal, with only obsolete OSs not trusting them: https://community.letsencrypt.org/t/which-browsers-and-opera...


Despite a fairly large number of users on XP still (2.5% of total users on some sites I manage), I'll give you that it works on non-obsolete OS browsers. However, those are not the only pieces in the world of security.

Java, for example, only started support as recently as 3 weeks ago (2016-07-19)


XP is supported. There was an issue due to some schannel bug in XP choking on the issuer certificate, but that was fixed earlier this year.


Lots of people care about, and make their money off, users with "obsolete OSes and browsers".


And they're just as likely to have problems with any other CA.


Including HN / ycombinator?


compatibility, at least in the short term?

I put a LE cert on a project, and some folks calling the API with old Java clients couldn't trust the cert. They could have upgraded, but it was easier to get a commercial cert and be done with it.


Wildcards.



