DNSSEC in 2014 is a liability, not a benefit. Essentially, all you're signing up for when you use a 3rd-party DNSSEC server is that someone's misconfiguration is silently going to break your queries.
Virtually no important services on the Internet rely on DNSSEC. Using it now is pure downside.
What's especially funny about this is that your DNS queries to these third-party servers are not themselves encrypted; in other words, you're sending your DNS UDP packets halfway across the Internet for the pretense of cryptographic security.
> Essentially, all you're signing up for when you use a 3rd-party DNSSEC server is that someone's misconfiguration is silently going to break your queries.
Isn't this literally true of any service?
> What's especially funny about this is that your DNS queries to these third-party servers are not themselves encrypted;
This would be relevant if people were aiming to protect the confidentiality of their DNS queries. They're not, though. They're trying to protect the integrity of the DNS queries.
Anyhow, I find it useful in that I can store SSHFP records in my DNS zone and then use `VerifyHostKeyDNS` when I'm sshing into servers. This becomes especially useful if I've got servers sshing into other servers and I don't want to have to lug around a known_hosts file on every server.
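For anyone who wants to try this: ssh-keygen can print the SSHFP records for you, and `VerifyHostKeyDNS` is a standard ssh_config option. A rough sketch of the pieces (the hostname and fingerprint below are placeholders):

    # On the server, emit SSHFP records ready to paste into the zone:
    $ ssh-keygen -r host.example.com
    host.example.com IN SSHFP 1 1 0123456789abcdef0123456789abcdef01234567

    # Publish those records in the (DNSSEC-signed) zone, then in
    # ~/.ssh/config or /etc/ssh/ssh_config on the clients:
    VerifyHostKeyDNS yes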
Saying things like "using DNSSEC is pure downside" is so easily demonstrably false and hyperbolic that it may cause people to ignore the rest of your argument. Which is a shame, because I know you have very insightful and interesting things to say about a lot of things.
I believe DNSSEC is, across the board, pure downside. But that statement is especially true here, in the case of a third-party resolver with DNSSEC enabled. And that's the case I was talking about. I don't think we have to reach the wisdom of SSHFP records to refute your rebuttal.
I'm just going to leave this here and let people make up their own minds. I don't really have an opinion but I think you are being excessively negative without presenting any evidence.
Downsides of DNSSEC are pretty well known, actually.
If you're worried about security yet still want to use a third-party DNS resolver, you might be much better off with OpenDNS.com and their DNSCurve client (called DNSCrypt), which encrypts and authenticates all communications.
"They're trying to protect the integrity of the DNS queries."
Then why are they using a DNS cache run by a third party (who they do not know) on the open internet, and shared with other users they do not know? The entire focus on DNS insecurity that brought about the renewed interest in "DNSSEC" can be traced to a "cache poisoning" scare back in 2008. The rational response to that aspect of DNS insecurity is to move away from shared DNS caches on the open internet, aka "open resolvers".
If "integrity" were the goal, then I would think downloading a trusted copy of the root.zone and running your own root and DNS cache on localhost would be a better approach. Additionally, if you have many double-digit GB's to spare, you could add in copies of the com, net, or org zones. Allowing users to download trusted copies of these is required by the ICANN rules. Alas, Country Code TLD's are not governed by these rules. Imagine if the telephone company would not allow you to have a copy of the telephone book and said you could only call the operator when you needed to look up a telephone number. That's what it's like when public registries withhold zone files. But I digress...
Setting up and maintaining a local DNS cache is a lot easier than setting up DNSSEC.
Really, the only queries you need to be sending out over the open internet are ones to the third party authoritative nameservers that are responsible for the information you are after. That means non-recursive queries only. I have been doing this for years and it works beautifully. In some cases, it can be faster to make only a series of non-recursive queries to authoritative nameservers than to make a recursive one to a cache. I also store the DNS information for hosts I know I might revisit in HOSTS and zone files. This minimizes my need to use DNS. And looking up information in HOSTS and zone files (in cdb format) is always faster than third party DNS.
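To make the mechanics concrete, here's a minimal sketch of one non-recursive query using the dnspython library (198.41.0.4 is a.root-servers.net; a real iterative resolver would follow the referral chain from the root to .com to the domain's own servers):

    import dns.flags
    import dns.message
    import dns.query

    # Build a query with Recursion Desired cleared, so the server
    # answers only from its own authoritative data (or refers us on).
    q = dns.message.make_query('www.example.com.', 'A')
    q.flags &= ~dns.flags.RD
    resp = dns.query.udp(q, '198.41.0.4', timeout=3)
    for rrset in resp.answer or resp.authority:
        print(rrset)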
Once you have achieved this sort of localized setup, and you no longer rely on third parties for DNS (such as OpenDNS, GoogleDNS, DNS.Watch, etc.), then you may come to the conclusion that the more important DNS security issues are 1. how to verify that the authoritative nameservers you query are the right ones (not imposters), and then 2. how to secure the packets you exchange with the authoritative nameservers.
There is a finished solution available for 2, but not for 1.
Unless you think delegating 1 to a third party is a "solution" (e.g., PKI, CA's managed by third parties like Verisign, etc.).
Lastly, even if you do not care much or at all about DNS security, there are gains in reliability and sometimes performance to be had by using a local cache (and ideally as much locally stored DNS information as possible) versus using a third party cache on the open internet. You get the additional security of not using shared caches (i.e. minimized risk of cache poisoning) "for free."
Recursive DNS queries have the crucial property of speed. Unless I'm running a server (mostly) dedicated to DNS, I'm not going to be able to cache any meaningful number of records.
Performing a recursive query to a server that has the well traveled internet in its cache, and likely located in a data centre close to other servers, is going to be faster than waiting (at minimum) 2 round trips to resolve it myself.
Fast page loads are something that some people go to great lengths for.
"that has the well-traveled internet in its cache"
I have the well-traveled internet in my HOSTS file.
Zero DNS lookups is always faster than one or more, recursive or not.
When this sort of discussion comes up, I always ponder the number of IP addresses that constitutes the "traveled internet" for any given user. I believe it is a relatively small number. The truth is, unless you are running internet-wide scans or something, you will only visit a small fraction of the total IP address space in your lifetime. And in my case I believe I can store all those IP addresses, in addition to a large number of addresses I might visit, on my own personal storage. I'd imagine I visit more addresses than the average user, so the number becomes even smaller in that case.
Another thing I'll mention is an article on the Cisco website from a number of years back where someone claimed that the optimum number of users for a DNS cache is only, if I recall correctly, about 10.
I'd say it's healthy to check your assumptions. You might find that large shared DNS caches, like the ones offered by ISPs or "open resolvers", are not always faster than the alternatives for all queries. Believe me, I do a lot of experimenting with caches in addition to using authoritative queries, local authoritative servers and /etc/hosts. Despite what theory and logic suggest, caches are not always faster across all lookups.
If there were hard and fast rules about how to configure DNS to minimize the number of lookups and make DNS as efficient as possible, and everyone followed them, then maybe things would be different. As a simple example, try counting the number of queries required to resolve a host when Akamai is used. I don't care what the large, shared cache of the "well-traveled internet" has, because I already have all the fully resolved Akamai IP numbers where the content resides in my local storage. And if one location is slow I can manually choose another.
DNSSEC is designed to prevent DNS spoofing attacks, or DNS cache poisoning, not to prevent malicious actors from sniffing your DNS queries.
I'm curious as to what you think the downside of deploying DNSSEC is. I use DNSSEC Validator (https://www.dnssec-validator.cz/) for Firefox and I like seeing when a webhost's DNS entry validates. Certain sites I visit regularly do validate, while others don't. If it ever happens that I'm on a public wifi network and I visit a site which I know regularly validates, but fails this time, then DNSSEC will have done its job.
As more operators deploy DNSSEC more sites will validate, thereby allowing me to trust more DNS responses when I'm on a public network.
* It's complicated to deploy and misconfigurations cause outages, and those outages get more severe the more people deploy DNSSEC.
* It sucks up all the oxygen from the effort to actually mitigate flaws in the DNS. The most important DNS security flaw is the last-mile problem between browsers and nameservers, and DNSSEC has practically nothing to say about that. DNSCurve, as a counterexample, does solve this problem, and it solves it regardless of whether 1 person deploys it or 300 million do. But all the oxygen has been stolen by DNSSEC.
* It provides a setting for us to transition the CA system from untrustworthy companies directly to world governments, with the most commercially important domain names giving CA-like authority to (wait for it) the US government.
* Any way you try to project the math out, it will be ludicrously expensive to deploy (the deployment numbers we see today are, effectively, trial/pilot deployments, since virtually no end-user software cares about DNSSEC).
* Speaking of expensive: since virtually no modern networking software is built on the assumption that there can be (a) transient (b) security failures for DNS lookups, actually deploying DNSSEC is going to require forklift upgrades to huge amounts of already deployed code. Just to make that clear: imagine you're still using gethostbyname() to look up names, like lots of code does. How does your lookup code change to accommodate the fact that a query can, under DNSSEC, (a) fail (b) despite the fact that there was a response to the query with a usable record? TLS solves this with a pop-up dialog. Where does the dialog go? (A sketch of this problem follows below.)
* The most common mode of deployment for DNSSEC leaks hostnames; it essentially re-enables public zone transfers. To avoid this problem, you can theoretically deploy minimally covering NSEC records ("white lies"), but despite the fact that this is the only "safe" way to deploy DNSSEC, it's not the default. Why? Because white lies require online signing keys, and the original premise of DNSSEC was to keep keys offline. Net result: many many deployers of DNSSEC --- should we be so unfortunate as to have many deployers of DNSSEC --- will accidentally leak the contents of their zones to the Internet.
This is a subset of the reasons I don't like DNSSEC (a more significant one to me is that I believe it's cryptographically obsolete); it's just the subset that, off the top of my head, I think demonstrates the harm DNSSEC would do beyond simply not solving the problem it ostensibly solves.
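To make the gethostbyname() point concrete, here's the shape of the problem in a few lines of Python (the hostname is a placeholder; the stdlib call has the same semantics as the C function):

    import socket

    try:
        addr = socket.gethostbyname('signed-but-misconfigured.example')
    except socket.gaierror:
        # Was this a DNSSEC validation failure upstream, a typo'd name,
        # or a flaky network? The API can't say: a validating resolver
        # returns SERVFAIL and the lookup just fails. There's no hook for
        # "a usable record came back, but its signature didn't verify,"
        # and no place to hang a click-through policy.
        addr = None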
> It's complicated to deploy and misconfigurations cause outages, and those outages get more severe the more people deploy DNSSEC.
It needs to be maintained of course, just like any deployment, but is it worse in this regard than other protocols? I would say no. If you can get a TLS certificate for your web server, you can get your DNS records signed.
The whole infrastructure is in place. Your operations team should know this already. There are books, courses, certifications. The whole shebang.
> DNSCurve, as a counterexample, does solve this problem, and it solves it regardless of whether 1 person deploys it or 300 million do. But all the oxygen has been stolen by DNSSEC.
DNSCurve solves none of the problems that DNSSEC solves.
That's the problem with DNSCurve adoption. Not some conspiracy by the DNSSEC cabal.
> It provides a setting for us to transition the CA system from untrustworthy companies directly to world governments, with the most commercially important domain names giving CA-like authority to (wait for it) the US government.
This is just false. DANE shifts the authority from some 300 individual organisations, private and otherwise, into your parent zone, which you must trust anyway under every sane DNS model.
The authority for the system as a whole is delegated to a transparent organization, currently indirectly appointed by the US government. If the word "government" here makes you see red, you're missing out on the bigger picture.
> it essentially re-enables public zone transfers
This is a true problem with DNSSEC. But it has been discussed over and over again for 15 years and no one really thinks it is a showstopper. It's well covered in the literature.
> a more significant one to me is that I believe it's cryptographically obsolete
Well, show that then. There is a whole working group who'd love to hear something more concrete. That's the way standards should be set, not by personal beliefs.
"Operations teams should know by now how to handle DNSSEC" isn't a rebuttal to "DNSSEC is complex".
DNSCurve converges to the same protection DNSSEC provides; the difference is that during the decade or two in which DNS security isn't fully deployed, DNSCurve actually does something useful, and DNSSEC doesn't. Don't get hung up on the tactical value of DNSCurve just because DNSSEC has no such value. It's a long-term win too.
"The CA system is worse" isn't a rebuttal to "DNSSEC/DANE gives the USG direct control of certificates".
> "The CA system is worse" isn't a rebuttal to "DNSSEC/DANE gives the USG direct control of certificates".
Again, please stop the overly broad statements and explain precisely HOW "DNSSEC/DANE gives the USG direct control of certificates". I've not yet been able to have someone explain to me how this is true. I put my cert (or a fingerprint) in a TLSA record signed by my DNSSEC key in my DNS zone. Where does the USG get involved?
Maybe it's just that I'm deliriously tired, but I'm still missing how "the DNS root" can alter the data in MY zone sitting on my little authoritative server somewhere. Can you please walk me through the actual attack?
As someone else pointed out, all the root can do is potentially redirect a TLD to a controlled TLD registry, which could then conceivably serve out NS records for my domain pointing to an owned authoritative DNS server... which could then serve out bogus data. BUT... I have to think someone would notice the redirected TLD!
> "Operations teams should know by now how to handle DNSSEC" isn't a rebuttal to "DNSSEC is complex".
It's not meant to be. "Is it worse than comparable protocols? I would say no." is the argument. This is all a matter of personal opinions of course, but it's not more complicated than the TLS/CA system.
The operating procedures are modelled on regular DNS, on purpose, and should fit well into your existing workflow. But this is all beside the point, since the standard works and is in place all around the world.
> DNSCurve converges to the same protection DNSSEC provides
Please don't. This discussion has been had a million times on IETF mailing lists and I don't know why it keeps popping up. Perhaps djb has some sort of fan base out there who wants him to "win" some imaginary discussion.
No, DNSCurve is designed to secure your DNS questions and answers from prying eyes. DNSSEC is designed to authenticate the existing DNS system and protect it from tampering, by both resolving and authoritative servers.
One of the early design goals of DNSSEC was to be backwards compatible with the existing DNS system, protocols and implementations.
Any divergence from these principles will be very difficult to get implemented. The reason why DNSCurve, or any other standard in the DNS space, lacks implementations should be sought here, not in a conspiracy.
> DNSCurve actually does something useful, and DNSSEC doesn't.
That's simply not true. DNSSEC is implemented around the world and solves a problem. If it is a useful problem or not depends on your point of view.
But there is no reason to come up with all these straw man arguments against a technical standard. Please join the relevant mailing lists and speak your mind in more technical terms instead. There is no cabal in your way, but you must be prepared to argue design goals, operating procedures, implementation, and Internet governance.
> "The CA system is worse" isn't a rebuttal to "DNSSEC/DANE gives the USG direct control of certificates"
That's not what I said.
No, DANE does not in any way give the US Government direct control over certificates. It does, however, make them dependent on the DNS root, which is administered by an organization indirectly appointed by the US Government.
However, that is not a problem given that the US Government has not much actual say over daily operations -- and even if they did, that would not be a practical attack vector due to the reasons laid out above.
"The CA system is worse" is a rebuttal to not doing anything. If you want the CA model gone, you need to get your TLSA records out there and you need to get them signed, now.
The reason for that is that there is no alternative. DNSCurve does not solve this problem. TACK does not solve this problem. No other system has support in any popular DNS software, and designing any contender takes ten years to get implementation, support and operations right.
Please keep attacking this problem, and please keep testing new ways to solve this. Not by rehashing old arguments against DNSSEC that has already been rebutted, but by embracing it and see what can be done better.
That's because you haven't actually responded to anything in this thread. You just repeat your standpoint that you'd personally prefer DNSCurve and DNSSEC adoption holds it back.
You write that as if there weren't thousands of words of my comments, none of which involve a mere personal preference for DNSCurve, that you literally need to wade through to get to this comment. Which says more about the strength of your argument than mine.
> How does your lookup code change to accommodate the fact that a query can, under DNSSEC, (a) fail (b) despite the fact that there was a response to the query with a usable record?
Hmm... I'm not clear how a failed query could have a "usable" record. If DNSSEC validation fails then the response to the query can't be trusted. So that would not be "usable" to me. Or am I missing something here?
What percentage of failed TLS certificate validations are the result of attacks, versus benign operational failures? My guess is that benign failures hold the majority, to the tune of 99.9999%.
That doesn't answer the question I asked. If you issue a DNS query to your resolver and DNSSEC validation fails, then the resolver can't trust the result it received as that info could be bogus. I can't see how there is a "usable result". Am I missing something?
If TLS had used the same model, it would never have succeeded. You are missing something: 99.9999% of the time, DNSSEC resolver failures will be benign, and the data returned in the query not only useful but required for connectivity.
When TLS was implemented, it had the benefit of being entirely new; every piece of software that TLS secured had to be modified to accommodate it, and so almost everything that implements TLS has some kind of policy switch for how to handle verification failures.
But part of the pipe dream of DNSSEC is that it's a switch server operators can flip on behalf of all their millions of users. And of course that's not going to work, because different users and different sites are going to have radically different policies for when to "click through" a resolver failure. But because the code for all this stuff was written back in the 1980s, none of it supports any kind of policy lever for this problem.
When a TLS certificate fails to validate, you know something went wrong with TLS. When a DNSSEC resolution fails, gethostbyname() just returns NULL, and the host falls off the Internet. You're a programmer, right? What are the implications of this problem for your users?
This, incidentally, is something DNSCurve got entirely right. Unlike DNSSEC, where virtually every failure is going to be benign (because DNSSEC is even harder to administrate than TLS, where 99.9999% of failures are also benign), DNSCurve fails when attackers fuck with connections and in basically no other situations.
Ah, so you don't like that DNS resolvers just return a regular SERVFAIL when DNSSEC validation fails, without some kind of indication as to why the failure occurred.
I agree. I wish a failure of DNSSEC validation returned a different code or set a bit somehow so that resolvers could then know that the failure was because of DNSSEC validation. The resolver could then take action based on that knowledge.
A couple of us (who were not involved in the early discussions of DNSSEC) were discussing re-introducing this idea to see what kind of traction it might get now that we've seen a good bit of DNSSEC deployment and have more operational experience.
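In the meantime, one crude client-side workaround is to retry a SERVFAIL'd query with the CD (Checking Disabled) bit set: if the CD retry succeeds where the plain query failed, a validation failure upstream is the likely culprit. A rough dnspython sketch (the resolver IP is a placeholder):

    import dns.flags
    import dns.message
    import dns.query
    import dns.rcode

    def probably_dnssec_failure(name, resolver_ip='192.0.2.53'):
        plain = dns.message.make_query(name, 'A')
        if dns.query.udp(plain, resolver_ip, timeout=3).rcode() != dns.rcode.SERVFAIL:
            return False
        # Retry with Checking Disabled: a validating resolver will hand
        # back the (unvalidated) data if it has any.
        cd = dns.message.make_query(name, 'A')
        cd.flags |= dns.flags.CD
        return dns.query.udp(cd, resolver_ip, timeout=3).rcode() == dns.rcode.NOERROR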
> * It's complicated to deploy and misconfigurations cause outages, and those outages get more severe the more people deploy DNSSEC.
I would dispute that. Pretty much all of the authoritative name servers (BIND, NSD, Windows, Knot) have made the signing service a few lines in a configuration file. YES, there are operational steps you need to put in place, primarily related to KSK rollovers, but the actual deployment is pretty simple these days.
Similarly, on the validation side, deploying DNSSEC validation is basically one line in a config file for BIND and Unbound and a little bit more in Windows Server 2012. It's simple to deploy.
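For the curious, these are the lines in question (paths vary by distribution):

    # BIND 9, in named.conf inside the options {} block:
    dnssec-validation auto;

    # Unbound, in unbound.conf under server::
    auto-trust-anchor-file: "/var/lib/unbound/root.key"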
> * It sucks up all the oxygen from the effort to actually mitigate flaws in the DNS. The most important DNS security flaw is the last-mile problem between browsers and nameservers
That may be the most important issue to YOU, but to others ensuring the integrity of the DNS info is more important.
> * It provides a setting for us to transition the CA system from untrustworthy companies directly to world governments
Huh? I keep hearing DNSSEC critics bring this up and I've yet to have anyone truly explain how this can happen.
> * Any way you try to project the math out, it will be ludicrously expensive to deploy (the deployment numbers we see today are, effectively, trial/pilot deployments, since virtually no end-user software cares about DNSSEC).
There are 18 million customers of Comcast in North America who are receiving the integrity protection of DNSSEC. Every Comcast user is having every DNS query validated by DNSSEC. Please take a moment to look at the stats off of this page: http://gronggrong.rand.apnic.net/cgi-bin/ccpage?c=XA&x=1&g=0... (which I know you've seen because we've discussed this on Twitter). There are very real deployments of DNSSEC validation happening around the globe.
I've run out of time right now to answer... and the reality is that you and I could probably just go back and forth on this for quite some time. You don't like DNSSEC. I like DNSSEC for solving the problems it does (and also for providing additional capabilities such as DANE).
Trust in DNSSEC rolls up the DNS hierarchy, the top of which is overwhelmingly controlled by governments. I'm at a loss to see why this isn't an obvious problem to you.
If you don't want to trust DNS then don't trust DNS. All DNSSEC does is verify that the response you get from a DNS server is the response the domain owner wanted you to receive.
It's also a bit of a stretch to say that DNS is controlled by governments. I don't even know how to unpack that statement as it seems overly broad and unnuanced. When the USG wants to seize a domain they can seize a domain. DNSSEC has nothing to do with that. The USG seizes domains now, and they'll most likely seize domains if DNSSEC reaches full deployment. Whether or not DNSSEC is deployed is entirely irrelevant to any seizure of domain names by any government.
I'm all for working on non-hierarchical naming systems for the Internet, but DNS is already rooted in hierarchy. We might as well have a hierarchy we can trust, so why not DNSSEC?
DNSSEC is a hierarchical PKI, like the CA system. Just off the root of the DNSSEC hierarchical PKI are a series of branches that are controlled entirely by world governments. I don't know how to say this any more clearly without sounding patronizing. In a DNSSEC/DANE world, the Libyan government can successfully publish a fake certificate for BIT.LY.
I absolutely do not accept the premise of the question in your last sentence. No, let's not bake a trusted hierarchy into the core of the Internet, please.
> DNSSEC is a hierarchical PKI, like the CA system. Just off the root of the DNSSEC hierarchical PKI are a series of branches that are controlled entirely by world governments.
I am assuming you are talking about the country-code TLDs (ccTLDs) here, correct?
I agree that certainly many of those ccTLDs are directly controlled by governments while many others are operated on behalf of governments.
The generic TLDs (gTLDs) are different and are mostly operated by private companies operating registries under contract with ICANN. With the "new gTLD" program there are now MORE gTLDs than there are ccTLDs.
> I don't know how to say this any more clearly without sounding patronizing. In a DNSSEC/DANE world, the Libyan government can successfully publish a fake certificate for BIT.LY.
Simple answer - if you don't trust the Libyan gov't, don't use a .LY domain! I would argue that in DNS in general you do need to trust your parent zone. If you don't trust them, don't use them. Period.
If you are a service provider, then yes, you can avoid using any TLD. If you're a user, you can't use bit.ly services with DNSSEC without trusting the .ly TLD, right?
> If you're a user, you can't use bit.ly services with DNSSEC without trusting the .ly TLD, right?
True, although you can remove the "with DNSSEC" part. You can't use bit.ly (or any other .ly) without trusting the .ly TLD.
Interesting, I wasn't thinking about it from the user point of view - I was thinking about it from the domain registrant who is publishing a DNS zone under .LY. But you're right that it equally applies to the end user from the client perspective.
DNSSEC doesn't create or diminish any trust in ccTLDs that wasn't already there. The Libyan government has all the authority to mess with .ly, it's their ccTLD. Without DNSSEC the Libyan government can mess with .ly, and with DNSSEC the Libyan government can mess with .ly. The main difference is that with DNSSEC, if someone does mess with .ly you know it was the Libyan government. Without DNSSEC attribution of the messing-with becomes much more difficult.
I think we both agree that there are innate problems with hierarchies of trust. Unfortunately, for better or for worse, we're stuck with hierarchies until something better comes along. Let's also not make perfection the enemy of the good: Namecoin or other massively distributed naming systems might eventually develop into really interesting technologies. However, for the immediate future, we're stuck with DNS and we should make the most of it.
No. The Libyan government does not have the authority to surreptitiously control BIT.LY. That's not how the Internet trust model works. Even in the badly broken implementation we have today, there are things BIT.LY can do to override Libya; for instance, they can have their certificate pinned to a specific trusted CA.
The general belief that TLD managers "already" control sites is probably behind a lot of otherwise-inexplicable DNSSEC boosterism. Because if you really believe that, then sure, giving still more control to the operators of those TLDs is just a cosmetic change.
But, thankfully: no. No, no, no. The operators of .COM don't get to monitor Google mail. The government of the British Indian Ocean Territories doesn't get to patch Redis.
The schemes we have now to fence Internet trust off from TLDs are imperfect. The right response to that is to make them better. DNSSEC advocates act like this is a pipe dream, but there are surprisingly simple things we can do right now to massively improve the situation, like adopting TACK or HPKP to turn everyone's (or at least everyone running Chrome and Firefox's) browsers into a surveillance system for attempts to compromise Internet trust.
"But we can do this in a DNSSEC world, too", the DNSSEC advocates say. That's not quite right; they mean to say, "but we still have to do this in a DNSSEC world". Two problems. First, if we're going to rely on HPKP/TACK and CT as a bridge to a decentralized reasonable trust system, why waste the time and effort on DNSSEC in the first place? Answer: there is no compelling reason to do that. Secondly: DNSSEC actually makes it harder to do those things; among the reasons, when TLD operators misbehave, there is no recourse at all for rectifying the situation. How long do you think it takes the Chrome team to flip the switch to remove a rogue CA? Now, how long will it take them to remove .COM?
>In a DNSSEC/DANE world, the Libyan government can successfully publish a fake certificate for BIT.LY.
They can also do this in a non DNSSEC/DANE world, at the crudest lowest level by hijacking the domain, and then buying a new cert and clicking the confirmation email.
DNSSEC adoption is growing each year. Constantly dismissing it does not make that fact nor DNSSEC itself magically disappear. You mention that the queries themselves are not encrypted, but you know (I hope, as a security guy) that that is not the goal of DNSSEC, so why bring this up?
In a perfect world, perhaps every query will be done over TLS. But until every piece of the chain guarantees this, DNSSEC is necessary to ensure the answer wasn't spoofed. You can think of a million better alternatives, but it's extremely hard to get the whole world to change...
Actually, on a percentage of zones signed basis, or queries issued, DNSSEC penetration shrinks each day.
He mentions encryption since the vast majority of people believe DNSSEC provides some level of privacy, encryption, or security. Most of us in the industry who aren't financially aligned to support DNSSEC recognize it does little to nothing in this area, while simultaneously adding complexity and brittleness.
The only benefit DNSSEC offers (imho) right now is DANE-like services. And guess what? There are much easier ways to accomplish that.
Who in the industry, whether they are "financially aligned" or not, claims that DNSSEC provides you with privacy or encryption? The DNS data is sent in the clear.
No-one who uses DNSSEC is arguing it keeps your DNS records private. It is a straw man argument to suggest otherwise.
DNSSEC is designed to provide tamper-evidence, not privacy.
It's not a straw man to point out that the protocol that ostensibly "secures the DNS" does not in fact encrypt DNS queries or responses. I wasn't merely saying that DNSSEC records are plaintext (though they are). I was pointing out that the last mile from server to resolver has no cryptography whatsoever. Resolvers don't speak DNSSEC to servers --- it's a server-to-server protocol.
This is a common tactic among DNSSEC advocates. Observing the inexplicable lack of last-mile security --- the one place where the Internet could actually benefit from DNS security --- is dismissed as a "straw man". Yes, it's very inconvenient to arguments for DNSSEC adoption. No, that doesn't make it out-of-bounds.
Hmm... as a "DNSSEC advocate", I can say that I've never dismissed the lack of last-mile security as a "straw man". It's a real issue - but NOT one that DNSSEC seeks to solve. Because of this issue, many of us who advocate DNSSEC also advocate that the DNSSEC validation happens as close as possible to the end user, including even within the applications used by the user. If not in the apps then in the operating system. And if not there then on the local network... but then you start expanding the zone of risk. For the DNSSEC integrity validation to be useful it needs to happen as close as possible to the user - OR have a secured connection between the user and their DNS resolver.
From a DNSSEC advocacy point of view, I'm always glad when a provider of DNS resolvers turns on DNSSEC validation... but public DNS servers are farther away from the end user than I would personally like to see. It's a good first step... but we really need the integrity validation happening close to the user.
A protocol already exists that does a significantly better job both of providing query integrity and confidentiality on the "last mile" for DNS: it's DNSCurve. DNSSEC on the other hand is almost perfectly unsuited for last mile security, requiring as it does every enabled resolver to act as its own fully recursive cache. DNSSEC almost petulantly fails to provide confidentiality, too.
2014 advocacy for DNSSEC seems to me like distilled sunk-cost fallacy. It sucks that people spent 2 entire decades working on this protocol (most of the design is still traceable to TIS!), but they did, and it didn't work out, and now they should move on.
The manifold weaknesses of DNSSEC are completely unnecessary. The protocol has no meaningful deployment. Inflicting it on the internet in 2014 would be a grave and unforced error. Do better.
Thomas, DNSSEC has nothing to do with last-mile security, nor does it have anything to do with confidentiality. You know that. Neither of those is the problem DNSSEC solves.
> 2014 advocacy for DNSSEC seems to me like distilled sunk-cost fallacy.
2014 advocacy for DNSSEC is merely a continuation of advocacy that has been going on for years now. The root zone of DNS was only signed 4 years ago. It's taken those 4 years to get almost all the gTLDs signed and 2/3rds of the ccTLDs signed. It's taken those years to get the DNSSEC-related tools to the point where the configuration and deployment is as simple as it is now. It's taken those 4 years to get the pieces in place where it can work well... and there's still more work to do.
> The manifold weaknesses of DNSSEC are completely unnecessary. The protocol has no meaningful deployment.
I've quoted many deployment statistics in other parts of this thread. You dismiss them. That is your right. But to say it has "no meaningful deployment" is to dismiss the great amount of work that has happened all over the world by developers and network operators who DO see DNSSEC as something useful. To go back to one I've quoted here - there are 18 million Comcast customers in the USA that have all of their DNS queries validated by DNSSEC. To me that is a meaningful deployment.
> Inflicting it on the internet in 2014 would be a grave and unforced error. Do better.
We are doing better. In the view of myself and many others, DNSSEC makes the Internet more secure. We're implementing it. We're deploying it.
If you've got a better idea bring it to the IETF and let's have the debate in the standards mailing lists.
I agree that DNSSEC has nothing to do with last-mile DNS security! Where we differ is that you don't seem to care that it doesn't, whereas I am completely bewildered by the idea that anybody could advocate investing tens of millions of dollars into a forklift upgrade of DNS infrastructure that would solidify government control of Internet privacy without doing anything to solve the most urgent problem with the DNS --- if there is an urgent problem with the DNS at all, which is debatable.
The root zone of DNS was signed 4 years ago, but the DNSSEC protocol has been under development since the early nineties, back when the forum for discussing it was dns-security@tis.com. That's approximately how long I've been following DNS security. The record names changed during the typecode roll in 2003, but if you blur your eyes, DNSKEY, RRSIG, and NSEC are just KEY, SIG, and NXT, and the core DNSSEC protocol --- authenticated denial, offline signing, no confidentiality --- is the same as the one TIS worked on under US federal government contract back in 1994.
I'm not simply "dismissing" the statistics you're providing. I'm challenging them. One US provider runs a DNSSEC-compatible resolver system. So what? Google DNS enabled DNSSEC years ago. You're talking about decisions that perhaps tens of people at those companies made and executed. So a cabal of hobbyists enabled DNSSEC at Comcast. What percentage of DNS queries at Comcast return DNSSEC-authenticated names? I don't know, but I'll put money on my guess: an infinitesimal fraction, because DNSSEC isn't deployed across the Internet.
DNSSEC makes the Internet less secure. It's a government contract that morphed into a standards committee boondoggle --- don't take my word for that, just read any of the mid-2000s Vixie mails to namedroppers begging the group to just please standardize something --- that has been beset for two decades by grievous technical flaws, ranging from the delegation signer debacle through the Unix-password-file-cracking NSEC3 scheme. It hands control over a critical piece of security infrastructure to world governments. It does all this without solving the most important DNS security flaws. Meanwhile, those of us who work in security can't help but notice that despite the lack of any DNS security over and above randomized source ports and query IDs, the Internet is not falling apart... because it's been designed and built over the last 15 years not to rely on the DNS for security.
It's taken two decades to get to this point because signing individual DNS records and treating zones as the fount of all truth on the Internet is the wrong design. There is too much data in the DNS to boil the ocean and sign every record individually, and even if you could, if the last 10 years have taught us anything at all, it's that Internet trust doesn't work like a tree with a single root. Global hierarchical PKI is dead. Let it rest in peace.
Thanks for writing that response. I certainly agree DNSSEC could be improved (which is a large part of why I'm at IETF90 in Toronto this week).
On whether DNSSEC makes the Internet more or less secure, I think you and I will just have to continue to disagree.
Thanks, though, for the discussion. I enjoy having my views challenged and this has been very helpful. I have to sign off now for the night, though, but thanks.
Percentage based statistics are just as misleading. To me this data only says that personal/adspam/squatter zones are growing at a faster rate than those with integrity considerations. In raw numbers, adoption is growing.
The consensus is that DANE requires a 2048-bit ZSK, which none of the big players are using to my knowledge. In that regard, I think DANE is easier to replace at this point.
The number of DNS resolutions that occur in the US and Europe that use DNSSEC is, as a fraction of all DNS resolutions, below the noise floor. That's the correct metric, by the way: DNSSEC in use.
DNSSEC realists are curmudgeonly about this stuff in part because DNSSEC advocates have been saying things like "DNSSEC is growing" since nineteen ninety-nine.
DANE is even worse than plain-old DNSSEC. For those of you who don't know what it is: DANE allows the DNS to vouch for your certificates. In other words: whoever controls the DNS roots effectively controls your certificates.
You can in some sense sum up the cryptographic competence of DNSSEC by observing, as you have, that many of its largest deployers still use RSA-1024. If you have any actual interest in cryptography, here's a fun little research project: go find out the padding format those deployments use. :)
> You can in some sense sum up the cryptographic competence of DNSSEC by observing, as you have, that many of its largest deployers still use RSA-1024. If you have any actual interest in cryptography
... you could also help improve the DANE protocol. The DANE Working Group within the IETF is open to all. The mailing list is at:
I was replying to your reply and quoted part of your message and just snipped it off without really indicating that I had just ended it. The way it came out on the HN page it is hard to see that it was your message versus mine. :-(
> DANE is even worse than plain-old DNSSEC. For those of you who don't know what it is: DANE allows the DNS to vouch for your certificates. In other words: whoever controls the DNS roots effectively controls your certificates.
Please explain to me HOW the person who controls the DNS roots controls my certificate? I take my TLS certificate (either a CA-signed cert or a self-signed cert) and I load either a fingerprint of the cert or the complete cert into a TLSA record in my DNS zone. I then sign that with my DNSSEC key (create an RRSIG record) and publish it in the zone. It's all under my control.
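Concretely, the record looks something like this (the hash shown is a placeholder):

    ; usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256)
    _443._tcp.www.example.com. IN TLSA 3 1 1 0123456789abcdef...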
The DNS root can alter your records to be whatever they want.
Sure, they can already change your MX now and get a CA to sign a new cert by intercepting the email. But a replacement protocol should have much better design goals.
Say my domain is "example.com" and I serve the zone out of my own authoritative DNS server running on my own host. I create a "www" record that I serve out pointing to my site's IP address. Leaving DNSSEC out for a moment, how can the DNS root alter my records?
All the DNS root does is provide pointers to the .com authoritative name servers. The .com name servers provide pointers to my example.com name server. My example.com name server serves out the zone data.
I think you're playing a semantic game with the word "alter". The roots can't "alter" your records. They can prevent other people from seeing those records, though, and replace them with an entirely different set of records.
I was not playing a game. I was honestly trying to understand because it didn't make any sense to me.
I don't see the actual root of DNS being able to do much at all. If the root tried to give altered NS records for one of the TLDs I believe that would be noticed by many out there.
The TLD registries could give altered NS records pointing to a controlled authoritative name server for a domain, and then THAT controlled name server could provide an entirely different set of records. (And they could all be DNSSEC-signed.)
But to me this is kind of DNS security 101 - you have to trust your parent zones and you have to trust your registrar. If you think that a TLD registry or registrar could do this kind of change... don't use them.
Ah, gotcha. Sure, the .COM registry could return NS records for example.com pointing to a controlled name server that could then return an A record pointing to a controlled web server that wasn't mine.
And those false NS records could be modified by the registry (unlikely in the case of .COM but could conceivably be for a ccTLD) or more likely the registrar (after a compromise there).
> whoever controls the DNS roots effectively controls your certificates
This is misleading at best.
He who controls the DNS root servers controls absolutely nothing.
He who controls the DNS root zone could, in theory, re-delegate a whole top domain to a malicious third party and thereby control your domain via this new top domain, thereby get control of your DANE certs.
Don't you think the Internet at large would notice, and immediately route around the problem?
When was the last time you saw an illegitimate top domain transfer? Never, that's when.
Even if it was true that targeted changes were impossible this would still be a bogus argument. But it isn't true; attacks can be carried out surgically against small targets.
Please elaborate further. You've been alluding to that in a series of comments here. If it's too much for a reply do it in a blog post somewhere. You will find that people take reasonable arguments a lot more seriously.
Extraordinary claims require extraordinary evidence, so I think perhaps you ought to start by naming some important service that does rely on DNSSEC. How about a bank, or a major email provider?
I'm not really backing the DNSSEC horse but I figured I'd point out that they exist and are not super uncommon. Also as the person starting the argument, you should be the one providing evidence.
You cite Pagerduty, which managed to glean a significant outage from an attempt to deploy DNSSEC, a service virtually none of their customers can actually use.
Incidentally: if we want to use words like "bias", let's get mine on the table. DNSSEC is a terrible idea that will harm the Internet without solving any meaningful security problems. I am not open-minded about it.
That does, I suppose, make me "biased" against it. :)
I'm not speaking to the validity of whether or not DNSSEC is actually a good idea. Your contention is that it's not. But saying the equivalent of "no one uses it" is demonstrably false.
I'm sorry, but no, it's not the same thing. (I'm in Toronto, right now, so all messages should start off being polite!) Enterprises DO buy all sorts of crazy stuff but this is not one of them. I know it annoys the critics, but there ARE very real deployments of DNSSEC happening out there. As I cited in another comment, there are 18 million Comcast customers now getting DNSSEC validation. There are millions of users in many European countries that are getting DNSSEC validation.
(smiling) You already know the answer - "it depends". What websites were they visiting? If they were going to a lot in .SE, then they might have had a good bit of DNSSEC validation happening. If they were going to .COM websites they probably would have had very little.
While it's indeed slower than Google's resolvers, for example (roughly double the latency), I would still argue that it doesn't really change that much when you are loading a page.
For the most part, entries are cached locally, and in a single visit you are usually not resolving more than one domain.
I think we should welcome this line of thinking and acting; the internet is becoming a less free place every day. Not the same thing, but I was reading an article about a guy who is on Verizon and gets better Netflix performance when he uses his VPN.
And DNS.watch isn't solving anything. It only moves your dependency from one third-party (your ISP, Google, OpenDNS) to another one (DNS.watch).
If we really want to keep internet the way it was intended, we must run our own resolvers. That will most certainly defeat the cacheability of DNS but it is very much needed.
It doesn't defeat cacheability when you're doing it on routers or any other DHCP-shared LAN-only resolver; there's no reason you can't point all your household devices at a dedicated caching resolver.
OpenWRT/Tomato/etc provide "dnsmasq" for exactly this reason. :-)
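A minimal dnsmasq config for that setup might look like this (the addresses are examples; 84.200.69.80 is DNS.WATCH):

    # /etc/dnsmasq.conf -- one caching resolver for the whole LAN
    listen-address=192.168.1.1   # answer queries from LAN clients
    cache-size=10000             # keep plenty of records hot
    server=84.200.69.80          # upstream(s) to forward cache misses to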
Is there a good reason to not have a streaming update protocol for DNS? Surely it wouldn't be that difficult to prefetch the top N million DNS queries, then subscribe to a stream to keep them updated as TTLs expire? This would give you both privacy and speed (for the majority of queries)
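The prefetch half is easy enough to sketch; it's the "subscribe to a stream of updates" half that has no standard protocol today. Something like this, using dnspython (the domain list stands in for the top N million):

    import time
    import dns.resolver

    popular = ['example.com', 'example.net']  # imagine the top N million
    expiry = {}

    while True:
        for name in popular:
            if expiry.get(name, 0) <= time.time():
                ans = dns.resolver.resolve(name, 'A')   # dnspython >= 2.0
                expiry[name] = time.time() + ans.rrset.ttl
                # ...push ans.rrset into the local cache here...
        time.sleep(1)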
I wonder if it is anycasted. It appears to be on the Accelerated IT Services ASN - if they do not explicitly own the IP addresses and they lose control of them, that would be a disaster.
It doesn't seem to use anycast. I've done a traceroute from different worldwide locations using http://lg.he.net/ and they all get routed to Frankfurt, Germany.
Here in Germany I get a very good latency to this service (22 ms, almost as good as my ISP's resolver's 18 ms), but if you don't live in or close to Germany this service probably isn't for you, at least as long as they haven't implemented anycast with multiple world wide locations.
I have no idea about the people complaining about high latency, but for me it has around the same ping as Google DNS (from Portugal): Google 40ms, this 43ms. Lately I've been using Google's because the previous resolver I was using developed higher latency.
I just want to note that the servers have a ping of ~150 here from India, which is not that fast, but OK. Hopefully some DNS servers will be placed in Asia.
I ain't changing my dns server to a server in Germany
6 50.242.148.34 (50.242.148.34) 11.825 ms 23.165 ms 11.936 ms
7 vlan60.csw1.sanjose1.level3.net (4.69.152.62) 167.163 ms 173.912 ms
vlan80.csw3.sanjose1.level3.net (4.69.152.190) 179.634 ms
8 ae-61-61.ebr1.sanjose1.level3.net (4.69.153.1) 166.071 ms 166.300 ms
ae-81-81.ebr1.sanjose1.level3.net (4.69.153.9) 166.493 ms
9 ae-2-2.ebr2.newyork1.level3.net (4.69.135.186) 166.700 ms 167.425 ms 168.243 ms
10 ae-62-62.csw1.newyork1.level3.net (4.69.148.34) 168.031 ms 167.538 ms
ae-82-82.csw3.newyork1.level3.net (4.69.148.42) 168.819 ms
11 ae-71-71.ebr1.newyork1.level3.net (4.69.134.69) 168.592 ms
ae-81-81.ebr1.newyork1.level3.net (4.69.134.73) 166.472 ms
ae-61-61.ebr1.newyork1.level3.net (4.69.134.65) 167.677 ms
12 ae-41-41.ebr2.london1.level3.net (4.69.137.65) 167.766 ms
ae-43-43.ebr2.london1.level3.net (4.69.137.73) 167.754 ms
ae-42-42.ebr2.london1.level3.net (4.69.137.69) 167.882 ms
13 vlan103.ebr1.london1.level3.net (4.69.143.93) 167.297 ms
vlan101.ebr1.london1.level3.net (4.69.143.85) 167.796 ms
vlan103.ebr1.london1.level3.net (4.69.143.93) 166.458 ms
14 ae-23-23.ebr2.frankfurt1.level3.net (4.69.148.194) 165.179 ms
ae-24-24.ebr2.frankfurt1.level3.net (4.69.148.198) 168.929 ms
ae-23-23.ebr2.frankfurt1.level3.net (4.69.148.194) 165.134 ms
15 ae-72-72.csw2.frankfurt1.level3.net (4.69.140.22) 166.229 ms 171.583 ms
ae-62-62.csw1.frankfurt1.level3.net (4.69.140.18) 166.542 ms
16 ae-3-80.edge4.frankfurt1.level3.net (4.69.154.136) 168.808 ms 167.824 ms
ae-2-70.edge4.frankfurt1.level3.net (4.69.154.72) 177.082 ms
17 accelerated.edge4.frankfurt1.level3.net (212.162.25.6) 167.854 ms 182.773
Because of the latency? Doesn't that traceroute indicate that most of the delay was just getting to Level 3 in San Jose, through your ISP? (Isn't that 50.242.148.34 node right between level3 and your ISP? Sadly it lacks a name, so hard to say who owns it.)
"Prove" is a pretty strong word. I looked for and didn't find a lawyeresque privacy agreement, but even if I did that's not proof either since they could lie. They could invite you to their data center, show you their code, etc, but that is still not proof since they could be hiding things from you, or simply wait until you leave to flip the "log all data" switch.
It always eventually comes down to trusting a company, or trusting strangers. You're already doing this, because you are viewing this on some form of a computer that you didn't completely hand-build, and are running software that you didn't build from scratch using your own, self-built compilers.
I say a potentially better way of looking at this problem is: I know Comcast logs my DNS queries and who knows what else. I however don't know that this site does. From a pure privacy standpoint, I have nothing to lose by switching over and everything to gain.
I expected the 'why?' page to explain how (or by whom) they are funded and how they make money. I've never heard of these guys before, and until they explain that, I have no reason to trust them more than any other non-ISP DNS provider...
Nowadays I often find ordns.he.net to be quite useful. HE.net is getting to be a pretty big carrier, and they generally have better latency and locality than other resolvers. Will never use 8.8.8.8 ever again.
The browser doesn't need to do anything. The resolver will simply not pass an unsigned or incorrectly signed record from a signed zone. You can test yours here[1]. Google DNS[2] supports DNSSEC.
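You can also check from the command line: query a validating resolver with the DNSSEC OK bit set and look for the "ad" (authenticated data) flag in the reply header (ietf.org is a long-signed zone; output abridged):

    $ dig +dnssec @8.8.8.8 ietf.org A
    ...
    ;; flags: qr rd ra ad; ...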
ISP resolvers are significantly faster for me due to being close by. Yet, indeed, some ISPs may do evil things.
I wonder whether there's a piece of software that'd retroactively verify every query done by my resolver against a set of supposedly-uncensored public third-party resolvers (like this DNS.watch) and raise an alarm in case of inconsistencies. (Although I have no idea how to deal with the multitude of false alarms that would arise due to DNS load balancing.)
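A crude version of that cross-check is only a few lines with dnspython (the resolver list is arbitrary; 8.8.8.8 is Google, 84.200.69.80 is DNS.WATCH):

    import dns.resolver

    RESOLVERS = ['8.8.8.8', '84.200.69.80']

    def cross_check(name):
        answers = {}
        for ip in RESOLVERS:
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [ip]
            answers[ip] = tuple(sorted(rr.address for rr in r.resolve(name, 'A')))
        if len(set(answers.values())) > 1:
            # Often a false alarm: CDNs and geo load balancing hand
            # different (all-legitimate) answers to different resolvers.
            print('inconsistent answers for', name, answers)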
Yes, but which one do you think is censored/manipulated first?
There are times, when it's better to use something not that popular ;)
Couple of other arguments: the internet is built around decentralization... and now everyone uses the same resolvers. Great. It's good to have proper alternatives.