I've implemented DNSSEC twice now. I've also implemented IPv6 several times - at various levels of the stack (including directly at layer 3) - and have had a hand in two different SSL/TLS implementations. Having implemented each, I don't think that DNSSEC is particularly difficult or error-prone compared to IPv6 or SSL - it's probably far less complicated than both. The trickiest part of DNSSEC to get right is probably NSEC3 (on both the authoritative side and the validating side), and even that isn't so bad if you brush up on trees and hashing.
But DNSSEC can still be very difficult to implement, not so much because of its inherent complexity, but because of a mismatch between the assumptions the designers made and what real-world implementors require. Here's a simple example: many DNS implementations for large services support "m of n" answers. E.g. you might have 100 IP addresses for a service, and the DNS implementation would choose, say, 8 IP addresses to hand out. The dogma of DNSSEC is that answers should be signed offline, which is great for security, but it means that in cases like this we'd have to deal with a combinatorial explosion and sign all 186 billion potential answers. We could also sign online, but doing expensive crypto in response to a DNS request isn't very smart if you'd like to survive DDoS attacks. So implementers have to make trade-offs and add complexity to make things work.
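To spell out the arithmetic behind that number (just the binomial coefficient implied by the example above): choosing 8 addresses out of 100, ignoring order, gives C(100, 8) = 100! / (8! * 92!) = 186,087,894,300 distinct answer sets - roughly 186 billion RRsets to pre-sign.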
DNS is also commonly used as a routing mechanism - but DNSSEC does nothing to validate that this routing is being handled correctly. Signed answers are replayable - so the answer that says "You should go to the Dublin node" can be fed to resolvers/clients that should really be routed to a different node. This can aid attacks higher up the protocol stack that depend on routing traffic along a path where it can be observed. Similarly, because DNSSEC queries happen in the clear, intermediaries can easily filter and drop particular queries they don't like.
Perhaps most bizarrely, DNSSEC provides no end-to-end security. Your browser or OS still communicates with your resolver using regular queries, relying on a single unauthenticated bit to "request" validation. It's as if LAN/wifi-level abuse weren't a concern at all, or as if public resolvers (like OpenDNS and Google Public DNS) didn't exist.
End-to-end security of the channel isn't necessary. DNS is public. In those situations there are clues that DNS is broken which, as with SSL, your browser would ultimately have to present to you. Local validation is all you can count on. E.g. http://www.bortzmeyer.org/dns-swisscom.html
Personally, I'm now twice as interested in relying on DNSSEC and running my own DNS resolution rather than relying on third parties. Not least because Google's DNS doesn't always give me the closest servers for CDNs that serve location-dependent answers. Frankly, the biggest problem DNSSEC solves is securing that first HSTS response, but every bit counts. That DNS services on open WiFi stop lying... that might never happen. But perhaps future async DNS resolution baked into browsers will warn about DNS as much as about SSL, for those who care.
End-to-end security isn't the same thing as encryption or privacy. The goal of DNSSEC seems to be to authenticate that DNS answers are correct. In an end-to-end security model, then, as you suggest, your browser should be involved - and I'd say your browser should be performing the validation, which means it would need to have a recursive nameserver built in.
But that's not the DNSSEC model. The DNSSEC model is to implicitly trust your resolver - which is usually hosted remotely and accessed via an unauthenticated channel - and to ask it to perform the validation for you. If the validation fails, all your browser gets is "SERVFAIL".
So it doesn't have the capacity to report a meaningful error.
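For illustration, this is roughly what a validating public resolver hands back for a domain with deliberately broken signatures (output abridged; the id and counts here are illustrative, and dnssec-failed.org is Comcast's test domain):

dig dnssec-failed.org @8.8.8.8
...
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 12345
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

There's no RCODE for "validation failed" and no explanation - just the same error you'd get from a broken nameserver.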
Solving the wifi-lying problem is tricky too. A wifi network could always falsely claim that the root zone is not signed, or fake the keys, and take things from there. Unless you keep all resolution on your laptop/tablet/phone and synchronize the root public keys periodically (a problem equivalent to getting up-to-date root CA certs), there's nothing to build trust on. Even more simply, the wifi network can just block outside DNS and disable DNSSEC on its own resolver.
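For what it's worth, the "keep all resolution on your laptop" option is less exotic than it sounds - a local validating resolver such as unbound can do it. A minimal sketch (paths are placeholders and distro-dependent):

# fetch/refresh the root trust anchor (RFC 5011-style updates)
unbound-anchor -a /var/lib/unbound/root.key

# in unbound.conf
server:
    auto-trust-anchor-file: "/var/lib/unbound/root.key"

The bootstrapping problem described above doesn't go away, though: the very first copy of the anchor still has to arrive over some channel you trust.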
> it would need to have a recursive nameserver built in. But that's not the DNSSEC model. The DNSSEC model is to implicitly trust your resolver, which is usually hosted remotely, accessed via an unauthenticated channel, and to ask it to perform the validation for you.
If you do not wish your upstream resolver to do validation, then set the CD flag - you will get an RRset that you can validate locally.
dig +cd +qr +dnssec dnssec-failed.org @8.8.8.8
...
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59002
;; flags: qr rd ra cd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 1
;; AUTHORITY SECTION:
dnssec-failed.org. 1799 IN SOA dns101.comcast.org. dnsadmin.comcast.net. 2010101624 900 180 604800 7200
dnssec-failed.org. 21599 IN RRSIG SOA 5 2 86400 20140324165107 20140317134607 28833 dnssec-failed.org. R/stn+84i0qDGa7mMcJn00+/L1z/aj4kfCg1DiUPxokd8HK/FTwIfQcy 8oh+wsSFSYAvem3H3zZ8iVlwIHqmESEPwnkoGolI5BtnEPs7cT3kO1/i CA9DT18r4fdbJrXavWz5Z991gUOhfkpIPi1TmRC4/iZcNFwgBVhZsDEO uAc=
dnssec-failed.org. 7199 IN NSEC www.dnssec-failed.org. NS SOA RRSIG NSEC DNSKEY
dnssec-failed.org. 7199 IN RRSIG NSEC 5 2 7200 20140324165107 20140317134607 28833 dnssec-failed.org. P0s0825v9FxTYoLYqYrJMLmqfuiDvBOGhYbT2ZmypZN1GKwWfEX7TaoJ TE5RB70HNUWFE4Moi+hfRP9wye61tupT75p7Szqn53pBQ58kO73YiYiz MBWB1RreRABRbSwInvWNR9DNsVwBr/6z6/h3fDpGz5O+m8+E64xWv2T8 OgE=
The proposition that it has failed due to the passage of 17 years since the first RFC neglects the fact that certain key events, without which adoption was effectively blocked, didn't happen until recently:
1. The DNS Root Zone was not signed until mid-2010. As the anchor of trust for the DNS, deployment prior to this date was basically experimental. This was the commencement of production use.
2. Registries and registrars, as a whole, did not support it until recently. However, ICANN has required them to do so in the last few years so this roadblock is being removed.
The assertion that it is a failure may ultimately prove to be true, but pointing at its experimental phase and declaring that proof positive is not convincing.
Four years after the signing of the root zone, we still don't have decent documentation or easy tools for average engineers. That is a requirement for making it a success.
I did put things more bluntly to get attention, and that has succeeded. I hope to have better news later next week.
> Another security mechanism in widespread use is HTTPS. This has been available since around 1994, and has gotten a refresh with TLS in 2000. Like SSH many web masters don’t really care how this work, they just know to get certificates from Certificate Authorities, put some lines into their configuration files and it works.
No, it doesn't work[1]. There's only the illusion of CA certification security and nothing more. So if the argument here is that DNSSEC failed because it's not like the OpenSSL CA model, I'm not really buying it. That said, I reckon it's better for software to be easier to configure, but as a man with white hair and a weird look in his eyes said, things should be as simple as possible, but no simpler.
While we're cataloging security technologies that have failed, we should add S/MIME-based email encryption to the heap. (Well, except for those organizations that have centrally-managed keys and infrastructure for internal email encryption.)
I know that some savvy users encrypt email, but in the age of the NSA, payload-encrypted email should be the default case.
We should thank MUA and browser vendors for that - they did everything they could to keep the UIs as scary and unusable as possible.
It could have been different if MUAs had allowed (or even suggested) users to generate keypairs and send CSRs in a simple, streamlined way. Maybe even cooperate with StartSSL and the like (the way Thunderbird cooperates with file hosting services to send large attachments, huh) to automate the request sending and validation.
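The underlying steps really are small. A sketch of roughly what the MUA would be hiding from the user (file names and the subject are placeholders, and a real S/MIME CA would still want the address verified):

# generate a keypair and a certificate signing request for an email identity
openssl req -new -newkey rsa:2048 -nodes \
    -keyout alice.key \
    -out alice.csr \
    -subj "/CN=Alice Example/emailAddress=alice@example.com"

The CSR goes off to the CA, the signed certificate comes back, and the MUA imports it next to the private key. None of that needs to be visible in the UI.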
As another recently posted article points out, even missing out on $180 billion per year in revenue isn't motivation enough to get US tech companies to make verifiable clients, easy S/MIME, key exchange, and web-of-trust features of their systems. How big a clue-stick do they need?
I suspect it will only catch on once the big email providers find a way to make it transparent and cost-effective, and we start seeing verification icons in email clients above our messages. That way humans will know it matters ;-)
The thing is, you can do that today with Apple's Mail.app, and have been able to do so for years. I sign my email using S/MIME, and while that wound up causing problems with a few outdated mail clients 5 or 10 years ago (sometimes, the fact that it was signed would make certain clients with poor MIME support show the body of the email as an attachment, which confused people), it doesn't cause much problem these days.
Cool. Still not as convenient as it could be, though. "If the intended recipient is outside the sender's Exchange environment or if the sender is not using an Exchange account, the recipient's certificate must be installed on the device."
What that means is that, by default, any email you send would never appear "trusted", so... it's not a great marketing device. A green address bar sometimes does more to market SSL than its own advantages do. I'd argue that certificate trust -- even just to say that the email address belongs to gmail.com, for instance -- would do wonders to promote the technology.
Sadly, no support on Android - since apparently on Gmail everyone only emails within Google services and never for business? ;-) Microsoft should promote S/MIME in its online Exchange offerings more, to compete with Gmail.
For what it's worth, I had already been planning to write this for a few weeks but did not get around to it. It struck more of a nerve than I thought it would. I know the draft was earlier, but implementation really should have started after the draft was published.
Anyway, I now have a lunch appointment with Olaf Kolkman et al on Thursday to hopefully improve matters.
Some registrars, particularly Network Solutions, offer poor support for DNSSEC records, or anything beyond A and CNAME records really. For a long time it was officially unsupported; lately there is a footnote at the bottom of the advanced DNS options page to call tech support if you want to discuss using DNSSEC---although they'll happily sell me an extended validation certificate....
I run my own DNS server. I contacted my registrar to enable DNSSEC for my domain at the registry. All they could offer was a package where they would take over my domain hosting; that was the only way to get DNSSEC for my domain. I'm switching registrars next year when my domain expires... can anyone recommend one that actually supports DNSSEC for .info domains while letting me manage my own private keys?
For my private domains I use zonesigner from dnssec-tools. No database, no additional daemons, and it doesn't need to be run on the same machine as the DNS server. It takes the input zone file and keys and spits out a signed zone file. Couldn't be any simpler.
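A minimal invocation looks roughly like this (flags from memory, so treat them as an assumption and check the zonesigner man page; example.com and its zone file are placeholders):

# generate KSK/ZSK on the first run and sign the zone
zonesigner -genkeys -zone example.com example.com.zone

# subsequent re-signs reuse the existing keys
zonesigner -zone example.com example.com.zone

The signed output (example.com.zone.signed) is what you then point the nameserver at.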
DNSSEC will, IMHO, be like IPv6: it will see some adoption, but only after a long march.
For me, one of the most important insights is the reason for the (claimed) failure: it is too difficult to use.
That is a trouble I see so often in the computer business. Some invention is made with bright ideas and good intentions, but it is doomed to fail, or at least to produce unnecessary additional costs, for one reason: it is unnecessarily complicated.
Some bright guy (I don't remember who) said: make it as simple as possible, but not simpler. That is the art - to find the right measure of complexity and make it as simple as possible. Somebody else said that something is only finished when there is nothing left to take away from it.
The author compares DNSSEC with other protocols like SSH and HTTPS.
DNSSEC suffers from some of the same problems as SSL PKI, namely
1. it requires placing authority in people who you may not trust and
2. it's too easy to make configuration mistakes.
But I suggest those alone are not the reasons DNSSEC has "failed".
The reason it was a failure from the beginning (cf. the other protocols the author mentions) is, in my opinion, that it's aimed at authentication, and _only_ at authentication. And of course, like PKI, it tries to do this by delegating authority to some mysterious group of people.
If it were aimed at _both_ encryption and authentication like SSH or SSL, then it could "fail" at authentication but still appear useful for encryption.
In an ideal world, you could _authenticate_ with 100% certainty that you are connecting to a given remote computer and you could eliminate the MITM risk.
We know that this is pure fantasy in today's world, though we try our best.
Moreover, with respect to DNS, ICANN's and other third-party DNS is usually anycast, so what exactly are we authenticating with DNSSEC?
Certainly not "the" remote computer with the zone file. Who is in charge of assigning a "root server" IP address to some interface on some computer in some datacenter sitting between you and the rest of the internet? Do you even know?
Perhaps we are authenticating that the domain name is "legitimate"?
According to who? Some third party? ICANN? It sounds a lot like PKI and SSL certificates.
Personally, I would not rely on DNSSEC to confirm the existence or nonexistence of any domain name.
If I can get a publicly available zone file from a trusted source, then I can serve that "root" and "TLD" information locally on the device. No need for the network.
What I'm more interested in is confirming that the answer I receive from the authoritative nameserver (not the root or the TLD) contains the correct information, e.g., the correct IP address.
Why shouldn't DNS traffic be encrypted, the way, e.g., HTTP traffic can be?
Going forward, I find dnscurve much more appealing than DNSSEC.
It might not be a foolproof solution to authentication, but at least it offers encryption.
The same can be said for SSH and SSL.
To the author: Maybe that's why they have not been complete failures like DNSSEC.
Please provide evidence that it - the deployment - failed, not that it is hard to understand (69 pages of training books and multi-day training courses are understanding issues, not deployment issues).
I mean, it only became possible to fully secure things end-to-end in 2010[1], so about 10 years from now, by your length-of-time theory, we should know how it worked out.
I guess you are missing a source for "[1]". Why should it only have been possible since 2010 to "secure things end-to-end"?
Off-the-record messaging (pidgin-otr) has been available since 2005, PGP since 1991, and the theory behind them (RSA (1977), the Diffie-Hellman key exchange (1976), etc.) is much older.
Adoption can be modeled as depending on ease of use, driven by apparent need and influenced by a user's risk appetite. But popularity also behaves like a differential equation, where the rate of change is proportional to the current number of users. So if the apparent benefit isn't there, most people, being reactive, probably won't use something better, especially if it's as hard to get working as old Postgres was... They use MySQL until Postgres demonstrates its advantages.
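A minimal way to write that down is the standard logistic model (nothing DNSSEC-specific, just the shape of the argument): if A(t) is the number of adopters and K the potential user base, then dA/dt = r * A * (1 - A/K). Growth is painfully slow while A is small, which is exactly the cold-start problem described above - and ease of use mostly acts on r.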
If you have domain registration and DNS hosting from the same company, then it should ideally be possible to log in, check a box, and have all the keys and records computed and updated automatically.
Is there a single registrar in the world that supports this?
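For context, the manual version of that checkbox mostly boils down to getting a DS record for your KSK into the parent zone via the registrar. With BIND's tools that is roughly (the key file name here is a placeholder):

dnssec-dsfromkey Kexample.com.+008+12345.key

which prints the DS record(s) you then paste into the registrar's web form - the step the checkbox would automate, along with key rollovers.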