Note: this can bring down a DNSSEC-validating DNS resolver, not any authoritative DNS server.
Most people already use a centralized resolver like Google or Cloudflare anyway, and you can be sure those will be patched. People who use their ISP’s resolver might be disappointed if their ISP is slow to patch things, but what else is new?
> People who use their ISP’s resolver might be disappointed if their ISP is slow to patch things, but what else is new?
Bear in mind that if you're using a regular port 53 resolver like Google's, you might be using your ISP's resolver anyway -- a lot of ISPs hijack port 53 and redirect it to their own server(s) to reduce load. That's why DNS over TLS / HTTPS is typically recommended over unencrypted DNS.
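If you want to check whether your port-53 traffic is being intercepted, one rough test (a sketch assuming dnspython, and assuming the target resolver answers the CHAOS-class "id.server" query, which Cloudflare's 1.1.1.1 does; most hijacking middleboxes won't):

    # Query 1.1.1.1 for its self-reported identity. An empty or unexpected
    # answer suggests something between you and 1.1.1.1 is answering instead.
    import dns.message
    import dns.query
    import dns.rdataclass

    q = dns.message.make_query("id.server", "TXT", rdclass=dns.rdataclass.CH)
    r = dns.query.udp(q, "1.1.1.1", timeout=3)
    print(r.answer)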
In the US, Comcast does this on all business copper connections. They call it "Secure Edge". It frequently breaks DNS, VPNs, some VoIP, torrents (or any P2P connections), and probably other stuff. It's enabled by default on all new accounts and will randomly re-enable itself at the account level.
Isn't the ads-instead-of-NXDOMAIN attack the one thing DNSSEC is actually good at protecting against? So I would expect the guilty ISPs to avoid providing any form of DNSSEC support whatsoever, and thereby avoid consequences of any of DNSSEC's downsides.
No, DNSSEC does nothing about ads-instead-of-NXDOMAIN. DNSSEC is server-to-server. If you rely on your ISP's resolver server, they can ignore DNSSEC and feed you whatever they want. If you run your own validating recursing resolver, you don't have the ads-instead-of-NXDOMAIN problem in the first place.
Who relies directly on their ISP's DNS servers? Isn't the ordinary use (for residential networks, who are the targets of the attacks in question) that your router is running dnsmasq and handing out DHCP leases that instruct hosts to query that dnsmasq instance?
Right, but it has a DNSSEC option that when switched on can protect against this attack (provided the ISP isn't stripping all DNSSEC info from their replies), at the cost of suffering the other consequences of attempting DNSSEC validation. Not a great solution overall, but it does exist.
Look, you're either recursing or you're not. If you're not recursing, you're not really validating DNSSEC records. If you are, you're not vulnerable to NXDOMAIN ads. The NXDOMAIN thing is in fact not a good use case for DNSSEC. That's all I'm saying here, I'm not making any broader claim than that. Of course you can just install a recursive resolver on your router. But then you don't need DNSSEC!
> If you're not recursing, you're not really validating DNSSEC records.
Nope, validating in stubs is a thing. systemd's resolved does it. Apple's high-level network frameworks do it if you ask as of a couple of years ago (they've been back and forth on DNSSEC in their lower level API for longer than that). I'm not sure how well they work but they're there.
A validating stub resolver is effectively a recursive resolver proxying through another recursor. At the point where you're going to do that, you might as well just run a recursive server. Either way: you don't have the NXDOMAIN problem. I really don't think there's a way to get around this. It's not dispositive of DNSSEC (other things are!), it's just not a real use case?
> A validating stub resolver is effectively a recursive resolver proxying through another recursor. At the point where you're going to do that, you might as well just run a recursive server.
Every iPhone on the planet might as well be a recursive resolver? Yeah, nah.
> Stub resolver: A resolver that cannot perform all resolution itself. Stub resolvers generally depend on a recursive resolver to undertake the actual resolution function.
They literally have an animation of the client resolver making repeated requests up the tree. While saying they're making multiple requests. And calling it recursive.
Never heard of a consumer router where you can't switch off the DHCP server for the local network. Also never seen one where you can't configure the IP of the nameserver being handed out for clients to use. Which ISPs actually do this?
That said, friends don't let friends do anything other than install Pi-hole as the DHCP server, using Unbound for DNS. It's never too late, but do it today. I'm 100% sure this is possible in Australia.
Telstra, the only company to service my address for over 2 years post fraudband rollout, does! They sent me a stupid box I couldn't even change any of the passwords or SSIDs on as well. I was never going to use it, but my in-laws wanted to for some reason. I threw it out. The only people I know who don't use their ISP provided hardware are all techies in some way. Most of the people I know are techies, though haha
I love my pi hole and have been encouraging friends lol
Not a consumer router, but the cable modem / gateway provided to me by Shaw Cable (now Rogers) in Canada does not allow me to shut off DHCP or customize the DNS servers it provides DHCP clients.
I can and do put it in bridge mode and use my own router, but I was quite surprised and miffed to see this. First time I've ever come across that.
The router is a Comcast / Xfinity XB6 and I've been told that this is a "feature" of all of Comcast's routers / gateways.
So you connect $something to the cable modem, make a DHCP request, and get back your public IP (and some DNS servers). But it isn't doing NAT or handing out local addresses to your local network. Right?
_Or_
it's a combined modem-and-router-in-one which manages your local subnet for you and you can't configure how it does this at all beyond switching the router part off and using a separate router. That sounds nuts?
99% of everyone who's just a consumer, I'd guess. Probably the only time people change over is when there's a DNS block on something from their ISP (work and school commonly do that).
> Most people already use a centralized resolver like Google or Cloudflare anyway, ...
This is HN. I run Unbound on a Pi (speaking of the Pi, I don't know whether those running Pi-hole are on a vulnerable resolver or not). My Pi shall need patching when I get back from vacation!
Same. I've been running my own DNS servers (caching and authoritative) since the mid-'90s, back when my home network was on an ISDN line. I still use BIND.
At this point, someone should just file a high severity / high impact CVE against DNSSEC in its entirety, and ensure the box-tickers all across the world mandate its eradication.
While I'm sure (even stronger: I know) that at some point DNSSEC was a good idea, it's just a liability-slash-infrastructure-tax these days (a footgun at best, an additional DoS vector in most cases), and it's time for it to go.
99% of DNSSEC deployments come down to "I trust the entity I have outsourced my DNS management to with my private key management as well", which makes no sense whatsoever, and the reasoning behind the remaining 1% doesn't inspire much confidence either.
DoH satisfies the DNS privacy/security requirements of most Internet users just fine, and with or without DNSSEC, the management of the underlying infrastructure remains the same as it was before, i.e. "trust us"...
DNSSEC doesn't have encryption, and does nothing for end-user privacy, so that's not even an alternative to DoH or DNSCurve.
The only thing it deems a potential privacy issue is the possibility of enumeration of subdomains, which is security by obscurity for the domain owner, from a time before certificate transparency logs ruined that anyway.
> from a time before certificate transparency logs ruined that anyway.
Isn't it possible under WebPKI rules to get an intermediate cert for all of your subdomains so that only the intermediate cert needs to get logged in certificate transparency? Or at the very least, you could use a wildcard underneath the domain you own...
I don’t think the CA/B Forum rules would prevent that kind of intermediate, but I don’t know of any CA that issues them. The domain set would need to be enforced with name constraints, which are notoriously buggy in a lot of validators.
But yes, a wildcard accomplishes the same thing, and I think that’s the route (almost?) everyone goes if a service’s subdomains really need to be kept out of a transparency log.
I was under the impression you could flag your domain to be excluded from certificate transparency logs. Security through obscurity is generally considered a bad idea (though I think exceptions and nuance exist), but DNS names are likely to be burnt via other mechanisms anyway (ISPs and ‘security’ products logging DNS requests and selling them is a reasonable assumption).
It doesn't, at all. NSEC3 is crackable like a 1990s password file, and several tools exist to do it. The "standard" solution to this is "white lies" (RFC 4470), which requires your DNSSEC server to be an online signer so it can generate chaff records; the supposedly upcoming solution is NSEC5, which fixes the broken cryptography in NSEC3.
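To make the "1990s password file" point concrete, here's a minimal dictionary-attack sketch against an NSEC3 hash (pure Python following RFC 5155; the zone, salt, and iteration count are made-up example values):

    import hashlib

    def nsec3_hash(name: str, salt: bytes, iterations: int) -> bytes:
        # Owner name in canonical (lowercase) DNS wire format.
        wire = b"".join(
            bytes([len(label)]) + label.encode("ascii")
            for label in name.lower().rstrip(".").split(".")
        ) + b"\x00"
        # RFC 5155: one SHA-1 over (name || salt), then `iterations` more rounds.
        digest = hashlib.sha1(wire + salt).digest()
        for _ in range(iterations):
            digest = hashlib.sha1(digest + salt).digest()
        return digest

    # Salt and iteration count are published in the zone's NSEC3PARAM record.
    salt, iterations = bytes.fromhex("abcd"), 10
    target = nsec3_hash("secret-host.example.com", salt, iterations)  # hash from an NSEC3 record

    for guess in ["www", "mail", "vpn", "secret-host"]:  # wordlist
        if nsec3_hash(guess + ".example.com", salt, iterations) == target:
            print("subdomain recovered:", guess)

The salt is public and iteration counts are low, so the economics are exactly those of cracking an old crypt(3) file.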
It's important to understand why one might care about trusting the zone's content.
From the perspective of something like a browser, the mapping from the domain name to the IP address (i.e., the A or AAAA records) need not be secured via DNS because the connection to the server is secured via TLS and the WebPKI. So DNSSEC isn't helping here, especially because the clients don't check DNSSEC, so it doesn't protect them from attack on the local network, which is one of the main loci. If the DNS server provides the wrong IP, this is mostly a DoS attack.
I am aware that the situation is somewhat different in email, but I'm not sure how different really. Note that it is the case that DNSSEC could help secure ACME validation queries, but as tptacek observes, CT has greatly reduced the risk of misissuance.
Of course there are other mappings in the DNS, but as a general matter those are being designed under the assumption that the DNS is untrustworthy, due to the low level of DNSSEC deployment. For instance, if you look at the HTTPS RR, it's fine to put a key for Encrypted Client Hello in it because the resolver already knows the desired domain name, so lying about the key doesn't really help. However, we can't safely publish the server's public key to be used for TLS 1.3 early data ("Zero RTT priming") because that would require trusting the DNS. So, any features which require the DNS to be secure have the usual chicken-and-egg deployment problems (this is also to a great extent what happened with DANE).
Taking the opportunity to reference myself, the following longer writeups might be useful on this topic:
> However, only DNSSEC prevents hijacks between the resolver and the authoritative servers
It does no such thing. An attacker on the resolver or between the resolver and authoritative is able to strip DNSSEC.
DNSCrypt was the solution to all the problems we had, but at the time DNS software vendors and operators also had a financial interest in passive monitoring so DNSSEC being clear on the wire won.
A recursive resolver will drop unsigned responses based on what the chain from the root servers says: if a zone should be signed, then all unsigned answers are bogus and dropped.
On unbound, for instance:
harden-dnssec-stripped: <yes or no>
    Require DNSSEC data for trust-anchored zones; if such data is
    absent, the zone becomes bogus. If turned off, and no DNSSEC data
    is received (or the DNSKEY data fails to validate), then the zone
    is made insecure; this behaves as if there is no trust anchor.
    You could turn this off if you are sometimes behind an intrusive
    firewall (of some sort) that removes DNSSEC data from packets, or
    a zone changes from signed to unsigned to badly signed often. If
    turned off you run the risk of a downgrade attack that disables
    security for a zone. Default is yes.
DNSSEC is also a huge foot-gun in that the end user just sees "the internet is broken." This leads to many recursive operators turning off DNSSEC validation when big zones break and eyeballs complain. Perfect homelab quality configurations are rarely deployed to the real world.
It really is hot garbage and should be taken out behind the woodshed.
No, I think that is very important which is why I fight against DNSSEC.
Have a look at DNSCurve, it solves many of the problems DNSSEC was attempting to address but with proper transport security. We implemented this at OpenDNS back in 2010 so it would opportunistically use it if available.
I don't think that's really true at all, re the incentive story. DNSSEC (or, at least, the classic IETF DNS stack) has a story for encrypted transport; it's just that nobody ever wanted to deploy it.
I don't want to dig up private discussions from a dozen years ago, but the DNS vendors that mattered at the time also had passive DNS/security products that paid the bills.
Not as it's deployed today. Since over 80% (conservative guesstimate) of the zones that most people care about are not signed, it's pretty much useless.
The 'evil ISP or IT MITMs my DNS traffic' scenario is much more effectively addressed with DoH (since it only requires clients and some resolvers to coordinate, not the entire Internet, and it looks like regular HTTPS, which implementers already understand how to deal with, as a bonus).
And the 'evil government redirects some or all zones' play is still very much possible even with DNSSEC, since, guess who ultimately controls the keys.
Even if you think that making the latter scenario more difficult to implement is worth it, getting more people to sign their zones is a losing battle, since they're very likely to self-DoS in the process, significantly reducing their enthusiasm...
Unlike IPv6, everybody who deploys DNSSEC gets the full benefit regardless of what others are doing: you just need a fully DNSSEC-supported chain from the roots to your zone.
Yes, I'm aware of the .nl situation in particular, and... it only underscores my point?
If those .nl domains dropped off the Internet tomorrow, that (while of course very inconvenient for lots of people) wouldn't cause more than a tiny dip in global traffic. The benchmark here is how much of, say .com and .net combined, is signed. And that's around 4% (4.5M signed zones out of a total 170M, give or take).
Because of a well-intended mandate, most .nl domains are signed by their registrar, which generates and holds the private keys. So: the very same entity responsible for enabling domain delegation also secures that delegation. Virtually nobody generates and supplies their own DNSSEC keys when registering their .nl domain. So, a government looking for someone to lean on to modify a delegation, doesn't need to do that much extra work for such 'secure' zones, do they now?
From a consumer perspective, visiting digid.nl (signed, probably with their own keys, kept in a nice HSM somewhere!) vs. ah.nl (not signed) offers no meaningful extra privacy or security: when not using DoH, their ISP and anyone else in the middle will still have a pretty good idea what they're up to, and can strip off the 'hey, tell me about your signature' bits in any requests, leaving the client unable to tell the difference in the first place in most situations.
In case of a DNS hijack, the consumer security implications are exactly the same for both login (digid.nl) and shopping (ah.nl): their browser or app will refuse to connect, because DNS is already a negligible factor there, and it's the TLS certificate that makes the difference. That, combined with the very real danger that turning on DNSSEC makes a zone unresolvable for hours or even days on end, makes most people a bit wary about doing so. And they're absolutely right.
DNSSEC doesn't provide privacy. It was never designed to. If you want privacy, use ODoH.
What it does offer, is tamper detection. I don't trust Cloudflare enough to always provide me with the right DNS data even if resolving the domain happens over TLS, and that's the part of the chain DNSSEC covers. DNS servers don't use DoH to resolve records, so the MitM risk remains.
In my experience, the practical risks of enabling DNSSEC are minimal. Some broken (often Big Tech) DNS providers have had issues in the past (Amazon, notably) but every major DNS provider I've used has never let me down, and that includes some of the cheapest domain servers on the market.
You may be as wary if you want to be, but .nl proves that the risks of DNSSEC are quite minimal in practice if the TLD registrar is competent.
It sounds good as long as we apply it to all outsourcing where security relies on a third party. End-to-end encryption was never designed with clouds and anycast providers in mind, let alone the whole idea of virtual servers. They all depend on "I trust the entity who owns this hardware", which can be any partner the cloud provider deems worthy.
> As of December 2023, approximately 31 percent of web clients worldwide used DNSSEC-validating DNS resolvers
That doesn't seem right. Ever since enabling DNSSEC validation on my system, YouTube and every other Google product except basic searching is broken for me. The percentage of internet users who enforce DNSSEC must be much much smaller.
I'm not sure I follow the logic: since you have problems, everyone else must have them too? Google DNS and Cloudflare DNS are both DNSSEC-validating resolvers, two of the most popular DNS services on the planet. Most US-based ISP resolvers are DNSSEC-validating (Xfinity's 75.75.75.75 is). Google Chrome ships with DoH going to Google DNS on by default now, and Firefox does DoH by default as well.
So I can absolutely see where this would easily be the case, if not more.
> Ever since enabling DNSSEC validation on my system, YouTube and every other Google product except basic searching is broken for me.
Enabling DNSSEC wouldn't change resolution of those properties unless you somehow set your system to treat insecure delegations as bogus. Is that what you've done?
DNSSEC evaluates answers as being one of secure, insecure, or bogus. Secure means there's proof the answer is correct going all the way back to the root. Insecure means at some point there was a delegation that either opts out, or requires use of an unknown signature algorithm or unsupported NSEC3 parameters (unknown/unsupported by the validating software). Bogus means there were no proofs, or the proofs didn't check out.
Of course there's not much point to that evaluation if you're only looking up IP addresses and then relying on WebPKI to see that the other end is what you expected it to be.
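For the curious, a quick way to observe those three outcomes from a validating resolver (a sketch assuming dnspython; 8.8.8.8 validates, and dnssec-failed.org is a deliberately broken test zone):

    import dns.flags
    import dns.message
    import dns.query
    import dns.rcode

    def dnssec_status(name: str, resolver_ip: str = "8.8.8.8") -> str:
        q = dns.message.make_query(name, "A", want_dnssec=True)
        q.flags |= dns.flags.AD  # signal that we want the authenticated-data bit
        r = dns.query.udp(q, resolver_ip, timeout=3)
        if r.rcode() == dns.rcode.SERVFAIL:
            return "bogus (validation failed)"
        return "secure" if (r.flags & dns.flags.AD) else "insecure"

    print(dnssec_status("ietf.org"))           # signed chain: secure
    print(dnssec_status("google.com"))         # insecure delegation: insecure
    print(dnssec_status("dnssec-failed.org"))  # bad signatures: bogus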
EDIT:
I'm not allowed to reply for some reason, so in answer to tptacek:
> Right, which leaves open the question of what the point is.
No, I don't think it does. I think my summary reasonably conveys the functionality DNSSEC offers and how it is practically useful. (This is not a flippant response, the spade is a spade.)
Right, which leaves open the question of what the point is.
A more pointed critique implied by the thread you're replying to is: if virtually nothing on the Internet is signed, what's the point?
The ATHENE team's Black Hat talk from last year surveyed the "Tranco Top 500k", whatever that is -- I'll just say 500k is a lot more hosts than the Moz 500 I use for the same stat -- and found that (wait for it) less than 5% of hosts in that dataset worldwide were signed, and a substantial number of those hosts are just signed by their registrars.
If you were going to make a case for an ordinary Internet user, like, the modal American user, to enable DNSSEC --- what would it be? What benefit would they get?
Not so long ago less than 5% of hosts were using HTTPS. But we're now in a world where being HTTP-only (or being HTTPS but not having a valid chain of trust signing your certificate) is unusual and suspicious, and presumably we believe that while that was something that had to be pushed out by browser makers etc., it ultimately benefits users. I would hope DNS will eventually go the same way for the same reasons.
The adoption curve, in addition to being a decade and a half back from DNSSEC at that point, was also the inverse of DNSSEC's: the most popular sites on the Internet generally used HTTPS, and the least popular sites on the Internet dominate DNSSEC, with only 4.5% of the Tranco Top 500k being signed. And DNSSEC is the older protocol! Respectfully, the comparison is risible.
I've had DNSSEC validation enabled for years and Google's services never broke for me. I don't think the problem lies with Google on this one, it's probably a bug in your resolver (or worse, your ISP may be intercepting DNS traffic to Google).
DNSViz is a bit unclear in this case. It doesn't warn that the initial delegation from .com to google.com is insecure, but it does warn that the delegation from google.com to l.google.com is.
Google doesn't use DNSSEC, unfortunately, so this shouldn't be a problem. If your resolver breaks on this, I think that may be the result of a bug or misconfiguration, because there's no DNSSEC to validate here.
I am "forced" to allow "our" domains to be DNSSEC because... an auditor suggested it as a possible improvement and some manager thought it a good idea to do whatever said auditor proposes.
The argument that absolutely nothing the world relies on (Google, Facebook, Reddit, Cisco, Microsoft, etc.) is signed holds no clout with the believers, unfortunately.
Consulting and working for MSPs over the last 10 years I've probably been exposed to a couple hundred environments and I've never once seen DNSSEC validation used.
As far as I can tell this is a BIND bug. There are typically a dozen or so of these each year. The BIND 9 codebase (which is where this bug arose) dates back to when Bill Clinton was President of the US.
Rather than "Ooh, DNSSEC is bad" the correct take is that we need to wean ourselves off ISC BIND. Maybe ISRG can pay somebody to make a less awful DNS server implementation in Rust.
It MIGHT have originated with ISC, but at the moment it's the behavior the standard specifies, and the fix all DNS software vendors have applied breaks the standard.
IMO every operation should have a maximum time to execute, and rejecting an update / response for failure to process fast enough SHOULD be valid, even for an otherwise well-formed packet. A protocol must be resilient to valid communications failing for arbitrary reasons, including operator error or cosmic bit-flips that corrupt the transmission.
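In the spirit of "bound every operation", a toy sketch of what the post-KeyTrap fixes amount to (the limits here are invented for illustration, not taken from any RFC or vendor patch):

    import time

    MAX_VALIDATION_SECONDS = 0.1   # per-response wall-clock budget
    MAX_SIG_CHECKS = 16            # per-response signature-check budget

    def validate_response(signature_checks) -> str:
        deadline = time.monotonic() + MAX_VALIDATION_SECONDS
        for i, check in enumerate(signature_checks):
            # Over budget? Treat the response as bogus rather than grind on.
            if i >= MAX_SIG_CHECKS or time.monotonic() > deadline:
                return "bogus"
            if check():
                return "secure"
        return "bogus"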
You might be interested in Hickory DNS (recently we rebranded it from Trust-DNS). The ISRG project is sponsoring work on this, announcement here: https://www.memorysafety.org/initiative/dns/
The researchers and other DNS server maintainers explicitly say this is a bug in the DNSSEC standard, and technically all patched servers are now standard-incompliant.
No, but admit it, you felt a certain glee in submitting it... (which is not to say you're wrong about that, and I write that as someone who once penned a public standard mandating DNSSEC -- which just goes to show we all make mistakes...)
I think the case against it, while extraordinarily strong, is actually pretty subtle, and I generally don't look at advocacy for it in the early days as a mistake at all. We learned a lot from the experience of trying to get it to work. I was a vocal advocate for key pinning, and I don't think I was mistaken to push for it, despite its ultimate non-viability and the success of the competing CT model.
Good point. I guess that any technology with a huge built-in "ah, yes, that one simple mistake you made, now means you'll have to completely rename your endpoint before anyone will be able to talk to it again" foot-gun requires a bit of a re-think prior to general availability.
The only way to get it right is to try. An issue I think people sleep on with DNSSEC is that the service model was designed in the mid-1990s, as a TIS Labs project for DoD, and was premised on the idea that cryptography would be far too expensive to do "live" on servers. That's why we have a DNSSEC architecture built around offline signers, which is one of the original sins of the protocol.
So I think a more precise meta-criticism of DNSSEC would be that it should have been obvious by the early-mid oughts, when the entire Internet was running off online TLS cryptography, and DNSSEC still wasn't even deployable (because roots weren't signed, and because we hadn't gotten to the typecode roll and DNSSECbis) let alone mired in the single digits where it is now, that it was time to scrap the original design and come up with something new.
> The academics who found this flaw – associated with the German National Research Center for Applied Cybersecurity (ATHENE) in Darmstadt – claimed DNS server software makers briefed about the vulnerability described it as "the worst attack on DNS ever discovered."
This seems a bit hyperbolic; the txid[0] randomization bug found by Kaminsky was objectively worse, as it allowed you to poison the cache, and in the days before encryption was considered standard.
This is really a design flaw in DNSSEC itself. It isn't so much a "vulnerability" in any particular software. BIND and Unbound, and subsequently everyone else that follows internet trends, intentionally added DNSSEC support even though DNSSEC was flawed from the outset.
Why was DNSSEC revived from the RFC graveyard^1 and implemented by authors of BIND and Unbound. Cache poisoning. Why does cache poisoning exist. Because people use third party shared caches. Why do people use third party shared caches. That one I will leave for the reader.
There are other, varied options. I use stored DNS data from a variety of sources that I can cross-check against each other. I use iterative resolution instead of recursive. I am happy to discover when websites occasionally change their IP addresses. I want to know. But most websites I visit keep the same addresses for years.
Almost all DNS lookups I do are _not_ on a "phone". The number of IP addresses I need for the "phone" is relatively small.
1. According to djb, DNSSEC was first conceived in 1993. And the charlatans behind it started taking money from the USG sometime in or after 1994.
That is a long time to be awaiting release in some version of BIND. Gee, I wonder why. With this "KeyTrap" demonstration, one might guess that DNSSEC sat around for 15 years because it was never ready for prime time. One might even conclude it still isn't.
DNSSEC hasn't been adopted at all, and was designed before the first cache poisoning attacks (both the bailiwick-style extra-records attack and the XID prediction attack, which were exploited separately from 1996 through 1997; Kaminsky's attack that combined the two wouldn't come out for another decade). It isn't motivated by any real threat model, but rather by a DoD effort to synergize all the Internet protocols with IPSEC.
I am not an expert in DNS, but isn't DNSSEC sort of like IPv6 when it comes to utility vs. adoption vs. overhead? In other words, it hasn't really lived up to expectations, or am I missing something?
It has massively underperformed expectations in a way that far outstrips any disappointment anyone could have in IPv6. IPv6 has adoption approaching 50%. If you have a modern ISP and a modern router, you might be using IPv6 right now without even noticing it. Meanwhile: less than 5% of the major US TLDs are signed, and an even smaller fraction of the "top domains" (given any list of top domains, like the Moz 500) are. The major prevailing use case for DNSSEC, which is DANE, has no browser support --- browsers actually tried it and removed that support.
My usual alternative viewpoint whenever you try to claim that DNSSEC is dead:
From where I sit, I work at a registry and DNS server host (among other things) where about 40% of all our domains have DNSSEC (and that number is constantly climbing). Every conference I go to, and in every webinar, people seemingly always talk about DNSSEC and how usage is increasing.
From my perspective, your continuous claim that “nobody uses” DNSSEC is simply false. DNSSEC works, usage of DNSSEC is steadily increasing, and new protocols (like DANE) are starting to make use of DNSSEC for its features. Conversely, I only relatively rarely hear anything about MTA-STS.
Start by convincing Geoff Huston, from whom I could shoplift all the rebuttals I'd need to this argument. TLD DNSSEC signing stats are easy for people to look up, and people can quickly see I'm not making any of this up.
fwiw, i live in a country where rolling out dnssec had some financial incentives for providers, which apparently helped adoption: https://stats.sidnlabs.nl/en/dnssec.html
adoption still growing, more than 60% of .nl domains currently dnssec-signed, and around 60% of queries are from validation resolvers.
i looked up global stats. world map + tables about current validating resolver queries: https://stats.labs.apnic.net/dnssec. and this is the graph over time of for the world: https://stats.labs.apnic.net/dnssec/XA (30% validating, don't know what the additional 10% mixed means).
i did a quick search, but didn't find global stats about domains being dnssec-signed.
dnssec is not easy/simple (especially when getting into the details), but i think the contemporary dns servers make it relatively easy to enable dnssec on a zone, managing the signing themselves.
i do like the idea of being able to set up secure connections without relying on CAs.
Couple things here. First: the European provider DNSSEC adoption stuff is all based on the providers managing and custodying keys for their customers, which is security theater. We'd have universal DNSSEC adoption if nobody ever managed their own DNS! Second: the prospects of DNSSEC replacing CAs are slim-to-nil, since browser experiments with actually using DANE failed. Third: even if a browser did enable DANE, so that some fraction of users could "rely" on it, they'd still be honoring the WebPKI, and you'd be a trivial downgrade attack away from being put back into the WebPKI CA infrastructure. Fourth, and finally, the roots of trust in DNSSEC are themselves commercial entities, and they're less trustworthy (if that's possible for you to imagine) than commercial CAs, because they're not required to participate in a public ledger like CT.
Really, though, LetsEncrypt was the death knell for the DANE use case. Certificates are free now, and always will be.
I was going to say that the DANE PGPKEY stuff was another interesting use case for DANE, but on doing a search for clients who supported it, it seems the answer there has just been to use a .well-known https: url instead.
If even the PGP enthusiasts can't be bothered to use the approach, it must be well and truly dead.
> providers managing and custodying keys for their customers, which is security theater
isn't that similar to providers managing/having access to the tls keys? well, dns is a bit more essential. but plain dns isn't more secure. ideally we can all only access our own keys, but that doesn't seem to be the standard.
i have been wondering how common it is for dns operators to just serve (the signed) zones that get sent in by domain owners. and if that has technical hurdles for the operators (i'm sure the reason for many domain owners is they want to outsource responsibility).
> DNSSEC replacing CAs are slim-to-nil, since browser experiments with actually using DANE failed
seems like it for browsers. it seems dane for mail has a future. the alternative to dane, pki-based mta-sts, has downsides like tofu for policies and needing long-lived cachable policies, especially not great given the decentralized nature. currently, many domains have no protection for deliveries (unverified mx records & tls certs).
found this just now, interesting: https://ripe86.ripe.net/presentations/51-2023-05-23-dnssec.p...
mentions tls with dane records inside. but not necessarily hopeful for dnssec, it is hard to change infrastructure. i always wonder to what extent new protocols are lobbied and their rollout preplanned vs hoping for the best.
> you'd be a trivial downgrade attack away from being put back into the WebPKI CA infrastructure.
i often wonder why not more "signals" about how to connect are in dns. perhaps lack of dnssec. with dnssec that becomes an option, preventing a downgrade. probably bad middleboxes that don't let certain dns requests through (a reason to encrypt). the trend seems to be https on a subdomain that serves a policy.
> Fourth, and finally, the roots of trust in DNSSEC are themselves commercial entities, and they're less trustworthy (if that's possible for you to imagine) than commercial CAs, because they're not required to participate in a public ledger like CT.
i should update my knowledge exactly on the good/bad about the dnssec trust roots. if there are good docs out there, i'm interested.
> Really, though, LetsEncrypt was the death knell for the DANE use case. Certificates are free now, and always will be.
true, it doesn't seem like we're in a bad position with LE. and they keep getting better (shorter-lived certs). though perhaps we're getting a little bit reliant on this one CA.
> isn't that similar to providers managing/having access to the tls keys
Yes. Cloudflare-like solutions have access to all the TLS keys. In addition, any cloud solution with anycast will have partners that have access to all keys, and there is no validation that all partners will respond with the exact same information. The only non-security-theater is self-hosting everything, with hardware and software under the user's full control.
> i have been wondering how common it is for DNS operators to just serve (the signed) zones that get sent in by domain owners.
Very common. As a service it is usually called slave zone or hidden master. Practically all DNS providers have this as a product. The solution for public key roll over varies between providers, and some TLDs have started to use CDS/CSYNC which removes the registrar from the whole chain. CDS/CSYNC is however a bit more rare so the more common method is to either use long lived keys or a registrar API for uploading new keys.
Whatever else you think about registrars and DNS providers managing DNSSEC, it simply is the case that none of this counts as "adoption". It's not "adoption" if domain owners aren't adopting it, and it's especially not adoption if the largest and most important domains fastidiously avoid it. You could get anything adopted Internetwide instantly if you just had ISPs quietly turn it on for everybody.
Does HTTPS adoption somehow not count if your web site provider adopts it, but not you? That is what you are essentially arguing. And just as most people do not run their own web server, even fewer people run their own DNS servers.
HTTPS adoption is universal. I don't have to pick apart the different modes and qualities of adoption. Most major websites won't even let you not use it anymore. The comparison is risible.
> I don't have to pick apart the different modes and qualities of adoption.
Actually, you do have to make proper counter-arguments. That is how a debate works. Simply declaring your opponent's arguments as "risible" is not cool.
If you please, explain how DNSSEC adoption is different from HTTPS adoption. They seem to have quite close analogs: In the usual case, both are done by the server operators (authoritative DNS server and web server, respectively), not by the end customers themselves, and the server operators also handle and hold all related public and private keys.
You seem to be arguing upthread that DNSSEC adoption somehow does not count since the end customer does not hold the keys themselves. But the same is the case for typical web hosting. So how is this different?
HTTPS is universally adopted. Meanwhile: the DNSSEC root keys could land on Pastebin tonight and nobody would need to be paged. The distinction is so clear that trying to pick apart the description seems disingenuous.
I don't think you're experiencing me being evasive so much as that I simply don't accept your premises.
> the DNSSEC root keys could land on Pastebin tonight and nobody would need to be paged.
I disagree with this ludicrous assertion, but you are answering a different question, so I will not pursue this question at this time, in favor of the issue at hand:
Upthread, you wrote that with “DNS providers managing DNSSEC”, “none of this counts as ‘adoption’”. Why not? You also wrote that “providers managing and custodying keys for their customers […] is security theater”. How is this different from HTTPS keys? Why are HTTPS keys not “security theater”, but DNSSEC keys are?
I'm genuinely interested, because I've asked this question a bunch of times to a bunch of different audiences and never gotten an answer. So that thought experiment again: the DNSSEC root keys are fatally compromised. What are some specific entities that will require an immediate security response? Put differently: what specific entities depend today in any significant way on DNSSEC?
Remember when you're thinking about this that most (virtually all, really) of the largest and/or most important organizations on the Internet don't use DNSSEC, so it wouldn't make any difference at all to them. And, in case this needs saying, it doesn't really count (in the spirit of this thought experiment) if the entity you think of is, like, a DNSSEC provider; stipulate, DNSSEC providers themselves would freak out. But who else would?
I’ll note that you completely ignored the main topic and have jumped to another question. What guarantees do I have, if I engage with you on this new (and admittedly somewhat interesting) topic, that you won’t just again jump to something else mid-debate? You do not seem to be arguing in good faith.
I don't know that I've done that at all but do feel that I expressed this question directly, straightforwardly, and in easily falsifiable terms. It seems like it would be remarkable if you couldn't answer it, right?
Tell you what; I’ll answer your new question if you answer my original question which you evaded: Why are HTTPS keys not “security theater”, but DNSSEC keys are? (Details in my comment upthread.)
Because people generally do manage their own TLS keys. Everybody who has ever set up Certbot and LetsEncrypt has done so. You've misconstrued my argument about this, which says only that people who have domains autosigned by their registrars aren't a meaningful contributor to DNSSEC deployment, not that the huge share of DNSSEC deployment that those people represent is a sign that all DNSSEC key management is performative security theater. My argument is simpler and more limited than you've taken it for.
Now, to my question? Again: it seems like a very broad, very easily falsified argument. Who, other than DNSSEC providers themselves, would need to be paged if the DNSSEC root keys ended up on Pastebin? Be specific, if you can? Seems like this should be easy to answer!
What? No, most people do not run their own web server. Most people have their web site on a web host, and let the host manage it, including the TLS keys. Just like with DNS and the DNSSEC keys.
> Who, other than DNSSEC providers themselves, would need to be paged if the DNSSEC root keys ended up on Pastebin?
I freely admit that I don’t know. Besides ICANN, I’m guessing all the TLD operators, since their records can now be spoofed with impunity. But, I guess you could also ask: what would happen if, say, the keys for the X.509 certificates for google.com were leaked?
The fraction of validating queries is misleading because practically all validation happens in the recursive resolver, which doesn't provide security all the way to the client. At least as far as browser clients goes, we're as far away from setting up connections based on DNSSEC-validated keys as ever.
> The fraction of validating queries is misleading because practically all validation happens in the recursive resolver, which doesn't provide security all the way to the client. At least as far as browser clients goes, we're as far away from setting up connections based on DNSSEC-validated keys as ever.
fair point, perhaps partially compensated through tls-protection for recursive dns. don't know how common that is.
i wonder whether there is any move by e.g. linux distros towards including a dnssec-verifying resolver by default. whether they think it's not worth the trouble, or what the inflection point will be. i usually install unbound on new machines. i believe openbsd comes with unbound by default? assuming that's with dnssec-validation enabled.
i used to dislike the unhelpful generic error messages for dnssec-related problems (servfail). perhaps that was holding adoption back. but extended dns errors (ede) seem to solve that, though i don't know how commonly the detailed errors make it up the stack in old software.
> fair point, perhaps partially compensated through tls-protection for recursive dns. don't know how common that is.
Yes, but this isn't a replacement for the WebPKI because it would allow the recursive to impersonate any site, which is obviously unacceptable.
> i wonder whether there is any move by e.g. linux distro's towards including a dnssec-verifying resolver by default. whether they think it's not worth the trouble, or what the inflection point will be. i usually install unbound on new machines. i believe openbsd comes with unbound by default? assuming that's with dnssec-validation enabled.
A year or so ago we took some measurements of DNSSEC success rates with Firefox on known-good domains that we controlled and the results were very bad. I don't have the data to hand right at the moment, but requiring DNSSEC validation would have produced unacceptably high failure rates.
I.e. precisely nothing? Arguments stem from two things, reason and experience, both of which can be respected. Mere appeals to authority, however, should not be. Also, you keep not making any actual arguments about DNSSEC.
You should not be so comfortable making up opinions and ascribing them to what you consider to be your opponents. I said arguments from mere authority mean nothing. I.e. your arguments, where you refer to other people without making any argument yourself.
I'll also note that you dismissed my references as being by "randos", without any further argument or even reading them.
If you’ve read them before, surely you can bring some counter-arguments which are not a simple ad hominem?
If you insist on dismissing arguments outright from persons whom you do not consider authoritative, consider Geoff Huston, a person you referred to before: <https://www.potaroo.net/ispcol/2023-02/dnssec.html>.
I have read it. He presents both the case for and against DNSSEC, and seems to conclude that DNSSEC is necessary for a distributed PKI, and seems to advocate for DANE (which uses DNSSEC) to solve the many problems of the traditional CA system.
Now, have you read my reference, and can you argue against it?
I have read both of them. My rebuttal is: look at who wrote them, and compare that to the author I presented. "To find a detailed argument in favor of DNSSEC in 2024, you need to consult the blog posts of random people". Even if they were good blog posts, that would be damning.
That's as far as I feel like I have to go on this very old, very dead thread.
Submit either of them to HN as a new story, and if they hit the front page, you can be quite sure of getting a take from me on them.
All I can infer from your refusal to rebut any argument, and instead insist that your ad-hominem arguments are acceptable, is that you are in this debate for the fame, not to argue any actual point.
I'm usually on your side in these perennial debates, but I worked at a registry also. Tons of inquiries, requests for assistance with it, and requirements for zone signing.
Maybe the registry side views things a lot differently than the end user side.
At what percent adoption will you give up? It's been marching up non-stop, and with the inclusion of DNSSEC in FedRAMP requirements, any hope of it being a "failure" has long since passed.
It's been a FedRAMP requirement for something like a decade (when Trump took office, one of the first OMB actions rescinded it, though its current status as a federal IT requirement is uncertain; notably: CLOUD.GOV doesn't do DNSSEC, or didn't last I checked). The corresponding period is one in which .COM lost signed domains. It is very much a failure.
The core problem with DNSSEC adoption has always been what happens when your ZSK/KSK expires, which it ought to for the same reason SSL certs expire.
Rolling this over in an automated fashion is desirable: if it just happens to slip your mind, too bad, NXDOMAIN.
This is obviously a non-starter for most people; otherwise this would just be automagic like letsencrypt is now.
CDS and CDNSKEY records basically solve this problem, but last I checked only a tiny minority of registrars implement them. Even then, some of them require things like 3-day windows in which the CDS/CDNSKEY must not change before they obey. It's basically a recipe for raising your blood pressure 10mmHg.
So, everyone ignores it for this very good reason. As long as it's essentially installing a landmine in your office chair nobody will touch it.
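If you're wondering whether a given zone publishes the rollover signals at all, here's a quick probe (a sketch assuming dnspython; example.com is just a placeholder domain):

    import dns.resolver

    # CDS/CDNSKEY are the records a parent or registrar can poll to
    # automate DS updates, as described above.
    for rdtype in ("CDS", "CDNSKEY"):
        try:
            for rr in dns.resolver.resolve("example.com", rdtype):
                print(rdtype, rr)
        except dns.resolver.NoAnswer:
            print(rdtype, "not published")
        except dns.resolver.NXDOMAIN:
            print("domain does not exist")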
> The core problem with DNSSEC adoption has always been what happens when your ZSK/KSK expires, which it ought to for the same reason SSL certs expire.
For most users there's really no reason for a ZSK/KSK split or rolling keys, much the same as there's no need for rolling SSH keys for most users.
DNSSEC struggles to justify its existence, because it's mostly useless without transport security, and for transport security we use HTTPS (TLS+WebPKI) that doesn't rely on DNSSEC. This leaves only few scenarios where DNSSEC makes a difference.
The current Web's CAs system has obvious flaws, and periodic clown shows, but it gets just enough workarounds to maintain the status quo. DNSSEC would like to replace Web's CAs with its own (DANE), but that isn't a fundamental departure from trusting CAs, only a different arrangement of authorities.
> for transport security we use HTTPS (TLS+WebPKI) that doesn't rely on DNSSEC
The Web PKI relies heavily on DNS, and thus without DNSSEC it's vulnerable to spoofing. Of course people like Thomas have worked very hard to get people not to enable DNSSEC, and so a great many domains can be spoofed. Are they? Maybe†. It's hard to tell, after all there would be no sign of a problem, everything looks fine, there's no validation step because you said you didn't want one...
† At a previous employer in 2019 I looked into this. Military intelligence definitely perform attacks on foreign country Internet infrastructure, but at that time they mostly seemed to rely on the fact that users don't use TLS and/or dismiss dialogs saying the certificates didn't match. The UX got better since, so I'd expect today they routinely ask a CA to issue for these unvalidatable domains once they control the packet flow.
DNS transaction spoofing is not, in fact, a common cause of domain hijacking; generally, when domains are stolen and have certs misissued, it's through registrar phishing. DNSSEC does not in fact solve a real problem for the WebPKI, which is why virtually none of the Internet's most sensitive WebPKI users (for instance, Google Mail) use it.
I've been out of that game for more than four years now, but I'd be astonished if it's less prevalent now than it was when I left. You seem very hung up on these famous brand names, I have no doubt that works out for your career, but in practical terms those aren't targets for such attacks. Too much by-catch.
It is, but as I understand it, it's all just phishing. I haven't talked to anybody at a CA (maybe excepting Nick here?) who has told me direct cache poisoning attacks on the DNS have been a thing for domain hijacking. Like, full-on BGP4 attacks are more common (in that they actually happen).
If he wants to correct me on this, he should probably do so more clearly and less evasively? I'm being pretty specific and making claims that I think are pretty easy to falsify.
Assuming properly issued certificate, Web PKI on the browser<=>server side is quite well protected against DNS spoofing of all kinds, and this is where DNSSEC is pointless. Users don't need to have DNSSEC-supporting resolvers, and not supporting DNSSEC only improves reliability of their browsing.
CAs are deterred from misissuing by CA/B forum and CT logs. This is actually stronger than DNSSEC where browsers have no power to catch and punish malicious/incompetent CAs.
Web PKI is vulnerable to DNS spoofing when issuing domain-validated certificates, and this is kind of a crapshoot. It is getting "patched" with things like querying from multiple vantage points, the push for RPKI to make BGP attacks harder, and certificate transparency to catch spoofing after the fact. This is a weakness, but somehow TLS hasn't collapsed yet because of it.
> by exploiting a 20-plus-year-old design flaw in the DNSSEC specification.
If it's an error in the specification, how can patches already be available without breaking the specs?
Can anyone shed light on this? I'm asking because I'm running Unbound, which is affected (because it's following the spec, IIUC), and yet a patch for Unbound is already out.
In a nutshell, the vulnerability is stuffing a lot of broken signatures from different keys in a response so the validator wastes a bunch of time retrieving keys and then validating signatures that'll never validate. The fixes just limit the amount of time before validators yield to another task and/or give up. It's a big deal if you run a public resolver but otherwise you can probably fix it at your leisure.
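Back-of-the-envelope for why that's devastating (the counts are illustrative assumptions, not the paper's exact figures):

    # One crafted response carries many DNSKEYs that share a key tag and many
    # RRSIGs naming that tag; a spec-compliant validator tries every pair.
    keys_per_response = 500
    sigs_per_response = 500
    validations = keys_per_response * sigs_per_response
    print(validations)  # 250,000 expensive signature checks from one packet

Quadratic work from a single UDP packet is what turns "try all keys, try all signatures" from robustness advice into a DoS primitive.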
It's similar to how compilers often have limits on compile time constructs that are maybe not explicitly in the standard, but that no one really cares about.
Say, if you design some recursive template in C++ that resolves after a million steps, that might technically be a valid program according to the spec but no compiler will actually accept it (I think the C++ spec actually allows recursion limits, but that's beside the point).
This vulnerability is a bit like not having that limit. So, maybe an RFC to explicitly call out the need for some limit on DNSSEC processing time will be issued, but in practice no one except for attackers should ever come anywhere near the newly imposed non-standard ones.
I run DNS servers behind CurveDNS forwarders at home and it works fine. Of course that's not enough to convince anyone except me that it works. There are some nameservers that offer DNSCurve on the internet. This proves that it works.
The query to 192.5.6.30 is not encrypted because the .com nameservers do not provide a DNSCurve public key prefixed by "uz5" in the subdomain.
The query to 104.248.15.206 is encrypted using DNSCurve. Each packet is encrypted separately. Packets are exchanged via UDP just like regular DNS. DNSCurve predates QUIC.
There is also a free secondary DNS service that will let anyone offer DNSCurve without having to set up CurveDNS forwarders. Assuming they are still in business. I have not tested it. This should put to rest any doubts that DNSCurve actually works. But I know it won't.
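For anyone who hasn't seen the model: each DNSCurve packet is an independent Curve25519 box, with no handshake or session. A minimal sketch with PyNaCl (the names and stripped-down framing are illustrative; the real wire format adds magic strings, the "uz5..." key-in-hostname encoding, and structured nonces):

    import os
    from nacl.public import PrivateKey, Box

    server_key = PrivateKey.generate()   # server's long-term Curve25519 key
    client_key = PrivateKey.generate()   # client's (often ephemeral) key

    query = b"example dns query bytes"   # an ordinary DNS packet
    nonce = os.urandom(24)               # fresh nonce per packet

    # Client encrypts each UDP packet independently -- no session state.
    ciphertext = Box(client_key, server_key.public_key).encrypt(query, nonce)

    # Server decrypts with its private key and the client's public key.
    plaintext = Box(server_key, client_key.public_key).decrypt(ciphertext)
    assert plaintext == query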
The whole point of CurveDNS (and now DoH) is that it works right away, and doesn't depend on the rest of the Internet cooperating with you. It's a bottom-up design, contrasted to DNSSEC's (failed) top-down model. The only problem with DNSCurve is that it's been effectively superseded by DoH. It's the Betamax of secure DNS protocols. Doesn't matter if it's better.
DNSCurve is used to encrypt queries to authoritative DNS servers. DoH is only used to encrypt DNS queries to third party DNS caches. Using third party caches can open the door to cache poisoning. Cache poisoning can be and has been used as a "justification" for deploying DNSSEC.
Right, but (1) most of the value of secure transport for DNSSEC is in the "last mile" between the resolver and the stub resolver on the laptop or whatever, and (2) the same model that secures that hop can secure authoritative lookups for resolvers --- neither protocol is widely deployed for authority queries for recursors, but DoH already has huge deployment numbers for the other use, and seems the more likely bet for how this will play out going forward.