> by exploiting a 20-plus-year-old design flaw in the DNSSEC specification.
If it's an error in the specification, how can patches already be available without breaking the specs?
Can anyone shed light on this? I'm asking because I'm running Unbound, which is affected (because it follows the spec, IIUC), and yet a patch for Unbound is already out.
In a nutshell, the vulnerability is that an attacker can stuff a response with a large number of broken signatures from many different keys, so the validator wastes a huge amount of time retrieving keys and then validating signatures that will never validate. The fixes just limit the work: the validator yields to other tasks and/or gives up after a bounded number of attempts. It's a big deal if you run a public resolver, but otherwise you can probably patch at your leisure.
It's similar to how compilers often have limits on compile-time constructs that aren't explicitly in the standard, but that no one really cares about.
Say you design some recursive template in C++ that resolves after a million steps: that might technically be a valid program according to the spec, but no compiler will actually accept it (I think the C++ spec actually allows recursion limits, but that's beside the point).
This vulnerability is a bit like not having that limit. So maybe an RFC will be issued explicitly calling out the need for some limit on DNSSEC processing time, but in practice no one except attackers should ever come anywhere near the newly imposed, non-standard limits.