I once worked for an organization known for providing good documentation.
This is how it worked:
They had a documentation-writing branch, and you (the developer) knew that if you didn't write the documentation, they would. And then it would cost you _more_ time and frustration to review and correct what they wrote than to write it yourself and hand it to them.
So you did write it (and they proofread it, corrected grammar etc.).
Small company in the Netherlands?
My guess: management realized that they don't have the money, and axed the position. They can't say that openly, so they had no choice but to ghost the interviewee. Which is of course still very frustrating!
CAA is about preventing certificate mis-issuance, which is what happened in this attack. DNSSEC and CAA could have prevented this attack from being performed the way it was, by thwarting the MITM on ACME.
DANE is about changing the way certificates are authenticated. DANE makes it possible to authenticate certificates without getting them issued by a well-known CA. So CAA records are not particularly relevant to DANE. You can use DANE with certificates issued by a CA, which gives you two ways to authenticate the certificate; in this situation CAA secures one path and DANE the other.
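To make the CAA side concrete, here is a sketch of what CAA records look like in a zone file (domain and mail address are hypothetical). With DNSSEC on the zone, a CA performing the mandatory CAA lookup during issuance cannot be fed a forged answer by an on-path attacker:

    ; Only Let's Encrypt may issue certificates for example.com
    example.com.  IN  CAA  0 issue "letsencrypt.org"
    ; Optionally, where to report attempted mis-issuance
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"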
I am one of the co-authors of the DANE SRV RFC https://www.rfc-editor.org/rfc/rfc7673 which is what XMPP would use. I don’t follow XMPP development so I don’t know if it has been deployed. I would like it if DANE were more widely used, but it’s not pertinent to this attack.
Yeah. I used to be 100% in on DANE and against CAs. I'm still 100% for DANE but I now think DANE using existing CAs is the better option in many cases because it means things get CT logged. We don't have a DNSSEC transparency situation right now. OTOH there is one undersung issue with CAs, which is that Let's Encrypt isn't as universally available as people think (see the US embargo list) and that does potentially make access to the internet harder for some.
There are some use cases where DANE is actually winning real victories and is actually more viable than the existing CA infrastructure - site-to-site SMTP, for example.
I feel like packet size was and continues to be a major obstacle for DNSSEC.
Do you know why the DNSSEC/DANE world hasn't simply acknowledged this and switched to requiring ECC?
It is trivial to fit several compressed curve points (i.e. signatures) in a single packet, whereas you can't even fit two RSA signatures in a minimum-safe-to-assume DNS UDP reply packet after accounting for padding and ASN.1 overhead.
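A rough way to see the size pressure (a sketch only: the RRSIG overhead, signer-name length, and header/question overhead are assumed round numbers, not exact wire-format accounting):

```python
# Back-of-the-envelope DNS packet budget. Assumptions: an RSA-2048
# signature is 256 bytes, an Ed25519 signature is 64 bytes, each RRSIG
# carries ~18 bytes of fixed RDATA fields plus a signer name (assumed
# 20 bytes), and header + question cost ~44 bytes of the reply.

def rrsig_size(sig_bytes: int, signer_name: int = 20) -> int:
    """Approximate size of one RRSIG record's RDATA."""
    return sig_bytes + 18 + signer_name

def sigs_per_packet(budget: int, sig_bytes: int) -> int:
    """How many such RRSIGs fit in the remaining packet budget."""
    return budget // rrsig_size(sig_bytes)

budget_512 = 512 - 44     # classic non-EDNS UDP limit
budget_edns = 1232 - 44   # commonly recommended EDNS buffer size

print(sigs_per_packet(budget_512, 256))   # RSA-2048, no EDNS  -> 1
print(sigs_per_packet(budget_512, 64))    # Ed25519, no EDNS   -> 4
print(sigs_per_packet(budget_edns, 256))  # RSA-2048 with EDNS -> 4
```

Even with these generous approximations, a single RSA-2048 RRSIG nearly exhausts a 512-byte reply, while several Ed25519 ones fit comfortably.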
I get the feeling that there is some faction that really hates UDP and they sort of hijacked the DNSSEC situation to use as a lever to force people to allow DNS-over-TCP.
That seems to be backfiring, however, and DNSSEC has wound up taking a bullet for the UDP-haters.
Many very-large networks simply can't afford for their DNS traffic to be exposed to TCP's intrusion-detection malleability and slowloris (resource exhaustion) attacks. These networks appear to be simply ignoring the "thou must TCP thine DNS" edict. DNSSEC is not a good enough carrot for them. I think ditching RSA would have been a more pragmatic choice than ditching UDP or skipping DNSSEC.
When I query vjhv.verisign.com I get a response containing four 2048 bit RSA-SHA-2 signatures in 1049 bytes which is well within the EDNS MTU for unfragmented UDP, so I’m not convinced the problem is as bad as you paint it. There have been problems with EDNS trying to use fragmented UDP, but that has been reduced a lot by newer software being more cautious about message size limits for DNS over UDP.
DNS needs TCP even in the absence of DNSSEC, because there are queries you cannot resolve without it. Some operators might convince themselves they can get away without it, but they will probably suffer subtle breakage.
> four 2048 bit RSA-SHA-2 signatures in 1049 bytes which is well within the EDNS MTU for unfragmented UDP
I was referring to the non-EDNS 512-byte limit.
Yes, you get ~2.5 times more with EDNS. Still, four records is not a lot.
> DNS needs TCP even in the absence of DNSSEC, because there are queries you cannot resolve without it.
Theoretically? Perhaps. Some would argue that connectionless DNS is valuable enough that people should not create those resource records. Before DNSSEC that was a working consensus. And with ECC it could be once again.
The bloaty key/signature size is only a problem with the PQ encryption systems.
For signing only there are much more efficient PQ cryptosystems, with signatures around the same size as ECC.
If DNSSEC ever adopts PQC it will be one of those systems.
Here are two of the earliest, and easiest to understand. There are much better ones now.
Unfortunately the DANE SRV RFC is kind-of mismatched with how SRV and TLS work in practice. It requires the server to serve a certificate matching its own hostname (the hostname of the SRV target) rather than a certificate matching the expected host (the hostname that the SRV record was on). This is fine and secure if you use only DANE, but if you want to use DANE with CA-issued certs it makes it somewhere between hard and impossible.
Note that the owner of an SRV record is a service name, not a host name.
There are a few reasons for this oddity: partly so it matches with DANE for MX records, partly to support large scale virtual hosting without reissuing certificates.
You should be able to get a cert with subject names covering the server host name(s) and the service name(s).
Why not? You could use "certificate usage" value 1 and (if the implementation does not neglect it) immediately notice that validation by CA disagrees with validation by DNS. That should be good enough, no?
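For concreteness, a sketch of how DANE-SRV (RFC 7673) lays out in a zone, with hypothetical names and a placeholder hash. The TLSA record hangs off the SRV *target*, which is why the certificate is matched against the target host rather than the service's own name. Certificate usage 1 (PKIX-EE) is the "both paths must agree" mode: the certificate must pass normal CA validation *and* match the TLSA record:

    ; the client looks up the service name...
    _xmpp-client._tcp.example.com.  IN SRV   0 0 5222 server.example.net.
    ; ...then the TLSA record at the SRV target's port/protocol/host;
    ; 1 1 1 = PKIX-EE usage, SPKI selector, SHA-256 matching
    _5222._tcp.server.example.net.  IN TLSA  1 1 1 ( <SHA-256 of the
                                      server's CA-issued EE public key> )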
DANE assumes we can successfully deploy this to the entire Internet. It is unclear that's ever possible, and it's certainly not possible today. Lots of things would be great if you could deploy them; for example, you wouldn't build QUIC on top of UDP if you could "just" deploy a new transport protocol - except nope, for the foreseeable future that's undeployable.
A public CA generally has a more sophisticated relationship with their network transit provider or (hopefully) providers and can get DNSSEC actually working as intended for them.
So this means mything.example's DNS service and some public CA both need working DNSSEC, but visitors to mything.example - such as your mom's sister, or some guy who just got into mything but isn't entirely clear on whether Apple makes Windows - do not need DNSSEC. For them everything works exactly as before, yet the certificate acquisition step is protected from third parties.
> The order [...] is quite different to english but very similar to functional programming.
The most widely accepted (imo) order of function composition is right to left:
send(makeUrl("http://..."))
just like in English "blue fish": the transformation stands to the left of the object being transformed (*). Whereas "transformation follows the object" is an OO tradition, as shown in your examples: take an object, apply a transformation (method) yielding another object, apply a transformation to that new object, and so on.
(*) In quintessentially functional Haskell, you can compose functions both ways, but right-to-left is more traditional:
{-
- Find numeric value of the first figure of the decimal representation
- of a number:
- 1. convert it to a string of decimal characters (show)
- 2. take the first character of the string (head)
- 3. convert the character to a string containing one character (:[])
- 4. convert the string of one decimal character into an integer (read :: Int)
-}
main = do
  let
    firstfigure1 :: Int
    firstfigure1 = read . (:[]) . head . show $ 413
  print firstfigure1
{-
- Reverse the order of composition. Define "right-pointing" versions for
- (.) and ($)
-}
  let
    (.>) = flip (.) -- it will become infixl 9 by default
    ($>) = flip ($)
    infixr 0 $> -- We need a value lower than the above. Use the same as $
    firstfigure2 :: Int
    firstfigure2 = 413 $> show .> head .> (:[]) .> read
  print firstfigure2
People are mostly motivated by gratification. You've written _functioning_ code - you can instantly see how it solves the problem at hand. You've written _beautiful_ code - you can stare in satisfaction at the negative total in `git diff --stat`.
You've written secure code - your reward comes in the form of nobody talking about your code for the next twenty years ;)
More likely the reward is people complaining about how inconvenient the new security process is for the next 20 years. "Oh I have to rotate keys now..." "I liked it better when it told me my password was wrong outright" "entering an MFA token from my phone is annoying" etc.
Heh that may be true sometimes. Though in my perception, "writing secure code" is more about sanitizing input and preventing buffer overflows than about enforcing secure practices on the user...
Most developers don't get to choose development priorities. Their gratification comes from a salary which may or may not rise by unknown proportion based on deltas in performance. The "real" incentives behind company performance are generally not owned by developers to any emotionally substantive extent.
https://thephd.dev/_vendor/future_cxx/technical%20specificat...
Discussion:
https://thephd.dev/_vendor/future_cxx/papers/C%20-%20Improve...