> The key is to acquire resources in places that can fail and use them in places that can't. I'm amazed and disappointed that the LibreSSL people aren't following this basic principle.
To be fair to the LibreSSL devs, the Linux-specific /dev/urandom code is currently encapsulated rather nicely behind an interface that's compatible with the OpenBSD getentropy() syscall. Following your suggestion would create a layer violation and move LibreSSL closer toward the (much maligned) OpenSSL approach to cross-platform compatibility. I don't think this is a great excuse for the current design, but it's an explanation.
> Following your suggestion would create a layer violation and move LibreSSL closer toward the (much maligned) OpenSSL approach to cross platform compatibility.
The OpenSSL approach to portability is doomed: it can only deal with cosmetic differences between platforms. I appreciate the principle of using compatibility functions instead of #ifdef, but at some point, you need to incorporate the panoply of architectures into your design. It galls me to see the OpenBSD people claim that Linux is broken merely because it is different. That's incredible arrogance.
Isn't this the same way that they do porting for OpenSSH? Why do you say that method is doomed when it seems to have been working fine for over 10 years?