NSS does something similar: since NSS will not be able to access /dev/urandom via the file system after the sandbox activates the chroot, it reserves a descriptor first and fails with a log warning if no descriptors are available.
Warn? Hell, I'd hard-fail. Libraries need resources to do their jobs. The key is to acquire resources in places that can fail and use them in places that can't. I'm amazed and disappointed that the LibreSSL people aren't following this basic principle.
> The key is to acquire resources in places that can fail and use them in places that can't. I'm amazed and disappointed that the LibreSSL people aren't following this basic principle.
To be fair to the LibreSSL devs, the Linux-specific /dev/urandom code is currently encapsulated rather nicely behind an interface that's compatible with the OpenBSD getentropy() syscall. Following your suggestion would create a layering violation and move LibreSSL closer toward the (much maligned) OpenSSL approach to cross-platform compatibility. I don't think this is a great excuse for the current design, but it's an explanation.
> Following your suggestion would create a layering violation and move LibreSSL closer toward the (much maligned) OpenSSL approach to cross-platform compatibility.
The OpenSSL approach to portability is doomed: it can only deal with cosmetic differences between platforms. I appreciate the principle of using compatibility functions instead of #ifdef, but at some point, you need to incorporate the panoply of architectures into your design. It galls me to see the OpenBSD people claim that Linux is broken merely because it is different. That's incredible arrogance.
Isn't this the same way they do porting for OpenSSH? Why do you say that method is doomed when it seems to have been working fine for over 10 years?