No. read(2) is one syscall, and the cost of the initial open is negligible. One real motivation seems to be wanting programs to work without a /dev. I don't think that's a reasonable requirement.
> Access to /dev is DOS'able in many situations by exhausting file descriptor / open files limits.
That's why you open /dev/urandom in advance of performing operations that require randomness. If that open fails, you don't go on to perform the operation that requires randomness.
Even if you have opened it, you have no guarantee that the file descriptor has not been closed since. Yes, that would be stupid of the user of the library, but many security lapses happen because people make stupid assumptions. Code that closes all file descriptors on fork, for example, is fairly common, so you cannot safely assume that the file descriptor remains valid.
> Even if you have opened it, you have no guarantee that the file descriptor has not been closed since.
You can absolutely rely on internal file descriptors not being closed. A program that closes file descriptors it does not own is as buggy as a program that calls free on regions of memory it does not own. A library cannot possibly be robust against this form of sabotage. The correct response to EBADF on a read of an internal file descriptor is to call abort.
The "close all file descriptors" operation is most common before exec. After exec, the process is a new program that can open /dev/urandom on its own (since, as I've mentioned previously, it's a broken environment in which /dev/urandom does not exist).
> You can absolutely rely on internal file descriptors not being closed.
I've explained several times why you can't. The program that closes all file descriptors may be broken, but the big problem is that as long as the library has no safe way of reporting this to the caller without breaking the OpenSSL API, the developers are faced with either breaking a ton of applications or finding an alternative. And they've explained, in copious comments in the source, why this is not an alternative:
> The correct response to EBADF on a read of an internal file descriptor is to call abort.
They have no control over whether or not this will result in an insecurely written core file that can leak data, and this is a common problem. If the person building the library knows that the environment it will be used in does not have that problem, it's one define to disable the homegrown entropy.
> The "close all file descriptors" operation is most common before exec.
I've seen it in plenty of code that did not go on to exec, e.g. to drop privileges for a portion of the code.
OpenSSL is crufty in part because it's full of workarounds for ancient, crufty code. LibreSSL shouldn't repeat that mistake. LibreSSL does have ways to report allocation failure errors to callers. It shouldn't even try to work around problems arising from applications corrupting the state of components that happen to share the same process. That task is hopeless and leads to code paths that are very difficult to analyze and test. You're more likely to create an exploitable bug by trying to cope with corruption than to actually solve a problem, and closing file descriptors that other components own is definitely a form of corruption.
> [LibreSSL has] no control over whether or not [abort] will result in an insecurely written core file
The security of core files simply isn't LibreSSL's business. The mere presence of LibreSSL in a process does not indicate that the process contains sensitive information. LibreSSL has no right to replace system logic for abort diagnostics. If the developers believe that abort() shouldn't write core files for all programs, for some programs, or for some programs in certain states, they should implement that behavior on their own systems. They shouldn't try to make that decision for other systems. LibreSSL's behavior here is not only harmful but insufficient, as the library can't do anything about other calls to abort, or actual crashes, in the same process.
> I've seen it in plenty of code that did not go on to exec, to e.g. drop privileges for portion of the code.
Of course getentropy would be better. But the current mechanism is not wrong or broken: at best, it's inconvenient. And it's certainly no excuse for the LibreSSL authors to write a library that calls raise(SIGKILL) on file descriptor exhaustion. That behavior, in many cases, amounts to a remote DoS. As long as this code is in the library (even if off by default), I'm hesitant to recommend LibreSSL.
Without a getentropy(2) [hint] interface that doesn't use file descriptors, it has no other secure choice but to raise(SIGKILL), in my opinion. A mere error might be overlooked, but continuing to run could expose secrets and keys, which is much worse than a DoS condition (a process whose file descriptors are exhausted while under attack is effectively being DoSed already). (It's turned off by default because coredumps could also expose that data locally.)
It's behind a define so there's no problem, don't turn it on if you don't like it. You'd never execute that code anyway, because you're the smart admin who knows better and always has the right devices in all the chroots. And who cares about other users! If they don't know better, they're doing it wrong. Put the blame on them. There is no problem.