
> Nobody expects LibreSSL to be able to do everything it promises if the system runs out of memory, so why should people expect LibreSSL to do everything it promises if the system is misconfigured or out of file descriptors?

The issue is not that anyone expects it to do everything it promises on a misconfigured system, but that when it does fail, it should take care to avoid failing in ways that could open massive security holes.

This is the issue here: the developers believe their options are limited because systems with unsafe core files are known to exist, and a core dump could expose enough state to less privileged users to leave the system vulnerable. Someone building for a system they know has properly secured core files can disable the homegrown entropy code; the library will then fail hard if /dev/urandom and sysctl() are both unavailable, and the problem goes away.
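
To make the trade-off concrete, here is a minimal C sketch of that fallback ordering, under stated assumptions: WANT_HOMEGROWN_FALLBACK and fallback_gather_entropy() are hypothetical names invented for illustration, not LibreSSL's actual identifiers. The shape is: try /dev/urandom, optionally fall back to the homegrown path if the build allows it, otherwise fail hard.

    /*
     * Sketch only; the build flag and fallback helper are hypothetical,
     * not LibreSSL's real code.
     */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    #ifdef WANT_HOMEGROWN_FALLBACK
    /* Hypothetical: scrape process state (pids, timings, ...) as a last resort. */
    int fallback_gather_entropy(unsigned char *buf, size_t len);
    #endif

    static int read_urandom(unsigned char *buf, size_t len)
    {
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd == -1)
            return -1;
        ssize_t n = read(fd, buf, len);
        close(fd);
        return (n == (ssize_t)len) ? 0 : -1;
    }

    int get_entropy(unsigned char *buf, size_t len)
    {
        if (read_urandom(buf, len) == 0)
            return 0;
        /* A sysctl()-based path would go here on systems that offer one. */
    #ifdef WANT_HOMEGROWN_FALLBACK
        return fallback_gather_entropy(buf, len);
    #else
        abort(); /* fail hard: better no keys than predictable ones */
    #endif
    }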

But what do you suggest they do when they cannot know whether failing will expose sensitive data? They've chosen the option they see as the lesser of two evils: do the best they can - only as a fallback, mind you - and include a large comment documenting the issues.

If they had full freedom to design their own API, this would not be an issue. They could, e.g., have provided a callback that returns entropy or fails in an application-defined safe way, among many other options. But as long as part of the point is to support the OpenSSL API, their hands are fairly tied.
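
For illustration, a minimal C sketch of the kind of callback-based API that comment describes; the lib_* names are made up for this sketch and are not part of the OpenSSL or LibreSSL API.

    #include <stddef.h>

    /*
     * Application-supplied: fill buf with len bytes of entropy, or return -1
     * so the library fails in whatever way the application deems safe.
     */
    typedef int (*entropy_cb)(unsigned char *buf, size_t len);

    static entropy_cb app_entropy_cb;

    void lib_set_entropy_callback(entropy_cb cb)
    {
        app_entropy_cb = cb;
    }

    int lib_get_entropy(unsigned char *buf, size_t len)
    {
        if (app_entropy_cb != NULL)
            return app_entropy_cb(buf, len);
        return -1; /* no callback registered: refuse rather than guess */
    }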


