
I wonder what the added effectiveness is of such elaborate setups vs an off-the-shelf HRNG dongle plugged into the server (which I assume is what every other company with such requirements uses). Are the lava lamps and pendulums actually that much more functional or just marketing?



The use of lava lamps as a source of randomness is, to use your term, "just marketing" -- it is not fundamentally more secure than other sources of randomness.

The use of a group of trusted randomness generators, a majority of whom would have to collude in order to trick a consumer into thinking an input was random when it was actually staged, offers genuine functionality that cannot be dismissed as "just marketing".


It's also the type of fun/cool marketing that potentially opens up new attack vectors.


As long as it's done correctly, mixing new entropy sources into an entropy pool never _decreases_ the pool's entropy. So in the case of LavaRand, even if it only ever returned a string of zeros, systems that mix its output into their entropy pools would be no worse off than before. Perhaps we could have made this point more clearly in the post. (I'm one of the authors.)
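As a sketch of why, here's a minimal hash-based pool mixer (the function name and the choice of SHA-256 are illustrative, not the actual LavaRand or kernel design): the new state depends on both the old pool and the new input, so a worthless input can't erase the entropy already there.

```python
import hashlib

def mix_into_pool(pool: bytes, new_entropy: bytes) -> bytes:
    # Hash the old pool state together with the new input. Even if
    # new_entropy is all zeros (a broken or malicious source), the
    # result is still unpredictable to anyone who doesn't know the
    # old pool state.
    return hashlib.sha256(pool + new_entropy).digest()

pool = hashlib.sha256(b"seed material from a trusted source").digest()
pool = mix_into_pool(pool, b"\x00" * 32)  # a "string of zeros" source
```

The pool after mixing still depends on the original trusted seed, so an attacker who controls only the zero source learns nothing about it.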


So, if adding data from your random source never reduces the pool's entropy, but generating random numbers does consume entropy from the pool, then over a long enough timeline, aren't you going to deplete the entropy anyway?


Randomness from a CSPRNG (cryptographically secure pseudorandom number generator) never really gets "depleted": as long as the seed contains enough entropy and isn't compromised, it's computationally infeasible to learn anything about the internal state of the CSPRNG from its outputs. See https://research.nccgroup.com/2019/12/19/on-linuxs-random-nu... for a nice overview.
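A toy illustration of that property, assuming the hash behaves like a random oracle (this is a sketch, not the Linux kernel's actual construction): the generator produces unlimited output from a fixed seed, and recovering the internal key from the output blocks would require inverting the hash.

```python
import hashlib

class HashDRBG:
    """Toy counter-mode generator: each output block is the hash of a
    secret key plus a counter. Output never 'depletes'; learning the
    key from outputs would mean inverting SHA-256."""

    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hashlib.sha256(
                self.key + self.counter.to_bytes(8, "big")
            ).digest()
            out += block
            self.counter += 1
        return out[:n]

drbg = HashDRBG(b"high-entropy seed goes here")
sample = drbg.read(64)
```

The same seed always reproduces the same stream, which is exactly why the seed (not the output volume) is what must stay secret and high-entropy.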

The Linux random number generator used to have a notion of entropy depletion, but that is no longer the case (at least for x86-64 systems: https://wiki.archlinux.org/title/Random_number_generation).

On older systems that do have a notion of entropy depletion, you would eventually exhaust the entropy counter, and /dev/random would start blocking if you weren't feeding new entropy into the system.
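On a current Linux kernel you can observe the modern behaviour directly, e.g. via Python's os.urandom (a thin wrapper over the kernel CSPRNG): reads don't block once the generator is initialized, no matter how much you draw.

```python
import os

# os.urandom() draws from the kernel CSPRNG. On modern Linux it never
# blocks after boot-time initialization; there is no entropy counter
# left to "deplete", regardless of how much you read.
key = os.urandom(32)
more = os.urandom(1024 * 1024)  # a whole megabyte, still instant
```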


Compromising a single seed source for randomness doesn't mean you have control over the output. That's the whole point of using multiple sources.


Most servers have an HRNG in the CPU itself.


Maybe relevant ...

https://lwn.net/Articles/961510/

(CPUs may have HRNGs, but there are certain observations of their behaviour that raise concern.)


I don't think that's what the post says. I understand it as: a hardware RNG is not like a CSPRNG, which is designed to pass validation. If you measure a truly random hardware RNG for long enough, you'll eventually get a string of 9s, however unlikely that is. It doesn't mean that the HRNG is broken in any way.


That isn't it. When the hardware RNG is initialized, it first runs tests on the randomness to verify that the entropy-gathering hardware isn't broken. Then it feeds that entropy into a CSPRNG, which is what gets exposed to code using the hardware RNG.

That person's point was that it's impossible to know for sure whether entropy collection is broken. In practice this isn't an issue, since you can make the false-positive rate very small even if it can never be 0.
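A toy version of such a startup health test, loosely modeled on the repetition count test from NIST SP 800-90B (the cutoff value here is illustrative): a stuck source fails immediately, while a genuinely random source fails only with tiny, but nonzero, probability.

```python
def repetition_count_test(samples: bytes, cutoff: int = 32) -> bool:
    """Return False if any byte value repeats `cutoff` times in a row,
    which suggests the entropy source is stuck or broken. A healthy
    random source can also trip this (a false positive), but with a
    negligible probability controlled by the cutoff."""
    run = 1
    for prev, cur in zip(samples, samples[1:]):
        run = run + 1 if cur == prev else 1
        if run >= cutoff:
            return False
    return True
```

Raising the cutoff shrinks the false-positive rate exponentially, which is why in practice it can be made astronomically small without ever reaching zero.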


I wasn't criticising HWRNGs as such, merely saying that actual evidence exists for some that their entropy pool can be exhausted and that they can degrade. "Just use the CPU HWRNG" is too-simple advice.



