The use of lava lamps as a source of randomness is, to use your term, "just marketing" -- it is not fundamentally more secure than other sources of randomness.
The use of a group of trusted randomness generators, a majority of whom would have to collude in order to trick a consumer into thinking an input was random when it was actually staged, offers genuine functionality that cannot be dismissed as "just marketing".
As long as it's done correctly, mixing new entropy sources into an entropy pool will never _decrease_ the entropy. So in the case of LavaRand, even if it only ever returned a string of zeros, systems that mix its output into their entropy pools wouldn't be any worse off than before. Perhaps we could have made this point more clearly in the post. (I'm one of the authors.)
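To illustrate the point, here's a minimal sketch of why mixing can't hurt: if the new state is a hash of the old state plus the new input, an attacker who fully controls the input still can't predict the new state without already knowing the old one. (Names like `mix_into_pool` are mine for illustration; real pools such as Linux's use different mixing constructions, but the principle is the same.)

```python
import hashlib

def mix_into_pool(pool: bytes, new_input: bytes) -> bytes:
    # Hash the old pool state together with the new input. Because the
    # old state is part of the hash input, even a fully attacker-chosen
    # new_input (e.g. all zeros) leaves the new state unpredictable to
    # anyone who doesn't know the old state.
    return hashlib.sha256(pool + new_input).digest()

pool = hashlib.sha256(b"initial seed entropy").digest()
pool = mix_into_pool(pool, b"\x00" * 32)   # "worthless" all-zero input
pool = mix_into_pool(pool, b"lava lamp frame hash")
```

The worst case for a bad source is that it contributes nothing; it can't subtract what's already in the pool.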
So, if your source of random data doesn't reduce the pool's entropy, but generating random numbers does draw from the pool, then over a long enough timeline, aren't you going to deplete the entropy anyway?
Randomness from a CSPRNG (cryptographically secure pseudorandom number generator) never really gets "depleted": as long as the seed contains enough entropy and isn't compromised, it's computationally infeasible to learn anything about the internal state of the CSPRNG from its outputs. See https://research.nccgroup.com/2019/12/19/on-linuxs-random-nu... for a nice overview.
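A toy sketch of the idea: seed a deterministic generator once, then draw output indefinitely. This counter-mode HMAC construction is illustrative only (real systems use vetted designs like ChaCha20 or HMAC_DRBG, and `SimpleCSPRNG` is my own name), but it shows why output never "runs out" -- recovering the seed from outputs would require breaking the underlying primitive.

```python
import hmac
import hashlib

class SimpleCSPRNG:
    """Toy generator: output block i = HMAC-SHA256(seed, i).
    Given a high-entropy, secret seed, outputs are computationally
    indistinguishable from random, so arbitrarily many bytes can be
    drawn without the seed's entropy being 'used up'."""

    def __init__(self, seed: bytes):
        self._seed = seed
        self._counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hmac.new(self._seed,
                             self._counter.to_bytes(8, "big"),
                             hashlib.sha256).digest()
            out += block
            self._counter += 1
        return out[:n]

rng = SimpleCSPRNG(b"32 bytes of real entropy........")
a = rng.read(64)
b = rng.read(64)  # fresh output; no entropy "consumed"
```

New entropy is still mixed in periodically on real systems, but that's defense in depth (e.g. recovery from state compromise), not refilling a tank.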
On older systems that have a notion of entropy depletion, you would eventually deplete the entropy counter and /dev/random would start blocking if you aren't feeding new entropy into the system.