> In 2018, Daniel Lemire found an algorithm that avoids the divisions nearly all the time (see also his 2019 blog post). In math/rand, adopting Lemire’s algorithm would make Intn(1000) 20-30% faster...
I recently found a super simple algorithm that appears to produce a number in the interval [0,N) with a branchless expression, using a single multiplication at an extended number size. (Sorry, I don't have a reference.)
Say you want to generate a number, G, in the interval [0,N) where N<=UInt32Max. The algorithm is:
G := uint32(uint64(N) * uint64(rand.Uint32()) >> 32)
It seems like this should select a number in the range with no bias. Is there something I missed?
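For concreteness, here's a runnable version of that expression (a sketch assuming Go's math/rand/v2; the N below is an arbitrary example):

```go
package main

import (
	"fmt"
	"math/rand/v2"
)

func main() {
	// Branchless bounded generation: scale a uniform 32-bit value
	// into [0, N) by keeping only the high 32 bits of the product.
	const N = 1000
	g := uint32(uint64(N) * uint64(rand.Uint32()) >> 32)
	fmt.Println(g) // some value in [0, N)
}
```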
You have written a deterministic function.
If you test this function with all 4 billion uint32 inputs for one odd value of N and count the number of times you get each result, then look at your results: are all of the outputs equally likely, or is there a bias towards some of them?
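That exhaustive test is cheap to run. A sketch (N = 3 is an illustrative odd choice): feed every uint32 through the map and tally the outputs. Since 2^32 is not a multiple of 3, one bucket necessarily comes out ahead.

```go
package main

import "fmt"

func main() {
	// Tally the multiply-shift output for every possible uint32 input.
	// 2^32 = 3*1431655765 + 1, so one bucket must get one extra hit.
	const N = 3
	var counts [N]uint64
	for x := uint64(0); x < 1<<32; x++ {
		counts[N*x>>32]++
	}
	fmt.Println(counts) // [1431655766 1431655765 1431655765]
}
```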
> It seems like this should select a number in the range with no bias. Is there something I missed?
Yes. There are many values of N that aren't divisors of 2^32, the number of distinct uint32 inputs.
As the article says: “However, no algorithm can convert 2⁶³ equally likely values into n equally likely values unless 2⁶³ is a multiple of n: otherwise some outputs will necessarily happen more often than others. (As a simpler example, try converting 4 equally likely values into 3.)”
Careful: the relevant count is 2^32 (i.e. 4294967296, the number of distinct uint32 inputs), not UInt32Max. 2^32 is not divisible by 3, so your code is very slightly biased even for N=3. It would be unbiased with N=4, for example, since 4 divides 2^32.
Regardless, with small values of N the bias is very slight, so you would need many, many iterations to see the imperfection in a statistically significant way.
A quick search didn't reveal any good resources for how to test the quality of a random number generator in a number range. Is what I came up with the best strategy, and you just need to run it for much longer (and compare to a known-good implementation) to see the difference?
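One standard strategy is a chi-squared goodness-of-fit test on the bucket counts. A minimal sketch (the generators and trial count are illustrative; for a fair generator the statistic should land near n-1):

```go
package main

import (
	"fmt"
	"math/rand/v2"
)

// chiSquared draws from gen, buckets the results, and computes
// Pearson's chi-squared statistic against the uniform expectation.
func chiSquared(gen func() int, n, trials int) float64 {
	counts := make([]int, n)
	for i := 0; i < trials; i++ {
		counts[gen()]++
	}
	expected := float64(trials) / float64(n)
	stat := 0.0
	for _, c := range counts {
		d := float64(c) - expected
		stat += d * d / expected
	}
	return stat
}

func main() {
	const n = 1000
	good := func() int { return rand.IntN(n) } // known-good baseline
	shift := func() int { // the multiply-shift trick from above
		return int(uint64(n) * uint64(rand.Uint32()) >> 32)
	}
	// Both should print values near n-1 = 999 here: as noted above,
	// at small N the bias is far too slight for this many trials.
	fmt.Println(chiSquared(good, n, 10_000_000))
	fmt.Println(chiSquared(shift, n, 10_000_000))
}
```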
> (As a simpler example, try converting 4 equally likely values into 3.)
No, but you can convert an RNG that emits 4 equally likely values into an RNG that emits 3 equally likely values: any time the RNG returns the fourth value, try again.
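That rejection loop, sketched in Go (encoding the four values as 0..3 is an assumption for the example):

```go
package main

import (
	"fmt"
	"math/rand/v2"
)

// roll4 stands in for an RNG with four equally likely outputs, 0..3.
func roll4() int { return rand.IntN(4) }

// roll3 converts roll4 into three equally likely outputs by rejection:
// whenever the fourth value comes up, discard it and draw again. Each
// accepted draw is uniform over {0, 1, 2}; it loops 4/3 times on average.
func roll3() int {
	for {
		if v := roll4(); v < 3 {
			return v
		}
	}
}

func main() {
	counts := make([]int, 3)
	for i := 0; i < 300_000; i++ {
		counts[roll3()]++
	}
	fmt.Println(counts) // three roughly equal counts
}
```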
Here's a fun puzzle / annoying interview question: You have a biased coin. You can flip it as often as you want, but heads and tails are not equally likely. Without figuring out the bias of the coin, how do you produce purely random bits?
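For reference, the classic answer is von Neumann's trick: flip twice and keep a bit only when the two flips differ, since P(HT) = P(TH) = p(1-p) for any bias p. A sketch (the 0.7 below is just a stand-in for the unknown bias):

```go
package main

import (
	"fmt"
	"math/rand/v2"
)

// biasedFlip simulates a coin of unknown bias; the extractor below
// never uses the 0.7, it only drives the simulation.
func biasedFlip() bool { return rand.Float64() < 0.7 }

// fairBit flips the coin in pairs and keeps a result only when the
// two flips differ. HT and TH are equally likely whatever the bias,
// so the surviving bits are exactly fair.
func fairBit() bool {
	for {
		a, b := biasedFlip(), biasedFlip()
		if a != b {
			return a
		}
	}
}

func main() {
	heads := 0
	for i := 0; i < 1_000_000; i++ {
		if fairBit() {
			heads++
		}
	}
	fmt.Println(heads) // close to 500000 despite the biased coin
}
```

The replies below explore how to beat this scheme's efficiency by extracting entropy from longer runs of flips.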
Enlist my friends to flip more coins in parallel :)
At a high level I'd probably try to exploit the fact that every bit sequence with a given number of H and T has equal probability. e.g., HHHT, HHTH, HTHH, and THHH are equally probable and so can be mapped to four different values. That still only gets me 2 bits from 4 flips (50%), but other combinations (e.g., the variations on HHTT) could get me log2(6) bits. I'm guessing that with a higher number of flips I could extract (on average) a greater and greater proportion. No clue what the asymptote would be.
Thinking further: for N flips you get 0 bits of entropy for all H or all T. For all other sequences you get log2(N choose count(H)) bits of entropy, and you can average these, weighted by their probabilities, and divide by N to get the per-flip yield.
According to Wolfram Alpha this improves as N gets larger, but it's not great. For 16 flips you get 9.5 bits of entropy, but hey, at least I beat half! 32 flips gets you about 20 bits of entropy. By 64 flips you get 43 bits, and that's approaching 2/3 efficiency. Maybe not so bad after all!
Going higher is a little tough since I'm on mobile, but it starts crawling, reaching only 71% efficiency by 1024 flips. I'm curious whether it actually reaches 100% efficiency asymptotically (for a fair coin), even if quite slowly.
Edit: Playing with it more[1], it really seems to approach 72.1%. I wonder if I can figure out the asymptote analytically…
This kind of nerd-sniped me. I did find a closed-form solution that relies on an identity mapping the product of successive factorials ("hyperfactorials") to the Barnes G-function, which is related to the Riemann zeta function at a level deep past my comprehension. Still, here's[1] the closed-form solution, which returns the number of bits of entropy that can be generated from some number of coin flips using this approach, assuming the coin is indeed fair. Divide by the number of flips to get the efficiency.
Unfortunately, Wolfram Alpha isn't able to determine the limit of this function[2], and neither am I. :)
This algorithm produces a biased result with probability 1/2^(32-bitwidth(N)). Using 64 or 128 random bits can make the bias practically undetectable. A comprehensive overview of the approach can be found here: https://github.com/apple/swift/pull/39143
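A sketch of the widened variant in Go (assuming math/rand/v2 and math/bits; the same multiply-shift idea, but with a 64x64 -> 128-bit product, so the bias drops to roughly one part in 2^64):

```go
package main

import (
	"fmt"
	"math/bits"
	"math/rand/v2"
)

// boundedRand64 scales a uniform 64-bit value into [0, n) by keeping
// the high 64 bits of the full 128-bit product. The bias still exists
// in principle, but at ~2^-64 it is far below statistical detection.
func boundedRand64(n uint64) uint64 {
	hi, _ := bits.Mul64(rand.Uint64(), n)
	return hi
}

func main() {
	for i := 0; i < 5; i++ {
		fmt.Println(boundedRand64(1000)) // values in [0, 1000)
	}
}
```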
Your results will be biased. The bias is tiny with small values of N, and absent when N is a power of two, but the skew becomes more obvious when your N is 2^31 + 2^30 + 1, for example.
As is often the case, I'm impressed by the quality of all of this: the amount of thinking that went into it, the excellently written blog post. I love the Go blog.
> Second, all changes must be rooted in respect for existing usage and users
This is one of the reasons I love working in Go. I feel that the language maintainers understand that people using Go have more important work to do than update their code to whatever the new hotness is this month.
That's precisely what I was thinking as I read this. The Go module transition was not awesome, but if the result is being able to "step" the standard library forward like this without a corresponding major language release, then I take back all the bad things I ever said about it.
What does this have to do with Go modules? Any standard library should be able to add a new built-in module under a different namespace regardless of how third-party packages are managed, right?
The standard library was considered core code, where even the smallest change can have a huge impact. It was more a matter of "if we're going to do a v2 language, what would we fix?" And, as it turns out, this required no regression of existing core language code.
Not the same, but I wanted to say that when I was upgrading apps from .NET Framework to Core and above, I was surprised how many Framework packages not only had upgrades, but were deprecated entirely and replaced by a new package. We had a difficult time migrating. (This was at MSFT btw)
Yeah. "Why should / shouldn't code be put in the standard library" is a really interesting question that I think people don't think about enough.
I think a lot of the benefit of putting stuff in the standard library is interoperability. It seems obvious, but having a string and list type in std means you can pass strings and lists between packages. I think a lot of standard stuff that acts as "glue" should be in std. For example, in Node.js the standard library includes HTTP request and response types because of how useful they are in that ecosystem.
Notably, unlike Swift, Rust doesn't have an "inlined string" type in std. There are a lot of crates that implement small strings, but most interfaces that need to pass an actual string buffer use std::String, and that's way less efficient. (Thankfully, &str is more common at interface boundaries.) Rust also doesn't have much support for futures in std, which upsets a lot of people, because tokio ends up being included by a lot of programs.
Anyway, when it comes to crates like rand, where interoperability isn't important, I think it's fine to keep this stuff out of std. Code can evolve much more easily when it lives as a third-party library.
At least .NET is very good about letting you use third-party libraries. Heck, even ASP.NET Core isn't built in anymore; you get it through NuGet. So you're not stuck with the standard libraries.
ASP.NET isn't included OOTB anymore, so if you want to do web dev in .NET you have to install it; they decoupled a ton of things from the core of the language.
But when you're replacing, like, much of the standard library, you have to be a bit sad about all the interop work that falls on the user. It should instead fall on the makers of the bad standard library.
Microsoft is more likely to just create new alternative libraries / classes to the standard library. They drop new things all the time that improve on old approaches. Look at every way you can make a GUI in .NET from Microsoft as an example of this.
People have no issue using third party libraries in other languages to get more done than the OOTB libraries.
> Ideally, the v2 package should be able to do everything the v1 package could do, and when v2 is released, the v1 package should be rewritten to be a thin wrapper around v2.
And even more ideally, as many v1 usages as possible should be automatically fixed by `go fix` or similar tools. Extending this to all user packages would be a major improvement over the status quo.