Evolving the Go Standard Library with math/rand/v2 (go.dev)
156 points by spacey 11 months ago | 44 comments



> In 2018, Daniel Lemire found an algorithm that avoids the divisions nearly all the time (see also his 2019 blog post). In math/rand, adopting Lemire’s algorithm would make Intn(1000) 20-30% faster...

I recently found a super simple algorithm that appears to produce a number in the interval [0,N] using a branchless expression with a single multiplication at an extended word size. (Sorry, I don't have a reference.)

Say you want to generate a number, G, in the interval [0,N] where N <= UInt32Max. The algorithm is:

    G = uint32((uint64(N) * uint64(rand.Uint32())) >> 32)
It seems like this should select a number in the range with no bias. Is there something I missed?


You have written a deterministic function. If you test this function with all 4 billion uint32 inputs for a single odd value of N, you can count the number of times you get each result. Now look at your results: are all of the outputs equally likely, or is there a bias towards some of them?
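
A Go sketch of exactly that experiment, scaled down to a 16-bit analogue so the exhaustive loop runs instantly (the arithmetic is the same, just with narrower words):

    package main

    import "fmt"

    func main() {
        // 16-bit analogue of g = uint32(uint64(n)*uint64(x) >> 32):
        // compute g = (n * x) >> 16 over all 2^16 inputs x and count
        // how often each output appears. Unless n divides 65536, the
        // buckets cannot all be equal.
        const n = 3
        counts := make([]int, n)
        for x := 0; x < 1<<16; x++ {
            counts[(n*x)>>16]++
        }
        fmt.Println(counts) // [21846 21845 21845]: one bucket gets an extra input
    }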

PS: it looks like your function is exclusive, i.e. [0,N) rather than [0,N]. Also, your function is described in this blog post: https://www.pcg-random.org/posts/bounded-rands.html


Correct on all counts. Thanks!


> It seems like this should select a number in the range with no bias. Is there something I missed?

Yes. There are many values of N that don't divide 2^32, the number of possible uint32 inputs.

As the article says: “However, no algorithm can convert 2⁶³ equally likely values into n equally likely values unless 2⁶³ is a multiple of n: otherwise some outputs will necessarily happen more often than others. (As a simpler example, try converting 4 equally likely values into 3.)”


Are you sure? Maybe this isn't a good test but it seems pretty evenly distributed to me:

https://go.dev/play/p/IeJQEAclBCU

Edit: maybe this shows the bias better: https://go.dev/play/p/3eKJibIlF1a


Careful with the counting: the number of possible inputs is 2^32 (UInt32Max + 1, i.e. 4294967296), and 2^32 mod 3 = 1, so with N=3 exactly one output occurs one extra time out of 4 billion inputs. Your code isn't perfectly random, but it's as close as makes no difference (and no worse than go's rand package itself). With N a power of two, like 4, there's no bias at all, since then N divides 2^32.

Regardless, with small values of N the bias is so slight that you would need many, many iterations to see the imperfection in a statistically significant way.


That makes sense.

A quick search didn't reveal any good resources on how to test the quality of a random number generator over a bounded range. Is what I came up with the best strategy, where you just need to run it for much longer (and compare to a known-good implementation) to see the difference?


Why would you need to “see” it? Unlike the distribution of the underlying RNG itself, this is trivial to solve analytically: each of the N outputs receives either floor(2^32/N) or ceil(2^32/N) of the inputs.


> (As a simpler example, try converting 4 equally likely values into 3.)

No, but you can convert an RNG that emits 4 equally likely values into an RNG that emits 3 equally likely values: any time the RNG returns the fourth value, just try again.
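
A minimal sketch of that rejection loop in Go, where roll4 is a hypothetical source of four equally likely values 0..3:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // roll3 returns a uniform value in {0,1,2} given a source of
    // uniform values in {0,1,2,3}: reject 3 and retry. The expected
    // number of draws per output is 4/3.
    func roll3(roll4 func() int) int {
        for {
            if v := roll4(); v < 3 {
                return v
            }
        }
    }

    func main() {
        fmt.Println(roll3(func() int { return rand.Intn(4) }))
    }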

Here's a fun puzzle / annoying interview question: You have a biased coin. You can flip it as often as you want, but heads and tails are not equally likely. Without figuring out the bias of the coin, how do you produce purely random bits?


My guess after a few glasses of wine and thinking on it for a few minutes:

Flip twice. If both flips are the same discard the result. Output 0 for TH, 1 for HT.
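
The same trick in Go, as a sketch (flip stands for any coin that returns heads with a fixed, unknown probability, independently on each call):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // vonNeumann extracts one unbiased bit from a biased coin: HT and
    // TH each occur with probability p*(1-p), so they are equally
    // likely; HH and TT are discarded.
    func vonNeumann(flip func() bool) bool {
        for {
            a, b := flip(), flip()
            if a != b {
                return a
            }
        }
    }

    func main() {
        biased := func() bool { return rand.Float64() < 0.9 } // 90% heads
        heads := 0
        for i := 0; i < 100000; i++ {
            if vonNeumann(biased) {
                heads++
            }
        }
        fmt.Println(heads) // ~50000, despite the 90/10 coin
    }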


Nice - you got it!

The followup is this: That approach only uses about 50% of your coin flips. The other 50% are discarded. How would you improve the efficiency?


Enlist my friends to flip more coins in parallel :)

At a high level I’d probably try to exploit the fact that every bit sequence with a given number of H and T is equally probable. e.g., HHHT, HHTH, HTHH, THHH are equally probable and so can be mapped to four different values. That still only gets me 2 bits (50%), but other combinations (e.g., the six arrangements of HHTT) could get me log2(6) bits. I’m guessing that with a higher number of flips I could extract (on average) more and more as a proportion. No clue what the asymptote would be.


Thinking further, for N flips you get 0 bits of entropy for all H or all T. For all other sequences, you get log2(N choose count(H)) bits of entropy, and you can average the sum of these over N.

According to Wolfram Alpha this works as N gets larger but it’s not great. For 16 flips you get 9.5 bits of entropy, but hey at least I beat half! 32 flips gets you about 20 bits of entropy. By 64 flips you get 43 bits, and that’s approaching 2/3 efficiency. Maybe not so bad after all!

Going higher is a little tough since I’m on mobile, but it starts crawling, reaching only 71% efficiency by 1024 flips. I’m curious whether it does actually asymptotically reach 100% efficiency (for a fair coin), even if quite slowly.

Edit: Playing more[1], it really seems to approach 72.1%. I wonder if I can figure out the asymptote analytically…

[1] https://www.wolframalpha.com/input?i=sum+from+i=1+to+N-1+of+...
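
For anyone who wants to play along in Go rather than Wolfram Alpha, here is a sketch that computes the expectation described above, E[log2(N choose H)] for a fair coin. (The linked sum is truncated here, so it may be set up differently and give different figures.)

    package main

    import (
        "fmt"
        "math"
    )

    // expectedBits computes the expected value of log2(C(n, h)),
    // where h is the number of heads in n fair flips, i.e.
    // h ~ Binomial(n, 1/2).
    func expectedBits(n int) float64 {
        c := 1.0 // running binomial coefficient C(n, h)
        total := 0.0
        for h := 0; h <= n; h++ {
            if h > 0 {
                c = c * float64(n-h+1) / float64(h)
            }
            p := c * math.Pow(0.5, float64(n)) // P(exactly h heads)
            if c > 1 {
                total += p * math.Log2(c)
            }
        }
        return total
    }

    func main() {
        fmt.Println(expectedBits(16)) // divide by 16 for the efficiency
    }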


This kind of nerd-sniped me. I did find a closed-form solution that relies on an identity mapping the product of successive factorials ("hyperfactorials") to the Barnes G-function, which is related to the Riemann zeta function at a level deep past my comprehension. Still, here's[1] the closed-form expression, which returns the number of bits of entropy that can be generated from a given number of coin flips using this approach, assuming the coin is indeed fair. Divide by the number of flips to get the efficiency.

Unfortunately, Wolfram Alpha isn't able to determine the limit of this function[2], and neither am I. :)

[1] https://www.wolframalpha.com/input?i2d=true&i=evaluate+Divid...

[2] https://www.wolframalpha.com/input?i2d=true&i=Limit%5BDivide...


This maps 2^32 input values to N output values, so by the pigeonhole principle there is guaranteed to be bias unless N divides 2^32.


Your algorithm is the first step of Lemire’s algorithm, without the follow-up check to debias the result: https://dotat.at/@/2020-10-29-nearly-divisionless-random-num...
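
For reference, the nearly-divisionless version from the linked post looks roughly like this in Go (a sketch, not the actual math/rand/v2 code):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // boundedRand returns a uniform value in [0, n): a widening
    // multiply first, then resampling only when the low half of the
    // product lands in the biased zone of size (1<<32 - n) % n.
    func boundedRand(n uint32) uint32 {
        x := rand.Uint32()
        m := uint64(x) * uint64(n)
        if uint32(m) < n { // cheap early-out; the % below is rarely reached
            t := -n % n // == (1<<32 - n) % n
            for uint32(m) < t {
                x = rand.Uint32()
                m = uint64(x) * uint64(n)
            }
        }
        return uint32(m >> 32)
    }

    func main() {
        fmt.Println(boundedRand(1000))
    }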


This algorithm produces a biased result with probability 1/2^(32-bitwidth(N)); using 64 or 128 random bits makes the bias practically undetectable. A comprehensive overview of the approach can be found here: https://github.com/apple/swift/pull/39143
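
For example, here's a sketch of the same multiply-shift trick widened to 64 bits in Go, using math/bits.Mul64 for the 128-bit product; for a small bound n, the bias is on the order of n/2^64:

    package main

    import (
        "fmt"
        "math/bits"
        "math/rand"
    )

    func main() {
        const n = 1000
        // hi is the top 64 bits of the 128-bit product, i.e.
        // floor(n * x / 2^64), which lies in [0, n).
        hi, _ := bits.Mul64(rand.Uint64(), n)
        fmt.Println(hi)
    }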


Your results will be biased. The bias is tiny with small values of N, and absent when N is a power of two, but the skew becomes more obvious when N is, for example, 2^31 + 2^30 + 1.


Lemire’s algorithm uses your finding but corrects its bias.


As so often, I’m impressed by the quality of all of this: the amount of thinking that went into it, the excellently written blog post. I love the Go blog.


Agreed. I admire the clarity of Cox's writing, as much as his thoughtful restraint about adding new features to the language.


Rob Pike called Russ Cox a "future winner of the Kyoto Prize".

My favorite is https://research.swtch.com/qart (see also: https://spinroot.com/pico/pjw.html)


I like the Principles section. Very measured and practical approach to releasing new stdlib packages. https://go.dev/blog/randv2#principles

At the end of the post they mention that an encoding/json/v2 package is in the works: https://github.com/golang/go/discussions/63397


> Second, all changes must be rooted in respect for existing usage and users

This is one of the reasons I love working in Go. I feel that the language maintainers understand that people using Go have more important work to do than update their code to whatever the new hotness is this month.

Basically the opposite of this: https://steve-yegge.medium.com/dear-google-cloud-your-deprec...


A long tradition in the Java, C and C++ ecosystems.


Nice - I wish .NET would be more willing to condemn chunks of the standard library and replace them with something better!


That's precisely what I was thinking as I read this. The Go modules transition was not awesome, but if the result is being able to "step" the standard library forward like this without a corresponding major language release, then I take back all the bad things I ever said about it.


What does this have to do with Go modules? Any standard library should be able to add a new built-in module under a different namespace regardless of how third-party packages are managed, right?


The standard library was considered core code, where even the smallest change can have a huge impact. It was more a matter of, "if we were going to do a v2 of the language, what would we fix?" And, as it turns out, this required no regression of existing core language code.


Not the same, but I wanted to say that when I was upgrading apps from .NET Framework to Core and above, I was surprised how many Framework packages not only had upgrades, but were deprecated entirely and replaced by a new package. We had a difficult time migrating. (This was at MSFT btw)


Rust has rand in a separate crate, which felt weird at first but makes sense for reasons like the ones in this thread.


Yeah. "Why should / shouldn't code be put in the standard library" is a really interesting question that I think people don't think about enough.

I think a lot of the benefit of putting stuff in the standard library is interoperability. It seems obvious, but having a string and a list type in std means you can pass strings and lists between packages. A lot of standard stuff that acts as "glue" should be in std. For example, in nodejs the standard library includes HTTP request and response types because of how useful they are across that ecosystem.

Notably, unlike Swift, Rust doesn't have an "inlined string" type in std. There are a lot of crates that implement small strings, but most interfaces that need to pass an actual string buffer use std::String, and that's way less efficient. (Thankfully, &str is more common at interface boundaries.) Rust also doesn't have much support for futures in std, which upsets a lot of people because tokio ends up being included by a lot of programs.

Anyway, when it comes to crates like rand where interoperability isn't important, I think it's fine to keep this stuff out of std. Code can evolve much more easily when it lives in a third-party library.


At least .NET is very good at letting you lean on third-party libraries. Heck, even ASP.NET Core isn't built in anymore; you get it through NuGet. So you're not stuck with the standard libraries.


“Don’t you guys have internet?”

What happened to batteries-included support?


ASP.NET isn't included OOTB anymore, so if you want to do web dev in .NET you have to install it; they decoupled a ton of things from the core of the language.


ASP.NET comes included with the SDK. All you need to do is specify in the csproj that you want a Web project.


You don’t get it through NuGet. You get it by specifying the SDK type in the project.

https://learn.microsoft.com/en-us/dotnet/core/project-sdk/ov...


It used to be this way for .NET Core; I guess they changed it with modern .NET?

https://www.nuget.org/profiles/aspnet

Some projects listed were last updated in 2022.


But when you're replacing, like, much of the standard library, you have to be a bit sad about all the interop work that falls on the user. It should instead fall on the makers of the bad standard library.


Microsoft is more likely to just create new libraries and classes as alternatives to the standard library. They drop new things all the time that improve on old approaches; look at every way you can make a GUI in .NET from Microsoft as an example of this.

People in other languages have no issue using third-party libraries to get more done than the OOTB libraries allow.


Related: this article discusses different generators and the tradeoffs of using them as a Source in Go: https://zephyrtronium.github.io/articles/randomness.html


> Ideally, the v2 package should be able to do everything the v1 package could do, and when v2 is released, the v1 package should be rewritten to be a thin wrapper around v2.

And even more ideally, as many v1 usages as possible should be fixed automatically by `go fix` or similar tools. Extending this to all user packages would be a major improvement over the status quo.


> Extending this to all user packages would be a major improvement over the status quo

We have plans to get there. https://github.com/golang/go/issues/32816


If both are now cryptographically secure, what's the point of having both? Also, it seems like they've made math/rand slower, which is not a win in my book.



