Thanks for sharing! One quick question about the methodology section: you mention that the results are the median of repeated runs. I understand the code is JIT-compiled before measurement, but you also say the runs are isolated from GC side effects.
Just out of curiosity, what did you do to ensure that GC would not interfere with the execution while the tests are running?
By forcing the GC to run in between tests, basically. The README for the benchmarking tool describes it in more detail: https://github.com/hugoduncan/criterium.
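To make the idea concrete, here is a minimal Java sketch of forcing a collection before each timed run so that garbage from earlier iterations doesn't trigger a pause mid-measurement. This is only an illustration of the general technique, not criterium's actual implementation (see its README for that); `System.gc()` is also just a request to the JVM, not a guarantee.

```java
import java.util.Arrays;

public class GcIsolatedBench {
    // Request a full GC and give the JVM a moment to complete it.
    // Note: System.gc() is a hint; the JVM may ignore it.
    static void forceGc() {
        System.gc();
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Time one run of the task, settling the heap first.
    static long timeRun(Runnable task) {
        forceGc();
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long[] samples = new long[5];
        for (int i = 0; i < samples.length; i++) {
            samples[i] = timeRun(() -> {
                // Example workload that allocates some garbage.
                StringBuilder sb = new StringBuilder();
                for (int j = 0; j < 100_000; j++) sb.append(j);
            });
        }
        // Report the median, as the post's methodology describes.
        Arrays.sort(samples);
        System.out.println("median ns: " + samples[samples.length / 2]);
    }
}
```

Taking the median over several GC-settled runs reduces the chance that a single collection pause skews the reported number.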