Superoptimizers attempt to find shorter instruction sequences. This may or may not produce faster code - sometimes you can make code faster by inserting nops!
The idea of superoptimization is to find missed optimizations, which a human can review and then implement directly.
Here we multiply an int by -2. LLVM emitted the multiply and the compare, but Souper proved that no int multiplied by -2 can equal 1, so the whole comparison could be folded to the constant 0 (false).
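To see why the fold is sound: a wrapping multiply by -2 always produces an even value, and 1 is odd, so no input can satisfy the comparison. A minimal sketch (not Souper itself) checking this exhaustively at 16 bits as a proxy for the same argument at 32/64 bits:

```python
MASK = 0xFFFF  # 16-bit proxy; the parity argument is identical at any width

def mul_neg2(x):
    # wrapping multiply by -2, as the hardware would compute it
    return (x * -2) & MASK

# x * -2 is always even, so it can never equal 1:
assert all(mul_neg2(x) != 1 for x in range(MASK + 1))
print("no 16-bit x satisfies x * -2 == 1")
```

Souper reaches the same conclusion with an SMT solver rather than enumeration, which is why it scales to 32- and 64-bit widths.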
By its nature Souper becomes less effective over time as its findings are incorporated into LLVM, so most of the trophies are old!
Can it optimize sequences of shuffles?
If so, it'd be worth a try, even if it takes hours (it probably doesn't take that long?). I could let it run until it finds faster sequences than what I have now, then update my code.
But for it to be worth the time for me to figure out how to install and use it, I need some sort of expectation for what I'd get out of trying this instead of something else.
It's not an optimizer, so its runtime doesn't really matter. It's intended to find new rules (patterns) that can in turn be implemented in LLVM as new optimizations. So this thing is just a research step.
Are you sure about shorter? If you have a model of execution behaviour, you can build the 'best' instruction sequence for your operation. Not all superoptimizers are born equal.
I think I get what you're saying, but I think people would read too much into them. The results are very dependent on the type of program you have. I wouldn't be surprised if there were differences of tens of percentage points in benchmarks between various examples.
That's the point. Benchmarks would show what code is already squeezed dry by basic LLVM tools and what code could benefit from superoptimization, which (as other comments point out) is the only reasonable basis for deciding to give Souper a try.
The scientific paper without a reproducible environment.
The deep learning model with no weights.
The optimizer without benchmarks.