
In the context of a superoptimizer, benchmarks don't really make sense. You don't know what it's going to find.



For this sort of benchmarking you don't want to know what it should find; that would be a functional test, not a benchmark.

Compiling and profiling reasonably complex apps against a baseline would be enough. Something like the sketch below would do as a harness; the two compiler invocations in the comment are placeholders, since how you actually produce a Souper-enabled build depends on your setup.
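
A rough sketch (the build commands are assumptions, not the actual Souper workflow):

    /* Build the same kernel twice and compare wall-clock time, e.g.:
         clang -O2 bench.c -o bench_baseline
         <souper-enabled clang> -O2 bench.c -o bench_souper
       The second invocation is a placeholder; substitute whatever
       produces your superoptimized build. */
    #include <stdio.h>
    #include <time.h>

    /* Stand-in workload; replace with something representative. */
    long kernel(long n) {
        long acc = 0;
        for (long i = 1; i <= n; i++)
            acc += (i ^ (i >> 3)) % 7;
        return acc;
    }

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        long r = kernel(100000000L);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec)
                 + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("result=%ld time=%.3fs\n", r, s);
        return 0;
    }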


I'd also like to see motivating examples of specific things it does find (that LLVM doesn't), even if that's not a representative benchmark
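
For illustration, here's a hypothetical example of the class of rewrite a superoptimizer searches for (not taken from Souper's output, and LLVM may already catch this exact case depending on version and pass ordering):

    /* Before: branchy three-way sign computation. */
    int sign_branchy(int x) {
        if (x < 0) return -1;
        if (x > 0) return 1;
        return 0;
    }

    /* After: the branch-free equivalent that a search over short
       instruction sequences can discover. */
    int sign_branchless(int x) {
        return (x > 0) - (x < 0);
    }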


I think I get what you're saying, but I think people would read too much into them. The results are very dependent on the type of program you have; I wouldn't be surprised if there were differences of tens of percentage points between benchmarks on various examples.


That's the point. Benchmarks would show which code is already squeezed dry by the standard LLVM passes and which code could benefit from superoptimization, which (as other comments point out) is the only reasonable basis for deciding to give Souper a try.



