I wonder if that is possible to optimize at compile or execution time. Is it possible to determine if the gains from sending the work to the GPU are worth the latency hit?


It would be possible to generate both CPU and GPU versions at compile-time, and pick the best one at run-time based on the data encountered. I do research on a similar technique for the Futhark language.
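For concreteness, here is a minimal run-time dispatch sketch in Python. The saxpy operation, the GPU_THRESHOLD constant, and the NumPy/CuPy pairing are all illustrative assumptions, not Futhark's actual mechanism; the point is simply that both versions exist ahead of time and the choice is made per call, based on the data actually encountered.

    import numpy as np

    # Hypothetical crossover point: below this size, the host-to-device
    # transfer latency is assumed to outweigh the GPU speedup.
    GPU_THRESHOLD = 1_000_000

    def saxpy_cpu(a, x, y):
        # Plain NumPy on the host.
        return a * x + y

    def saxpy_gpu(a, x, y):
        import cupy as cp                      # any GPU array library would do
        xd, yd = cp.asarray(x), cp.asarray(y)  # host -> device copies (the latency cost)
        return cp.asnumpy(a * xd + yd)         # device -> host copy of the result

    def saxpy(a, x, y):
        # Run-time dispatch: both versions were built ahead of time,
        # and the one used is picked from the size of this call's input.
        if x.size >= GPU_THRESHOLD:
            return saxpy_gpu(a, x, y)
        return saxpy_cpu(a, x, y)

The threshold itself could be fixed, tuned per machine, or learned from profiling runs; the compile-time part is only that both code paths get generated.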


I have little experience in this area, but if your workload size isn't static and depends on, say, user input, I don't see a good way to select "the better option" at compile time. Plus, do compilers take the target machine's graphics capabilities into account when compiling programs?
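One way around a workload size that is only known at run time is to keep the choice at run time as well: benchmark both paths once on the machine the program actually runs on, and derive a crossover threshold from that. A rough sketch under the same assumptions as the snippet above (the saxpy_cpu/saxpy_gpu helpers are hypothetical):

    import time
    import numpy as np

    def calibrate_threshold(sizes=(10_000, 100_000, 1_000_000, 10_000_000)):
        # Time both paths at a few sizes and return the first size where
        # offloading to the GPU pays off on this particular machine.
        for n in sizes:
            x = np.random.rand(n).astype(np.float32)
            y = np.random.rand(n).astype(np.float32)
            t0 = time.perf_counter(); saxpy_cpu(2.0, x, y); t_cpu = time.perf_counter() - t0
            t0 = time.perf_counter(); saxpy_gpu(2.0, x, y); t_gpu = time.perf_counter() - t0
            if t_gpu < t_cpu:
                return n        # smallest tested size where the GPU path won
        return None             # GPU never won; stay on the CPU

This also sidesteps the "does the compiler know the graphics hardware" question, since the decision is made against whatever GPU is actually present when the program starts.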



