Hacker News

CUDA already supports printf, and C++20, minus modules.
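Device-side printf is real and has been for a long time; a minimal sketch (assumes nvcc and a CUDA-capable device):

```cuda
#include <cstdio>

// Device-side printf has been supported since compute capability 2.x.
__global__ void hello() {
    printf("hello from thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();        // launch one block of 4 threads
    cudaDeviceSynchronize();  // also flushes the device printf buffer
    return 0;
}
```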


C++20 would be news to me. Do you have a reference? The closest I can find is https://github.com/NVIDIA/cccl which seems to cover <atomic> and bits of <algorithm>. E.g. can you point to an unordered_map that works on the target?

I think some pieces of libc++ work, but I don't know of any testing or documentation effort tracking which parts, nor of any explicit handling in the source tree.



Well, the docs say C++ language support. There's a reference to <type_traits> and a lot of limitations on what you can do with lambdas. I don't see supporting evidence that the rest of libc++ exists. I think what they mean is that some C++20 syntax is implemented, but the C++ standard library is not.

By "run C on the GPU" I'm thinking of taking programs and compiling them for the GPU. The Lua interpreter, SQLite, stuff like that. I'm personally interested in running LLVM on one. Not taking existing code, deleting almost all uses of libc or libc++ from it, then strategically annotating it with host/device/global noise and partitioning it into host and target programs with explicit data transfer.
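For concreteness, the porting style I'm objecting to looks roughly like this (a sketch; the function names are made up):

```cuda
#include <cstdio>

// Every function shared between host and target needs annotating by hand.
__host__ __device__ int square(int x) { return x * x; }

// A device entry point, separate from the host program.
__global__ void kernel(int* out) { out[threadIdx.x] = square(threadIdx.x); }

int main() {
    // Explicit data transfer between the two partitioned programs.
    int* d_out;
    cudaMalloc(&d_out, 4 * sizeof(int));
    kernel<<<1, 4>>>(d_out);
    int out[4];
    cudaMemcpy(out, d_out, sizeof(out), cudaMemcpyDeviceToHost);
    printf("%d\n", out[3]);
    cudaFree(d_out);
    return 0;
}
```

None of that restructuring is needed if you can actually compile an unmodified C program for the GPU.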

That is, I don't consider "you can port it to CUDA with some modern C++ syntax" to be "you can run C++", what with them being different languages and all. So it doesn't look like Nvidia have beaten us to shipping this yet.

Thank you for the reference.

Edit: a better link might be https://nvidia.github.io/libcudacxx/standard_api.html which shows an effort to port libc++, but it's early days for it. No STL data structures in there.
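To be fair, the header-only pieces that have been ported do work in device code via the cuda::std:: mirror namespace; a sketch of the sort of thing that compiles today (assuming a recent toolkit with libcudacxx bundled):

```cuda
#include <cuda/std/atomic>
#include <cuda/std/type_traits>

// cuda::std:: mirrors the header-only parts of libc++ usable in device code.
// There is no cuda::std::unordered_map or any other STL container.
__global__ void kernel(cuda::std::atomic<int>* counter) {
    static_assert(cuda::std::is_integral<int>::value,
                  "type_traits works on the target");
    counter->fetch_add(1, cuda::std::memory_order_relaxed);
}
```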


IIRC, CUDA Toolkit 12.0 added partial support for C++20 in nvcc and nvrtc.



