
As of this year (ish), `int main() {puts("hello, world\n");}` stands a decent chance of running on a GPU and doing the right thing if you compile it with clang. Terminal application style. We should be able to spell it printf shortly; variadic functions turn out to be a bit of a mess.


CUDA already does printf, and C++20 support, minus modules.


C++20 would be news to me. Do you have a reference? The closest I can find is https://github.com/NVIDIA/cccl which seems to be atomic and bits of algorithm. E.g. can you point to unordered_map that works on the target?

I think some pieces of libc++ work but don't know of any testing or documentation effort to track what parts, nor of any explicit handling in the source tree.



Well, the docs say C++ support. There's a reference to <type_traits> and a lot of limitations on what you can do with lambda. I don't see supporting evidence that the rest of libc++ exists. I think what they mean is some syntax from C++20 is implemented, but the C++ library is not.

By "run C on the GPU" I'm thinking of taking programs and compiling them for the GPU. The lua interpreter, sqlite, stuff like that. I'm personally interested in running llvm on one. Not taking existing code, deleting almost all uses of libc or libc++ from it, then strategically annotating it with host/device/global noise and partitioning it into host and target programs with explicit data transfer.

That is, I don't consider "you can port it to cuda with some modern C++ syntax" to be "you can run C++", what with them being different languages and all. So it doesn't look like Nvidia have beaten us to shipping this yet.

Thank you for the reference.

Edit: a better link might be https://nvidia.github.io/libcudacxx/standard_api.html which shows an effort to port libc++, but it's early days for it. No STL data structures in there.


IIRC, CUDA Toolkit 12.0 added partial support for C++20 in nvcc and nvrtc.


What does that do under the hood though? What does it mean to execute puts from a GPU?


Libc on x64 is roughly a bunch of userspace code over syscall which traps into the kernel. Looks like a function that takes six integer registers and writes results to some of those same registers.
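
Concretely, on x86-64 Linux that boils down to something like this (a minimal sketch; the raw_puts name is made up, and glibc's real puts goes through stdio buffering first):

    #include <sys/syscall.h>
    #include <unistd.h>

    // "Userspace code over syscall": the syscall number and arguments go in
    // registers, the kernel traps, and the result comes back in a register.
    long raw_puts(const char *s, long len) {
      return syscall(SYS_write, 1, s, len); // write(fd=1, buf=s, count=len)
    }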

Libc on nvptx or amdgpu is a bunch of userspace code over syscall, which is a function that takes eight integers per lane on the GPU. That "syscall" copies those integers to the x64/host/other architecture. You'll find it in a header called rpc.h, the same code compiled on host or GPU. Sometime later a thread on the host reads those integers, does whatever they asked for (e.g. call the host syscall on the next six integers), possibly copies values back.

Puts probably copies the string to the host 7*8 bytes at a time, reassembles it on the host, then passes it to the host implementation of puts. We should be able to kill the copy on some architectures. Some other functions run wholly on the GPU, e.g. sprintf shouldn't talk to the host, but fprintf will need to.
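
As a loose sketch of the shape of that (invented names, not the actual rpc.h interface): each packet is the eight integers mentioned above, one used for the opcode/length and seven for payload, so a string crosses 56 bytes per round trip and gets reassembled on the host.

    #include <cstdint>
    #include <cstring>

    // Hypothetical packet layout: eight 64-bit slots per lane.
    struct Packet {
      uint64_t opcode_and_len; // which service to invoke, payload byte count
      uint64_t payload[7];     // 56 bytes of string data per trip
    };

    // GPU side (sketch): stream the string across in 56-byte chunks; the
    // host thread concatenates them and eventually calls its own puts.
    void send_string(const char *s, uint64_t len, void (*send)(const Packet &)) {
      for (uint64_t off = 0; off < len; off += 56) {
        Packet p{};
        uint64_t n = len - off < 56 ? len - off : 56;
        p.opcode_and_len = n; // a real implementation also packs an opcode in here
        std::memcpy(p.payload, s + off, n);
        send(p); // hands the packet to the shared-memory channel
      }
    }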

The GPU libc is fun from a design perspective because it can run code on either side of that communication channel as we see fit. E.g. printf's floating point handling currently seems to need a lot of registers on the GPU, so we may move some of that work to the host to improve register usage (higher occupancy).


Do you happen to have a link to these developments?


Documentation is lagging reality a bit; we'll probably fix that around the next llvm release. Some information is at https://libc.llvm.org/gpu/using.html

That GPU libc is mostly intended to bring things like fopen to openmp or cuda, but it turns out GPUs are totally usable as bare-metal embedded targets. You can read/write "host" memory; with that plus a thread running on the host you can implement a syscall equivalent (e.g. https://dl.acm.org/doi/10.1145/3458744.3473357), and once you have syscall the doors are wide open. I particularly like mmap from GPU kernels.
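
For the openmp case that means ordinary-looking code can sit inside a target region, assuming the GPU libc is linked in as the docs describe (an untested sketch):

    #include <cstdio>

    int main() {
    // Body runs on the device; fopen/fprintf/fclose are forwarded to the
    // host over the RPC channel by the GPU libc.
    #pragma omp target
      {
        FILE *f = fopen("out.txt", "w");
        if (f) {
          fprintf(f, "hello from the GPU\n");
          fclose(f);
        }
      }
    }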


Is there a way to directly use these developments to already write a reasonable subset of C/C++ for simpler usecases (basically doing some compute and showing the results on screen by just manipulating pixels in a buffer like you would with a fragment/pixel shader) in a way that's portable (across the three major desktop platforms, at least) without dealing with cumbersome non-portable APIs like OpenGL, OpenCL, DirectX, Metal or CUDA? This doesn't require anything close to full libc functionality (let alone anything like the STL), but would greatly improve the ergonomics for a lot of developers.


I'll describe what we've got, but fair warning that I don't know how the write pixels to the screen stuff works on GPUs. There are some instructions with weird names that I assume make sense in that context. Presumably one allocates memory and writes to it in some fashion.

LLVM libc is picking up capability over time, implemented similarly to the non-gpu architectures. The same tests run on x64 or the GPU, printing to stdout as they go. Hopefully standing up libc++ on top will work smoothly. It's encouraging that I sometimes struggle to remember whether it's currently running on the host or the GPU.

The data structure that libc uses to have x64 call a function on amdgpu, or to have amdgpu call a function on x64, is mostly a blob of shared memory and careful atomic operations. That was originally general purpose and lived in a prototypey GitHub repo. It's currently specialised to libc. It should end up in an under-debate llvm/offload project, which will make it easily reusable again.
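
The core of it is roughly a mailbox protocol: a flag that says who owns the packet, flipped with acquire/release atomics over memory both architectures can see. Heavily simplified, with invented names:

    #include <atomic>
    #include <cstdint>

    // One slot of the shared buffer, visible to both x64 and the GPU.
    // (Invented names; the real thing manages many lanes/ports at once.)
    struct Mailbox {
      std::atomic<uint32_t> owner; // 0 = client may write, 1 = server may read
      uint64_t data[8];            // the eight integers described above
    };

    // Client side (e.g. the GPU): publish a request, wait for the reply.
    void client_call(Mailbox &m, const uint64_t (&args)[8]) {
      for (int i = 0; i < 8; ++i) m.data[i] = args[i];
      m.owner.store(1, std::memory_order_release);          // hand to server
      while (m.owner.load(std::memory_order_acquire) != 0) { /* spin */ }
      // m.data now holds whatever the server wrote back.
    }

    // Server side (e.g. an x64 thread): service requests as they appear.
    void server_poll(Mailbox &m, void (*handle)(uint64_t (&)[8])) {
      if (m.owner.load(std::memory_order_acquire) == 1) {
        handle(m.data);                                      // do the work
        m.owner.store(0, std::memory_order_release);         // hand back
      }
    }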

This isn't quite decoupled from vendor stuff. The GPU driver needs to be running in the kernel somewhere. On nvptx, we make a couple of calls into libcuda to launch main(). On amdgpu, it's a couple of calls into libhsa. I did have an opencl loader implementation as well, but that has probably rotted; Intel seems to be on that stack but isn't in llvm upstream.
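
For illustration, the nvptx side of that is the CUDA driver API; a loader's job is roughly the calls below (error handling and the argc/argv/RPC-server plumbing omitted, and the entry point name here is a guess):

    #include <cuda.h>

    // Roughly what "a couple of calls into libcuda to launch main()" means.
    void launch(const char *image_path) {
      cuInit(0);
      CUdevice dev;
      cuDeviceGet(&dev, 0);
      CUcontext ctx;
      cuCtxCreate(&ctx, 0, dev);
      CUmodule mod;
      cuModuleLoad(&mod, image_path);            // the compiled GPU program
      CUfunction fn;
      cuModuleGetFunction(&fn, mod, "_start");   // entry point name is an assumption
      cuLaunchKernel(fn, /*grid*/ 1, 1, 1, /*block*/ 1, 1, 1,
                     /*sharedMemBytes*/ 0, /*stream*/ nullptr,
                     /*kernelParams*/ nullptr, /*extra*/ nullptr);
      cuCtxSynchronize();                        // wait for main() to return
    }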

A few GPU projects have noticed that implementing a cuda layer and a spirv layer and a hsa or hip layer and whatever others is quite annoying. Possibly all GPU projects have noticed that. We may get an llvm/offload library that successfully abstracts over those, which would let people allocate memory, launch kernels, use arbitrary libc stuff and so forth, all running against that one library.
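
In other words the thing everyone keeps reimplementing is a small interface along these lines (hypothetical names, just sketching what such an llvm/offload layer would need to cover):

    #include <cstddef>
    #include <cstdint>

    // Hypothetical vendor-neutral layer: each backend (CUDA, HSA/HIP,
    // SPIR-V, ...) implements the same handful of operations.
    struct Device {
      virtual ~Device() = default;
      virtual void *allocate(std::size_t bytes) = 0;   // device-visible memory
      virtual void deallocate(void *ptr) = 0;
      virtual void copy_to_device(void *dst, const void *src, std::size_t bytes) = 0;
      virtual void copy_from_device(void *dst, const void *src, std::size_t bytes) = 0;
      virtual void launch(const char *kernel, std::uint32_t blocks,
                          std::uint32_t threads, void **args) = 0;
      virtual void synchronize() = 0;                  // wait for completion
    };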

That's all from the compute perspective. It's possible I should look up what sending numbers over HDMI actually is. I believe the GPU is happy interleaving compute and graphics kernels and suspect they're very similar things in the implementation.


CUDA has allowed straight C++ for quite some time; that is how renderers like Nanite are written.

https://docs.nvidia.com/cuda/cuda-c-std/index.html

"C++ Standard Parallelism"

https://www.youtube.com/watch?v=nwrgLH5yAlM

Or if you prefer more vendor neutral,

https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-...

Currently with C++17 support.
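
To give a flavour, a SYCL 2020 kernel is single-source C++ along these lines (a sketch; device selection and error handling omitted):

    #include <sycl/sycl.hpp>

    // Single-source SYCL: the lambda passed to parallel_for is compiled for
    // whatever device the queue targets (GPU, CPU, ...).
    int main() {
      constexpr size_t N = 1024;
      sycl::queue q;                         // default device selection
      float *a = sycl::malloc_shared<float>(N, q);
      float *b = sycl::malloc_shared<float>(N, q);
      for (size_t i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

      q.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
        a[i] += b[i];                        // runs on the device
      }).wait();

      sycl::free(a, q);
      sycl::free(b, q);
    }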


I’m cautiously optimistic for SYCL. The absurd level of abstraction is a bit alarming, but single source performance portability would be a godsend for library authors.


This is one area where I imagine C++ wannabe replacements like Rust having a very hard time taking over.

It took almost 20 years to move from GPU assembly (DX9 timeframe) and shading languages to regular C, C++, Fortran and Python JITs.

There are some efforts with Java, .NET, Julia, Haskell, Chapel and Futhark; however, they are still trailing behind the big four.

Currently, in terms of ecosystem, tooling and libraries, as far as I am aware, Rust is trailing those and is not yet a presence at HPC/graphics conferences (Eurographics, SIGGRAPH).


> This is one area where I imagine C++ wannabe replacements like Rust having a very hard time taking over.

I 100% agree. Although I have a keen interest in Rust, I can’t see it offering any unique value to the GPGPU or HPC space. Meanwhile C++ is gaining all sorts of support for HPC: for instance the parallel STL algorithms, mdspan, std::simd, std::blas, executors (eventually), etc. Not to mention all of the development work happening outside of the ISO standard, e.g. CUDA/ROCm(HIP)/OpenACC/OpenCL/OpenMP/SYCL/Kokkos/RAJA and who knows what else.
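
The parallel algorithms in particular are plain ISO C++ that offloading compilers (nvc++ -stdpar, for example) can map onto the GPU; a sketch:

    #include <algorithm>
    #include <execution>
    #include <vector>

    // C++17 parallel algorithm: the execution policy permits the
    // implementation to run this in parallel, or on an accelerator
    // when the compiler supports offloading it.
    int main() {
      std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
      std::transform(std::execution::par_unseq,
                     x.begin(), x.end(), y.begin(), y.begin(),
                     [](float a, float b) { return a + b; });
    }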

C++ is going to be sitting tight in compute for a long time to come.


There is always the argument that it can help reduce errors due to memory corruption.

However, industry standards matter more.


HPC researchers already employ some techniques to detect memory corruption, hardware flaws, floating point errors, and so on. Maybe Rust could meaningfully reduce memory errors, but if it comes at the cost of bounds checking (or any other meaningful runtime overhead) they will have absolutely zero interest.


Chapel's and Julia's ongoing efforts prove otherwise, and I can tell from my CERN days that not everyone uses those tools.

In any case, that means those languages are much better positioned than Rust in that ecosystem.


If you’re willing to deal with 5 layers of C++ TMP, then a library like Kokkos will let you abstract over those APIs, or at least some of them. Eventually, if or when SYCL is upstreamed into the llvm-project, it’ll be possible to do it with clang directly.
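
For reference, the Kokkos version of a portable parallel loop looks roughly like this (a sketch; which backend it runs on is decided when Kokkos is configured):

    #include <Kokkos_Core.hpp>

    // One source, compiled for CUDA, HIP, SYCL or OpenMP backends depending
    // on how Kokkos was configured.
    int main(int argc, char *argv[]) {
      Kokkos::initialize(argc, argv);
      {
        Kokkos::View<float *> x("x", 1 << 20);   // allocated in the default memory space
        Kokkos::parallel_for("fill", x.extent(0), KOKKOS_LAMBDA(const int i) {
          x(i) = 2.0f * i;                       // runs on the default execution space
        });
        Kokkos::fence();
      }
      Kokkos::finalize();
    }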


This is super interesting, thanks!





