
That's not exactly unusual; for example, PyTorch has Python, C++, C, and CUDA.


Notice those are all (except arguably CUDA) very mainstream languages. All four of AMD's are niche. Upstreaming this into PyTorch would double the number of languages used (although HIP is very similar to CUDA).


HIP is essentially the same as CUDA, CK is not a language but a library, and assembly is basically used in the Nvidia ecosystem as well, in the form of PTX.

There is absolutely nothing out of the ordinary here. Yes, it's multiple languages, but not any more or any different than what you'd use on an Nvidia platform (except obviously for the assembly part -- AMD's ISA is different from PTX, but that's to be expected).
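To make the HIP/CUDA point concrete, here's a minimal sketch (hypothetical, not from PyTorch or any particular project): the kernel below is source-identical under CUDA and HIP, and only the host-side API prefix changes (cudaMalloc vs hipMalloc, and so on).

    // Identical kernel source for CUDA and HIP; both compile this C++ dialect.
    #include <cuda_runtime.h>   // HIP version: #include <hip/hip_runtime.h>

    __global__ void saxpy(float a, const float* x, float* y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        // Host API differs only by prefix: cudaMallocManaged -> hipMallocManaged, etc.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy<<<(n + 255) / 256, 256>>>(2.0f, x, y, n);  // HIP supports the same launch syntax
        cudaDeviceSynchronize();                         // hipDeviceSynchronize
        cudaFree(x); cudaFree(y);                        // hipFree
        return 0;
    }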


I agree that using both a high-level and a low-level language is normal, and yes, using libraries is fine.

It's having both Triton and HIP in the same project that I find weird. It feels very fragmented to use two high-level languages. Maybe it makes sense given that Triton is easier to use but less fully featured, but it didn't strike me as normal.

I would be interested to know whether NVIDIA uses more than CUDA and PTX/SASS to write cuDNN and cuBLAS.


I would argue that Triton is in fact higher-level than HIP, and more specialised to particular use cases.


Well, if you're including ASM on AMD's list, you have to include it on the CUDA side too; people definitely embed PTX in their kernels. Triton is also gaining steam, so it's not too crazy. But yes, HIP and CK are rather obscure. In my limited time working w/ the AMD software stack this was a trend -- lots of little languages and abandoned toolchains, no unified strategy.
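For reference, embedding PTX typically looks something like this -- a rough sketch of inline asm in a CUDA kernel, not taken from any real library:

    // Minimal, illustrative example of inline PTX in a CUDA kernel.
    __device__ unsigned int lane_id() {
        unsigned int id;
        // mov.u32 copies the special %laneid register into `id`.
        asm volatile("mov.u32 %0, %%laneid;" : "=r"(id));
        return id;
    }

    __global__ void mark_lane_zero(int* out) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        out[i] = (lane_id() == 0) ? 1 : 0;   // flag the first lane of each warp
    }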


I believe that PyTorch already uses Triton; I recently tried to do torch.compile on a Windows machine and it did not work because the inductor backend relies on Triton.



