
> Intel keeps getting it right with numerical libraries. They're open. They work well. They work on AMD.

What Intel numerical libraries are you thinking of? When I think of Intel numerical libraries, the first that comes to mind is MKL. MKL is neither open-source nor does it work well on AMD without some fragile hacks [0].

[0] https://www.pugetsystems.com/labs/hpc/How-To-Use-MKL-with-AM...



Well, OP didn't say MKL works well on AMD. But you can at least run it on a non-Intel CPU. Compare CUDA.


The NVIDIA PGI compiler compiles CUDA to multi-core x86-64. There are other third-party CUDA-to-x86-64 compilers (one LLVM-based one from Intel).

There is a "library replacement" for CUDA from AMD called HIP that you can use to map CUDA programs to ROCm. But... it doesn't work very well.
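To give a sense of why the HIP route is even plausible: a CUDA-to-HIP port is largely mechanical renaming of API entry points (AMD ships hipify-perl, which is essentially a large set of such textual rewrites). A toy sketch of the idea, not the real tool:

```shell
# Toy sketch of what a CUDA->HIP port mostly is: renaming API calls.
# (AMD's actual hipify-perl applies a much larger set of such rewrites.)
printf 'cudaMalloc(&p, n); cudaMemcpy(d, s, n, cudaMemcpyHostToDevice);\n' \
  | sed 's/cuda/hip/g'
# -> hipMalloc(&p, n); hipMemcpy(d, s, n, hipMemcpyHostToDevice);
```

The hard part, of course, is not the renaming but the semantic and performance gaps between the two runtimes.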

NVIDIA also open-sourced CUDA support for Clang and LLVM. So anybody can extend clang to map CUDA to any hardware supported by LLVM, including SPIRV. The only company that would benefit from doing this would be AMD, but AMD doesn't have many LLVM contributors.

Intel drives clang and LLVM development for x86_64, paying a lot of people to work on that.


It sounds like people want nvidia to write drivers for AMD.

This criticism makes even less sense when any bystander could implement CUDA support on AMD by connecting existing open-source software.


> any bystander

You aren't seriously implying that any bystander is capable of extending LLVM to map CUDA to SPIR-V? What percentage of present-day gainfully employed software engineers do you suppose even has the background knowledge? How many hours do you suppose the work would require?


If LLVM has a SPIR-V backend, probably very little. For a proof of concept, a bachelor's CS thesis would probably do.

Clang already has a CUDA parser, and all the code to lower CUDA-specific constructs to LLVM-IR, some of which is specific to the PTX backend. If you try to compile CUDA code for a different target, like SPIR-V, you'll probably get errors saying that some of the LLVM-IR instructions clang generates are not available in that backend, and you'll need to generate the proper SPIR-V calls in clang instead.

It's probably a great beginner task for getting started with clang and LLVM. You don't need to worry about the C++ frontend side of things because that's already done, and you can focus on understanding the LLVM-IR and how to emit it from clang when you already have a proper AST.
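For anyone wanting to poke at this: a sketch of driving clang's existing CUDA support, assuming a clang built with the NVPTX backend and a local CUDA toolkit (the flags are from clang's documented CUDA support; paths are examples):

```shell
# Compile a CUDA source end-to-end with clang (host + device code):
clang++ --cuda-path=/usr/local/cuda --cuda-gpu-arch=sm_70 axpy.cu \
    -o axpy -L/usr/local/cuda/lib64 -lcudart

# Dump the device-side LLVM-IR -- this is the IR a SPIR-V backend
# would have to handle instead of NVPTX:
clang++ --cuda-path=/usr/local/cuda --cuda-gpu-arch=sm_70 \
    --cuda-device-only -S -emit-llvm axpy.cu -o axpy.ll
```

Looking at that `.ll` output is a good way to see which intrinsics are PTX-specific.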


FWIW, an LLVM-to-SPIR-V translator already exists: https://github.com/KhronosGroup/SPIRV-LLVM-Translator

Alas, it only supports SPIR-V up to version 1.1.
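If you build that repo, usage looks roughly like this (assuming the `llvm-spirv` tool name from the project's README):

```shell
# Translate LLVM bitcode to a SPIR-V binary:
llvm-spirv input.bc -o output.spv

# And the reverse direction, SPIR-V back to LLVM bitcode:
llvm-spirv -r output.spv -o roundtrip.bc
```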


Late response, I know, but I would say anyone who needs that feature could learn to do it, at least if they're on Hacker News. Maybe "bystander" isn't the most accurate term, but certainly anyone with criticism could take up the gauntlet.

LLVM is very well documented and so are these standards. The open source community is also huge and full of talented contributors and more are always welcome to join. I think there's a reason why Linux and GitHub exist.

So in short, if it's a question of motivation and it's something you need, then become motivated to make it happen. That's more likely to work than convincing a company to invest in supporting a competitor.


CUDA appears to have come out well before even OpenCL. I don't see why there would be any expectation that NVIDIA would design their framework to work on a competitor's product.


CUDA was also a response to ATI's own proprietary effort, which they eventually gave up on.


ATI came out with CTM, which was just an assembler. CUDA was released a month or so after that. It was a full C compiler and already had a pretty large set of examples and library functions.

I downloaded CUDA about the day it was released, and used it for real some months later when I bought an 8600 GT GPU.

To call CUDA a response to CTM is too much praise for Nvidia, because it suggests that their response included cobbling together a compiler and SDK in just a month. :-)


Not on ARM or POWER, you can't. Why you'd want to run it on AMD, I don't understand. I don't know what fraction of peak BLIS and OpenBLAS get, but it will be high.



