
To my knowledge, and looking at http://riscv.org, this is supposed to be an open ISA (instruction set architecture). The specification allows chip manufacturers to write their own extensions using the "custom" opcodes.

The Kendryte K210 is a RISC-V-compliant CPU. It has off-core components, such as what they're calling a KPU. The ML and GPU cores are controlled via I/O, not by the CPU directly; these are called platform-level components. In general, they are controlled through MMIO at hard-wired memory addresses. You can see the KPU (their ML accelerator) here: https://s3.cn-north-1.amazonaws.com.cn/dl.kendryte.com/docum...

See section 3.2.
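Roughly what that MMIO control looks like from the CPU side (a minimal sketch in C; the base address and register offsets are invented for illustration, not the K210's actual KPU register map -- the datasheet above has the real layout):

    #include <stdint.h>

    /* Hypothetical base address and registers of a memory-mapped accelerator. */
    #define ACCEL_BASE   0x50000000UL
    #define ACCEL_CTRL   (*(volatile uint32_t *)(ACCEL_BASE + 0x00))
    #define ACCEL_STATUS (*(volatile uint32_t *)(ACCEL_BASE + 0x04))

    void accel_start(void) {
        ACCEL_CTRL = 1;                   /* write a control register to kick off the unit */
        while (!(ACCEL_STATUS & 1)) { }   /* poll a status bit until it signals completion */
    }

The CPU itself just does loads and stores; the accelerator sits behind those addresses, which is why it's a platform-level peripheral rather than an ISA extension.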

I think the extensions are meant to be modular. Right now, not many embedded devices allow for H mode, and the hypervisor extension is still in development. Currently, I know of Machine mode, Supervisor mode, and User mode, but since the spec revisions from 2018 to 2019, they have really started to ramp up virtualization support in the ISA.



So RISC-V is open, but the extensions are often proprietary?

And what does virtualization even mean when extensions proliferate? Will you need a distinct physical machine for each combination of extensions that someone might want to virtualize?


Proprietary = custom... it just means that it's particular to a single chip manufacturer. The RISC-V spec sets aside opcode space for doing just that. RISC-V can't forecast everything someone will want to do with a RISC-V chip; instead, the spec promises not to use those opcodes, so standard extensions won't conflict with a chip manufacturer's "custom" instructions.

Here's a post from SiFive talking specifically about DSAs in RISC-V: https://www.sifive.com/blog/part-1-fast-access-to-accelerato...
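As a rough sketch of what using that reserved space looks like with a GNU toolchain (the .insn directive just emits a raw encoding; the func3/func7 values and the instruction's semantics here are placeholders, since the real behavior is whatever the vendor wires up):

    /* Emit an R-type instruction in the custom-0 opcode space (major opcode 0x0b).
       A standard RISC-V core will trap on this as an illegal instruction;
       a vendor core decodes it into whatever their extension defines. */
    static inline long custom_op(long a, long b) {
        long result;
        __asm__ volatile(".insn r 0x0b, 0, 0, %0, %1, %2"
                         : "=r"(result)
                         : "r"(a), "r"(b));
        return result;
    }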

In terms of virtualization, most of the extensions can be emulated. This happens a lot with the hodgepodge of extensions layered onto Intel/AMD, such as SSE, SSSE3, AVX, and so forth. Just because the underlying physical machine doesn't support something doesn't mean the guest can't have it. At the operating system level, the OS can read the misa (Machine ISA) register to see which extensions are supported, and emulate those which are not. I don't think RISC-V solves the issues that virtualizing Intel/AMD also suffers from--and I don't think that's really their goal.
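For instance, M-mode firmware (or an OS given access) can probe misa with a couple of lines of inline assembly -- a sketch in C, assuming a bare-metal RISC-V target, since misa is a machine-mode CSR:

    #include <stdint.h>

    /* Read the Machine ISA register. Bit N set means extension letter ('A' + N)
       is present: bit 7 = 'H' (hypervisor), bit 12 = 'M' (mul/div), etc. */
    static inline unsigned long read_misa(void) {
        unsigned long misa;
        __asm__ volatile("csrr %0, misa" : "=r"(misa));
        return misa;
    }

    static inline int has_ext(char letter) {
        return (read_misa() >> (letter - 'A')) & 1;   /* e.g. has_ext('H') */
    }

Anything that comes back 0 is a candidate for trap-and-emulate in the hypervisor or OS.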

In terms of the host, if there is an extension that cannot be emulated, then yes, I would think you'd need the physical machine to be able to support it.


The SDKs for the non-big-name (Nvidia, Google, AMD) accelerators have very poor developer experiences. (Granted, CUDA/ROCm isn't much better.) A number of the embedded SDKs require you to use their custom framework and won't support TensorFlow/PyTorch models out of the box. ONNX and similar conversion frameworks target only the big-name chips, not off-the-shelf generic AI accelerator chips. Embedded systems need more software engineering.



