What’s the Smallest Variety of CHERI? (2022) (microsoft.com)
46 points by bshanks on Sept 12, 2023 | 11 comments


Since this was published, MS have released the RTL of their CHERIoT Ibex variant: https://github.com/microsoft/cheriot-ibex. There's also the full technical report and software stack, including an RTOS, available: https://www.microsoft.com/en-us/research/publication/cheriot...

I always thought it made more sense to try introducing capabilities in higher-performance applications (all the stuff you might use an Arm A-class core for), given they are pretty heavyweight. This is what Arm's Morello (https://www.arm.com/architecture/cpu/morello) offers. However, introducing them at the low end, in the embedded space, may instead work a lot better. Within the A-class processor space there's a huge software ecosystem to contend with, and your software likely comes from multiple vendors; it's an uphill struggle to inject capabilities into that space, especially if you want to make full use of them.

With embedded applications you tend to have far tighter control over the whole software stack; there's a lot more vertical integration, and it's pretty static. Once you've deployed your product it's doing the same job day in, day out. You need occasional updates, maybe the odd new feature, but it's a very different world to the software stack on a typical phone. So overall it's easier for a single company or group to say 'yes, let's try capabilities' and just get on and do it.

Security is potentially a lot more critical in these applications as well. Everyone knows IoT security is a joke, but regulators are watching this too, and future legislation will put a lot more liability on the manufacturers of IoT devices. They'll need to demonstrate they've taken security seriously, and using a capability-based system is one way to do that.

Operational technology (industrial IoT) is also a key area of concern for security. Having insecure, internet-enabled operational technology running critical infrastructure and industrial processes is clearly a major issue. The various cyber security agencies across the western world recognise this and have published a guide (https://www.cisa.gov/resources-tools/resources/secure-by-des...) urging security by design and by default, and it explicitly mentions CHERI. Again, the initial costs and work to introduce capabilities become very justifiable against the security benefits (and, critically for companies, the liability reduction).


I'd say another factor is that microcontrollers are less equipped than desktops to run memory-checking tools like Valgrind or ASan, and they lack separate address spaces between processes, which increases the "blast radius" of memory errors.
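
To make the "blast radius" point concrete, here's a minimal C sketch (my own example, not from the article): an overflow that ASan would flag on a desktop build, but that a typical MMU-less microcontroller silently tolerates, quietly corrupting whatever the linker placed next to the buffer. Under CHERI, the out-of-bounds store would trap on the spot. The struct layout is made up.

    /* Illustrative only: layout and names are invented for this sketch. */
    #include <stdio.h>
    #include <string.h>

    struct device_state {
        char name[8];          /* too small for the input below */
        int  safety_interlock; /* happens to sit right after the buffer */
    };

    int main(void) {
        struct device_state dev = { .name = "", .safety_interlock = 1 };

        /* 11 characters plus the NUL overflow name[8] and spill into
         * safety_interlock. This is undefined behaviour, but on bare
         * metal it usually just silently corrupts the neighbouring
         * field rather than crashing. */
        strcpy(dev.name, "conveyor-42");

        printf("interlock = %d\n", dev.safety_interlock);
        return 0;
    }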


Thanks for the links and overview.


Also read the follow-up post:

"First steps in CHERIoT Security Research"

https://msrc.microsoft.com/blog/2023/02/first-steps-in-cheri...

Ironically, the future of secure computing is bringing back memory tagging.


> Ironically, the future of secure computing is bringing back memory tagging

I find it's often the case that exciting new tech turns out to have its fundamental principles described in a paper from the 60s or 70s ;)


This seems par for the course in computer architecture, and probably computer systems in general.

Part of it seems to be that technological improvements (such as better silicon and faster compute/storage/communication as well as more input data) can enable formerly impractical ideas to scale up/out and become useful.

Another part is that the performance and power wall has forced CPU designers to think about other ways to improve CPUs, such as improving security and reliability. Maybe the market will finally be willing to trade off some cost and speed for better security and reliability.

Lastly, software which used to be impractical or overly expensive because of resource usage can often run easily on modern hardware.


Why was memory tagging ignored for most of personal computing's history? Any decent reading material on the history of it?


Most likely the hardware constraints and economics.

Burroughs was one of the first systems with it; the Lisp and Ada machines, Xerox workstations, IBM mainframes, and the ETHZ systems, among others, had it too. All of them were rather expensive or niche compared with what became regular consumer hardware.

The failure of Intel's iAPX 432 project probably did not help either.


Cost. Not only in extra memory but, I rather suspect, in access patterns as well. So a cost in speed too.
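
For a rough sense of the memory side, some back-of-the-envelope arithmetic (my numbers, based on the tag granularities I understand CHERI and Arm MTE to use, so treat them as approximate):

    1 tag bit per 128-bit capability (CHERI/Morello): 1/128 ~ 0.8% extra memory
    1 tag bit per  64-bit capability (CHERIoT):       1/64  ~ 1.6% extra memory
    4-bit tag per 16-byte granule    (Arm MTE):       4/128 ~ 3.1% extra memory

The access-pattern cost is that tags are held out of band, so loads and stores may need a separate tag lookup (a tag cache or table) unless the memory system carries tags inline, e.g. in spare ECC bits.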


In case anyone else needs basic background:

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/


I really like the way they separated the execute and write memory capabilities. This makes it possible to write code and then run it using separate capabilities, so no code can modify itself directly by mistake.
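
Roughly how that looks from C, as a sketch only: this assumes the cheri_perms_and helper and CHERI_PERM_* masks from <cheriintrin.h> in the CHERI LLVM toolchain (CHERIoT's permission encoding differs in detail), and the loader and function names are made up. Two capabilities are derived from the same buffer: one that can store but not execute, used to emit the code, and one that can load/execute but not store, used to run it. A stray store through the executable capability then traps instead of rewriting the code.

    #include <cheriintrin.h>
    #include <stddef.h>

    typedef int (*entry_fn)(void);

    /* Hypothetical loader: 'region' is a capability to a code buffer. */
    entry_fn install_code(void *region, const unsigned char *code, size_t len)
    {
        /* Writable view: keep load/store permission, drop execute. */
        unsigned char *w =
            cheri_perms_and(region, CHERI_PERM_LOAD | CHERI_PERM_STORE);
        for (size_t i = 0; i < len; i++)
            w[i] = code[i];

        /* Executable view: keep load/execute, drop store. */
        void *x =
            cheri_perms_and(region, CHERI_PERM_LOAD | CHERI_PERM_EXECUTE);

        /* On a real system you'd also synchronise the instruction cache. */
        return (entry_fn)x;
    }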



