Just in time for Christmas, we are excited to announce the v0.5 release of
iceoryx2 – an ultra-fast and reliable inter-process communication (IPC)
library written in Rust, with language bindings for C, C++, and soon Python!
But what is iceoryx2, and why should you care? If you’re looking for a
solution to:
- Communicate between processes in a service-oriented manner,
- Transfer gigabytes of data between processes efficiently,
- Achieve payload-independent, consistently low latency,
- Wake up processes, send notifications, and handle events seamlessly,
- Build a decentralized, robust system with minimal IPC overhead,
- Use a communication library that doesn’t spawn threads,
- Communicate without serialization overhead,
- Ensure your system remains operational even when some processes crash,
- Work with C, C++, and Rust processes in a single project
(with Python and Go support coming next year!),
...then iceoryx2 is the library you’ve been waiting for!
> This smells like they are using shared memory, which is almost certainly a security nightmare.
Yes, we are using shared memory, and I agree that shared memory is a challenge, but there are some mechanisms that can make it secure.
The main problem with shared memory is that one process can corrupt a data structure while another process is consuming it. Even verifying the contents of the data structure is insufficient, since it can always be corrupted afterwards. We have named this the "modify-after-delivery" problem: a sender modifies the data after it has been delivered to a receiver.
This can be handled with:
1. memfd: The sender acquires it, writes its payload, seals it so that it is read-only, and then transfers the file descriptor to all receivers. The receiver can verify the read-only seal with fcntl. Since Linux guarantees that the seal cannot be reverted, the receiver can now safely consume the data. This allows it to be used even in a zero-trust environment. [1] provides a good introduction (see the File-Sealing IPC subsection).
2. Memory protection keys [2]: I do not have too much experience with them, but as far as I understand, they solve the problem with mprotect: the sender can call mprotect and make the segment read-only for itself, but the receiver has no way of verifying this or of preventing the sender from calling mprotect again, granting itself read/write access, and corrupting the data.
So, the approach is that a sender acquires shared memory, writes its payload into it, makes it read-only, and then transfers it to the receivers.
> Shared memory works as a transport if you either assume that all parties are trusted (in which case why do IPC in the first place?)
Robustness is another use case. In mission-critical systems you trust each process, but a crash caused by a bug in one sub-system shall not bring down the whole system. So you split up the monolith into many processes, and the overall system survives if one process goes down or deadlocks, assuming you have a shared memory library that itself is safe. If you detect a process crash, you can restart it and continue operations.
You guessed right. We have a layered architecture that abstracts this away for every platform.
With this, we can support every OS as long as it has a way of sharing memory between processes (or tasks, as some RTOSes call them) and a way of sending notifications.
This is a longer story, but I'll try to provide the essence.
* All IPC resources are represented in the file system and have a global naming scheme. So if you would like to perform service discovery, you take a look at `/tmp/iceoryx2/services`, list all service TOML files that you are allowed to access, and handle them.
* Connecting to a service means, under the hood, opening a specific shared memory segment identified via the naming scheme, adding yourself to the participant list, and receiving/sending data.
* Crash detection and resource cleanup are done decentrally by every process that has the permissions to perform them.
* In a central/broker architecture you would have the central broker that checks this in a loop.
* In a decentralized architecture, we defined certain sync points where this is checked. These points are placed so that you detect misbehavior before it affects you. For instance, when a sender is supposed to send you a message every second but you do not receive one, you actively check whether it is still alive. Other sync points are when an iceoryx2 node is created or when you connect to or disconnect from a service.
The main point is that the API is decentralized, but you can always use it with a central daemon if you like - you just don't have to. It is optional.
Tier 1 also means all security/safety features. Windows is not used in mission-critical systems like cars or planes, so we do not need to add those features to Windows.
We aim to support Windows so iceoryx2 can be used safely and securely in a desktop environment.
In iceoryx2, we use something that we call the event concept, which can either use Unix domain sockets, so that you have something to select/wake/epoll on, or a variant where we use a semaphore stored in shared memory. See: https://github.com/eclipse-iceoryx/iceoryx2/tree/main/iceory...
The Unix domain socket has the advantage that you can combine it with external non-iceoryx events, but you pay a small performance penalty. The semaphore event is usually faster.
As a user, you can configure your IPC service architecture and use the mechanism that suits you best (but I have to admit that we have not yet documented this in detail). For the zero_copy service variants, it is done here: https://github.com/eclipse-iceoryx/iceoryx2/blob/main/iceory....
If you want to use push notifications to wake up processes, iceoryx2 has the event messaging pattern where a listener waits for incoming events, and a notifier in another process can send specific events to all waiting listeners. See this example: https://github.com/eclipse-iceoryx/iceoryx2/tree/main/exampl...
Months ago, a friend and I founded a company called https://ekxide.io to continue the development of iceoryx2. It's a library that enables incredibly fast data and signal exchange between processes (zero-copy IPC middleware).
We now have our first customers, which has allowed us to accelerate open-source development. With this in mind, we wanted to start providing regular updates on what's happening, what new features are available, and also to gather feedback and critiques. Our goal is to stay in close touch with the community and create a library that solves real-world problems.
I am one of the maintainers of iceoryx and the creator of iceoryx2, so I wanted to add and complete some details.
iceoryx/iceoryx2 was initially intended for safety-critical systems but is now expanding into all other domains. In safety-critical systems that run, for instance, in cars or planes, you do not want to have undefined behavior - but the STL is full of it, so we had to reimplement an STL subset (https://github.com/eclipse-iceoryx/iceoryx/tree/master/iceor...) that does not use the heap or exceptions and does not come with undefined behavior.
So you can send vectors or strings via iceoryx, but you have to use our STL implementations.
It also comes with a service-oriented architecture: you can create a service - identified by name - and communicate via publish-subscribe, request-response, and direct events (and, in planning: pipeline or blackboard).
One major thing is iceoryx's robustness. In safety-critical systems, we have a term called freedom from interference, meaning that a crash in application A does not affect application B. When they communicate via shared memory, for instance, and use a mutex, they could deadlock each other when one app dies while holding the mutex. This is why we go for lock-free algorithms here, which are tested meticulously, and we are also planning a formal verification of those lock-free constructs.
iceoryx2 is the next generation of iceoryx, where we heavily refactored the architecture to make it more modular and address all the major pain points.
* it no longer requires a central daemon and has decentralized all the management tasks, so you get the same behavior without the daemon
* it comes with events that can either be based on an underlying fd event (slower, but it can be integrated with OS event multiplexing), or you can choose the fast semaphore route (it is now up to the user)
Currently, we are also working on language bindings for C, C++, Python, Lua, Swift, C#, etc.
Happy Hacking,
Elfenpiff
- GitHub iceoryx2: https://github.com/eclipse-iceoryx/iceoryx2
- GitHub ROS 2 iceoryx2_rmw binding: https://github.com/ekxide/rmw_iceoryx2
- Release Announcement Article: https://ekxide.io/blog/iceoryx2-0-5-release/
- crates.io: https://crates.io/crates/iceoryx2
- docs.rs: https://docs.rs/iceoryx2/latest/iceoryx2/