I can't shake the feeling that the otters could have used a bulletin board - on the left side they could have written "what kind?", in the middle "when", and on the right more detailed description as necessary. Then any otter in a hurry could skim through the left column, find topics they liked, and just read that part from the right.
Sorry, it's not really a fair criticism, and I did like the art style - I just don't like Kafka much because I had to deal with it in places where it didn't make much sense.
I have not used Kafka, but what the book describes works with most queue systems. And those are great for localizing dynamic complexity. One part of your system behaving weirdly (spiking, dying, fluctuating, etc.) can't affect other parts, because they sit at the other end of a queue, happily consuming messages at their own pace. This means your complex system becomes less complex in terms of behavior and is less likely to end up in some weird metastable state.
This is sort of funny to me, since ActiveMQ had this exact problem. A single slow client could break the entire system because the broker would slow down producers to prevent exhausting the queue space. Which was exactly some "weird" state because it wasn't obvious what happened until you spent a lot of time debugging.
For me a perfect use case is sensor data processing. I've been involved with two independent sensor data platforms that used kafka as the backbone. Sensor data is persisted unprocessed to raw kafka topics, and then gradually deduplicated, refined, enriched and aggregated across multiple stages until finally a normalized, functionally sharded and aggregated view of the data is stored in databases for dashboarding.
It is easy to scale horizontally to massive volumes of data, and any issues in the processing pipeline can be fixed without losing any raw sensor data (restarting the consumers from the last known valid point).
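That "restart the consumers from the last known valid point" pattern can be sketched in plain Python (a list standing in for a raw Kafka topic; the event shapes and cleaning rules here are invented for illustration):

```python
# A toy append-only "topic": raw sensor readings are never mutated,
# so a buggy processing stage can always be re-run from an old offset.
raw_topic = [
    {"sensor": "a", "value": 21.0},
    {"sensor": "a", "value": 21.0},   # duplicate reading
    {"sensor": "b", "value": -999},   # bad reading to be filtered out
]

def process(events, seen):
    """Deduplicate and filter one batch; returns refined events."""
    out = []
    for e in events:
        key = (e["sensor"], e["value"])
        if key in seen or e["value"] == -999:
            continue
        seen.add(key)
        out.append(e)
    return out

# First run, checkpointing the offset after the batch.
seen = set()
refined = process(raw_topic[:2], seen)
checkpoint = 2  # last known valid offset

# A bug is found downstream: rewind to the checkpoint and reprocess.
# The raw data is still in the topic, so nothing is lost.
refined += process(raw_topic[checkpoint:], seen)
print(refined)  # only the one clean, deduplicated reading survives
```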
I'd recommend Kafka for any case where you need to act on data that changes over time - almost any software system. In almost every system you want to be able to reliably record events, and you want to be able to partially replay your state in a deterministic way (e.g. after you fix a bug); both those things should be table stakes for a serious datastore, but AFAIK Kafka is practically the only one that offers them.
Kafka itself kind of only solves half the problem, because it doesn't offer indexes as a builtin, so you have to build your indexed state yourself, or maybe even use a "traditional" datastore as a de facto cache. But since you've moved all the hard distributed part of the problem into kafka, that part is not so bad.
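The "build your indexed state yourself" part often amounts to a materialized view: fold over the log and maintain a lookup structure (a toy Python sketch, no real Kafka client; the keys and values are made up for illustration):

```python
# Keyed events as they would arrive in order from a topic.
events = [
    ("user:1", {"name": "Ada"}),
    ("user:2", {"name": "Grace"}),
    ("user:1", {"name": "Ada L."}),   # a later event overrides earlier state
]

# The de facto index/cache built by folding over the log;
# in practice this might live in a "traditional" datastore.
index = {}
for key, value in events:
    index[key] = value

print(index["user:1"])  # the latest value for that key
```

Because the log is the source of truth, the index can always be thrown away and rebuilt by replaying from offset zero.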
Kafka is a streaming log service. People who use it just for passing messages are hugely overengineering something that would be served just fine by RabbitMQ, Redis or other such tools.
An RSS feed must be powered by something underneath, and any of those tools can do the job. In most situations it would be extremely impractical to use a relational database for this kind of thing. If you are getting thousands of messages per second, which is not uncommon, no transactional database will give you enough write performance, nor will it handle the query load of clients polling for updates, like an RSS feed would require. Note that caching queries is almost useless here because the latest content is updated every few milliseconds.
Kafka, as pretty much any other queueing datastore, is optimized for append-only writes with no deletes or updates. Reading from the end of the queue is extremely fast and sequential reads down the stream are quite fast too. Random reads such as the ones commonly handled by SQL databases are either not available or are less efficient than with SQL.
That said, Kafka can be used to pass any kind of message between applications: from simple text messages and small JSON data to video frames, protobuf messages and other chunks of serialized data.
It is also a very durable data store for immutable, time-ordered data, and is widely used in the financial world to store transaction logs.
When you have many servers who all need to see the same chronological data stream (including messages they might miss during network downtime), and see new events in real time.
If you set "log.retention.hours = -1" and "log.retention.bytes = -1" in the Kafka config, Kafka stores messages forever.
In a game, for example, user inputs and other events can be produced into Kafka, and the entire game state can then be reconstructed by reading and processing the Kafka stream from start to finish. It has an advantage over most DBs because it's real-time.
You can also use chronological data streams to represent data structures more complicated than a simple array. For example, a tree can be represented while preserving chronology. This is far from the ideal use case, however.
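The game-state replay idea above can be sketched in plain Python (a list standing in for the Kafka topic; the event shapes are invented for illustration):

```python
# An append-only event log, as a Kafka topic would store it.
events = [
    {"type": "join", "player": "p1"},
    {"type": "move", "player": "p1", "pos": (1, 0)},
    {"type": "move", "player": "p1", "pos": (1, 1)},
    {"type": "join", "player": "p2"},
]

def apply(state, event):
    """Pure state-transition function: state + event -> new state."""
    if event["type"] == "join":
        state[event["player"]] = (0, 0)
    elif event["type"] == "move":
        state[event["player"]] = event["pos"]
    return state

# Replaying the whole log from offset 0 rebuilds the full game state;
# replaying a prefix rebuilds the state as of any earlier point in time.
state = {}
for e in events:
    state = apply(state, e)
print(state)  # {'p1': (1, 1), 'p2': (0, 0)}
```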
> In a game for example, user-inputs and other events can be produced in Kafka, then reconstruct the entire game-state by reading and processing the kafka-stream from start to finish.
Wherever you have a firehose of data that needs to be processed.
I've heard of it being used as a sort of message queue for application level events before, but that sounds like a nightmare of trying to reinvent the actor model with 1000x the complexity.
Akka and a lot of actor-model services break microservice availability, durability and general reliability, because nuking a random node messes with Akka, and now whatever actor happened to be on that node is stalled until it's transferred.
Just like with SOA and ESB, the concept isn't the problem, it's the technical constraints of the design at the time. Decoupling and messaging aren't bad, but having a legacy message queue on physical hardware doesn't really hold up. Any derived architecture faces the same problem.
Then again, Kafka isn't an actor model implementation, and Akka isn't a partitioned redundant stream processing system, they don't have all that much overlap ;-)
If you shard your Akka actors, the messages are buffered and passed to the actor when it's initialized on the new node. You get even more stability if you persist your actors backed by a DB or some other persistent store.
Not saying Akka can replace Kafka, but Akka has attempted to solve many of these issues around availability, durability and reliability.
Surprising too. I would have expected a far darker approach to this subject!
Perhaps something in the genre of 60's horror comics, or better, like those Jack Chick religious tracts where people end up burning in Hell forever as an immediate consequence of a sin against God.
It’s rare a piece of tech has a more fitting name!
“Is your org's politics so complicated that direct team-to-team communication has broken down? Is your business process subject to unannounced violent change? Bogged down by consistent DB schemas and API versioning? Tired of retries on failed messages? Introducing Kafka by Apache: an easy to use, highly configurable, persistently stored ephemeral centralized federated event based messaging data analytics Single Source of Truth as a Service that can scale with any enterprise-sized dysfunction!”
I don't think many children are into Franz Kafka. Kafka is more for cynical grownups. (And teenagers about to become them.) It was very Kafkaesque, having to read Kafka in school ..
"Alas", said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I am running into."
"You only need to change your direction," said the cat, and ate it up.
I've seen a lot of these children's books about technical topics. Is there any record of an actual child reading any of them? And, as children do with good books: reading them more than once? (Excluding children in the author's family.)
I have read about 10 of these "children's books about programming" and while I enjoy them myself, I find that they lack most of the things that grab children's attention, such as repetition and visual-only sub-plots.
This is not to criticize the use of an illustrated fable as a storytelling device for adults: They're great! It's just sad that we have to frame an illustrated fable as "for children" in order for it to be accepted. I think it says something about how content dictates and narrows the expected presentation format, sometimes to the detriment of clear communication.
It's got very pretty pictures, so that's very understandable! Have you been able to find out if he understands any of the abstract concepts that are explained, or is he mostly interested in the character interactions?
Most of the books have been low-key attempts from programmer parents, probably to get their own children into programming. They were posted on different forums, and I have a hard time finding my way back to them.
The example that I think illustrates my point best is: http://arthur-johnston.com/hacker_writes_a_childrens_book/ (which was posted here in 2017: https://news.ycombinator.com/item?id=15879519) The book works well as entertainment for grown-up programmers: "G is for Garbage Collector/when something's no longer needed/it frees up the memory/so your program is speeded." You can look inside the book on Amazon for more examples. I judge this rhyme as too advanced for any child below 8 that I've read aloud to. Most theories about cognitive development agree with Piaget (1896–1980) in that children have a hard time grasping difficult abstract concepts before the age of 12–13, so there is at least some "scientific backing" to my hunch: https://en.wikipedia.org/wiki/Cognitive_development#Concrete... (This is only a hunch though; the children I've read aloud to are all picked from the same group – children of family and friends.)
My observation is that children seem to need _lots_ of concrete verbal and visual imagery in order to stay interested. You can also get away with more abstract themes if the actual text is lyrically well-crafted, like Dr. Seuss' books (or André Bjerke's children's rhymes in Norwegian, my mother tongue). The most successful attempts at teaching programming to children that I've seen, seem to give the children a lot of practical tasks they can work on (e.g. the Hello Ruby book series). You also have some programming elements in Minecraft that children could pick up, because the concepts are implemented as concrete objects.
All of this makes me suspect that the main audience for these "children's books" teaching programming is adults that already know a fair bit about the subject. And that's completely all right by me, because it gives the books traction and brings the book's readers entertainment.
PS: You also have the Javascript/HTML/CSS for babies series, which are only jocularly presented as children's books: https://imgur.com/eOYc8fC
I like the novelty of it. I have a copy of The Manga Guide to Databases at my desk that I occasionally 'lend out' to people that mess up my databases. It won't turn anyone into a db hero but it's a decent primer at least.
Different strokes, I guess; I can't count the number of people that openly gush about _why's Ruby book with the cats or whatever, but to me it just read like the ravings of a highly functional something-opath. But reasonable people can disagree, and I'm sure I'll get downvoted for disagreeing with the hivemind.
I don't really get these weird ways of explaining different technologies, just give me a straightforward text description. But straightforward text isn't going anywhere and it doesn't hurt me if people want to read mangas about foxes or whatever.
To me, the attraction is less about the specific art used to communicate the concept, and more about the careful attention to a fully-formed analogy that explains the tech in completely different terms.
These kinds of explanations tend to focus on the most critical/important concepts, and help validate (or dispel) assumptions I've made about the tech.
This focus on analogy also lets the author tell the story faster, because I already understand:
- What an otter is
- That rivers flow
- The water flowing down a river that forks will be spread across those forks
- etc...
Depending on the strength of the analogy, it's possible to get the reader on the same page much faster than an intro/tutorial that must first explain foundational concepts just to get to the basics of the technology itself.
I've never really been a fan of analogies, because I feel like in the end I just need to first understand what the thing is underneath, so I can understand how the author has built their analogy... so the task of understanding why they used this analogy is equivalent to understanding the thing itself.
Do people find these analogies helpful? The concepts aren't that difficult, the audience for them is already technical, and adding a cutesy abstraction about it makes it harder to understand.
The art and animation in this is great, but I feel like the author's talents are wasted on a document with no audience. Make a kids' book instead!
Yea, that's where I was going with asking why. It feels like the trendy thing is to convert a technical thing into a children's book. (Which is odd... children don't need that book, so it's for an adult that wants to consume children's literature.)
Note: I'm not against creative attempts to explain technical concepts. But the form seems odd to me, and it feels like we're producing very short tutorials in a children's format. That's even weirder.
My initial impression was "more of this garbage" - but it was really well done in the sense that it distilled the core functionality of the system in an easy-to-understand form.
The guy above you mentioned manga-guides to stuff, which utterly fail at their job, which again, is to distill key-information in an entertaining, easy-to-read, general (but fully accurate) manner.
I never found it helpful from a purely technical perspective, but I found it extremely eye-opening as an unorthodox approach to programming that really captured my imagination. It was definitely something that encouraged me to dig deeper into Ruby and do more explorative "creative" coding.
There's no shortage of dry technical documentation, so seeing something akin to outsider art in that space was really refreshing.
Personally I would love to see more technical books come with a soundtrack!
Because to me it seems like a very pure way of separating a concept from implementation. In language and metaphors that are easy to understand and fun to read. I really like it. Now, in a very short time I can decide if Kafka can solve my problem or whether I should move on. I for one store information presented in this way much better, it's feasible that in 5 years I'm presented with a problem and the Otters pop into my mind.
Wow this is such a beautiful read. Has anything like this been done before for some other topic?
P.S. This sentence is hard to grasp for me
"This Unawareness helps Decouple systems that produce events from otters that read events." https://www.gentlydownthe.stream/#/20
I don't have a link but the NSA has a coloring book about cryptography that's in the public domain, since it was created by the government with tax dollars and has no copyright notice. I actually have a copy I've been meaning to scan, I'm just afraid of ending up on a list if I do. :)
I'm glad people are enjoying this. I wrote a story two years ago about Queues / Kinesis - and am in the process of getting it illustrated. I doubt it'll make any money, but it's good to know that people enjoy this type of thing!
> "This Unawareness helps Decouple systems that produce events from otters that read events."
It simply means that the producer doesn't need to maintain a list of listeners. It just throws the event into the stream, and assumes that anyone who wants to read it will be able to do so.
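That unawareness can be illustrated with a toy log in Python (no Kafka API here, just the idea): the producer appends and is done; each consumer tracks its own offset and catches up whenever it likes.

```python
class Stream:
    """Toy append-only stream: the producer keeps no list of listeners."""
    def __init__(self):
        self.log = []

    def produce(self, event):
        self.log.append(event)      # fire and forget

    def read_from(self, offset):
        return self.log[offset:]    # consumers pull at their own pace

stream = Stream()
stream.produce("salmon spotted")
stream.produce("rain coming")

# Two consumers, each with its own offset; the producer knows neither.
fast_offset, slow_offset = 0, 0
fast = stream.read_from(fast_offset); fast_offset = len(stream.log)

stream.produce("picnic at noon")
slow = stream.read_from(slow_offset); slow_offset = len(stream.log)

print(fast)  # ['salmon spotted', 'rain coming']
print(slow)  # ['salmon spotted', 'rain coming', 'picnic at noon']
```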
Current version:
"First, they dropped large stones into the river, splitting each topic into a smaller number of streams, or Partitions."
I know nothing about Kafka, but I think maybe this should be:
"First, they dropped large stones into the river, splitting each topic into a number of smaller streams, or Partitions."
For kids only? I personally had little understanding of Kafka before, and now I get the gist of what it does. Maybe books should look at this style of explanation instead of just putting a bunch of crazy equations on day 1 that put 90% of people off.
No crazy equations? But how else can I prove my intellectual superiority over lesser men but by demonstrating trivial concepts with greek letters, in keeping with the way Plato originally taught computer science?
No worries. ±1 day is universally race-condition territory. Optimizing at that level is super tricky and incredibly hard to get right, which is why I wanted to mention it.
Can someone please explain why to use Kafka over Redis streams? Is it basically use Kafka if you need your data long term and/or if it's way too big for an in-memory store? Redis seems much, much simpler to setup.
Even if you do need to keep your events long term, why not use something like Eventstore?
4 years ago at work we compared Kafka to alternatives and ended up using it. In my opinion the only reason for using Kafka is that you need a very very highly performant message queue. It had bad usability at the time and some very annoying undocumented behaviour (e.g. there was no official guide or feature to permanently remove a node).
It feels like whenever the Kafka developers had to make any tradeoffs, they chose performance over everything else. I would only ever use it again if the sheer data volume makes it impossible to use an alternative.
With a friend we’ve built this [1]. Usability has been our main focus, though we ended up with very good performance (millions of messages per second out of the box, usually 2x to 10x Kafka). We provide REST and WebSocket APIs, plus a simple-to-implement binary protocol for the most demanding workloads. It’s in Go, so very light, and rock solid in stress tests. We provide trivial-to-implement and understandable usage patterns for exactly-once processing. We’re currently working on HA and cloud integration. And we’re looking for community feedback BTW ! :)
Kafka is more focused on the distributed use-case, I think. It is built around writing to fault-tolerant queues and having multiple consumers in a group reading from them; you can configure it so that losing a queue node will not lose availability of any of the data in the queue, and losing a consumer will not drop any messages. All of that while handling very high throughput, even with persistence. The tradeoff is that the baseline complexity of Kafka is much higher.
As far as I know, Redis Streams does not offer all of that on its own. You could certainly build a lot of that for Redis yourself (and someone probably has already), but then you start getting back that complexity.
So if you have high requirements for performance, durability and availability, and don't mind the added complexity, then Kafka is worth a look.
From my limited research, Redis Streams offers pretty good durability if you run with AOF and sync frequently. Performance should be great since it's all in memory. As for availability, there are Sentinel and cluster mode, which I know nothing about.
edit: the differences in consumer groups confuse me as well. With Redis streams, if you have a single stream with a single consumer group with multiple consumers, each consumer will get a new message at different times so processing of each message may happen out of order. That seems... fine to me. Apparently that is not the case with Kafka which makes me ask, why have multiple consumers in a single group if one will be blocked on the other?
Kafka consumer groups do not block. Kafka uses partitions to scale out a single stream/topic and a consumer group allows consumers to exclusively read a partition. Multiple consumers can read from multiple partitions simultaneously, as long as there are enough partitions available.
Redis doesn't have partitions, and instead has a single host managing a stream, while taking on all the consumer tracking functionality. It's fast, but not as scalable, and eventually if you keep growing then scale is how you get speed.
Messages within a partition are read in order. Partitions themselves may be read simultaneously or out of order. This is the trade-off for scalability.
You can compromise by using hashing or other logic to group related messages into the same partition. For example, hashing by user-id lets you process events for a particular user in order, while processing all events by users in parallel. If you can divide into logically consistent but globally isolated boundaries then this setup works very well.
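That key-hashing idea can be sketched without a broker (pure Python; the hash function and partition count here are arbitrary stand-ins for Kafka's real partitioner):

```python
import hashlib

NUM_PARTITIONS = 3
partitions = [[] for _ in range(NUM_PARTITIONS)]

def partition_for(key):
    # Stable hash so the same key always lands on the same partition.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

def produce(key, value):
    partitions[partition_for(key)].append((key, value))

# All of one user's events end up in one partition, in order...
for i in range(3):
    produce("user-42", f"event-{i}")
# ...while other users' events can be processed in parallel elsewhere.
produce("user-7", "event-0")

user42_events = [v for k, v in partitions[partition_for("user-42")]
                 if k == "user-42"]
print(user42_events)  # per-user order is preserved
```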
Global order guarantees are not part of the promise with Kafka. If your consumer drops or times out, the broker will pause all active consumers for the topic, revoke assignments, and issue new assignments based on the new consumer group membership.
While I love the commentary and found it funny, it would more accurately be "life before and after a huge volume of messages". Kafka isn't the problem in this specific case.
This is a wonderful contribution to the software development community, in the vein of Why's Poignant Guide and Land of Lisp. I fully support such combinations of artistry, whimsy and technology - true to the core hacker spirit.
You need some other animals - Beavers maybe? - who are in the same forest and who just have a simple system of scratching messages onto the side of a single large tree somewhere that everyone goes to look at regularly. Then at the end see how many picnics the beavers missed vs how many otters lost their fucking minds from the complexity of the system and then see who comes out ahead.
I think the otters should question some of the things that are meant to be axiomatic. Is tight coupling really a scalability problem? Otters should be prepared to defend these statements. Nixie's song might be the vacuous equivalent of "Mongo is web scale".
Also: did adding the stream really decouple the otters, or did it just make the coupling less visible? After all, the consuming otters may still depend a great deal on the precise behaviors of the producing otters. It's just that now the producing otters don't know in what ways other otters depend on them, making it harder for them to make changes without harming their otter-dependents.
WOW, this is so nice. The drawings are really cute and the animations give it its own style. Bit of feedback: the color palette is all over the place. It could use a bit of hierarchy to help focus the viewer, and reusing the same colors would make it more coherent.
https://www.gentlydownthe.stream/#/10/1: I was wondering why the “//gi” and “/round-robin-books/g” were purple, while the rest of the line was white. hljs decided the whole block was awk, and those parts regular expressions. Might be nice to tell hljs it’s a different language.
https://www.gentlydownthe.stream/#/11: “In the rivers gleam” doesn’t seem quite right; I can’t decide whether that should be “In the river’s gleam” (the river doing the gleaming) or whether it was intended to be “In the rivers gleam” (the events doing the gleaming, in multiple rivers) in which case I suspect it should be a singular river.
Not really (unfortunately). Kubernetes can leverage a number [1] of container runtimes - OpenShift uses CRI-O, for instance. There's also the problem that Docker can refer to a container runtime, orchestration engine (Swarm, which is analogous to k8s), CLI tools (`docker` and `docker-compose`), or HTTP API (registry). At my last few gigs the maintenance of separate configs for docker[-]compose and k8s has been a consistent pain point.
The container ecosystem is pretty complicated :-).
I've been trying to put my finger on why Kafka so well captured the imagination of many distributed systems engineers. My best answer is, "low-cost publish and multi-consumer data-sharded subscribe is the key to resilient horizontal scaling and parallelism."
Kafka has its flaws, but it really served us well. We have Python Data Engineers who focus on distributed system design[1], and Kafka is one of the team's least finicky open source components, but it is used everywhere, and it basically enables the entire rest of the real-time data processing stack.
I've seen many comments here along the lines of "but when should Kafka be used?"
If you want a deep dive into event streaming systems (logs) that also gives some examples of big data systems they interact with, I highly recommend "The Log: What every software engineer should know about real-time data's unifying abstraction," from LinkedIn, the creators of Kafka.
Their engineering blog also has many other interesting articles on their data systems.
This is so awesome. It almost made me cry.
I think it's a brilliant way to introduce a complex technology, because we have all grown up absorbing lessons through stories, especially stories about cute animals. And you know what? We remember those stories and the lessons we glean from them.
I'm also glad I can swap the image I have of Franz Kafka's Gregor Samsa bug with cute otters.
Who is this actually for? Programmers need clarity and precision, not "fun" analogies stretched past their breaking point. (I mentally facepalmed when it got to the glass floats. Why even bother with the river at that point?)
And it can't be aimed at children, given the amount of technical jargon (we suddenly go from otters and rivers and bees to headers, keys, values and timestamps). And, well, children don't need a book about Kafka.
This is awesome, excellent work! Like some others here, I feel like the otters over-engineered the entire thing with fancy glass balls and a partitioned river, where a categorized bulletin board would have been good enough.
But then I realized that that's what makes it a realistic story ;)
Jokes aside, the best explanation I've ever seen. It should be a must-do for people who want a quick and understandable introduction to the concepts.
Fantastic. I'm betting a large group of people would much more likely watch/read this than a typical technical paper, and still gain 60% of the knowledge.
I have to say, I was looking for a slide on how the otters cleverly know which messages they've already read, and it was unfortunately missing, but kudos!
Awesome! Love the illustrations and the subtle animations underneath.
Quick feedback: my nephew prefers page turns (curling) on his iPad. I assume he prefers those over the PowerPoint-style slides that the book currently has. Well, at least he spent some time flipping through the animated pages!
Why did the otters use streams as opposed to e.g. having a post office? Is their throughput really that high? Or would a more regular message passing solution be good enough for them?
Perhaps they usually work with streams and couldn’t resist applying a familiar solution to the problem?...
This is great because (like _why's {poignant} guide to ruby did for me) it breaks the psychological barrier that I have to even get an intro to those technologies -- as an adult developer -- because it has new concepts that are foreign to me.
Love it! But the cdn serving the images is improperly configured. The browser hits the cdn each time you go back and forth, downloading the images (multiple megabytes each) every time.
Likely the `expires` header is the problem (it's a date in the past)
From the review:
"The Story About Ping has earned a place on my bookshelf, right between Stevens' Advanced Programming in the Unix Environment, and my dog-eared copy of Dante's seminal work on MS Windows, Inferno. Who can read that passage on the Windows API ("Obscure, profound it was, and nebulous, So that by fixing on its depths my sight -- Nothing whatever I discerned therein."), without shaking their head with deep understanding. But I digress."
> Using deft allegory, the authors have provided an insightful and intuitive explanation of one of Unix's most venerable networking utilities. Even more stunning is that they were clearly working with a very early beta of the program, as their book first appeared in 1933, years (decades!) before the operating system and network infrastructure were finalized.
The fact that since January 2000 nearly 16k people "found this helpful" opened my eyes to how Amazon book reviews really do have the potential to be "classic".
Love the concept, and nicely executed! But what is Kafka-specific here? It seems like this illustrated guide describes any highly scalable, persistent pub/sub system
I think... hm... that it's awesome. The notion that a book can possess child-like qualities which convey complex ideas is appealing. The audience can be anyone.
Really enjoyed reading it. The analogies are very clear, easy to understand and fun to read. The illustrations are amazing as well. Wish there was more of it :)
Could be just me, but I find Apache Kafka more nightmarish than the bureaucracy in 'The Castle' or the police/judiciary in 'The Trial' by the author Franz Kafka. Mr. Franz, btw, stole his last name from this very famous Apache project.
Unfortunately we do not have something to discuss. All these consumer/producer/broker failing randomly, system behaving erratically is me doing it wrong. Those white papers from Confluent and testimonial from customers are proof that Kafka works fine.
Individual Kafka consumers and producers tend to have simple behavior, which is good, and initially plugging them together yields simple, predictable systems, but people tend to keep going until they experience pain, at which point they have a system that is right at the limit of what they can understand. Then further feature work pushes them over the limit into darkness.
I think Kafka needs the equivalent of OpenAPI and Redoc, a simple spec and document generator, but for groups of consumers and producers rather than single applications. This would increase the tractability of complex systems, but it would also let you see the system getting more complex over time, even when you haven't reached the pain point yet.
Thanks for the tip; that's a really cool project. It looks like the documentation generator for it works on a single file at a time, and a single file defines a single service. I was thinking of something that works on the level of a system of interacting services. I want a tool that reads the API specifications for a group of services and documents the interactions between them. I'd like to look at service X and see that it emits message Y on queue Z, and then see which services read message Y on queue Z, jump to their documentation, etc. I think the AsyncAPI format is perfect to build on, though. I'll start there if I decide to take a hack at it.
I'm taking a closer look today, and it looks to me like one AsyncAPI document, which can be defined using multiple files, defines one application. If you look at the "fixed fields" section under the root object, the second field is "id"[0]:
id Identifier Identifier of the application the AsyncAPI document is defining.
This format is fine, but the tool I'm looking for is something that will read the definitions of multiple AsyncAPI documents (multiple applications) and show how their inputs and outputs connect, so I can answer a question like, "When application X publishes message Y on channel Z, which applications consume that message?"
AsyncAPI gets me 90% there by defining the service spec and providing code to parse it. It's possible somebody has already written the rest; I'll have to see.
Kafka is very configurable, and teams struggle to optimize the production deployment configuration and tend to second-guess their decisions. Getting a managed deployment from Confluent etc. mitigates these issues, as you can count on proven SMEs to deliver a well-optimized solution. However, if you were to DIY, you might not be so blessed. Disclosure: I never did a DIY Kafka prod deploy -- my employer was already a Confluent customer.
While the otters are busy planning elaborate ways to tell each other about neat rocks, the much smarter humans are revving up their chainsaws to cut down the forest.
Well, fishing and mining are both resource extraction but apart from that have nothing in common. Kafka and REST both have APIs but apart from that have nothing in common.
I'm sorry you're getting downvoted, but I think this question is legitimate because the book is peddling Kafka as if it's the only way to do event sourcing. Event sourcing is what you should compare with REST APIs, Kafka is one way of doing it, but you can do the same with any database, as long as you have a way to write things in and read things out and organize them, you can achieve event sourcing.
With REST APIs (first few pages of the book), services talk directly with each other, with event sourcing (the rest of the book) services talk with an event store (Kafka in the book) as the intermediary.
I read the whole thing. That's why I have the question. So they used to have messages sent to everyone (websockets?), and now they have the messages persisted in different channels for people to consume. Isn't this just different entry points of a REST API?