I feel the same way about that article. I always thought of it as a kind of “deconstruction”: decoupling and rearranging the traditional DBMS architecture.
The end result you speak of is a more flexible data platform that can handle more use cases, including repurposing the decoupled parts for entirely new workloads.
A good example, in my mind, is Apache Spark - which effectively takes a compute and SQL query engine, makes it distributed, and decouples it from the underlying storage. Suddenly SQL or code can run over anything you can fit into a DataFrame, even streams - wow!
Pulling out the transaction log - and wow, suddenly we have durable, potentially immutable queues we can use even for long-term storage.
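A toy sketch of that "transaction log as durable queue" idea, in pure stdlib Python with hypothetical names: producers append records to an append-only file, and consumers can replay from any offset, long after the write.

```python
import json
import os
import tempfile

class AppendOnlyLog:
    """Append-only, replayable record log - records are never mutated."""

    def __init__(self, path):
        self.path = path

    def append(self, record):
        # Durable: each record is flushed and fsynced to disk before returning.
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def replay(self, from_offset=0):
        # Consumers can re-read from any offset, any number of times.
        with open(self.path) as f:
            for i, line in enumerate(f):
                if i >= from_offset:
                    yield json.loads(line)

log = AppendOnlyLog(os.path.join(tempfile.mkdtemp(), "events.log"))
for event in [{"op": "insert", "id": 1}, {"op": "update", "id": 1}]:
    log.append(event)

# A late consumer replays the full history, like reading long-term storage.
history = list(log.replay())
print(history)  # [{'op': 'insert', 'id': 1}, {'op': 'update', 'id': 1}]
```

This is essentially what log-centric systems like Kafka build on at scale, with partitioning and retention policies layered on top of the same append-and-replay core.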
I was so excited by the line of thinking in that article. :)