> Not separating persisting the event history and persisting a view of the current state
This seems like the classic half-way approach that people take when adopting ES without buying into CQRS.
In my experience you need a minimal amount of historical knowledge about an aggregate to validate business rules, and that is strictly orthogonal to ways you may want to build projections over the data.
If I understand OP correctly, they are saying that rather than deriving state by consuming their events, they were maintaining a kind of snapshot that served as both the read and the write model representation of a given aggregate, in addition to storing the events.
It can be very, very tempting to try and reuse "models" in both parts of an application, but some basic scrutiny and some hard-won experience will lead to examples like this: in a view model for a User there's almost certainly no need to ever consume PasswordChanged-type events, but in the write model (business logic) you may need to consume not only the newest state (to compare for authentication purposes) but maybe also some kind of analysis over time of how frequently passwords have been changed recently. This asymmetry is a bit contrived in this case, but I'm sure OP has similar examples from their own codebase.
In reality I have found that at most a handful of fields on otherwise quite rich models ever end up being "rehydrated" in the write model, and rarely ever as dumb attr setters; in the read models by comparison the vast majority of events recorded against any given aggregate end up being used by one projection or another.
In my experience CQRS is useless these days, when I can use something like MongoDB that has great read speeds. The idea of splitting reads and writes is fine conceptually, but it doesn't make sense to me when databases are this fast.
I've worked on multiple Event Sourcing / CQRS-based systems and I see no advantages over traditional databases.
The event stream is exactly the same as a commit log in a regular DB. Building "projections" is the same as making views or calculated columns. In my experience storing the whole event history is not useful and consumes huge amounts of space. Everyone compacts their commit logs; you don't have to, and if you don't, isn't that just ES with extra steps? An uncompacted log stores all the changes to tables and views, which are exactly the same as projections. With ES + CQRS combined you're basically replicating a database, badly.
Sorry to be so negative about this topic but I've worked on several of these projects at a decent scale and it's been the biggest disaster of my professional life. The idea may be viable but tooling is so bad that you're almost surely making a huge mistake implementing these patterns in production code.
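For what it's worth, the "projection is just a view" equivalence the parent describes can be made concrete: a projection is simply a fold over the event log, much as a materialized view is a stored query result. Event shapes below are made up for illustration:

```python
from functools import reduce

# A hypothetical account's event log
events = [
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
    {"type": "Deposited", "amount": 5},
]

def apply(state: dict, event: dict) -> dict:
    # Each event either adds to or subtracts from the running balance
    delta = event["amount"] if event["type"] == "Deposited" else -event["amount"]
    return {"balance": state["balance"] + delta}

# The "projection" is a left fold over the log
projection = reduce(apply, events, {"balance": 0})
# projection["balance"] == 75
```

Log compaction in the parent's analogy corresponds to snapshotting: persisting the fold's result and discarding (or archiving) the events behind it.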
I disagree. The events tend to relate directly to what would be regular database tables in my experience.
And "data is the new oil" is definitely a BusinessWeek cargo cult. None of the event data was of any use on the projects I worked on. We were under a mandate to find something to do with it and still couldn't. The closest useful thing was allowing undo and replaying past events, but we already had that on Postgres with Hibernate auditing on the interesting tables (money-related stuff).
> Sorry to be so negative about this topic but I've worked on several of these projects at a decent scale and it's been the biggest disaster of my professional life. The idea may be viable but tooling is so bad that you're almost surely making a huge mistake implementing these patterns in production code.
Quite, which is why I'm both writing a book, and developing tooling to help with this problem. ES/CQRS is a nice pattern that you can explain to someone on the back of an envelope in 5 minutes, but the devil really is in the details.
I agree it's a nice pattern. What you really need though is a "database in code". A library that handles all the projections, event stream, and especially replays and upgrading event versions automatically.
I'm hopeful that one day it will be a common and useful pattern, but it desperately needs language integration to help with the details. That's why, right now, I can only say: stay the hell away. Looking forward to advances in the space that make it viable in production.
One thing you do get "for free", pretty much, is an audit log. Each event can easily have an ID associated with it in case you need to trace such a thing. Of course you can model this outside of a CQRS system, but then you've basically created a CQRS system.
And if you store events with both an event_timestamp and effective_timestamp, you get bi-temporal state for free too.
Invaluable when handling a time series of financial events subject to adjustments and corrections. For instance: backdating interest adjustments due to misbooked payments, recalculating a derivatives trade if the reported market data was initially incorrect, or calculating adjustments to your end-of-month P&L after correcting errors from two months ago.
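A minimal sketch of the bi-temporal idea (field names `event_ts` and `effective_ts` are illustrative): filtering on both timestamps lets you ask "what did we believe about effective time T, as of knowledge time K".

```python
from datetime import date

# Hypothetical ledger: each event records when we learned it (event_ts)
# and when it applies in the real world (effective_ts)
events = [
    {"event_ts": date(2024, 3, 1), "effective_ts": date(2024, 3, 1), "amount": 100},
    # Correction booked on 2024-04-10 but effective back in March:
    {"event_ts": date(2024, 4, 10), "effective_ts": date(2024, 3, 15), "amount": -20},
]

def balance(as_of_effective: date, as_of_knowledge: date) -> int:
    """Sum of amounts effective by one date, as known by another."""
    return sum(
        e["amount"]
        for e in events
        if e["effective_ts"] <= as_of_effective and e["event_ts"] <= as_of_knowledge
    )

# End-of-March balance as we knew it at the time, vs. after the correction:
assert balance(date(2024, 3, 31), date(2024, 3, 31)) == 100
assert balance(date(2024, 3, 31), date(2024, 4, 30)) == 80
```

The same event log answers both "what was the balance" and "what did we think the balance was", which is exactly what the month-end restatement scenarios above require.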
You get audit logs for free on a regular database with an ORM like Hibernate, or even at the snapshot level with Postgres or MSSQL. You get the logging capabilities of CQRS without any of the complexity.
After working on several such systems, I strongly believe you should keep data storage concerns in the database. Moving it to code implementation is a massive amount of overhead for no benefit.