Hacker News

I'm always puzzled when the Datomic folks speak of reads not being covered under a transaction. This is dangerous.

Here's the scenario that, in a conventional update-oriented store, is termed a "lost update". "A" reads object.v1, "B" reads the same version, "B" adds a fact to the object making it v2, then "A" comes along and writes obj.v3 based on its own _stale_ knowledge of the object. In effect, it has clobbered what "B" wrote, because A's write came later and has become the latest version of the object. The fact that Datomic's transactor serializes writes is meaningless, because it doesn't take read dependencies into account.
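A minimal sketch of that interleaving (the store, key, and values are invented for illustration):

```python
# Lost update: A and B both read v1, B writes v2, then A writes v3
# from its stale read, silently clobbering B's update.
balance = {"obj": 100}          # v1

a_read = balance["obj"]         # A reads 100 (v1)
b_read = balance["obj"]         # B reads 100 (v1)

balance["obj"] = b_read + 50    # B writes v2 = 150
balance["obj"] = a_read + 10    # A writes v3 = 110 -- B's +50 is gone

print(balance["obj"])           # 110, not 160: A undid B's write
```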

In other words, Datomic gives you the equivalent of read committed or snapshot isolation, but not true serializability. I wouldn't use it for a banking transaction, for sure. To fix this, Datomic would need to add a test-and-set primitive to implement optimistic concurrency, so that a client can say, "process this write only if this condition is still true." Otherwise, two clients are just going to talk past each other.
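A test-and-set primitive along these lines can be sketched like this (a toy compare-and-swap over a dict, with made-up names, not Datomic's API): the write is applied only if the value the client read is still current, so the stale writer is refused instead of clobbering.

```python
def cas(store, key, expected, new_value):
    """Apply the write only if the stored value still equals `expected`."""
    if store[key] != expected:
        return False              # stale read detected; caller must re-read and retry
    store[key] = new_value
    return True

store = {"obj": "v1"}
print(cas(store, "obj", "v1", "v2"))   # True:  B's write succeeds
print(cas(store, "obj", "v1", "v3"))   # False: A's read is stale, write refused
print(store["obj"])                    # v2 -- B's update survives
```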



You are incorrect -- Datomic transactions can depend on the previous state of the DB and prevent that state from being modified out from under them. To do this, you run the transaction in the transactor process.


I think "transaction functions" are intended to provide a solution to that, although I do wonder how it would fare performance-wise under very CPU-intensive transaction functions.


    ...would need to add a test-and-set primitive
Datomic provides CAS via its `:db.fn/cas` database function. I'm not sure that it's documented at the moment.



Lovely. That addresses my concern.



