Correct, it's not write-scalable in the same way it is read-scalable. The transactor is a bottleneck for writes.
However, that doesn't mean it has slow writes - it should still do writes at least on a par with any traditional transactional database, and probably a good deal faster since it's append-only.
I'm more concerned with what happens when the transactor goes down, or gets silently partitioned from some of the clients. I assume reads will continue to work but all writes will break?
I'd also like to know more about how the app-side caching works. If I've got a terabyte of User records and want to query for all users of a certain type, does a terabyte of data get sent over the wire, cached, and queried locally? Only the fields I ask for? Something else?
1. You're correct. However, the architecture does allow you to run a hot backup for fast failover.
2. The database is oriented around 'datoms', each of which is an entity/attribute/value/time tuple. Each of those components has its own (hierarchical) indexes, so you only end up pulling the index segments you need to fulfill a given query. You'd only pull 1TB if your query actually encompassed all the data you had.
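To make the index-segment point concrete, here's a toy sketch in plain Java (nothing Datomic-specific; the Datom record, the segmentsByAttribute store, and the :user/type attribute are all made up for illustration, not Datomic's actual storage format or API). It shows why a query for "all users of a certain type" only pulls the segments covering that attribute, rather than the whole terabyte:

    import java.util.*;
    import java.util.stream.*;

    public class IndexSketch {
        // Toy datom: an entity/attribute/value/transaction tuple.
        record Datom(long entity, String attribute, Object value, long tx) {}

        // Hypothetical segment store: datoms pre-sorted by attribute and split into
        // fixed-size segments, standing in for index segments held in storage.
        static final Map<String, List<List<Datom>>> segmentsByAttribute = new HashMap<>();

        static void index(List<Datom> datoms, int segmentSize) {
            datoms.stream()
                  .collect(Collectors.groupingBy(Datom::attribute))
                  .forEach((attr, ds) -> {
                      List<List<Datom>> segments = new ArrayList<>();
                      for (int i = 0; i < ds.size(); i += segmentSize)
                          segments.add(ds.subList(i, Math.min(i + segmentSize, ds.size())));
                      segmentsByAttribute.put(attr, segments);
                  });
        }

        // A lookup by attribute/value only touches that attribute's segments;
        // segments for every other attribute never cross the wire.
        static List<Long> entitiesWhere(String attribute, Object value) {
            return segmentsByAttribute.getOrDefault(attribute, List.of()).stream()
                    .flatMap(List::stream)            // only these segments get pulled and cached
                    .filter(d -> d.value().equals(value))
                    .map(Datom::entity)
                    .toList();
        }

        public static void main(String[] args) {
            index(List.of(
                    new Datom(1, ":user/type", "admin", 1000),
                    new Datom(2, ":user/type", "guest", 1001),
                    new Datom(1, ":user/email", "a@example.com", 1000),
                    new Datom(2, ":user/email", "b@example.com", 1001)
            ), 1000);
            // Scans only the :user/type segments, never the :user/email ones.
            System.out.println(entitiesWhere(":user/type", "admin")); // -> [1]
        }
    }

In the real system the segments would live in the storage service and get cached on the peer as they're read, but the shape of the answer to the 1TB question is the same: the query determines which segments are fetched, not the size of the database.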