
No, and even modest write loads will cause immense pain -- there was a single write lock per database when I last used it. Plus, many of the standard tricks for getting performance out of databases don't work with Mongo. I'd recommend staying away unless your performance needs are very modest and you genuinely need unstructured tables.
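
To get a feel for what that lock costs, here's a minimal sketch (my assumptions: a local mongod, pymongo, and made-up db/collection names) that measures insert throughput as writer threads are added; with a single write lock per database, throughput barely scales:

    # Rough benchmark: insert throughput vs. writer threads. With a
    # single write lock per database, adding threads shouldn't buy
    # much. Assumes a local mongod and pymongo; the db/collection
    # names are made up.
    import threading
    import time

    from pymongo import MongoClient

    DOCS_PER_THREAD = 5000

    def writer(coll, n):
        # Every thread hits the same database, so all writers contend
        # for the same per-database write lock on the server.
        for i in range(n):
            coll.insert_one({"i": i, "payload": "x" * 64})

    def run(num_threads):
        coll = MongoClient("mongodb://localhost:27017").locktest.docs
        coll.drop()
        threads = [threading.Thread(target=writer,
                                    args=(coll, DOCS_PER_THREAD))
                   for _ in range(num_threads)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        total = num_threads * DOCS_PER_THREAD
        rate = total / (time.time() - start)
        print("%d threads: %.0f inserts/sec" % (num_threads, rate))

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            run(n)

The Python GIL caps client-side concurrency too, so treat the numbers as directional; separate client processes give a cleaner signal.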



There is still a single write lock per db. It also uses huge amounts of space (roughly double the size of the same data as JSON), which puts increased pressure on memory. It also has no usable compaction, and it can be rather dumb with indexes. (note)
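
On the space point, you can see the per-document overhead just by encoding the same document both ways. A quick sketch using the bson package that ships with pymongo (bson.encode is the modern spelling; the sample document is made up), and note this only shows encoding overhead -- on-disk usage is higher still because of record padding and preallocated data files:

    # Encode the same document as JSON and BSON to compare sizes.
    # BSON stores every field name in every document, plus type tags
    # and length prefixes, so short values with long field names are
    # the worst case.
    import json

    import bson  # ships with pymongo

    doc = {
        "user_id": 1234,
        "email": "someone@example.com",  # hypothetical sample document
        "tags": ["a", "b", "c"],
        "active": True,
    }

    print("JSON: %d bytes" % len(json.dumps(doc, separators=(",", ":"))))
    print("BSON: %d bytes" % len(bson.encode(doc)))

The exact ratio depends on your documents, but repeating the field names in every document adds up fast across millions of rows.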

The main Mongo solution to these problems is to run lots of copies of it across many machines, each holding a small chunk of the data. In theory, what would be one big pain point on a large system becomes lots of lesser pains on small systems.
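
Operationally that's sharding. A sketch of the setup side (assuming a running mongos router; the db, collection, and shard key names here are hypothetical):

    # Enable sharding for a database and split one collection across
    # shards by a shard key. Assumes a running mongos router; the db,
    # collection, and key names are hypothetical.
    from pymongo import MongoClient

    # Sharding commands go to the mongos router, not to a mongod.
    client = MongoClient("mongodb://localhost:27017")

    # Allow collections in this database to be sharded.
    client.admin.command("enableSharding", "mydb")

    # Partition mydb.events across shards by user_id. A badly chosen
    # shard key just relocates the pain instead of shrinking it.
    client.admin.command("shardCollection", "mydb.events",
                         key={"user_id": 1})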

(note) Every criticism is answered with an explanation of how the next release will improve things. Sometimes that happens.


Check out TokuMX; it fixes a lot of the problems with Mongo.


Seconded; we've had a very positive experience with it so far. We're running around 1.5TB in it (and growing) and it's working pretty decently.

Our biggest problems with it have been CPU saturation due to compression (solvable with sharding) and oplog size (due to supporting ACID; supposedly much better in the upcoming release), but both of those are surmountable. In exchange we get massively better disk usage characteristics, no global locks, ACID compliance, transactions, and generally better performance. It's not perfect, but it solved a lot of our problems.



