
My experience with MongoDB has been terrible. Apart from simple look-ups I don't think it's meant for much data wrangling. Joins across different collections are hard to do. The best use case I see for Mongo is data dumps.



It's pretty good as a document store... partial updates to documents, as well as indexing, work well. Setting it up for replica sets with auto failover is much easier than, say, PostgreSQL, as is the API (especially geo searches). It's run well for most of my own uses, though I do keep an eye on RethinkDB as well as ElasticSearch and Cassandra.
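For what it's worth, the partial-update part is roughly this: you send only the changed fields, not the whole document. A rough sketch with plain dicts standing in for documents (no server, and the field names here are made up):

```python
import copy

def apply_set(doc, changes):
    """Apply a $set-style partial update; supports dotted paths like "loc.lat"."""
    updated = copy.deepcopy(doc)
    for path, value in changes.items():
        parts = path.split(".")
        target = updated
        for part in parts[:-1]:
            target = target.setdefault(part, {})
        target[parts[-1]] = value
    return updated

doc = {"_id": 1, "name": "store", "loc": {"lat": 40.7, "lng": -74.0}}
patched = apply_set(doc, {"loc.lat": 40.8, "stock": 12})
# patched carries the new lat and the new stock field; doc is untouched.
```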

RethinkDB really needs to get automatic hot failover and geo searches worked out; geo is on the table for the next release, iirc, and failover for the one after.

Cassandra is great for key/value searches, but falls down for range queries.

ElasticSearch is pretty awesome in its own right, but not perfect either.

PostgreSQL has a lot to offer as well. 9.4 should be pretty sweet, and if they get automagic failover in the community versions worked out, I'm totally in.

It really just depends on what your workload is... MongoDB offers a lot of DB-like scenarios against indexes in a single collection, a clean set of interfaces, and a fairly responsive system overall. There have been growing pains, and problems... the same can be said of any database.

To each their own, it really just depends on your needs, and for that matter how far out your project's release is, vs. how long you need to support it.

Right now, I'm replacing an over-normalized SQL database structure that is pretty rancid. Most of the data fits so much better in a document db it isn't funny. When I did the first parts, I ran into issues with geo search in similar alternatives, and that has been a deal breaker for a lot of the options.

You don't use a document store if you need joins... you're better off either duplicating the data or using separate on-demand queries. Odds are your data isn't shaped right and you should have used a relational database, or you aren't thinking about the problem right.
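The two alternatives can be sketched with in-memory dicts standing in for collections (the schema here is hypothetical):

```python
# Stand-in "collections": users keyed by id, posts referencing an author_id.
users = {1: {"name": "alice"}}
posts = [{"author_id": 1, "title": "hello"}]

# Option 1: separate on-demand queries, i.e. an application-side "join".
def posts_with_authors(posts, users):
    return [{**p, "author": users[p["author_id"]]["name"]} for p in posts]

# Option 2: duplicate the author name into each post at write time, so a
# read needs only one query (and an author rename must touch both places).
denormalized = [{"author_id": 1, "author": "alice", "title": "hello"}]

joined = posts_with_authors(posts, users)
# Both approaches give the reader the same document shape.
```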

YMMV.


> Setting it up for replica sets with auto failover is much easier than say PostgreSQL

MongoDB replica sets are for availability, not consistency. Even with a write concern of majority, you can still see inconsistency. Without heavy load you might never hit this race condition.


I don't see what's complicated about the Postgres geosearch API. Just use earthdistance instead of PostGIS.
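If I remember right, earthdistance's earth_distance(ll_to_earth(lat1, lon1), ll_to_earth(lat2, lon2)) gives the great-circle distance in meters on a spherical-earth model. That's just the haversine formula; a plain-Python sketch of the same computation:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean radius of the sphere model

def great_circle_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# New York to London, commonly cited as roughly 5,570 km.
d = great_circle_m(40.7128, -74.0060, 51.5074, -0.1278)
```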


You may not need joins, but someone else may.

Someone else likely will, in my experience. Or I will, when a new requirement comes in.


Again, I'd say it depends on the data... either by shape or need. I also wouldn't use NoSQL for highly structured and relational data. For example, for a classifieds site, absolutely yes to NoSQL; for comment threads, I'd favor SQL.

If you need certain reporting: does it have to be real time, is near-real-time okay, and what are the other needs? I find that sometimes duplicating data (with one point of authority) is better than using one or the other.




