I built a simple Twitter client on my laptop against MongoDB - persistently storing every tweet from everyone I follow. I found it to be stable, flexible, quick to build with and extremely high performance - but I'll probably move the app over to MySQL or PostgreSQL simply because I want to do queries with joins in them!
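To make the join point concrete: MongoDB later grew a $lookup aggregation stage, but at the time the usual answer was two queries plus an in-application merge. A rough sketch in Python, with made-up collection and table names:

    from datetime import datetime, timedelta
    from pymongo import MongoClient

    cutoff = datetime.utcnow() - timedelta(days=1)

    # SQL version: one join does it (placeholder style depends on the driver).
    sql = """
        SELECT u.name, t.text
        FROM tweets t JOIN users u ON u.id = t.author_id
        WHERE t.created_at > %s
    """

    # MongoDB version (pre-$lookup): two queries plus a merge in the app.
    db = MongoClient()["twitter"]
    tweets = list(db.tweets.find({"created_at": {"$gt": cutoff}}))
    names = {u["_id"]: u["name"]
             for u in db.users.find({"_id": {"$in": [t["author_id"] for t in tweets]}})}
    rows = [(names.get(t["author_id"]), t["text"]) for t in tweets]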
I strongly considered using MongoDB for an internal project at work. I eschewed it for my project though. I simply didn't have time to fully investigate it.
We forked our app to give it a try. It's definitely an interesting contender, but in our case we didn't see any significant performance gains, so it wasn't worth moving away from MySQL. It did help us simplify our MySQL schema a lot though, and I wouldn't be surprised if we revisit it when it hits v1. For someone getting started with it, the Google group is quite useful [http://groups.google.com/group/mongodb-user] and the project team was pretty responsive.
I am getting ready to use it for a very similar purpose, the ability to quickly increment a counter seems to make it perfect for this kind of use case.
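Presumably the counter bit refers to MongoDB's atomic $inc update. A minimal sketch with pymongo - the counter name and collection are made up:

    from pymongo import MongoClient, ReturnDocument

    counters = MongoClient()["metrics"]["counters"]

    # Atomically bump a named counter, creating it on first use.
    doc = counters.find_one_and_update(
        {"_id": "tweets_seen"},
        {"$inc": {"value": 1}},
        upsert=True,
        return_document=ReturnDocument.AFTER,
    )
    print(doc["value"])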
Just started using it 2 weeks ago after a month of reviewing various options. So far it's been great. Currently I'm writing a lightweight wrapper util for models in Ruby. There is MongoMapper, but it's an ActiveRecord-like thing and I really want something much simpler.
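Purely to illustrate the "much simpler than an ORM" idea (sketched in Python here rather than the Ruby the commenter is using, with made-up names), a thin wrapper can be little more than a dict with save/find:

    import pymongo

    _db = pymongo.MongoClient()["app"]

    class Model:
        """Barely more than a dict with save/find."""
        collection = None  # subclasses set this to a collection name

        def __init__(self, **fields):
            self.fields = fields

        def save(self):
            result = _db[self.collection].insert_one(dict(self.fields))
            self.fields["_id"] = result.inserted_id
            return self

        @classmethod
        def find(cls, **query):
            return [cls(**doc) for doc in _db[cls.collection].find(query)]

    class Tweet(Model):
        collection = "tweets"

    Tweet(user="alice", text="hello").save()
    print(len(Tweet.find(user="alice")))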
My current concerns are with tiny little nits in the API. But these are minor and there seems to be a way to do everything I need.
I read that. How is the master-master replication working for you? Also, why didn't you go with sharding and replication instead, considering m-m replication is experimental?
The only replication we're doing is master-slave which works extremely well. The slave comes back in sync after about 10 hours and then stays up to date.
The replication pairs are very interesting as they allow automated failover. Within your code you specify a list of the servers and it will automatically set one as the master. If that fails, the others will negotiate which one becomes the master and take over without any need to change your code.
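Replica pairs were later superseded by replica sets, but the driver-side experience is exactly as described: you hand the client a list of hosts and it discovers the master and fails over on its own. A sketch with modern pymongo, host names and set name being placeholders:

    from pymongo import MongoClient

    # List every member; the driver works out which one is currently the
    # master (primary) and re-routes writes if that member goes down.
    client = MongoClient(
        "mongodb://db1.example.com:27017,db2.example.com:27017/?replicaSet=rs0"
    )
    client["app"]["events"].insert_one({"type": "ping"})  # goes to the current primary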
Sharding is experimental and isn't documented or recommended for use at the moment so I can't comment on that.
We're also using the commercial support option to guarantee us 24/7 response if something goes wrong and to get bug fixes quickly, which we have used once so far.
I'm building a bug tracker with MongoDB at the moment. Nothing but good things to say. For the first incarnation I used CouchDB but found the whole "must know your queries beforehand" stuff didn't really fit the project. MongoDB is blazing fast with custom (runtime) queries.
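The "know your queries beforehand" part is CouchDB's predefined map/reduce views; with MongoDB the query is just composed at request time. A sketch of the kind of ad-hoc bug-tracker query meant here, with made-up field names:

    from pymongo import MongoClient, DESCENDING

    bugs = MongoClient()["tracker"]["bugs"]

    # Filters are assembled at request time - no predefined view required.
    def open_bugs(min_priority, tags=None):
        query = {"status": "open", "priority": {"$gte": min_priority}}
        if tags:
            query["tags"] = {"$in": tags}
        return bugs.find(query).sort("created_at", DESCENDING).limit(20)

    for bug in open_bugs(3, tags=["crash", "regression"]):
        print(bug["_id"], bug.get("title"))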
antirez (the author of Redis) says good things about MongoDB, but like him, I found Redis suits my needs better (as a high-performance log server; 60k writes per second, trivial replication, etc.)
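For the log-server use case the write path really is trivial: append entries to a Redis list (and let a slave replicate it). A sketch with redis-py, key and field names made up:

    import json, time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Each entry is a cheap RPUSH; a pipeline batches them so round-trips
    # don't become the bottleneck.
    pipe = r.pipeline(transaction=False)
    for i in range(1000):
        pipe.rpush("app:log", json.dumps({"ts": time.time(), "msg": "event %d" % i}))
    pipe.execute()

    # Read back the most recent 10 entries, in order.
    for raw in r.lrange("app:log", -10, -1):
        print(json.loads(raw))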
The problem with Redis is its insane memory consumption. Memory space taken by data is inflated by over an order of magnitude compared to its "natural" on-disk size. I'm not sure whether this is inherent or something that can be fixed (e.g. with some sort of compression), but it makes the DB pretty much unusable for me at the moment. Pity, because I really like it!
That's silly - I can call ftell_i64 on this 32-bit machine, so it's not the OS; it's them not using the 64-bit file access APIs that are available, at least on Windows XP.