
This is somewhat unrelated, but I remember reading in one of Mixpanel's job posts that they had over 200 servers. 200 for a company of their size that charges by the data point seems like a lot. I've worked at a couple of tech companies that get by with an order of magnitude fewer servers while handling what I'd bet is the same load. So either they were exaggerating by redefining what a "server" is in the cloud, they have tons of (costly) freeloaders, or their infrastructure is inefficient.


Or you bet wrong about the load they deal with.

They may also have higher availability requirements than most companies and need 2X (more?) the infrastructure to protect against a data collection failure.

They may be counting nodes used periodically, e.g. a large Hadoop map-reduce run.

Edit: don't get me wrong -- 200 servers is a lot. :)


I wouldn't doubt it. But they also don't publish any figures, so it's difficult to confirm. I work for one of their competitors, and we most likely have the same availability requirements... anyway, just curious. Here's where we're at, for comparison: http://bit.ly/qLrKOt

edit: looks like they did publish some figures :) http://techcrunch.com/2010/07/01/mixpanel-billion-datapoints...


Mixpanel is an analytics company.

Analytics is server-intensive.



