
Expanding on this line of thought, I'm not sure their point about storing the index in 32 GB of RAM means much, since accessing it still requires a hop over the wire. Locally stored indexes on disk would likely be much faster.

I would imagine a smooth update mechanism could let you update file indexes without losing too much agility on feature development, while bringing huge security gains.



Well, they say "machines," so it might be much more than 32 GB. (Though on disk it might be strongly compressed.) And on HDDs, a few seeks is actually slower than a network round-trip.


Uncompressed might indeed eat up tons of disk space.

Admittedly, I was thinking of an SSD plus some caching in RAM when I estimated that timing. But I was always under the impression an HDD seek is on the order of tens of ms? I'd assume a query to a cloud service would be on the order of 100-200 ms.


The network latency should be somewhere around 40 ms assuming you're in the US on a proper connection. If a query takes 10 disk seeks (10 ms each), that's ~100 ms locally, versus one 40 ms round-trip plus 10 RAM accesses and HTTP parsing, which are negligible by comparison.
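A back-of-envelope sketch of the comparison being made here. All the numbers are rough assumptions taken from this thread (10 ms per HDD seek, ~40 ms network round-trip, ~100 ns per RAM access, a guessed 1 ms for HTTP overhead), not measurements:

```python
# Rough latency comparison: local HDD index vs. remote in-RAM index.
# All figures are assumptions from the discussion, not benchmarks.

HDD_SEEK_MS = 10        # one HDD seek, ~10 ms
NETWORK_RTT_MS = 40     # US round-trip on a decent connection
RAM_ACCESS_MS = 0.0001  # ~100 ns per RAM access, negligible here
HTTP_OVERHEAD_MS = 1    # request parsing/serialization, rough guess

SEEKS_PER_QUERY = 10    # hypothetical query touching 10 index pages

local_hdd_ms = SEEKS_PER_QUERY * HDD_SEEK_MS
remote_ram_ms = NETWORK_RTT_MS + SEEKS_PER_QUERY * RAM_ACCESS_MS + HTTP_OVERHEAD_MS

print(f"local HDD query : ~{local_hdd_ms:.0f} ms")   # ~100 ms
print(f"remote RAM query: ~{remote_ram_ms:.0f} ms")  # ~41 ms
```

Under these assumptions the remote in-RAM lookup wins whenever the query fans out to more than a handful of seeks, since each seek costs a quarter of the entire round-trip. An SSD (seeks well under 1 ms) flips the result back in favor of local storage.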



