
Garnet fascinates me. Their benchmarks even claim that it is better than Redis and also Dragonfly. Are there any papers or write-ups explaining what makes Garnet fast? (I do know it's based on FASTER.)


The tl;dr is it's just a lockless hashmap attached to a TCP server, plus a log. Simple Get/Set operations are highly optimized, so with heavy batching they can fetch a lot of data efficiently. The architecture scales very well as you add threads, provided data access is uniform.

It struggles a bit on certain types of workloads, like hot keys (think hammering a single sorted set). It's a cool architecture.
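
To make the batching point concrete, here's roughly what client-side batching looks like from an ordinary Redis client. Garnet speaks RESP, so redis-py works against it; this sketch assumes it's listening on the default localhost:6379, and the key names are made up:

  # Sketch of client-side batching (pipelining) against a RESP-compatible
  # server such as Garnet. Assumes redis-py is installed and the server is
  # on localhost:6379; keys/values are illustrative.
  import redis

  r = redis.Redis(host="localhost", port=6379)

  pipe = r.pipeline(transaction=False)   # plain pipelining, no MULTI/EXEC
  for i in range(1000):
      pipe.set(f"user:{i}", f"payload-{i}")
  pipe.execute()                         # one round trip for 1000 SETs

  pipe = r.pipeline(transaction=False)
  for i in range(1000):
      pipe.get(f"user:{i}")
  values = pipe.execute()                # one round trip for 1000 GETs

The point is that the per-operation network cost gets amortized, which is exactly the regime where a highly optimized Get/Set path pays off.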


There's more to it than just having a fast hashmap: https://www.microsoft.com/en-us/research/wp-content/uploads/...

(I'd imagine implementing mechanisms for resilient storage and larger-than-memory data sizes would be the hard parts)


That's... pretty much exactly what you'd expect from a KV store. It's a hashmap on the network.
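
Taken literally, "a hashmap on the network" is about this much code. This toy is nothing like Garnet's actual design (no lockless index, no log, no RESP, no concurrency control beyond the GIL); the protocol and port here are made up, and it's only meant to show how little a bare KV server conceptually is:

  # Toy "hashmap on the network": a dict behind a TCP server speaking a
  # made-up line protocol (GET key / SET key value). Illustration only.
  import socketserver

  store = {}

  class Handler(socketserver.StreamRequestHandler):
      def handle(self):
          for raw in self.rfile:
              parts = raw.decode().strip().split(" ", 2)
              if parts[0] == "SET" and len(parts) == 3:
                  store[parts[1]] = parts[2]
                  self.wfile.write(b"OK\n")
              elif parts[0] == "GET" and len(parts) == 2:
                  self.wfile.write((store.get(parts[1], "") + "\n").encode())
              else:
                  self.wfile.write(b"ERR\n")

  if __name__ == "__main__":
      with socketserver.ThreadingTCPServer(("localhost", 7000), Handler) as srv:
          srv.serve_forever()

Everything the paper linked above describes (the lockless index, the hybrid log, epoch protection) is what it takes to make this idea actually fast and scalable.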


The difference is that Redis is single-threaded. The value proposition of Redis was/is a fast (in-memory), simple (single-threaded) server that serves things so much faster than a traditional DB. It was a perfect fit and became popular with the Ruby/Rails community, etc., since those environments combined with traditional SQL servers are slogs compared to what a fast server like Redis could do.

As good as Redis is, on modern computers with multiple cores it potentially leaves a lot of performance on the table. Garnet seems to be a well-designed multithreaded KV store (though sadly the benchmark page doesn't list results for more complex objects, even if the simple cases look good).
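
A rough way to see the multi-core difference yourself is to drive the server from several client threads, each with its own connection; a sketch using redis-py (host, thread count, and op count are arbitrary, and the Python GIL makes this client the bottleneck long before a real server would be; a dedicated load generator is what you'd use for serious numbers):

  # Sketch: hit a RESP server from N client threads, each with its own
  # connection, so a multi-threaded server like Garnet can use several cores.
  import time
  from concurrent.futures import ThreadPoolExecutor
  import redis

  def worker(thread_id, ops=10_000):
      r = redis.Redis(host="localhost", port=6379)
      for i in range(ops):
          r.set(f"t{thread_id}:k{i}", "x")

  threads = 8
  start = time.time()
  with ThreadPoolExecutor(max_workers=threads) as pool:
      list(pool.map(worker, range(threads)))
  print(f"{threads * 10_000} SETs in {time.time() - start:.2f}s")

A single-threaded server serializes all of that work on one core no matter how many clients you add; a multi-threaded one can, in principle, keep scaling.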


I feel like most of the obsession over Redis came from old-school CGI-esque server-side apps that couldn't easily maintain state themselves...

Nowadays your webapp is often a persistent, multi-threaded web server where you can cache whatever temporary state you want far more efficiently than reaching out to a KV store.
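
For the in-process case, something as small as a TTL'd dict covers a lot of what people reach for Redis for. A rough sketch (class name and the 30-second TTL are made up for illustration):

  # Rough sketch of an in-process TTL cache, the kind of state a persistent
  # multi-threaded web server can keep in its own memory instead of calling
  # out to a KV store.
  import time
  import threading

  class TTLCache:
      def __init__(self, ttl=30.0):
          self.ttl = ttl
          self._lock = threading.Lock()
          self._data = {}          # key -> (expires_at, value)

      def get(self, key):
          with self._lock:
              entry = self._data.get(key)
              if entry is None or entry[0] < time.monotonic():
                  return None      # missing or expired
              return entry[1]

      def set(self, key, value):
          with self._lock:
              self._data[key] = (time.monotonic() + self.ttl, value)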

Using Postgres and caching common results becomes a non-issue even at scale. The main area where a super-fast KV store shines is when you need to share a very large number of keys, where many servers each access only a random subset and aren't interested in maintaining the full set or getting updates on a bus, where the values change often and need to be revalidated so local caching is ineffective, and where persistence isn't needed, so a full database doesn't matter and a lighter KV store is acceptable.
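
For that shared-KV case, the usual pattern is cache-aside with a short TTL so stale values age out. A sketch, assuming a RESP server on localhost:6379 and a hypothetical load_from_db() standing in for whatever the source of truth is:

  # Sketch of cache-aside against a shared KV store: many app servers each
  # touch a random subset of keys; a short TTL handles frequently changing
  # values. Host/port, TTL, and load_from_db() are illustrative assumptions.
  import redis

  r = redis.Redis(host="localhost", port=6379, decode_responses=True)

  def load_from_db(key):
      return "value-for-" + key    # placeholder for the real lookup

  def get_value(key, ttl=15):
      value = r.get(key)           # fast path: shared KV store
      if value is None:
          value = load_from_db(key)
          r.set(key, value, ex=ttl)  # short TTL, since values change often
      return value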



