
We should see a rise in embedded KV popularity in correlation with ML applications. Storing embeddings in something like leveldb, in a format such as flatbuffers, offers a high-performance solution for online prediction (i.e. mapping business values to their embeddings on the fly to send off to some model for inference).
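A minimal sketch of that lookup pattern, using Python's stdlib `dbm` as a stand-in for leveldb and packed little-endian floats as a stand-in for a flatbuffer schema (key names and the vector are illustrative):

```python
import dbm
import os
import struct
import tempfile

def pack(vec):
    # Serialize an embedding as little-endian float32s (flatbuffers
    # would add a schema; raw floats keep the sketch self-contained).
    return struct.pack(f"<{len(vec)}f", *vec)

def unpack(buf):
    # 4 bytes per float32.
    return list(struct.unpack(f"<{len(buf) // 4}f", buf))

path = os.path.join(tempfile.mkdtemp(), "embeddings.db")

# Write: map a business key to its embedding.
with dbm.open(path, "c") as db:
    db[b"user:42"] = pack([0.1, -0.3, 0.7, 0.0])

# Read at prediction time: fetch the embedding for the incoming key.
with dbm.open(path, "r") as db:
    vec = unpack(db[b"user:42"])
```

The read path is just a byte lookup plus a fixed-size decode, which is what makes this cheap to do per request.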


Would that be on mobile devices for offline usage? I'm thinking that for typical backend use cases one would use a dedicated key value store service, right?


This depends on your requirements and type of inference. Say you need to run inference across thousands of content items/documents/images every second, out of a corpus of millions to billions; then having a KV store on disk/SSD (NVMe) might be far more efficient and cheaper for grabbing those embeddings to feed a downstream ML task. How you update the corpus matters too -- a lot of embedding spaces need to be updated in aggregate.
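A sketch of the batched case described above, again with stdlib `dbm` standing in for leveldb and raw float32s for the serialized format (corpus size, keys, and dimensions are made up):

```python
import dbm
import os
import struct
import tempfile

DIM = 3  # toy embedding dimensionality

path = os.path.join(tempfile.mkdtemp(), "corpus.db")

# Populate a toy corpus of document embeddings.
with dbm.open(path, "c") as db:
    for i in range(1000):
        vec = [float(i), float(i) * 0.5, -1.0]
        db[f"doc:{i}".encode()] = struct.pack(f"<{DIM}f", *vec)

# Batched fetch: pull the embeddings for one inference batch straight
# off disk, then hand the resulting matrix to a downstream model.
batch_ids = [f"doc:{i}" for i in (3, 17, 512)]
with dbm.open(path, "r") as db:
    batch = [
        list(struct.unpack(f"<{DIM}f", db[k.encode()]))
        for k in batch_ids
    ]
# `batch` is now a len(batch_ids) x DIM matrix.
```

Only the requested rows are decoded, so the cost scales with the batch size rather than the corpus size, which is the point of keeping the corpus in an on-disk KV store.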


I've heard this a lot recently about storing embeddings. As someone who has dabbled in ML I don't understand what it means. Can you point me to a good overview of the topic please?



