Hacker News

My favorite paper for introducing the idea is this oldie but goodie: http://lsa.colorado.edu/papers/dp1.LSAintro.pdf

The math behind word2vec and GloVe is different from LSA's. Most word embedding models have been shown to be equivalent (at least approximately) to some form of matrix factorization, though, and, if you're comfortable with linear algebra, the SVD formulation makes it relatively easy to get an intuitive grasp on what the dimensions of an embedding really "mean": each retained singular dimension is a direction of maximal variance in the co-occurrence statistics.
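To make the matrix-factorization view concrete, here's a minimal NumPy sketch of the classic pipeline: build a word-word co-occurrence matrix, reweight it with positive PMI (the matrix that word2vec's skip-gram objective implicitly factorizes, up to a shift), and take a truncated SVD. The toy corpus, window size, and dimension count are made up for illustration.

```python
import numpy as np

# Toy corpus (illustrative only).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat chased a dog".split(),
]

# Word-word co-occurrence counts over a symmetric context window.
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
window = 2
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1

# Positive pointwise mutual information (PPMI) reweighting.
total = C.sum()
Pw = C.sum(axis=1, keepdims=True) / total   # word marginals
Pc = C.sum(axis=0, keepdims=True) / total   # context marginals
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C / total) / (Pw * Pc))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

# Truncated SVD: rows of U * sqrt(S) are dense word vectors; each kept
# singular dimension is a direction of maximal co-occurrence variance.
U, S, Vt = np.linalg.svd(ppmi)
k = 2  # embedding dimension, chosen arbitrarily for the toy example
vectors = U[:, :k] * np.sqrt(S[:k])
print(vectors.shape)  # (vocab_size, k)
```

Real systems do this on much larger, sparser matrices (and word2vec learns the factorization stochastically rather than via an explicit SVD), but the linear-algebra picture is the same.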



