
Is that RAG though? Perhaps I’m missing something, but I don’t see where the retrieval step is. Extracting the metadata and passing it to the LLM in the context sounds like a non-RAG LLM application. Or are you saying that the DB schema is so big and/or the LLM context so small that not all the metadata can be passed in one go, so there’s some search step to prune the number of tables?


RAG is augmenting LLM generation with external data. How the external data is retrieved is irrelevant; a search step isn't strictly necessary.

Of course, you can also run a search over the tables related to the question to narrow down the table list and help the LLM come up with the correct answer.
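
For concreteness, here's a minimal sketch of that narrowing step, assuming an embedding model from sentence-transformers and a hypothetical dict of table-metadata summaries (the model name, table names, and schema strings are just placeholders, not anything from the original post):

    # Embed each table's schema summary, embed the question, keep only the
    # most similar tables, and build the prompt from just those schemas.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    # Hypothetical metadata extracted from the database (table -> schema summary).
    table_docs = {
        "orders": "orders(id, customer_id, total, created_at) -- one row per purchase",
        "customers": "customers(id, name, email, country)",
        "inventory": "inventory(sku, warehouse_id, quantity)",
    }

    def top_tables(question: str, k: int = 2) -> list[str]:
        names = list(table_docs)
        doc_emb = model.encode([table_docs[n] for n in names], convert_to_tensor=True)
        q_emb = model.encode(question, convert_to_tensor=True)
        scores = util.cos_sim(q_emb, doc_emb)[0]
        ranked = sorted(zip(names, scores.tolist()), key=lambda x: x[1], reverse=True)
        return [name for name, _ in ranked[:k]]

    question = "How much did each customer spend last month?"
    relevant = top_tables(question)
    # Only the retrieved schemas go into the LLM context instead of the whole catalog.
    prompt = "Schema:\n" + "\n".join(table_docs[t] for t in relevant) + f"\n\nQuestion: {question}"
    print(prompt)

The point is just that the "retrieval" can be as simple as a similarity search over schema descriptions when the full catalog won't fit in the context window.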





