Naive RAG vs. HyDE, explained visually:

One critical problem with the naive RAG system is that questions are not semantically similar to their answers.

Say you want to find a sentence similar to "What is ML?". It is likely that "What is AI?" is more similar to it than "Machine learning is fun."

Due to this question-answer dissimilarity, several irrelevant contexts get retrieved during the retrieval step.

HyDE solves this. The visual below depicts how it differs from traditional RAG.

Here's how it works (a minimal code sketch follows at the end of this post):

- Use an LLM to generate a hypothetical answer H for the query Q (this answer does not have to be entirely correct).
- Embed H using a contriever-style model to get an embedding E (bi-encoders trained with contrastive learning are commonly used here).
- Use the embedding E to query the vector database and fetch relevant context C.
- Pass the hypothetical answer H + retrieved context C + query Q to the LLM to produce the final answer.

Done!

Now, of course, the hypothetical answer generated will likely contain hallucinated details. But this does not severely affect performance, thanks to the contriever model that does the embedding.

Trained with contrastive learning, this model also acts as a near-lossless compressor whose task is to filter out the hallucinated details of the fake document. The resulting embedding is expected to be more similar to the embeddings of the actual documents than the question's embedding is.

Several studies have shown that HyDE improves retrieval performance compared to embedding the question directly, but this comes at the cost of increased latency and more LLM usage.

I'll cover a hands-on of HyDE in the Daily Dose of Data Science newsletter soon. Join here: https://lnkd.in/gB6HTzm8. Also, get a free data science PDF (530+ pages) with 150+ core DS/ML lessons.

👉 Over to you: What are some other ways to improve RAG?

____

Find me → Avi Chawla

Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
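Here is the sketch mentioned above. It is a minimal illustration of the HyDE steps, not a production implementation, and it makes a few assumptions not in the post: sentence-transformers ("all-MiniLM-L6-v2") stands in for the contriever-style bi-encoder, a small in-memory list with cosine similarity stands in for the vector database, and call_llm is a hypothetical placeholder for whatever LLM client you use.

```python
# HyDE sketch (assumptions: sentence-transformers as the bi-encoder,
# an in-memory corpus as the vector store, call_llm as a hypothetical LLM hook).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # contriever-style bi-encoder stand-in

documents = [
    "Machine learning is a field of AI that learns patterns from data.",
    "Gradient descent iteratively updates parameters to minimize a loss.",
    "A vector database stores embeddings and supports similarity search.",
]
# Pre-compute normalized document embeddings so dot product = cosine similarity.
doc_embeddings = encoder.encode(documents, normalize_embeddings=True)


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in your actual LLM provider call here.
    raise NotImplementedError("plug in your LLM client")


def hyde_answer(query: str, k: int = 2) -> str:
    # 1) Generate a hypothetical answer H for the query Q (it may be partly wrong).
    hypothetical = call_llm(f"Write a short passage that answers: {query}")

    # 2) Embed H (not Q) with the bi-encoder to get the embedding E.
    e = encoder.encode(hypothetical, normalize_embeddings=True)

    # 3) Use E to fetch the most similar real documents C (cosine similarity).
    scores = doc_embeddings @ e
    context = "\n".join(documents[i] for i in np.argsort(-scores)[:k])

    # 4) Pass H + C + Q back to the LLM to produce the final, grounded answer.
    return call_llm(
        f"Hypothetical answer:\n{hypothetical}\n\n"
        f"Retrieved context:\n{context}\n\n"
        f"Question: {query}\nAnswer using the retrieved context:"
    )
```

The key point the sketch shows: the hypothetical answer, not the question, is what gets embedded and matched against the document embeddings; everything else is an ordinary retrieve-then-generate loop.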