Retrieval-Augmented Generation (RAG)
- Embedding ... Fine-tuning ... RAG ... Search ... Clustering ... Recommendation ... Anomaly Detection (find outliers) ... Classification ... Dimensionality Reduction
- Excel ... Documents ... Database (Vector & Relational) ... Graph ... LlamaIndex
- So you want to improve your RAG pipeline | Ryan Nguyen - How AI Built This ... Ways to go from prototype to production with LlamaIndex
- LlamaIndex: How to evaluate your RAG applications | Ryan Nguyen - How AI Built This
- RAG Pipeline Pitfalls: The Untold Challenges of Embedding Table | Ryan Nguyen - How AI Built This ... A typical journey of building a RAG pipeline from zero to something, and a guide to handling tables in RAG with LlamaIndex
- Forget RAG, the Future is RAG-Fusion | Adrian H. Raudaschl - Towards Data Science ... The Next Frontier of Search: Retrieval Augmented Generation meets Reciprocal Rank Fusion and Generated Queries
Retrieval-Augmented Generation (RAG) is a method of generating text by first retrieving relevant information from a knowledge base and then generating text based on that information. This can be done by using a retrieval model to find relevant documents or passages, and then using a language model to generate text that is consistent with the retrieved information.
Here is a more detailed explanation of how RAG works:
- The user provides a query to the RAG system.
- The RAG system uses a retrieval model to find relevant documents or passages from a knowledge base.
- The RAG system then uses a language model to generate text that is consistent with the retrieved information.
- The generated text is then returned to the user.
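The steps above can be sketched as a minimal retrieve-then-generate loop. The keyword-overlap retriever and the `generate()` stub below are illustrative stand-ins, not a real vector search or LLM call:

```python
# Minimal sketch of the RAG loop: retrieve relevant passages, then
# build an augmented prompt for generation.

def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (a stand-in for
    vector search or a learned retrieval model)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda passage: len(query_terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(query: str, contexts: list[str]) -> str:
    """Stand-in for an LLM call: assemble the augmented prompt that a
    real system would send to the model."""
    context_block = "\n".join(contexts)
    return f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"

knowledge_base = [
    "RAG retrieves documents before generating an answer.",
    "Fine-tuning adapts model weights with labeled data.",
    "Vector databases store embeddings for similarity search.",
]

contexts = retrieve("How does RAG generate an answer?", knowledge_base)
prompt = generate("How does RAG generate an answer?", contexts)
print(prompt)
```

In a real system, `retrieve` would query a vector store and `generate` would pass the prompt to an LLM; the control flow, however, is exactly this.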
The retrieval model can be a simple keyword search engine, or it can be a more sophisticated model that uses machine learning to identify relevant documents. The language model can be a general pre-trained language model, or one specifically trained to generate text based on retrieved information.

RAG has been shown to be effective in a variety of tasks, including question answering, summarization, and creative writing, and it is a promising approach for improving the accuracy and fluency of text generation models. Here are some of the benefits of using RAG:
- It can improve the accuracy of text generation models by providing them with access to external knowledge.
- It can make text generation models more fluent and coherent by providing them with context from the retrieved information.
- It can be used to generate text on a variety of topics, even if the model has not been trained on that specific topic.
- It is a more efficient and transparent approach than fine-tuning, as it does not require a large amount of labeled data.
- Vector Search Fusion: RAG introduces a novel paradigm by integrating vector search capabilities with generative models. This fusion enables the generation of richer, more context-aware outputs from large language models (LLMs).
- Reduced Hallucination: RAG significantly diminishes the LLM’s propensity for hallucination, making the generated text more grounded in data.
- Personal and Professional Utility: From personal applications like sifting through notes to more professional integrations, RAG showcases versatility in enhancing productivity and content quality while being based on a trustworthy data source.
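The "vector search" half of this fusion can be shown in miniature: represent texts as vectors and rank documents by cosine similarity to the query. The three-dimensional vectors below are made up for illustration; a real system would obtain them from an embedding model:

```python
# Toy vector search: rank documents by cosine similarity to a query vector.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made 3-d "embeddings" for illustration only.
doc_vectors = {
    "doc_about_rag":     [0.9, 0.1, 0.0],
    "doc_about_cooking": [0.0, 0.2, 0.9],
}
query_vector = [0.8, 0.2, 0.1]  # pretend embedding of "what is RAG?"

best = max(doc_vectors, key=lambda d: cosine(query_vector, doc_vectors[d]))
print(best)  # doc_about_rag
```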
However, RAG also has some limitations:
- It can be computationally expensive, as it requires the retrieval model and the language model to be run separately.
- The quality of the generated text can still depend on the quality of the retrieved information.
- It is not as effective as fine-tuning on tasks that require the model to learn complex patterns and relationships.
Fine-tuning vs. RAG
- Fine-tuning is likely to produce better performance than RAG on tasks that require the model to learn complex patterns and relationships. This approach is suitable for use cases like customer code migration, machine translation, question answering, and summarization. However, fine-tuning can be computationally expensive and time-consuming, and it requires a large amount of labeled data.
- RAG is a more efficient and transparent approach than fine-tuning, and it is suitable for tasks where labeled data is scarce or expensive to obtain. RAG can also be used to generate creative content, such as poems, code, scripts, and musical pieces. However, it may not match fine-tuning's accuracy on tasks that require the model to learn complex patterns and relationships.
| Feature | Fine-tuning | RAG |
|---|---|---|
| Approach | Adapts a pre-trained model to a specific task | Generates text by retrieving information from a knowledge base |
| Performance | Better on tasks that require the model to learn complex patterns and relationships | More efficient and transparent |
| Cost | More expensive | Less expensive |
| Data requirements | Requires a large amount of labeled data | Can be used with less labeled data |
| Accuracy | More accurate on tasks that require the model to learn complex patterns and relationships | May not be as accurate on these tasks |
RAG Application Evaluation
- LlamaIndex: How to evaluate your RAG applications | Ryan Nguyen - How AI Built This ... There are a few tools in the market that can help you to have a deep evaluation of your app such as Trulens, Arize AI or DeepEval.
- Evaluating RAG pipelines with Ragas + LangSmith | LangChain
- The Ragas framework ... Ragas is an open-source framework that provides a number of metrics for evaluating RAG applications. It also provides a platform for running evaluations and visualizing results.
- LangSmith platform ... LangSmith is a platform for developing and evaluating QA systems. It also provides a number of metrics for evaluating RAG applications.
- RAG Evaluation paper ... This paper presents a number of metrics and techniques for evaluating RAG applications.
- Ragas blog
To evaluate a RAG application, it is important to consider both the retrieval and generation components of the system. The retrieval component should be able to find relevant information from the knowledge source quickly and efficiently. The generation component should be able to use this information to produce accurate and informative responses.
There are a number of different metrics that can be used to evaluate RAG applications. Some common metrics include:
- Faithfulness: This metric measures how well the generated response is supported by the retrieved information.
- Answer relevancy: This metric measures how relevant the generated response is to the original prompt.
- Context relevancy: This metric measures how relevant the retrieved information is to the original prompt.
- Context recall: This metric measures how much of the relevant information from the knowledge source was retrieved.
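The intuition behind one of these metrics, context recall, can be sketched with a toy implementation: the fraction of ground-truth statements that can be found in the retrieved contexts. Frameworks such as Ragas use an LLM judge to decide whether a statement is supported; the simple substring match below is only an illustration of the idea:

```python
# Toy context recall: what fraction of ground-truth statements appear
# (by naive substring match) in the retrieved contexts? Real evaluators
# use an LLM judge instead of string matching.

def context_recall(ground_truth_statements: list[str], contexts: list[str]) -> float:
    if not ground_truth_statements:
        return 0.0
    joined = " ".join(contexts).lower()
    hits = sum(1 for s in ground_truth_statements if s.lower() in joined)
    return hits / len(ground_truth_statements)

contexts = [
    "Paris is the capital of France.",
    "France is in Western Europe.",
]
statements = [
    "paris is the capital of france",  # supported by the contexts
    "france uses the euro",            # true, but not retrieved
]
print(context_recall(statements, contexts))  # 0.5
```

Faithfulness, answer relevancy, and context relevancy follow the same pattern but compare different pairs (response vs. contexts, response vs. prompt, contexts vs. prompt).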
In addition to these quantitative metrics, it is also important to evaluate the quality of the generated responses qualitatively. This can be done by having human evaluators assess the responses for accuracy, informativeness, and fluency.

One way to evaluate a RAG application is to use a held-out test set consisting of prompts and ground-truth responses; the application is then scored on how closely its generated responses match the ground truth. Another way is a user study, in which users complete a set of tasks with the RAG application and their performance is used to gauge the application's effectiveness.

Overall, evaluating RAG applications is a complex task that requires weighing a number of different factors. By combining quantitative and qualitative metrics, however, it is possible to get a good understanding of a RAG application's performance.
Here are some additional tips for evaluating RAG applications:
- Use a variety of metrics to assess the performance of the application. This will help you to get a more complete picture of the application's strengths and weaknesses.
- Use a held-out test set to evaluate the application on data that it has not seen before. This will help you to get a more accurate estimate of the application's performance in the real world.
- Consider using a user study to evaluate the application from the user's perspective. This can help you identify areas where the application can be made more user-friendly and effective.
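The held-out test set tip can be sketched as a small evaluation loop. Here `dummy_rag_app` is a hypothetical stand-in for your pipeline, and token-level F1 is just one simple way to score a generated answer against a reference:

```python
# Held-out test-set evaluation: run the RAG app over prompts with known
# ground-truth answers and average a similarity score over the set.

def token_f1(prediction: str, reference: str) -> float:
    """F1 over the sets of lowercase tokens in prediction and reference."""
    pred, ref = set(prediction.lower().split()), set(reference.lower().split())
    common = pred & ref
    if not pred or not ref or not common:
        return 0.0
    precision = len(common) / len(pred)
    recall = len(common) / len(ref)
    return 2 * precision * recall / (precision + recall)

def evaluate(rag_app, test_set: list[dict]) -> float:
    """Average token F1 over a held-out set of {prompt, answer} pairs."""
    scores = [token_f1(rag_app(ex["prompt"]), ex["answer"]) for ex in test_set]
    return sum(scores) / len(scores)

# Hypothetical pipeline that always returns the same answer, for demonstration.
def dummy_rag_app(prompt: str) -> str:
    return "RAG retrieves documents before generating"

test_set = [
    {"prompt": "What does RAG do first?",
     "answer": "RAG retrieves documents before generating"},
]
print(evaluate(dummy_rag_app, test_set))  # 1.0
```

Swapping `token_f1` for an LLM-judged or embedding-based score gives the same loop used by evaluation frameworks like Ragas or LangSmith.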
RAG from Scratch