Getting Hallucinations in RAG

How to measure how much of your RAG output is correct

Lately I've come to like Graph RAGs more than vector-store-based ones.

No offense to vector databases; they work great in many situations. The caveat is that the query has to state things explicitly enough for the retriever to pull back the correct context.

There are ways to address that, and I've covered a few in my previous posts.

For example, ColBERT and multi-representation indexing are useful retrieval techniques to consider when developing RAG applications.
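To make the late-interaction idea behind ColBERT concrete, here is a minimal sketch of MaxSim scoring with toy NumPy embeddings. The random vectors are stand-ins for real token embeddings; an actual ColBERT setup uses its trained encoder and an index, so treat this purely as an illustration of the scoring step.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT-style late interaction: for each query token embedding,
    take its maximum cosine similarity over all document token embeddings,
    then sum those maxima."""
    # Normalize rows so dot products become cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                        # (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())  # MaxSim per query token, then sum

# Toy example: 3 query-token vectors scored against two documents
# (random stand-in embeddings, not real ColBERT output).
rng = np.random.default_rng(0)
query = rng.normal(size=(3, 8))
doc_a = rng.normal(size=(10, 8))
doc_b = rng.normal(size=(6, 8))

scores = {"doc_a": maxsim_score(query, doc_a), "doc_b": maxsim_score(query, doc_b)}
print(max(scores, key=scores.get), scores)
```

The point of scoring token-by-token rather than with one pooled vector is that a single precise query term can still find its match inside a long document.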

GraphRAG suffers from retrieval problems too (I didn't say it doesn't). But whenever retrieval requires some thinking, connecting related facts rather than matching a single passage, GraphRAG performs exceptionally well.
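As a flavor of what retrieval that "requires some thinking" can look like, here is a small, hypothetical multi-hop sketch over a toy knowledge graph built with networkx. The entities, relations, and the two-hop neighborhood heuristic are all made up for illustration; real GraphRAG pipelines extract the graph from documents and use more involved traversal strategies.

```python
import networkx as nx

# Toy knowledge graph (hypothetical facts, for illustration only).
G = nx.DiGraph()
G.add_edge("Acme Corp", "Project Falcon", relation="runs")
G.add_edge("Project Falcon", "Dr. Lee", relation="led_by")
G.add_edge("Dr. Lee", "Quantum Sensing Lab", relation="member_of")
G.add_edge("Acme Corp", "Berlin", relation="headquartered_in")

def retrieve_context(seed_entities, hops=2):
    """Collect facts within `hops` edges of the entities mentioned in the query.
    A multi-hop neighborhood lets the LLM answer questions whose evidence is
    spread across several connected facts rather than a single chunk."""
    facts = set()
    for seed in seed_entities:
        # All nodes reachable from the seed entity within `hops` steps.
        nearby = nx.single_source_shortest_path_length(G, seed, cutoff=hops)
        for node in nearby:
            for neighbor in G.neighbors(node):
                rel = G.edges[node, neighbor]["relation"]
                facts.add(f"{node} --{rel}--> {neighbor}")
    return sorted(facts)

# "Who leads the project run by Acme Corp?" needs two hops:
# Acme Corp -> Project Falcon -> Dr. Lee.
for fact in retrieve_context(["Acme Corp"], hops=2):
    print(fact)
```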

Providing the right context solves the main problem in LLM-based applications: missing knowledge. However, it does not completely eliminate hallucinations.

If you can't fix something, you measure it. And that is the focus of this post: how do we test RAG applications?
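To give a flavor of what that measurement can look like, here is a deliberately crude, hypothetical groundedness check: it splits the generated answer into sentences and flags any sentence whose words barely overlap with the retrieved context. Real RAG evaluation typically relies on an LLM judge or an entailment model for faithfulness; this sketch only illustrates the shape of the metric.

```python
import re

def groundedness(answer: str, context: str, threshold: float = 0.5):
    """Toy faithfulness check: flag answer sentences whose words barely
    overlap with the retrieved context. A crude proxy for the
    'is this claim supported?' question that real RAG evals ask an LLM judge."""
    context_words = set(re.findall(r"\w+", context.lower()))
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = set(re.findall(r"\w+", sentence.lower()))
        if not words:
            continue
        overlap = len(words & context_words) / len(words)
        results.append((sentence, overlap, overlap >= threshold))
    supported = sum(ok for _, _, ok in results)
    return supported / len(results), results

# Hypothetical answer with one supported claim and one unsupported one.
context = "Project Falcon is led by Maria Lee and run by Acme Corp."
answer = "Maria Lee leads Project Falcon. It was founded in 1987."

score, details = groundedness(answer, context)
print(f"groundedness: {score:.2f}")
for sentence, overlap, ok in details:
    print(f"{'OK ' if ok else 'FLAG'} ({overlap:.2f}) {sentence}")
```

The second sentence gets flagged because nothing in the retrieved context mentions a founding year, which is exactly the kind of unsupported claim a hallucination metric should surface.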
