
Building a Retrieval-Augmented Generation (RAG) System with FAISS and Open-Source LLMs

Retrieval-Augmented Generation (RAG) has emerged as a powerful paradigm for enhancing the capabilities of large language models (LLMs). By combining the generative strengths of LLMs with the factual grounding of retrieval, RAG offers a solution to one of LLMs' most persistent problems: hallucination.

In this tutorial, we will build a complete RAG system using:

  • FAISS (Facebook AI Similarity Search) as our vector database
  • Sentence Transformers for creating high-quality embeddings
  • An open-source LLM from Hugging Face (we'll use a lightweight model that runs on CPU)
  • A custom knowledge base that we'll construct ourselves

By the end of this tutorial, you will have a working RAG system that can answer questions based on your documents with improved accuracy and relevance. This approach is valuable for building domain-specific assistants, customer support systems, or any application where grounding LLM responses in specific documents is important.

Let's get started.

Step 1: Setting Up Our Environment

First, we need to install all the required libraries. We will use Google Colab for this tutorial.

# Install required packages
!pip install -q transformers==4.34.0
!pip install -q sentence-transformers==2.2.2
!pip install -q faiss-cpu==1.7.4
!pip install -q accelerate==0.23.0
!pip install -q einops==0.7.0
!pip install -q langchain==0.0.312
!pip install -q langchain_community
!pip install -q pypdf==3.15.1

Let's also check whether we have access to a GPU, which will speed up model inference:

import torch


# Check if GPU is available
print(f"GPU available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
   print(f"GPU name: {torch.cuda.get_device_name(0)}")
else:
   print("Running on CPU. We'll use a CPU-compatible model.")

Step 2: Creating Our Knowledge Base

For this tutorial, we will create a simple knowledge base about AI concepts. In a real-world scenario, you would load PDF documents, web pages, or database records instead; a sketch of PDF loading follows the code below.

import os
import tempfile


# Create a temporary directory for our documents
docs_dir = tempfile.mkdtemp()
print(f"Created temporary directory at {docs_dir}")


# Create sample documents about AI concepts
documents = {
   "vector_databases.txt": """
   Vector databases are specialized database systems designed to store, manage, and search vector embeddings efficiently.
   They are crucial for machine learning applications, particularly those involving natural language processing and image recognition.
  
   Key features of vector databases include:
   1. Fast similarity search using algorithms like HNSW, IVF, or exact search
   2. Support for various distance metrics (cosine, euclidean, dot product)
   3. Scalability for handling billions of vectors
   4. Often support for metadata filtering alongside vector search
  
   Popular vector databases include FAISS (Facebook AI Similarity Search), Pinecone, Weaviate, Milvus, and Chroma.
   FAISS specifically was developed by Facebook AI Research and is an open-source library for efficient similarity search.
   """,
  
   "embeddings.txt": """
   Embeddings are dense vector representations of data in a continuous vector space.
   They capture semantic meaning and relationships between entities by positioning similar items closer together in the vector space.
  
   Types of embeddings include:
   1. Word embeddings (Word2Vec, GloVe)
   2. Sentence embeddings (Universal Sentence Encoder, SBERT)
   3. Document embeddings
   4. Image embeddings
   5. Audio embeddings
  
   Embeddings are created through various techniques, including neural networks trained on specific tasks.
   Modern embedding models like those from OpenAI, Cohere, or Sentence Transformers can capture nuanced semantic relationships.
  
   The dimensionality of embeddings typically ranges from 100 to 1536 dimensions, with higher dimensions often capturing more information but requiring more storage and computation.
   """,
  
   "rag_systems.txt": """
   Retrieval-Augmented Generation (RAG) is an AI architecture that combines information retrieval with text generation.
  
   The RAG process typically works as follows:
   1. User query is converted into an embedding vector
   2. Similar documents or passages are retrieved from a knowledge base using vector similarity
   3. Retrieved content is provided as context to the language model
   4. The language model generates a response informed by both its parameters and the retrieved information
  
   Benefits of RAG include:
   1. Reduced hallucination compared to pure generative approaches
   2. Up-to-date information without model retraining
   3. Attribution of information sources
   4. Lower computation costs than increasing model size
  
   RAG systems can be enhanced through techniques like reranking, query reformulation, and hybrid search approaches.
   """
}


# Write documents to files
for filename, content in documents.items():
   with open(os.path.join(docs_dir, filename), 'w') as f:
       f.write(content)
      
print(f"Created {len(documents)} documents in {docs_dir}")
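Since we installed pypdf earlier, here is a minimal sketch of how you could load a real PDF instead of these sample text files, using LangChain's PyPDFLoader. The file path below is hypothetical; replace it with your own document. This snippet is not used in the rest of the tutorial.

from langchain_community.document_loaders import PyPDFLoader

# Hypothetical path -- replace with your own PDF file
pdf_path = "/content/my_document.pdf"
pdf_loader = PyPDFLoader(pdf_path)
pdf_pages = pdf_loader.load()  # returns one Document per page, with page metadata

print(f"Loaded {len(pdf_pages)} pages from {pdf_path}")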

Step 3: Loading and Processing Documents

Now, let's load the documents and process them for our RAG system:

from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter


# Initialize a list to store our documents
all_documents = []


# Load each text file
for filename in documents.keys():
   file_path = os.path.join(docs_dir, filename)
   loader = TextLoader(file_path)
   loaded_docs = loader.load()
   all_documents.extend(loaded_docs)


print(f"Loaded {len(all_documents)} documents")


# Split documents into chunks
text_splitter = RecursiveCharacterTextSplitter(
   chunk_size=500,
   chunk_overlap=50,
   separators=["\n\n", "\n", ".", " ", ""]
)


document_chunks = text_splitter.split_documents(all_documents)
print(f"Created {len(document_chunks)} document chunks")


# Let's look at a sample chunk
print("\nSample chunk content:")
print(document_chunks[0].page_content)
print(f"Source: {document_chunks[0].metadata}")

Step 4: Creating Embeddings

Now, let's convert our document chunks into vector embeddings:

from sentence_transformers import SentenceTransformer
import numpy as np


# Initialize the embedding model
model_name = "sentence-transformers/all-MiniLM-L6-v2"  # A good balance of speed and quality
embedding_model = SentenceTransformer(model_name)


print(f"Loaded embedding model: {model_name}")
print(f"Embedding dimension: {embedding_model.get_sentence_embedding_dimension()}")


# Create embeddings for all document chunks
texts = [doc.page_content for doc in document_chunks]
embeddings = embedding_model.encode(texts)


print(f"Created {len(embeddings)} embeddings with shape {embeddings.shape}")
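As an optional sanity check (not required for the pipeline), we can compare two chunk embeddings directly with cosine similarity to see how the model scores their relatedness; the indices below are just illustrative.

from sentence_transformers import util

# Related chunks should score noticeably higher than unrelated ones
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity between chunk 0 and chunk 1: {similarity.item():.3f}")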

Step 5: Building the FAISS Index

Now we'll build a FAISS index from these embeddings:

import faiss


# Get the dimensionality of our embeddings
dimension = embeddings.shape[1]


# Create a FAISS index - we'll use a simple Flat L2 index for demonstration
# For larger datasets, consider using indexes like IVF or HNSW for better performance
index = faiss.IndexFlatL2(dimension)  # L2 is Euclidean distance


# Add our vectors to the index
index.add(embeddings.astype(np.float32))  # FAISS requires float32


print(f"Created FAISS index with {index.ntotal} vectors")


# Create a mapping from index position to document chunk for retrieval
index_to_doc_chunk = {i: doc for i, doc in enumerate(document_chunks)}
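For reference, here is a brief sketch of the approximate index types mentioned in the comment above (HNSW and IVF). The parameters are illustrative, and we keep using the flat index for the rest of the tutorial.

# HNSW: graph-based approximate search, no training step required
hnsw_index = faiss.IndexHNSWFlat(dimension, 32)  # 32 = neighbors per graph node
hnsw_index.add(embeddings.astype(np.float32))

# IVF: partitions vectors into nlist cells and searches only the nearest cells
nlist = 4  # tiny because our demo corpus is tiny; scale with dataset size
quantizer = faiss.IndexFlatL2(dimension)
ivf_index = faiss.IndexIVFFlat(quantizer, dimension, nlist)
ivf_index.train(embeddings.astype(np.float32))  # IVF requires a training pass
ivf_index.add(embeddings.astype(np.float32))
ivf_index.nprobe = 2  # number of cells to probe at query time

print(f"HNSW index size: {hnsw_index.ntotal}, IVF index size: {ivf_index.ntotal}")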

Step 6: Loading the Language Model

Now let's load an open-source language model from Hugging Face. We'll use a small model that works well on CPU:

from transformers import AutoTokenizer, AutoModelForCausalLM


# We'll use a smaller model that works on CPU
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"


# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
   model_id,
   torch_dtype=torch.float32,  # Use float32 for CPU compatibility
   device_map="auto"  # Will use CPU if GPU is not available
)


print(f"Successfully loaded {model_id}")
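Before wiring the model into the RAG pipeline, an optional smoke test confirms that generation works with TinyLlama's chat format:

# Optional smoke test of the model with TinyLlama's chat template
test_prompt = "<|system|>\nYou are a helpful assistant.\n<|user|>\nIn one sentence, what is a vector database?\n<|assistant|>"
test_ids = tokenizer(test_prompt, return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
    test_output = model.generate(test_ids, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(test_output[0], skip_special_tokens=True))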

Step 7: Creating Our RAG Pipeline

Let's build a function that combines retrieval and generation:

def rag_response(query, index, embedding_model, llm_model, llm_tokenizer, index_to_doc_map, top_k=3):
   """
   Generate a response using the RAG pattern.


   Args:
       query: The user's question
       index: FAISS index
       embedding_model: Model to create embeddings
       llm_model: Language model for generation
       llm_tokenizer: Tokenizer for the language model
       index_to_doc_map: Mapping from index positions to document chunks
       top_k: Number of documents to retrieve


   Returns:
       response: The generated response
       sources: The source documents used
   """
   # Step 1: Convert query to embedding
   query_embedding = embedding_model.encode([query])
   query_embedding = query_embedding.astype(np.float32)  # Convert to float32 for FAISS


   # Step 2: Search for similar documents
   distances, indices = index.search(query_embedding, top_k)


   # Step 3: Retrieve the actual document chunks
   retrieved_docs = [index_to_doc_map[idx] for idx in indices[0]]


   # Create context from retrieved documents
   context = "\n\n".join([doc.page_content for doc in retrieved_docs])


   # Step 4: Create prompt for the LLM (TinyLlama format)
   prompt = f"""<|system|>
You are a helpful AI assistant. Answer the question based only on the provided context.
If you don't know the answer based on the context, say "I don't have enough information to answer this question."


Context:
{context}
<|user|>
{query}
<|assistant|>"""


   # Step 5: Generate response from LLM
   input_ids = llm_tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)


   generation_config = {
       "max_new_tokens": 256,
       "temperature": 0.7,
       "top_p": 0.95,
       "do_sample": True
   }


   # Generate the output
   with torch.no_grad():
       output = llm_model.generate(
           input_ids=input_ids,
           **generation_config
       )


   # Decode the output
   generated_text = llm_tokenizer.decode(output[0], skip_special_tokens=True)


   # Extract the assistant's response (remove the prompt)
   response = generated_text.split("<|assistant|>")[-1].strip()


   # Return both the response and the sources
   sources = [(doc.page_content, doc.metadata) for doc in retrieved_docs]


   return response, sources

Step 8: Testing Our RAG System

Let's test our system with some specific questions:

#Define some test questions
test_questions = [
   "What is FAISS and what is it used for?",
   "How do embeddings capture semantic meaning?",
   "What are the benefits of RAG systems?",
   "How does vector search work?"
]


# Test our RAG pipeline
for question in test_questions:
   print(f"\n\n{'='*50}")
   print(f"Question: {question}")
   print(f"{'='*50}\n")


   response, sources = rag_response(
       query=question,
       index=index,
       embedding_model=embedding_model,
       llm_model=model,
       llm_tokenizer=tokenizer,
       index_to_doc_map=index_to_doc_chunk,
       top_k=2  # Retrieve top 2 most relevant chunks
   )


   print(f"Response: {response}\n")


   print("Sources:")
   for i, (content, metadata) in enumerate(sources):
       print(f"\nSource {i+1}:")
       print(f"Metadata: {metadata}")
       print(f"Content snippet: {content[:100]}...")

Running this cell prints each question, the generated answer, and the source chunks that were retrieved; the exact wording of the answers will vary from run to run.

Step 9: Evaluating Our RAG System

Let's implement a simple evaluation function to assess the performance of our RAG system:

def evaluate_rag_response(question, response, retrieved_sources, ground_truth_sources=None):
   """
   Simple evaluation of RAG response quality


   Args:
       question: The query
       response: Generated response
       retrieved_sources: Sources used for generation
       ground_truth_sources: (Optional) Known correct sources


   Returns:
       evaluation metrics
   """
   # Basic metrics
   response_length = len(response.split())
   num_sources = len(retrieved_sources)


   # Simple relevance score - we'd use better methods in production
   source_relevance = []
   for content, _ in retrieved_sources:
       # Count overlapping words between question and source
       q_words = set(question.lower().split())
       s_words = set(content.lower().split())
       overlap = len(q_words.intersection(s_words))
       source_relevance.append(overlap / len(q_words) if q_words else 0)


   avg_relevance = sum(source_relevance) / len(source_relevance) if source_relevance else 0


   return {
       "response_length": response_length,
       "num_sources": num_sources,
       "source_relevance_scores": source_relevance,
       "avg_relevance": avg_relevance
   }


# Evaluate one of our previous responses
question = test_questions[0]
response, sources = rag_response(
   query=question,
   index=index,
   embedding_model=embedding_model,
   llm_model=model,
   llm_tokenizer=tokenizer,
   index_to_doc_map=index_to_doc_chunk,
   top_k=2
)


# Run evaluation
eval_results = evaluate_rag_response(question, response, sources)
print(f"\nEvaluation results for question: '{question}'")
for metric, value in eval_results.items():
   print(f"{metric}: {value}")
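The word-overlap score above is deliberately crude. A stronger signal, sketched below under the assumption that we reuse the same embedding_model, is the cosine similarity between the question and each retrieved source:

from sentence_transformers import util

# Sketch: embedding-based relevance instead of word overlap
question_embedding = embedding_model.encode([question])
source_embeddings = embedding_model.encode([content for content, _ in sources])
cosine_scores = util.cos_sim(question_embedding, source_embeddings)[0]
print("Embedding-based relevance scores:",
      [round(float(score), 3) for score in cosine_scores])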

Step 10: Advanced RAG Techniques – Query Expansion

Let's implement query expansion to improve retrieval:

# Here's the implementation of the expand_query function:


def expand_query(original_query, llm_model, llm_tokenizer):
   """
   Generate multiple search queries from an original query to improve retrieval


   Args:
       original_query: The user's original question
       llm_model: The language model for generating variations
       llm_tokenizer: Tokenizer for the language model


   Returns:
       List of query variations including the original
   """
   # Create a prompt for query expansion
   prompt = f"""<|system|>
You are a helpful assistant. Generate two alternative versions of the given search query.
The goal is to create variations that might help retrieve relevant information.
Only list the alternative queries, one per line. Do not include any explanations, numbering, or other text.
<|user|>
Generate alternative versions of this search query: "{original_query}"
<|assistant|>"""


   # Generate variations
   input_ids = llm_tokenizer(prompt, return_tensors="pt").input_ids.to(llm_model.device)


   with torch.no_grad():
       output = llm_model.generate(
           input_ids=input_ids,
           max_new_tokens=100,
           temperature=0.7,
           do_sample=True
       )


   # Decode the output
   generated_text = llm_tokenizer.decode(output[0], skip_special_tokens=True)


   # Extract the generated variations
   response_part = generated_text.split("<|assistant|>")[-1].strip()


   # Split response by lines to get individual variations
   variations = [line.strip() for line in response_part.split('\n') if line.strip()]


   # Ensure we have at least some variations
   if not variations:
       variations = [original_query]


   # Add the original query and return the list with duplicates removed
   all_queries = [original_query] + variations
   return list(dict.fromkeys(all_queries))  # Remove duplicates while preserving order

Step 11: Testing Our Query Expansion

Let's test the expand_query function with an example and feed the expanded queries into retrieval:

# Example usage of expand_query function
test_query = "How does FAISS help with vector search?"


# Generate query variations
expanded_queries = expand_query(
   original_query=test_query,
   llm_model=model,
   llm_tokenizer=tokenizer
)


print(f"Original Query: {test_query}")
print(f"Expanded Queries:")
for i, query in enumerate(expanded_queries):
   print(f"  {i+1}. {query}")


# Enhanced RAG with query expansion
all_retrieved_docs = []
all_scores = {}


# Retrieve documents for each query variation
for query in expanded_queries:
   # Get query embedding
   query_embedding = embedding_model.encode([query]).astype(np.float32)


   # Search in FAISS index
   distances, indices = index.search(query_embedding, 3)


   # Track document scores across queries (using 1/(1+distance) as score)
   for idx, dist in zip(indices[0], distances[0]):
       score = 1.0 / (1.0 + dist)
       if idx in all_scores:
           # Take max score if document retrieved by multiple query variations
           all_scores[idx] = max(all_scores[idx], score)
       else:
           all_scores[idx] = score


# Get top documents based on scores
top_indices = sorted(all_scores.keys(), key=lambda idx: all_scores[idx], reverse=True)[:3]
expanded_retrieved_docs = [index_to_doc_chunk[idx] for idx in top_indices]


print("\nRetrieved documents using query expansion:")
for i, doc in enumerate(expanded_retrieved_docs):
   print(f"\nResult {i+1}:")
   print(f"Source: {doc.metadata['source']}")
   print(f"Content snippet: {doc.page_content[:150]}...")


# Now use these documents with the LLM to generate a response
context = "\n\n".join([doc.page_content for doc in expanded_retrieved_docs])


# Create prompt for the LLM
prompt = f"""<|system|>
You are a helpful AI assistant. Answer the question based only on the provided context.
If you don't know the answer based on the context, say "I don't have enough information to answer this question."


Context:
{context}
<|user|>
{test_query}
<|assistant|>"""


# Generate response
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
   output = model.generate(
       input_ids=input_ids,
       max_new_tokens=256,
       temperature=0.7,
       top_p=0.95,
       do_sample=True
   )


# Extract response
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
response = generated_text.split("<|assistant|>")[-1].strip()


print("\nFinal RAG Response with Query Expansion:")
print(response)

Here is a sample output:

FAISS can handle various types of vector data, including text, image, and audio embeddings, and can be integrated with popular frameworks such as TensorFlow, PyTorch, and scikit-learn.

Conclusion

In this tutorial, we built a complete RAG system using FAISS as our vector database and an open-source LLM. We implemented document processing, embedding generation, and vector indexing, and combined these components with query expansion to improve retrieval quality.

To take this further, you could explore:

  • Implementing reranking with cross-encoders (a sketch follows this list)
  • Building a web interface using Gradio or Streamlit
  • Adding metadata filtering capabilities
  • Experimenting with different embedding models
  • Scaling the solution with more efficient FAISS index types (HNSW, IVF)
  • Fine-tuning the LLM on your domain-specific data
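As an example of the first item, here is a minimal reranking sketch using a cross-encoder from the sentence-transformers library. The model name is a commonly used public reranker, and the snippet is illustrative rather than part of the tutorial pipeline; it assumes the expanded_retrieved_docs list from Step 11.

from sentence_transformers import CrossEncoder

# Score each retrieved chunk against the query and keep the best ones
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
query = "How does FAISS help with vector search?"
candidates = [doc.page_content for doc in expanded_retrieved_docs]
scores = reranker.predict([(query, passage) for passage in candidates])
reranked = [doc for _, doc in sorted(zip(scores, expanded_retrieved_docs),
                                     key=lambda pair: pair[0], reverse=True)]
print("Top passage after reranking:", reranked[0].page_content[:100], "...")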

Useful Resources:


Here is the Colab Notebook.


Asjad is an intern consultant at Marktechpost. He is pursuing a B.Tech in Mechanical Engineering at the Indian Institute of Technology, Kharagpur. Asjad is a machine learning and deep learning enthusiast who is always researching applications of machine learning in healthcare.
