Airtable + GPT: Prototyping a RAG Application with Low-Code Tools


Image by Editor | ChatGPT
Introduction
Ready for some hands-on action, with little or no code involved depending on the path you choose? This lesson shows how to tie together two powerful tools, OpenAI's GPT models and the cloud-based Airtable database, to prototype a simple, toy Retrieval-Augmented Generation (RAG) application. The application receives a prompt and uses text data stored in Airtable as its knowledge base to produce grounded answers. If you are not familiar with RAG applications, or you want a refresher, don't miss this article series on understanding RAG.
Ingredients
To follow this tutorial yourself, you will need:
- An Airtable account with a base created in your workspace.
- An OpenAI API key (preferably on a paid plan, for flexibility in model selection).
- A Pipedream account, an integration and automation platform that offers a free tier (based on daily run credits).
The Retrieval-Augmented Generation Recipe
The process of building our RAG prototype is not rigid, and some steps can be carried out in different ways. Depending on your level of programming knowledge, you may choose a no-code or low-code approach, or code the service integrations yourself.
In short, we will build a workflow orchestration involving three components, using Pipedream:
- Trigger: Similar to a web service endpoint, this component initiates the action that sets the rest of the workflow in motion. Once deployed, it is where you send the request, that is, the user prompt, to our prototype RAG system.
- Airtable block: Establishes a connection to our Airtable base and table to use its data as the knowledge base for the RAG system. We will load some text data into Airtable shortly.
- OpenAI block: Connects to OpenAI's GPT models via an API key and passes the user prompt, along with the retrieved data, to the model to generate a response.
But first, we need to create a new table in our Airtable base containing text data. For this example, I created a blank table with fields like ID (a single line of text), Source, and Content, and imported publicly available data containing short text passages. Use the CSV import option and a link to load the data into the table. More information about creating tables and importing data can be found in this article.
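As a sanity check on the data model, here is a small sketch (not part of the workflow itself) of the record shape Airtable's List Records API returns, together with a helper that concatenates the Content fields into a single context string. The field names ID, Source, and Content follow the example table above; adjust them to your own schema.

```javascript
// Assumed shape of records returned by Airtable's "List Records" action.
// The fields object mirrors the columns of the example table.
const sampleRecords = [
  { id: "rec1", fields: { ID: "1", Source: "wiki", Content: "Tokyo is the capital of Japan." } },
  { id: "rec2", fields: { ID: "2", Source: "wiki", Content: "Madrid is the capital of Spain." } },
];

// Pull the Content field out of each record and join the non-empty ones
// into one context string, separated by horizontal rules.
function extractKnowledgeBase(records) {
  return records
    .map((r) => (r.fields?.Content ?? "").trim())
    .filter((c) => c.length > 0)
    .join("\n\n---\n\n");
}

console.log(extractKnowledgeBase(sampleRecords));
```

This is the same extraction logic the workflow's final code step will apply to the records it receives.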
Once you have your knowledge base data ready in Airtable, let us switch to Pipedream to set up the rest of the process! Sign up if you have not done so yet, and then log in. There are several ways to start building a workflow. One of them is to describe what you want to build to Pipedream's AI agent, which is convenient but may require repairs afterward, as we will see shortly. Pipedream has two distinct UIs: one for designing and editing workflows by hand, and one for conversing with its AI agent, called String.


Pipedream's AI conversational interface (String)
If you choose to let String create a first version of your workflow, clearly describe what you are building, for instance something like:
I would like to orchestrate a simple toy RAG app using an Airtable table containing text, and OpenAI's GPT models (I have a paid API key).
Pipedream's language model may return a response for you to agree to (your mileage may vary), such as the following:
I'll build a simple RAG (Retrieval-Augmented Generation) workflow linking an Airtable table to OpenAI's GPT models. Here's what I'm going to build:
Set up a trigger to accept questions (via webhook or manual invocation)
Search and retrieve the relevant text from your Airtable table based on the question
Use the retrieved text as context for OpenAI's GPT to generate an informed response
Return the AI-generated response grounded in your Airtable data
This will give you a basic RAG application where you can ask questions and get answers based on the content stored in your Airtable, enhanced by GPT's reasoning capabilities.
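Before wiring up the real services, the four-step plan above can be sketched in plain JavaScript with the model call stubbed out. Everything here (the naive keyword retrieval, the fakeModel stub, the function names) is illustrative only, not Pipedream or OpenAI code:

```javascript
// Step 2: toy retrieval. Keep documents that share at least one
// meaningful word (longer than 3 characters) with the question.
function retrieve(docs, question) {
  const words = question.toLowerCase().split(/\W+/);
  return docs.filter((d) =>
    words.some((w) => w.length > 3 && d.toLowerCase().includes(w))
  );
}

// Step 3: build a grounded prompt from the retrieved context.
function buildPrompt(context, question) {
  return `Answer using only this context:\n${context.join("\n")}\n\nQuestion: ${question}`;
}

// Steps 1 and 4: accept a question, generate, and return the response.
function answer(docs, question, model) {
  const context = retrieve(docs, question);
  return model(buildPrompt(context, question));
}

// Stub standing in for a real GPT call.
const fakeModel = (prompt) => `LLM response to: ${prompt.slice(0, 40)}...`;

const docs = ["Tokyo is the capital of Japan.", "Madrid is the capital of Spain."];
console.log(answer(docs, "What is the capital of Japan?", fakeModel));
```

In the Pipedream workflow, the trigger, the Airtable block, and the OpenAI block each take over one of these roles.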
For the purposes of testing and "seeing it in action," accept the proposed course of action to obtain the three blocks shown in the image above.
Since this is not the core Pipedream editor UI, switch to it by clicking "Open in Pipedream" in the upper right corner. A new tab will open with the Pipedream workflow editor.
In the trigger block, a URL is automatically generated, with a syntax like this one I received: https://eoupscprutt37xx.m.pipedream.net. Click on the trigger and, in the panel that opens on the right side, make sure the configuration options are set to "Full HTTP request" and "Return a custom response from your workflow."
The second block (the Airtable action) may need a little more work. First, connect to your Airtable base. If you are working in the same browser, this may be straightforward: sign in to Airtable from the pop-up window that appears after you click the "Connect account" button.


Pipedream workflow editor: connecting to Airtable
Here is the tricky part (and the reason for deliberately letting the AI agent build a skeleton workflow): "List Records" may not be the action you see in the second block of your workflow. If so, remove it, add a new block in its place, select "Airtable," and choose "List Records." Then connect to your table and test the connection to make sure it works.
This is what a successful connection looks like:


Pipedream workflow editor: checking the connection to Airtable
Finally, set up and configure access to OpenAI's GPT models. Keep your API key at hand. If the third block in your workflow reads "Generate RAG Response" but is not an OpenAI action, delete the block and add a new OpenAI block in its place.
Start by establishing the OpenAI connection using your API key:


Establishing the OpenAI connection
The user's question must be set as {{ steps.trigger.event.body.test }} and the supporting knowledge records (your RAG "documents" retrieved from Airtable) must be set as {{ steps.list_records.$return_value }}.
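To make those two template expressions concrete, here is an assumed illustration of the objects they resolve to inside Pipedream's step context at run time. The webhook body field is named test here because that is what the trigger reference above expects:

```javascript
// Assumed run-time step context (illustrative values only).
const steps = {
  trigger: {
    event: { body: { test: "What is the capital of Japan?" } },
  },
  list_records: {
    $return_value: [
      { id: "rec1", fields: { Content: "Tokyo is the capital of Japan." } },
    ],
  },
};

// {{ steps.trigger.event.body.test }} resolves to the user's question:
const question = steps.trigger.event.body.test;

// {{ steps.list_records.$return_value }} resolves to the Airtable records:
const records = steps.list_records.$return_value;

console.log(question, records.length);
```
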
You can leave the rest of the settings as defaults and test, but you may run into parsing errors at this point, which would force you to jump back into String and fix the step with the AI agent's help. Alternatively, you can directly copy and paste the following into the code body of the OpenAI block:
import openai from "@pipedream/openai"

export default defineComponent({
  name: "Generate RAG Response",
  description: "Generate a response using OpenAI based on user question and Airtable knowledge base content",
  type: "action",
  props: {
    openai,
    model: {
      propDefinition: [
        openai,
        "chatCompletionModelId",
      ],
    },
    question: {
      type: "string",
      label: "User Question",
      description: "The question from the webhook trigger",
      default: "{{ steps.trigger.event.body.test }}",
    },
    knowledgeBaseRecords: {
      type: "any",
      label: "Knowledge Base Records",
      description: "The Airtable records containing the knowledge base content",
      default: "{{ steps.list_records.$return_value }}",
    },
  },
  async run({ $ }) {
    // Extract user question
    const userQuestion = this.question;
    if (!userQuestion) {
      throw new Error("No question provided from the trigger");
    }

    // Process Airtable records to extract content
    const records = this.knowledgeBaseRecords;
    let knowledgeBaseContent = "";
    if (records && Array.isArray(records)) {
      knowledgeBaseContent = records
        .map((record) => {
          // Extract content from fields.Content
          const content = record.fields?.Content;
          return content ? content.trim() : "";
        })
        .filter((content) => content.length > 0) // Remove empty content
        .join("\n\n---\n\n"); // Separate different knowledge base entries
    }
    if (!knowledgeBaseContent) {
      throw new Error("No content found in knowledge base records");
    }

    // Create system prompt with knowledge base context
    const systemPrompt = `You are a helpful assistant that answers questions based on the provided knowledge base. Use only the information from the knowledge base below to answer questions. If the information is not available in the knowledge base, please say so.

Knowledge Base:
${knowledgeBaseContent}

Instructions:
- Answer based only on the provided knowledge base content
- Be accurate and concise
- If the answer is not in the knowledge base, clearly state that the information is not available
- Cite relevant parts of the knowledge base when possible`;

    // Prepare messages for OpenAI
    const messages = [
      {
        role: "system",
        content: systemPrompt,
      },
      {
        role: "user",
        content: userQuestion,
      },
    ];

    // Call OpenAI chat completion
    const response = await this.openai.createChatCompletion({
      $,
      data: {
        model: this.model,
        messages: messages,
        temperature: 0.7,
        max_tokens: 1000,
      },
    });

    const generatedResponse = response.generated_message?.content;
    if (!generatedResponse) {
      throw new Error("Failed to generate response from OpenAI");
    }

    // Export summary for user feedback
    $.export("$summary", `Generated RAG response for question: "${userQuestion.substring(0, 50)}${userQuestion.length > 50 ? '...' : ''}"`);

    // Return the generated response
    return {
      question: userQuestion,
      response: generatedResponse,
      model_used: this.model,
      knowledge_base_entries: records ? records.length : 0,
      full_openai_response: response,
    };
  },
})
If there are no errors or warnings, you should be ready to deploy and test. Deploy the workflow first, then test it by sending a user question, for instance from a newly opened browser tab:


Testing the deployed workflow by POSTing a quick query: what is the capital of Japan
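One way to send that test request is a small Node script. The webhook URL below is the example one generated for my trigger; yours will differ. Only the buildRequest helper runs here; the actual fetch call is left commented out so you can paste in your own URL first:

```javascript
// Example webhook URL from the article; replace with your own trigger's URL.
const WEBHOOK_URL = "https://eoupscprutt37xx.m.pipedream.net";

// The workflow reads the question from the "test" field of the JSON body,
// matching the {{ steps.trigger.event.body.test }} reference in the steps.
function buildRequest(question) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ test: question }),
  };
}

// Uncomment once your workflow is deployed:
// fetch(WEBHOOK_URL, buildRequest("What is the capital of Japan?"))
//   .then((res) => res.json())
//   .then((data) => console.log(data));

console.log(buildRequest("What is the capital of Japan?").body);
```
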
If the request is handled and everything works properly, scroll down to see the answer returned by the GPT model in the last step of the workflow:


The response of the GPT model
Well done! This answer is grounded in the Airtable data, so we now have a simple prototype RAG application, built with barely any code and orchestrated with Pipedream.
Wrapping Up
This article has shown how to build, with minimal code, a workflow orchestration of a prototype RAG application that uses an Airtable text database as the knowledge base for GPT models' responses. Pipedream lets you describe and design such workflows by hand or with the help of its conversational AI agent. In the author's experience, both approaches showed their strengths and weaknesses.
Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning, and LLMs. He trains and guides others in harnessing AI in the real world.



