A SQL Assistant You Can Trust, with Human-in-the-Loop Checkpoints & LLM Cost Management

Thinking about building your own AI agents? Are you frustrated by all the buzzwords around agents? You are not alone; I've been there. There are so many tools available that even figuring out which one to choose can feel like a project in itself. On top of that, there is uncertainty around cost and infrastructure: Will I burn through too many tokens? How and where can I deploy my solution?
For a while, I hesitated to build anything on my own. I needed to understand the basics first, see a few examples of how things work, and then try something to bring those ideas to life. After a lot of research, I finally came across CrewAI, and it turned out to be a great starting point. DeepLearning.AI offers two good courses on it: Multi AI Agent Systems with crewAI, and Practical Multi AI Agents and Advanced Use Cases with crewAI. In these courses, the instructor explains everything you need to know to get started with AI agents, and there are more than ten use cases with code provided that serve as a great first reference.
Reading about existing work is not enough, though. If you don't apply what you learn, you will forget the fundamentals later. Rerunning the notebooks from the course doesn't count as "applying it" either. I had to build something of my own and actually use it. I decided to build an agent crew for the use case most relevant to me. As a data analyst and engineer, I work mostly with Python and SQL, so I thought: how great would it be if I could build an assistant that generates SQL queries from natural language? I am aware that there are many out-of-the-box solutions available on the market; I'm not trying to reinvent the wheel here. With this POC, I wanted to learn how such systems are built and what their limitations might be. What I describe here is what it takes to build such an assistant.
In this post, I'll walk through how I used CrewAI to create a multi-agent SQL assistant that lets users query a SQLite database using natural language. To have more control over the entire process, I added a human-in-the-loop checkpoint, and I display the LLM usage cost for every question. When a query is generated by the assistant, the user has three options: accept and continue if the query looks good, ask the assistant to try again, or abort the process if it goes off the rails. Having this checkpoint makes a big difference: it gives a lot of control to the user, avoids running bad queries, and helps keep LLM costs down.
You can find the full source code here. Below is the complete project structure:
SQL Assistant Crew Project Structure
===================================
.
├── app.py (Streamlit UI)
├── main.py (terminal)
├── crew_setup.py
├── config
│ ├── agents.yaml
│ └── tasks.yaml
├── data
│ └── sample_db.sqlite
├── utils
│ ├── db_simulator.py
│ └── helper.py

Building the Agents (My CrewAI Crew)
For my SQL assistant, I needed at least three core agents to handle the whole process:
- Query Generator Agent – converts natural-language questions into SQL queries, using the database schema as context.
- Query Reviewer Agent – takes the SQL query produced by the generator agent and optimizes it for accuracy and efficiency.
- Compliance Checker Agent – checks the query for PII exposure and issues a verdict on whether the query is compliant or not.
Every agent must have three basic attributes: a role (what the agent is supposed to be), a goal (the agent's objective), and a backstory (the agent's personality, which guides its behavior). I enabled verbose=True to observe the agents' internal process. I used openai/gpt-4o-mini as the underlying language model for all my agents. After a lot of trial and error, I set temperature=0.2 to minimize agent hallucinations. A low temperature makes the model more deterministic and produces predictable results (such as SQL queries, in my case). There are many other parameters available to tune: max_tokens (limits the response length), top_p (nucleus sampling), allow_delegation (lets an agent hand work off to other agents), and so on. If you use another LLM, you can simply specify that model name here. You can assign the same LLM to all agents, or different ones depending on your needs.
Below is the YAML file with the agent definitions:
query_generator_agent:
  role: Senior Data Analyst
  goal: Translate natural language requests into accurate and efficient SQL queries
  backstory: >
    You are an experienced analyst who knows SQL best practices. You work with stakeholders to gather requirements
    and turn their questions into clear, performant queries. You prefer readable SQL with appropriate filters and joins.
  allow_delegation: False
  verbose: True
  model: openai/gpt-4o-mini
  temperature: 0.2

query_reviewer_agent:
  role: SQL Code Reviewer
  goal: Critically evaluate SQL for correctness, performance, and clarity
  backstory: >
    You are a meticulous reviewer of SQL code. You identify inefficiencies, bad practices, and logical errors, and
    provide suggestions to improve the query's performance and readability.
  allow_delegation: False
  verbose: True
  model: openai/gpt-4o-mini
  temperature: 0.2

compliance_checker_agent:
  role: Data Privacy and Governance Officer
  goal: Ensure SQL queries follow data compliance rules and avoid PII exposure
  backstory: >
    You are responsible for ensuring queries do not leak or expose personally identifiable information (PII) or
    violate company policies. You flag any unsafe or non-compliant practices.
  allow_delegation: False
  verbose: True
  model: openai/gpt-4o-mini
  temperature: 0.2
Once you have created your agents, the next step is to define the tasks they should perform. Every task should have a clear description of what the agent must do. It is highly recommended to also set the expected_output parameter to shape the final LLM response. It is a way of telling the LLM directly what kind of response you expect: a document, a number, a query, or even an article. The description should be as detailed and concrete as possible. Vague definitions will only lead to unclear or outright wrong results; I had to revise the task definitions repeatedly while evaluating the quality of the answers. One of my favorite features is the ability to inject dynamic inputs into the task description using curly braces ({}). These placeholders can hold user input, schema descriptions, or even the output of previous agents, all of which helps the LLM produce relevant results.
query_task:
  description: |
    You are an expert SQL assistant. Your job is to translate user requests into SQL queries using ONLY the tables and columns listed below.
    SCHEMA:
    {db_schema}
    USER REQUEST:
    {user_input}
    IMPORTANT:
    - First, list which tables and columns from the schema you will use to answer the request.
    - Then, write the SQL query.
    - Only use the tables and columns from the schema above.
    - If the request cannot be satisfied with the schema, return a SQL comment (starting with --) explaining why.
    - Do NOT invent tables or columns.
    - Make sure the query matches the user's intent as closely as possible.
  expected_output: First, a list of tables and columns to use. Then, a syntactically correct SQL query using appropriate filters, joins, and groupings.

review_task:
  description: |
    Review the following SQL query for correctness, performance, and readability: {sql_query} and verify that the query fits the schema: {db_schema}
    Ensure that only tables and columns from the provided schema are used.
    IMPORTANT:
    - First, only review the SQL query provided for correctness, performance, or readability
    - Do NOT invent new tables or columns.
    - If the Query is already correct, return it unchanged.
    - If the Query is not correct and cannot be fixed, return a SQL comment (starting with --) explaining why.
  expected_output: An optimized or verified SQL query

compliance_task:
  description: >
    Review the following SQL query for compliance violations, including PII access, unsafe usage, or policy violations.
    List any issues found, or state "No issues found" if the query is compliant.
    SQL Query: {reviewed_sqlquery}
  expected_output: >
    A markdown-formatted compliance report listing any flagged issues, or stating that the query is compliant. Include a clear verdict at the top (e.g., "Compliant" or "Issues found")
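Under the hood, these `{}` placeholders behave like Python string formatting: the values you pass to a crew's kickoff call are substituted into the task description before it reaches the LLM. A minimal, framework-free illustration of the idea (plain Python, with a made-up two-line schema):

```python
# Simplified sketch of how placeholder injection works in a task description.
template = (
    "SCHEMA:\n{db_schema}\n"
    "USER REQUEST:\n{user_input}"
)

filled = template.format(
    db_schema="- products: product_id, product_name, price",
    user_input="What is the average product price?",
)
print(filled)
```

CrewAI does the equivalent substitution for every `{...}` placeholder found in the YAML description when you call `kickoff(inputs={...})`.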
It is good practice to keep agent and task definitions in separate YAML files. If you ever want to update the agents or tasks, you only need to change the YAML files, not the code. In the crew_setup.py file, everything comes together. I read and load the agent and task configurations from their YAML files. I also defined the expected outputs using Pydantic models, to give them structure and to validate what the LLM returns. Then I assign the agents to their respective tasks and assemble my crews. There are many ways to organize your crews depending on the use case: a single crew of agents can execute its tasks sequentially or in parallel, or you can create multiple crews, each responsible for a specific part of the workflow. For my use case, I chose to create multiple crews so that I could place the human-in-the-loop checkpoint between them and keep more control.
from crewai import Agent, Task, Crew
from pydantic import BaseModel, Field
import yaml

# Define file paths for YAML configurations
files = {
    'agents': 'config/agents.yaml',
    'tasks': 'config/tasks.yaml',
}

# Load configurations from YAML files
configs = {}
for config_type, file_path in files.items():
    with open(file_path, 'r') as file:
        configs[config_type] = yaml.safe_load(file)

# Assign loaded configurations to specific variables
agents_config = configs['agents']
tasks_config = configs['tasks']

class SQLQuery(BaseModel):
    sqlquery: str = Field(..., description="The raw sql query for the user input")

class ReviewedSQLQuery(BaseModel):
    reviewed_sqlquery: str = Field(..., description="The reviewed sql query for the raw sql query")

class ComplianceReport(BaseModel):
    report: str = Field(..., description="A markdown-formatted compliance report with a verdict and any flagged issues.")

# Creating Agents
query_generator_agent = Agent(
    config=agents_config['query_generator_agent']
)

query_reviewer_agent = Agent(
    config=agents_config['query_reviewer_agent']
)

compliance_checker_agent = Agent(
    config=agents_config['compliance_checker_agent']
)

# Creating Tasks
query_task = Task(
    config=tasks_config['query_task'],
    agent=query_generator_agent,
    output_pydantic=SQLQuery
)

review_task = Task(
    config=tasks_config['review_task'],
    agent=query_reviewer_agent,
    output_pydantic=ReviewedSQLQuery
)

compliance_task = Task(
    config=tasks_config['compliance_task'],
    agent=compliance_checker_agent,
    context=[review_task],
    output_pydantic=ComplianceReport
)

# Creating Crew objects for import
sql_generator_crew = Crew(
    agents=[query_generator_agent],
    tasks=[query_task],
    verbose=True
)

sql_reviewer_crew = Crew(
    agents=[query_reviewer_agent],
    tasks=[review_task],
    verbose=True
)

sql_compliance_crew = Crew(
    agents=[compliance_checker_agent],
    tasks=[compliance_task],
    verbose=True
)
For my POC, I created a local SQLite database with some mock data to simulate real database interactions. I extract the database schema, which contains all table and column names in the system. This schema is later passed to the LLM as context, along with the original user question, to help it generate SQL queries that use only the tables and columns from the given schema. Once the generator agent produces a SQL query, it is sent to the reviewer agent, followed by the compliance checker agent. Only after these reviews is the reviewed query allowed to run against the database, with the final results shown to the user in the Streamlit interface. By adding these verification and safety checks, I make sure that only high-quality queries hit the database, which reduces unnecessary token usage and, in the long run, costs.
import sqlite3
import pandas as pd

DB_PATH = "data/sample_db.sqlite"

def setup_sample_db():
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    # Drop tables if they exist (for repeatability in dev)
    cursor.execute("DROP TABLE IF EXISTS order_items;")
    cursor.execute("DROP TABLE IF EXISTS orders;")
    cursor.execute("DROP TABLE IF EXISTS products;")
    cursor.execute("DROP TABLE IF EXISTS customers;")
    cursor.execute("DROP TABLE IF EXISTS employees;")
    cursor.execute("DROP TABLE IF EXISTS departments;")
    # Create richer example tables
    cursor.execute("""
        CREATE TABLE products (
            product_id INTEGER PRIMARY KEY,
            product_name TEXT,
            category TEXT,
            price REAL
        );
    """)
    cursor.execute("""
        CREATE TABLE customers (
            customer_id INTEGER PRIMARY KEY,
            name TEXT,
            email TEXT,
            country TEXT,
            signup_date TEXT
        );
    """)
    cursor.execute("""
        CREATE TABLE orders (
            order_id INTEGER PRIMARY KEY,
            customer_id INTEGER,
            order_date TEXT,
            total_amount REAL,
            FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
        );
    """)
    cursor.execute("""
        CREATE TABLE order_items (
            order_item_id INTEGER PRIMARY KEY,
            order_id INTEGER,
            product_id INTEGER,
            quantity INTEGER,
            price REAL,
            FOREIGN KEY(order_id) REFERENCES orders(order_id),
            FOREIGN KEY(product_id) REFERENCES products(product_id)
        );
    """)
    cursor.execute("""
        CREATE TABLE employees (
            employee_id INTEGER PRIMARY KEY,
            name TEXT,
            department_id INTEGER,
            hire_date TEXT
        );
    """)
    cursor.execute("""
        CREATE TABLE departments (
            department_id INTEGER PRIMARY KEY,
            department_name TEXT
        );
    """)
    # Populate with mock data
    cursor.executemany("INSERT INTO products VALUES (?, ?, ?, ?);", [
        (1, 'Widget A', 'Widgets', 25.0),
        (2, 'Widget B', 'Widgets', 30.0),
        (3, 'Gadget X', 'Gadgets', 45.0),
        (4, 'Gadget Y', 'Gadgets', 50.0),
        (5, 'Thingamajig', 'Tools', 15.0)
    ])
    cursor.executemany("INSERT INTO customers VALUES (?, ?, ?, ?, ?);", [
        (1, 'Alice', '[email protected]', 'USA', '2023-10-01'),
        (2, 'Bob', '[email protected]', 'Canada', '2023-11-15'),
        (3, 'Charlie', '[email protected]', 'USA', '2024-01-10'),
        (4, 'Diana', '[email protected]', 'UK', '2024-02-20')
    ])
    cursor.executemany("INSERT INTO orders VALUES (?, ?, ?, ?);", [
        (1, 1, '2024-04-03', 100.0),
        (2, 2, '2024-04-12', 150.0),
        (3, 1, '2024-04-15', 120.0),
        (4, 3, '2024-04-20', 180.0),
        (5, 4, '2024-04-28', 170.0)
    ])
    cursor.executemany("INSERT INTO order_items VALUES (?, ?, ?, ?, ?);", [
        (1, 1, 1, 2, 25.0),
        (2, 1, 2, 1, 30.0),
        (3, 2, 3, 2, 45.0),
        (4, 3, 4, 1, 50.0),
        (5, 4, 5, 3, 15.0),
        (6, 5, 1, 1, 25.0)
    ])
    cursor.executemany("INSERT INTO employees VALUES (?, ?, ?, ?);", [
        (1, 'Eve', 1, '2022-01-15'),
        (2, 'Frank', 2, '2021-07-23'),
        (3, 'Grace', 1, '2023-03-10')
    ])
    cursor.executemany("INSERT INTO departments VALUES (?, ?);", [
        (1, 'Sales'),
        (2, 'Engineering'),
        (3, 'HR')
    ])
    conn.commit()
    conn.close()

def run_query(query):
    try:
        conn = sqlite3.connect(DB_PATH)
        df = pd.read_sql_query(query, conn)
        conn.close()
        return df.head().to_string(index=False)
    except Exception as e:
        return f"Query failed: {e}"

def get_db_schema(db_path):
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    schema = ""
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    tables = cursor.fetchall()
    for table_name, in tables:
        cursor.execute(f"SELECT sql FROM sqlite_master WHERE type='table' AND name='{table_name}';")
        create_stmt = cursor.fetchone()[0]
        schema += create_stmt + ";\n\n"
    conn.close()
    return schema

def get_structured_schema(db_path):
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    tables = cursor.fetchall()
    lines = ["Available tables and columns:"]
    for table_name, in tables:
        cursor.execute(f"PRAGMA table_info({table_name})")
        columns = [row[1] for row in cursor.fetchall()]
        lines.append(f"- {table_name}: {', '.join(columns)}")
    conn.close()
    return '\n'.join(lines)

if __name__ == "__main__":
    setup_sample_db()
    print("Sample database created.")
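The `get_structured_schema` helper above relies on SQLite's built-in `PRAGMA table_info` statement to list a table's columns. Here is a minimal, self-contained illustration against an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (product_id INTEGER PRIMARY KEY, product_name TEXT, price REAL)")

# Each row is (cid, name, type, notnull, dflt_value, pk); the column name is index 1.
rows = conn.execute("PRAGMA table_info(products)").fetchall()
columns = [row[1] for row in rows]
print(columns)  # → ['product_id', 'product_name', 'price']
conn.close()
```

This compact "table: columns" listing is much cheaper to pass as LLM context than the raw CREATE TABLE statements.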
LLMs are billed in tokens, which are small chunks of text. Every LLM provider has a pricing model based on the number of input and output tokens, usually charged per one million tokens. For the full price list of all OpenAI models, refer to their official pricing page here. For gpt-4o-mini, input tokens cost $0.15/M while output tokens cost $0.60/M. To track the total LLM cost of the application, I created the functions below in helper.py, which compute the full cost based on token usage.
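As a quick sanity check on those rates, consider a hypothetical request that consumes 1,200 input tokens and 300 output tokens (made-up numbers for illustration):

```python
# gpt-4o-mini rates at the time of writing; check the official pricing page.
INPUT_COST_PER_TOKEN = 0.15 / 1_000_000   # $0.15 per 1M input tokens
OUTPUT_COST_PER_TOKEN = 0.60 / 1_000_000  # $0.60 per 1M output tokens

prompt_tokens, completion_tokens = 1200, 300
cost = prompt_tokens * INPUT_COST_PER_TOKEN + completion_tokens * OUTPUT_COST_PER_TOKEN
print(f"${cost:.6f}")  # → $0.000360
```

Individual requests are fractions of a cent, but the per-step tracking below shows how quickly they add up across a multi-agent pipeline.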
import re

def extract_token_counts(token_usage_str):
    prompt = completion = 0
    prompt_match = re.search(r'prompt_tokens=(\d+)', token_usage_str)
    completion_match = re.search(r'completion_tokens=(\d+)', token_usage_str)
    if prompt_match:
        prompt = int(prompt_match.group(1))
    if completion_match:
        completion = int(completion_match.group(1))
    return prompt, completion

def calculate_gpt4o_mini_cost(prompt_tokens, completion_tokens):
    input_cost = (prompt_tokens / 1000) * 0.00015
    output_cost = (completion_tokens / 1000) * 0.0006
    return input_cost + output_cost
The app.py file creates an interactive Streamlit app that lets the user query the SQLite database using natural language. Behind the scenes, it wires up my CrewAI crews. After the first agent generates a SQL query, it is displayed to the user in the app. The user then has three options:
- Confirm and Review – if the user accepts the generated query and wants to continue,
- Try Again – if the user is not satisfied with the query and wants the agent to generate a new one, and
- Abort – if the user wants to stop the process here.
Alongside these options, the LLM cost incurred so far is displayed on screen. When the user clicks the "Confirm and Review" button, the SQL query passes through the next two stages: the reviewer agent optimizes it for accuracy and efficiency, followed by the compliance agent's approval. If the query is compliant, it is executed against the SQLite database. The final results and the combined LLM cost incurred over the entire process are displayed in the app interface. The user not only stays in control throughout the process but also stays informed.
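Stripped of the Streamlit plumbing, the checkpoint logic boils down to a small decision loop. The sketch below is a framework-free simplification of that flow (all function parameters are stand-ins for the crews and UI, not the actual app code):

```python
def assistant_flow(generate, ask_user, review, check_compliance, execute):
    """Hypothetical core of the human-in-the-loop flow."""
    while True:
        sql = generate()
        decision = ask_user(sql)      # "confirm" | "retry" | "abort"
        if decision == "abort":
            return None
        if decision == "retry":
            continue                  # regenerate and ask again
        reviewed = review(sql)
        report = check_compliance(reviewed)
        if "compliant" in report.lower():
            return execute(reviewed)
        return report                 # surface the compliance issues instead

# Stubbed run: the user retries once, then confirms.
answers = iter(["retry", "confirm"])
result = assistant_flow(
    generate=lambda: "SELECT 1;",
    ask_user=lambda sql: next(answers),
    review=lambda sql: sql,
    check_compliance=lambda sql: "Compliant",
    execute=lambda sql: "1",
)
print(result)  # → 1
```

The Streamlit version below implements the same states, but spread across reruns via `st.session_state` flags instead of a single loop.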
import streamlit as st
from crew_setup import sql_generator_crew, sql_reviewer_crew, sql_compliance_crew
from utils.db_simulator import get_structured_schema, run_query
import sqlparse
from utils.helper import extract_token_counts, calculate_gpt4o_mini_cost

DB_PATH = "data/sample_db.sqlite"

# Cache the schema, but allow clearing it
@st.cache_data(show_spinner=False)
def load_schema():
    return get_structured_schema(DB_PATH)

st.title("SQL Assistant Crew")

st.markdown("""
Welcome to the SQL Assistant Crew!
This app lets you interact with your database using natural language. Simply type your data question or request (for example, "Show me the top 5 products by total revenue for April 2024"), and our multi-agent system will:
1. **Generate** a relevant SQL query for your request,
2. **Review** and optimize the query for correctness and performance,
3. **Check** the query for compliance and data safety,
4. **Execute** the query (if compliant) and display the results.
You can also refresh the database schema if your data changes.
This tool is perfect for business users, analysts, and anyone who wants to query data without writing SQL by hand!
""")

st.write("The schema of the database is saved. If you believe the schema is incorrect, you can refresh it by clicking the button below.")

# Add a refresh button
if st.button("Refresh Schema"):
    load_schema.clear()  # Clear the cache so next call reloads from DB
    st.success("Schema refreshed from database.")

# Always get the (possibly cached) schema
db_schema = load_schema()

with st.expander("Show database schema"):
    st.code(db_schema)

st.write("Enter your request in natural language and let the crew generate, review, and check compliance for the SQL query.")

if "generated_sql" not in st.session_state:
    st.session_state["generated_sql"] = None
if "awaiting_confirmation" not in st.session_state:
    st.session_state["awaiting_confirmation"] = False
if "reviewed_sql" not in st.session_state:
    st.session_state["reviewed_sql"] = None
if "compliance_report" not in st.session_state:
    st.session_state["compliance_report"] = None
if "query_result" not in st.session_state:
    st.session_state["query_result"] = None
if "regenerate_sql" not in st.session_state:
    st.session_state["regenerate_sql"] = False
if "llm_cost" not in st.session_state:
    st.session_state["llm_cost"] = 0.0

user_prompt = st.text_input("Enter your request (e.g., 'Show me the top 5 products by total revenue for April 2024'):")

# Automatically regenerate SQL if 'Try Again' was clicked
if st.session_state.get("regenerate_sql"):
    if user_prompt.strip():
        try:
            gen_output = sql_generator_crew.kickoff(inputs={"user_input": user_prompt, "db_schema": db_schema})
            raw_sql = gen_output.pydantic.sqlquery
            st.session_state["generated_sql"] = raw_sql
            st.session_state["awaiting_confirmation"] = True
            st.session_state["reviewed_sql"] = None
            st.session_state["compliance_report"] = None
            st.session_state["query_result"] = None
            # LLM cost tracking
            token_usage_str = str(gen_output.token_usage)
            prompt_tokens, completion_tokens = extract_token_counts(token_usage_str)
            cost = calculate_gpt4o_mini_cost(prompt_tokens, completion_tokens)
            st.session_state["llm_cost"] += cost
            st.info(f"Your LLM cost so far: ${st.session_state['llm_cost']:.6f}")
        except Exception as e:
            st.error(f"An error occurred: {e}")
    else:
        st.warning("Please enter a prompt.")
    st.session_state["regenerate_sql"] = False

# Step 1: Generate SQL
if st.button("Generate SQL"):
    if user_prompt.strip():
        try:
            gen_output = sql_generator_crew.kickoff(inputs={"user_input": user_prompt, "db_schema": db_schema})
            # st.write(gen_output)  # Optionally keep for debugging
            raw_sql = gen_output.pydantic.sqlquery
            st.session_state["generated_sql"] = raw_sql
            st.session_state["awaiting_confirmation"] = True
            st.session_state["reviewed_sql"] = None
            st.session_state["compliance_report"] = None
            st.session_state["query_result"] = None
            # LLM cost tracking
            token_usage_str = str(gen_output.token_usage)
            prompt_tokens, completion_tokens = extract_token_counts(token_usage_str)
            cost = calculate_gpt4o_mini_cost(prompt_tokens, completion_tokens)
            st.session_state["llm_cost"] += cost
        except Exception as e:
            st.error(f"An error occurred: {e}")
    else:
        st.warning("Please enter a prompt.")

# Only show prompt and generated SQL when awaiting confirmation
if st.session_state.get("awaiting_confirmation") and st.session_state.get("generated_sql"):
    st.subheader("Generated SQL")
    formatted_generated_sql = sqlparse.format(st.session_state["generated_sql"], reindent=True, keyword_case='upper')
    st.code(formatted_generated_sql, language="sql")
    st.info(f"Your LLM cost so far: ${st.session_state['llm_cost']:.6f}")
    col1, col2, col3 = st.columns(3)
    with col1:
        if st.button("Confirm and Review"):
            try:
                # Step 2: Review SQL
                review_output = sql_reviewer_crew.kickoff(inputs={"sql_query": st.session_state["generated_sql"], "db_schema": db_schema})
                reviewed_sql = review_output.pydantic.reviewed_sqlquery
                st.session_state["reviewed_sql"] = reviewed_sql
                # LLM cost tracking for reviewer
                token_usage_str = str(review_output.token_usage)
                prompt_tokens, completion_tokens = extract_token_counts(token_usage_str)
                cost = calculate_gpt4o_mini_cost(prompt_tokens, completion_tokens)
                st.session_state["llm_cost"] += cost
                # Step 3: Compliance Check
                compliance_output = sql_compliance_crew.kickoff(inputs={"reviewed_sqlquery": reviewed_sql})
                compliance_report = compliance_output.pydantic.report
                # LLM cost tracking for compliance
                token_usage_str = str(compliance_output.token_usage)
                prompt_tokens, completion_tokens = extract_token_counts(token_usage_str)
                cost = calculate_gpt4o_mini_cost(prompt_tokens, completion_tokens)
                st.session_state["llm_cost"] += cost
                # Remove duplicate header if present
                lines = compliance_report.splitlines()
                if lines and lines[0].strip().lower().startswith("# compliance report"):
                    compliance_report = "\n".join(lines[1:]).lstrip()
                st.session_state["compliance_report"] = compliance_report
                # Only execute if compliant
                if "compliant" in compliance_report.lower():
                    result = run_query(reviewed_sql)
                    st.session_state["query_result"] = result
                else:
                    st.session_state["query_result"] = None
                st.session_state["awaiting_confirmation"] = False
                st.info(f"Your LLM cost so far: ${st.session_state['llm_cost']:.6f}")
                st.rerun()
            except Exception as e:
                st.error(f"An error occurred: {e}")
    with col2:
        if st.button("Try Again"):
            st.session_state["generated_sql"] = None
            st.session_state["awaiting_confirmation"] = False
            st.session_state["reviewed_sql"] = None
            st.session_state["compliance_report"] = None
            st.session_state["query_result"] = None
            st.session_state["regenerate_sql"] = True
            st.rerun()
    with col3:
        if st.button("Abort"):
            st.session_state.clear()
            st.rerun()

# After review, only show reviewed SQL, compliance, and result
elif st.session_state.get("reviewed_sql"):
    st.subheader("Reviewed SQL")
    formatted_sql = sqlparse.format(st.session_state["reviewed_sql"], reindent=True, keyword_case='upper')
    st.code(formatted_sql, language="sql")
    st.subheader("Compliance Report")
    st.markdown(st.session_state["compliance_report"])
    if st.session_state.get("query_result"):
        st.subheader("Query Result")
        st.code(st.session_state["query_result"])

# LLM cost display at the bottom
st.info(f"Your LLM cost so far: ${st.session_state['llm_cost']:.6f}")
Here is a quick demo of the app in action. I asked it to show the top products by total sales. The assistant generated a SQL query, and I clicked "Confirm and Review". The query was already well written, so the reviewer agent returned the same query without changes. Next, the compliance agent assessed the query and confirmed it was safe to run: no risky operations or sensitive information. After passing both reviews, the query was executed against the sample database and the results were displayed. For this entire flow, the LLM usage cost was $0.001349.

Here's another example, where I asked the app something the database schema has no information about. As a result, the assistant did not generate a query and instead returned a SQL comment explaining why. Up to this point, the LLM cost was $0.00853. Since there was no query to review or execute, I simply clicked "Abort" to end the process gracefully.

CrewAI is great for building multi-agent systems. Paired with Streamlit, you can easily put a simple interactive UI on top of it to work with the system. In this POC, I explored how to add a human-in-the-loop checkpoint to keep control and transparency throughout the workflow. I also learned to track token usage at every step, which helps the user keep an eye on costs during the process. With the help of the compliance agent, I enforced some basic safety practices by blocking risky queries or PII exposure. I tuned the model temperature and iteratively refined the task descriptions to improve output quality and reduce hallucinations. Is it perfect? No. There are still times when the system hallucinates. And if this were scaled up, LLM cost could become a major concern. In real life, databases are complex, and so their schemas will be large too. I would have to explore RAG (Retrieval-Augmented Generation) to feed only the relevant schema snippets to the LLM, fine-tuning the agents, and caching to avoid redundant API calls.
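Caching, for example, could be as simple as memoizing responses keyed on a normalized prompt, so a repeated identical question never hits the API twice. A naive sketch (the `call_llm` parameter is an invented stand-in for a real API call, not production code):

```python
import hashlib

_response_cache = {}

def cached_llm_call(prompt, call_llm):
    """Return a cached response when the same (normalized) prompt repeats."""
    # Normalize whitespace and case so trivially different prompts share a key.
    key = hashlib.sha256(" ".join(prompt.lower().split()).encode()).hexdigest()
    if key not in _response_cache:
        _response_cache[key] = call_llm(prompt)
    return _response_cache[key]

# Stub LLM that counts how often it is actually invoked.
calls = []
def fake_llm(prompt):
    calls.append(prompt)
    return "SELECT COUNT(*) FROM orders;"

cached_llm_call("How many orders are there?", fake_llm)
cached_llm_call("how many  orders are there?", fake_llm)  # normalized cache hit
print(len(calls))  # → 1
```

A real implementation would need an eviction policy and should probably key on the schema version as well, since a schema change can invalidate a previously correct query.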
Final Thoughts
This was a wonderful project that combined the power of LLMs, the convenience of Streamlit, and the orchestration of CrewAI. If you are interested in building smart agents to interact with data, give it a try, or fork the repo and build on it!
Before you go …
Follow me so you don't miss any new posts I write in the future; you will find more of my articles on my profile page. You can also connect with me on LinkedIn or X!



