AI Agents from Zero to Hero – Part 1

Intro
AI Agents are individual programs that can perform tasks, make decisions, and communicate with other agents. Usually, they use a set of Tools to help complete tasks. In GenAI, Agents are designed to use external tools (such as web searches or database queries) when the LLM's own knowledge is insufficient. Unlike a basic chatbot, which generates random text when uncertain, an AI Agent activates Tools to provide accurate, specific answers.
We are getting closer and closer to the concept of Agentic AI: systems that exhibit a higher level of autonomy and decision-making ability, without direct human intervention. While today's AI Agents respond to human inputs, tomorrow's Agentic AIs will proactively solve problems and can adjust their behavior based on the situation.
Today, building an Agent from scratch is becoming as easy as training a logistic regression model was 10 years ago. Back then, Scikit-Learn provided a straightforward library to quickly train machine learning models with just a few lines of code, abstracting away much of the underlying complexity.
In this tutorial, I'm going to show how to build different types of AI Agents, from simple to more advanced systems. I will present useful Python code that can be easily applied to other similar cases (just copy, paste, run), and I will walk through every line of it so that you can replicate this example.
Setup
As I said, anyone can have a custom Agent running locally for free, without GPUs or API keys. The only necessary library is Ollama (pip install ollama==0.4.7), as it allows users to run LLMs locally, without needing cloud-based services, giving more control over data privacy and usage.
First, you need to download Ollama from the website.
Then, from the prompt shell of your laptop, use the command to download the selected LLM. I'm going with Alibaba's Qwen, as it's both smart and lightweight.
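Assuming Qwen 2.5 (the model used throughout this tutorial), the pull command from the shell is:

```shell
# download the model locally (one-time download)
ollama pull qwen2.5
```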
After the download is completed, you can move on to Python and start writing code.
import ollama
llm = "qwen2.5"
Let's test the LLM:
stream = ollama.generate(model=llm,
                         prompt='''what time is it?''',
                         stream=True)
for chunk in stream:
    print(chunk['response'], end='', flush=True)
Obviously, the LLM per se is very limited and it can't do much besides chatting. Therefore, we need to give it the possibility to take action, or in other words, to activate Tools.
One of the most common tools is the ability to search the Internet. In Python, the easiest way to do it is with the famous private browser DuckDuckGo (pip install duckduckgo-search==6.3.5). You can directly use the original library or import the LangChain wrapper (pip install langchain-community==0.3.17).
With Ollama, in order to use a Tool, the function must be described in a dictionary.
from langchain_community.tools import DuckDuckGoSearchResults

def search_web(query: str) -> str:
    return DuckDuckGoSearchResults(backend="news").run(query)

tool_search_web = {'type':'function', 'function':{
  'name': 'search_web',
  'description': 'Search the web',
  'parameters': {'type': 'object',
                 'required': ['query'],
                 'properties': {
                     'query': {'type':'str', 'description':'the topic or subject to search on the web'},
}}}}
## test
search_web(query="nvidia")
Internet searches can be very broad, and I want to give the Agent the option to be more precise. Let's say I'm planning to use this Agent to learn about financial updates, so I can give it a specific tool for that topic, like searching only a finance website instead of the whole web.
def search_yf(query: str) -> str:
    engine = DuckDuckGoSearchResults(backend="news")
    return engine.run(f"site:finance.yahoo.com {query}")
tool_search_yf = {'type':'function', 'function':{
  'name': 'search_yf',
  'description': 'Search for specific financial news',
  'parameters': {'type': 'object',
                 'required': ['query'],
                 'properties': {
                     'query': {'type':'str', 'description':'the financial topic or subject to search'},
}}}}
## test
search_yf(query="nvidia")
A Simple Agent (WebSearch)
In my opinion, a basic Agent should at least be able to choose between one or two Tools and re-elaborate the output of the action to give the user a proper and concise answer.
First, you need to write a prompt to describe the Agent's purpose, the more detailed the better (mine is very generic), and that will be the first message of the chat history with the LLM.
prompt='''You are an assistant with access to tools, you must decide when to use tools to answer user message.'''
messages = [{"role":"system", "content":prompt}]
To keep the chat with the AI alive, I will use a loop that starts with the user's input and then the Agent is invoked to answer (which can be a text from the LLM or the activation of a Tool).
while True:
    ## user input
    try:
        q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
    messages.append( {"role":"user", "content":q} )

    ## model
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_search_web, tool_search_yf],
        messages=messages)
So far, the chat history could look something like this:
If the model wants to use a Tool, the appropriate function needs to be run with the input parameters suggested by the LLM in its response object:
So our code needs to get that information and run the Tool function.
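To make that concrete, here is a mocked response in the shape Ollama returns when the model decides to call a tool (the values are made up for illustration):

```python
# mocked Ollama chat response, shaped like agent_res above (values are illustrative)
agent_res = {
    "message": {
        "content": "",
        "tool_calls": [
            {"function": {"name": "search_web", "arguments": {"query": "nvidia"}}}
        ],
    }
}

# extract the tool name and its input parameters, as the Agent loop does
for tool in agent_res["message"]["tool_calls"]:
    t_name = tool["function"]["name"]
    t_inputs = tool["function"]["arguments"]
    print(t_name, "->", t_inputs)   # search_web -> {'query': 'nvidia'}
```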
    ## response
    dic_tools = {'search_web':search_web, 'search_yf':search_yf}

    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                print(t_output)
                ### final res
                p = f'''Summarize this to answer user question, be as concise as possible: {t_output}'''
                res = ollama.generate(model=llm, prompt=q+". "+p)["response"]
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")

    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]

    print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
    messages.append( {"role":"assistant", "content":res} )
Now, if we run the full code, we can chat with our Agent.

Advanced Agent (Coding)
LLMs know how to code by being exposed to a large corpus of both code and natural language text, where they learn patterns, syntax, and semantics of Programming languages. The model learns the relationships between different parts of the code by predicting the next token in a sequence. In short, LLMs can generate Python code but can’t execute it, Agents can.
I shall prepare a Tool allowing the Agent to execute code. In Python, you can easily create a shell to run code as a string with the native command exec().
import io
import contextlib

def code_exec(code: str) -> str:
    output = io.StringIO()
    with contextlib.redirect_stdout(output):
        try:
            exec(code)
        except Exception as e:
            print(f"Error: {e}")
    return output.getvalue()
tool_code_exec = {'type':'function', 'function':{
  'name': 'code_exec',
  'description': 'execute python code',
  'parameters': {'type': 'object',
                 'required': ['code'],
                 'properties': {
                     'code': {'type':'str', 'description':'code to execute'},
}}}}
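As with the search tools, it's worth a quick test. A minimal check (repeating the code_exec definition so the snippet is self-contained; the test inputs are my own examples):

```python
import io
import contextlib

def code_exec(code: str) -> str:
    # run the code string and capture anything it prints to stdout
    output = io.StringIO()
    with contextlib.redirect_stdout(output):
        try:
            exec(code)
        except Exception as e:
            print(f"Error: {e}")
    return output.getvalue()

## test
print(code_exec("a=1+1; print(a)"))   # prints 2
print(code_exec("1/0"))               # prints Error: division by zero
```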
As before, I will write a prompt, but this time, at the beginning of the chat, I will ask the user to provide a file path.
prompt='''You are an expert data scientist, and you have tools to execute python code.
First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
If you create a plot, ALWAYS add 'plt.show()' at the end.
'''
messages = [{"role":"system", "content":prompt}]
start = True
while True:
    ## user input
    try:
        if start is True:
            path = input('📁 Provide a CSV path >')
            q = "path = "+path
        else:
            q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
    messages.append( {"role":"user", "content":q} )
As coding tasks can be a little tricky for smaller LLMs, I will also add memory reinforcement. By default, during a session, there isn't a true long-term memory. LLMs have access to the chat history, so they can remember information temporarily, and keep track of the context and instructions given earlier in the conversation. However, memory doesn't always work as expected, especially when the LLM is small. Therefore, a good practice is to reinforce the model's memory by adding periodic reminders in the chat history.
prompt='''You are an expert data scientist, and you have tools to execute python code.
First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
If you create a plot, ALWAYS add 'plt.show()' at the end.
'''
messages = [{"role":"system", "content":prompt}]
memory = '''Use the dataframe 'df'.'''
start = True
while True:
    ## user input
    try:
        if start is True:
            path = input('📁 Provide a CSV path >')
            q = "path = "+path
        else:
            q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue

    ## memory
    if start is False:
        q = memory+"\n"+q
    messages.append( {"role":"user", "content":q} )
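The reinforcement itself is plain string concatenation: every message after the first turn gets the reminder prepended. A standalone sketch (the reinforce helper is illustrative, not part of the Agent code):

```python
memory = '''Use the dataframe 'df'.'''

def reinforce(user_msg: str, start: bool) -> str:
    # prepend the reminder to every message after the first turn
    return user_msg if start else memory + "\n" + user_msg

print(reinforce("path = data.csv", start=True))      # first turn: no reminder
print(reinforce("plot the sales column", start=False))
```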
Please note that the default context length in Ollama is 2048 tokens. If your machine can handle it, you can increase it by changing the num_ctx option when the LLM is invoked:
    ## model
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_code_exec],
        options={"num_ctx":2048},
        messages=messages)
In this use case, the Agent's output is mostly code and data, so I don't want the LLM to re-elaborate the responses.
    ## response
    dic_tools = {'code_exec':code_exec}

    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                ### final res
                res = t_output
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")

    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]

    print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
    messages.append( {"role":"assistant", "content":res} )
    start = False
Now, if we run the full code, we can chat with our Agent.
Conclusion
This article has covered the basic steps of building Agents from scratch using only Ollama. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases.
Stay tuned for Part 2, where we will dive deeper into more advanced examples.
The full code for this article: GitHub
I hope you enjoy! Feel free to contact me with questions and feedback or just to share your exciting projects.
👉 Let's connect 👈



