Prompting Strategies for All Kinds of LLM Agents

At the heart of every winning LLM application lies one important skill: prompting (or "prompt engineering"). It is the practice of teaching LLMs to perform tasks by carefully designing the input text.
Prompt engineering appeared with the first text-to-text NLP models (2018). At that time, developers focused mostly on the model side and its features. After the arrival of the large GPT models (2022), we all started working on top of pre-trained tools, so the focus shifted to the input prompt. That is how the discipline of "Prompt Engineering" was born; now (2025) it is the art and science of building agents, as NLP blurs the line between code and natural language.
Different types of prompting create different types of agents. Each approach improves a specific skill: logic, planning, memory, accuracy, and tool integration. Let's go through all of them with a simple example.
## setup
import ollama
llm = "qwen2.5"
## question
q = "What is 30 multiplied by 10?"
Main strategies
1) Regular prompting – just ask a question and get a straight answer. This is also called "Zero-Shot" prompting, precisely because the model is given no previous examples of how to solve the task. This basic approach is used for single-task agents that execute one job without intermediate reasoning, and was typical of the first models.
response = ollama.chat(model=llm, messages=[
{'role':'user', 'content':q}
])
print(response['message']['content'])

2) ReAct (Reason + Act) – a combination of reasoning and acting. The model doesn't just think about the problem, it also takes actions based on its reasoning. It is very effective because the model alternates between reasoning steps and actions, refining its approach iteratively. Basically, it is a loop of Thought → Action → Observation. It is used for more sophisticated activities, such as searching the web and making decisions based on the findings, and generally for building multi-step agents that execute a series of steps and actions to arrive at a final result. They can break complex tasks into smaller, more controllable parts.
Personally, I especially like ReAct agents because I find them very human-like: they "f*ck around and find out" just like us.

prompt = '''
To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Action:', and 'Observation:' sequences.
At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use.
Then in the 'Action:' sequence, you should use one of your tools.
During each intermediate step, you can use 'Observation:' field to save whatever important information you will use as input for the next step.
'''
response = ollama.chat(model=llm, messages=[
{'role':'user', 'content':q+" "+prompt}
])
print(response['message']['content'])
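The prompt above only describes the Thought/Action/Observation loop; in a real ReAct agent some code must also parse the model's chosen action and execute it as a tool call. Here is a minimal sketch of that dispatch step. The `multiply` tool and the `Action: name[args]` format are my own illustrative conventions, not part of any library, and the model's reply is stubbed in so the snippet runs standalone:

```python
import re

# Hypothetical toolbox: names and signatures are illustrative only
tools = {
    "multiply": lambda a, b: a * b,
}

def run_action(model_output):
    # Parse a line like "Action: multiply[30, 10]" from the model's reply
    match = re.search(r"Action:\s*(\w+)\[([^\]]*)\]", model_output)
    if not match:
        return None
    name, raw_args = match.groups()
    args = [float(x) for x in raw_args.split(",")]
    result = tools[name](*args)
    # In a full loop, this string would be appended to the conversation
    # as the next "Observation:" and the model would be called again
    return f"Observation: {result}"

# Stubbed reply, standing in for response['message']['content']
reply = "Thought: I should use the calculator.\nAction: multiply[30, 10]"
print(run_action(reply))  # Observation: 300.0
```

In a complete agent this parse-execute-observe cycle would run inside a loop, feeding each observation back to the model until it emits a final answer.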

3) Chain-of-Thought (CoT) – a reasoning pattern in which the model generates the intermediate process needed to reach the conclusion. The model is pushed to "think aloud" by spelling out the logical steps that lead to the final reply. Basically, it is a plan that precedes the answer. CoT is used for more advanced jobs, such as solving a math problem that may require step-by-step thinking, and generally for building multi-step agents.

prompt = '''Let’s think step by step.'''
response = ollama.chat(model=llm, messages=[
{'role':'user', 'content':q+" "+prompt}
])
print(response['message']['content'])

CoT extensions
Several newer prompting techniques are derived from Chain-of-Thought.
4) Reflexion builds on CoT by adding an iterative self-check or self-correction phase on top of the initial reasoning, where the model reviews and critiques its own outputs (spotting mistakes, identifying gaps, suggesting improvements).
cot_answer = response['message']['content']
response = ollama.chat(model=llm, messages=[
{'role':'user', 'content': f'''Here was your original answer:\n\n{cot_answer}\n\n
Now reflect on whether it was correct or if it was the best approach.
If not, correct your reasoning and answer.'''}
])
print(response['message']['content'])
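The snippet above runs a single reflection pass; Reflexion is usually iterative, repeating the critique until the model approves its own answer or a retry cap is hit. Here is a minimal sketch of that loop. The `critique` function stands in for the `ollama.chat` call and is stubbed (it approves on the second pass) so the control flow can run standalone:

```python
# Stub for the model's review step; a real implementation would send
# `answer` back to the LLM and ask it to verify or correct it
def critique(answer, attempt):
    if attempt >= 2:
        return "CORRECT"
    return f"Improved: {answer}"

def reflexion(initial_answer, max_rounds=3):
    answer = initial_answer
    for attempt in range(1, max_rounds + 1):
        verdict = critique(answer, attempt)
        if verdict == "CORRECT":
            break               # model confirmed its own answer
        answer = verdict        # otherwise keep the corrected version
    return answer

print(reflexion("30 x 10 = 300"))  # Improved: 30 x 10 = 300
```

The retry cap matters in practice: without it, a model that keeps "improving" a correct answer can loop forever.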

5) Tree-of-Thoughts (ToT) extends CoT by handling the reasoning chains as a tree, exploring multiple chains of thought at the same time.
num_branches = 3
prompt = f'''
You will think of multiple reasoning paths (thought branches). For each path, write your reasoning and final answer.
After exploring {num_branches} different thoughts, pick the best final answer and explain why.
'''
response = ollama.chat(model=llm, messages=[
{'role':'user', 'content': f"Task: {q}\n{prompt}"}
])
print(response['message']['content'])
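The example above packs all the branches into a single prompt; another common way to implement ToT is to sample each branch with a separate model call and then ask the model to judge among them. Here is a sketch of that variant, with `generate` standing in for `ollama.chat` (stubbed so the control flow runs standalone):

```python
# Placeholder for an LLM call; a real version would call ollama.chat
def generate(prompt):
    return f"Reasoning for: {prompt[:30]}... Final answer: 300"

def tree_of_thoughts(task, num_branches=3):
    branches = []
    for i in range(num_branches):
        # Each branch is an independent reasoning attempt
        branches.append(generate(f"{task} Let's think step by step. (attempt {i+1})"))
    # Judge step: ask the model to pick the best branch
    joined = "\n\n".join(f"Branch {i+1}: {b}" for i, b in enumerate(branches))
    return generate(f"Pick the best answer among:\n{joined}")

print(tree_of_thoughts("What is 30 multiplied by 10?"))
```

Separate calls cost more tokens but let each branch use the model's full attention, and the branches can even be sampled in parallel.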

6) Graph-of-Thoughts (GoT) extends ToT by structuring the reasoning as a graph, allowing thought branches to connect to one another.
class GoT:
    def __init__(self, question):
        self.question = question
        self.nodes = {}   # node_id: text
        self.edges = []   # (from_node, to_node, relation)
        self.counter = 1

    def add_node(self, text):
        node_id = f"Thought{self.counter}"
        self.nodes[node_id] = text
        self.counter += 1
        return node_id

    def add_edge(self, from_node, to_node, relation):
        self.edges.append((from_node, to_node, relation))

    def show(self):
        print("\n--- Current Thoughts ---")
        for node_id, text in self.nodes.items():
            print(f"{node_id}: {text}\n")
        print("--- Connections ---")
        for f, t, r in self.edges:
            print(f"{f} --[{r}]--> {t}")
        print("\n")

    def expand_thought(self, node_id):
        prompt = f'''
You are reasoning about the task: {self.question}
Here is a previous thought node ({node_id}): "{self.nodes[node_id]}"
Please provide a refinement, an alternative viewpoint, or a related thought that connects to this node.
Label your new thought clearly, and explain its relation to the previous one.
'''
        response = ollama.chat(model=llm, messages=[{'role':'user', 'content':prompt}])
        return response['message']['content']
## Start Graph
g = GoT(q)
## Get initial thought
response = ollama.chat(model=llm, messages=[
{'role':'user', 'content':q}
])
n1 = g.add_node(response['message']['content'])
## Expand the initial thought with some refinements
refinements = 1
for _ in range(refinements):
    expansion = g.expand_thought(n1)
    n_new = g.add_node(expansion)
    g.add_edge(n1, n_new, "expansion")
g.show()
## Final Answer
prompt = f'''
Here are the reasoning thoughts so far:
{chr(10).join([f"{k}: {v}" for k,v in g.nodes.items()])}
Based on these, select the best reasoning and final answer for the task: {q}
Explain your choice.
'''
response = ollama.chat(model=llm, messages=[
{'role':'user', 'content': prompt}
])
print(response['message']['content'])

7) Program-of-Thoughts (PoT) works in an executable way: the thinking is delegated to generated Python code snippets that are then run.
import re
import builtins

def extract_python_code(text):
    match = re.search(r"```python(.*?)```", text, re.DOTALL)
    if match:
        return match.group(1).strip()
    return None

def sandbox_exec(code):
    ## Create a minimal sandbox with a restricted set of built-ins
    allowed_builtins = {'abs', 'min', 'max', 'pow', 'round'}
    safe_globals = {k: getattr(builtins, k) for k in allowed_builtins}
    safe_locals = {}
    exec(code, safe_globals, safe_locals)
    return safe_locals.get('result', None)
prompt = '''
Write a short Python program that calculates the answer and assigns it to a variable named 'result'.
Return only the code enclosed in triple backticks with 'python' (```python ... ```).
'''
response = ollama.chat(model=llm, messages=[
{'role':'user', 'content': f"Task: {q}\n{prompt}"}
])
print(response['message']['content'])
print(sandbox_exec(code=extract_python_code(text=response['message']['content'])))

Conclusion
This article has been a tutorial covering all the major prompting strategies for AI agents. There is no single "best" prompting technique, as the right choice depends on the task and on the depth of reasoning required.
For example, simple activities, like summarization and translation, are easily handled with normal zero-shot prompting, while CoT works well for math and logic jobs. On the other hand, agents with tools are usually built in ReAct mode. Additionally, Reflexion shines when learning from mistakes and errors improves the results, such as in gaming.
When it comes to complex computational tasks, PoT is the real winner, because it relies entirely on writing and executing code. In fact, this agentic approach may end up replacing people in several office roles.
I believe that, soon, prompting won't just be "what you say to the model", but about designing the interface that connects human goals, machine reasoning, and external actions.
The full code for this article is available on GitHub.
I hope you enjoyed it! Feel free to contact me with questions and feedback, or just to share your interesting projects.
👉 Let's get on contact 👈

(All images are by the author unless otherwise noted)



