How to Design an Agentic AI Program with Persistent Memory, Personalization, Decay, and Self-Evaluation?

In this tutorial, we explore how to build an intelligent agent that remembers, learns, and adapts to us over time. We implement persistent memory and personalization using simple, rule-based logic that simulates how today's agentic AI systems store and retrieve contextual information. As we progress, we see how the agent's responses evolve with experience, how memory decay prevents overload, and how personalization improves performance. We aim to understand, step by step, how persistence transforms a stateless chatbot into a digital partner that knows you. Check out the full codes here.
import math, time, random
from typing import List

class MemoryItem:
    def __init__(self, kind: str, content: str, score: float = 1.0):
        self.kind = kind
        self.content = content
        self.score = score
        self.t = time.time()

class MemoryStore:
    def __init__(self, decay_half_life=1800):
        self.items: List[MemoryItem] = []
        self.decay_half_life = decay_half_life

    def _decay_factor(self, item: MemoryItem):
        # Exponentially halve a memory's weight every decay_half_life seconds
        dt = time.time() - item.t
        return 0.5 ** (dt / self.decay_half_life)
We establish the basis of the agent's long-term memory. We define a MemoryItem class to hold each piece of data and build a MemoryStore with an exponential decay mechanism. We begin to lay the foundation for storing and aging information, much like human memory.
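As a standalone illustration of the decay rule above, here is a minimal sketch (using the same hypothetical half-life of 1800 seconds): a memory's weight halves after every half-life.

```python
# Exponential decay sketch: the weight halves every half_life seconds.
def decay_factor(age_seconds: float, half_life: float = 1800) -> float:
    return 0.5 ** (age_seconds / half_life)

print(decay_factor(0))     # 1.0  (a brand-new memory keeps full weight)
print(decay_factor(1800))  # 0.5  (one half-life old)
print(decay_factor(3600))  # 0.25 (two half-lives old)
```

This is why recent memories dominate retrieval: the decayed base score shrinks geometrically with age.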
    def add(self, kind: str, content: str, score: float = 1.0):
        self.items.append(MemoryItem(kind, content, score))

    def search(self, query: str, topk=3):
        # Rank memories by decayed base score plus word-overlap similarity
        scored = []
        for it in self.items:
            decay = self._decay_factor(it)
            sim = len(set(query.lower().split()) & set(it.content.lower().split()))
            final = (it.score * decay) + sim
            scored.append((final, it))
        scored.sort(key=lambda x: x[0], reverse=True)
        return [it for s, it in scored[:topk] if s > 0]

    def cleanup(self, min_score=0.1):
        # Drop memories whose decayed score has fallen below the threshold
        new = []
        for it in self.items:
            if it.score * self._decay_factor(it) > min_score:
                new.append(it)
        self.items = new
We expand the memory system by adding methods to insert, search, and clean up old memories. We use a simple word-overlap matching function and a decay-based cleanup process, which lets the agent remember relevant facts while automatically forgetting stale or outdated ones.
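The word-overlap similarity used in search() can be sketched on its own: it is simply the count of lowercase words the query and a stored memory share (example strings here are illustrative).

```python
# Word-overlap relevance sketch: count shared lowercase words.
def overlap(query: str, content: str) -> int:
    return len(set(query.lower().split()) & set(content.lower().split()))

# No shared words -> score contribution of 0:
print(overlap("Recommend what to write next", "User likes cybersecurity articles"))  # 0
# Shares "about" and "rag" -> 2:
print(overlap("write about rag", "I like writing about RAG and agentic AI"))         # 2
```

This crude bag-of-words score stands in for embedding similarity in a real retrieval system; combined with decay, it biases retrieval toward memories that are both relevant and recent.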
class Agent:
    def __init__(self, memory: MemoryStore, name="PersonalAgent"):
        self.memory = memory
        self.name = name

    def _llm_sim(self, prompt: str, context: List[str]):
        # Mock LLM: a rule-based reply conditioned on retrieved memories
        base = "OK. "
        if any("prefers short" in c for c in context):
            base = ""
        reply = base + f"I considered {len(context)} past notes. "
        if "summarize" in prompt.lower():
            return reply + "Summary: " + " | ".join(context[:2])
        if "recommend" in prompt.lower():
            if any("cybersecurity" in c for c in context):
                return reply + "Recommended: write more cybersecurity articles."
            if any("rag" in c for c in context):
                return reply + "Recommended: build an agentic RAG demo next."
            return reply + "Recommended: continue with your last topic."
        return reply + "Here's my response to: " + prompt

    def perceive(self, user_input: str):
        # Keyword triggers store typed memories with different base scores
        ui = user_input.lower()
        if "i like" in ui or "i prefer" in ui:
            self.memory.add("preference", user_input, 1.5)
        if "topic:" in ui:
            self.memory.add("topic", user_input, 1.2)
        if "project" in ui:
            self.memory.add("project", user_input, 1.0)

    def act(self, user_input: str):
        # Retrieve relevant memories, answer, log the turn, and prune stale items
        mems = self.memory.search(user_input, topk=4)
        ctx = [m.content for m in mems]
        answer = self._llm_sim(user_input, ctx)
        self.memory.add("dialog", f"user said: {user_input}", 0.6)
        self.memory.cleanup()
        return answer, ctx
We design an intelligent agent that uses memory to inform its responses. We build a mock language model that adapts its replies based on stored preferences and topics, while the perceive function lets the agent learn new facts about the user.
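The perception logic above boils down to a small keyword-to-memory-type table. Here is a self-contained sketch of that classification step in isolation (the trigger phrases and scores mirror the ones used in perceive()):

```python
# Rule-based perception sketch: keyword triggers decide which kind of
# memory a user utterance becomes, and with what base score.
def classify(user_input: str):
    ui = user_input.lower()
    tags = []
    if "i like" in ui or "i prefer" in ui:
        tags.append(("preference", 1.5))
    if "topic:" in ui:
        tags.append(("topic", 1.2))
    if "project" in ui:
        tags.append(("project", 1.0))
    return tags

print(classify("I prefer short answers."))          # [('preference', 1.5)]
print(classify("Topic: cybersecurity, phishing."))  # [('topic', 1.2)]
print(classify("My current project is a RAG bot"))  # [('project', 1.0)]
```

Preferences get the highest base score, so they survive decay longest and dominate retrieval, which is exactly how the agent's personality-shaping facts persist.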
def evaluate_personalisation(agent: Agent):
    # Compare a memory-informed answer against a cold-start (empty-memory) one
    agent.memory.add("preference", "User likes cybersecurity articles", 1.6)
    q = "Recommend what to write next"
    ans_personal, _ = agent.act(q)
    empty_mem = MemoryStore()
    cold_agent = Agent(empty_mem)
    ans_cold, _ = cold_agent.act(q)
    gain = len(ans_personal) - len(ans_cold)
    return ans_personal, ans_cold, gain
Now we give our agent the ability to evaluate itself. We let stored memories shape its responses and add a small test loop that compares a memory-informed answer against a memory-less cold start, which quantifies how much memory helps.
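The cold-start comparison can be sketched independently of the full agent. This hypothetical, self-contained version reproduces the relevant branch of the mock LLM and computes the same crude character-length "gain" metric:

```python
# Self-contained sketch of the personalisation metric: answer the same
# query with and without stored context, then compare answer lengths.
def mock_reply(context):
    reply = f"I considered {len(context)} past notes. "
    if any("cybersecurity" in c for c in context):
        return reply + "Recommended: write more cybersecurity articles."
    return reply + "Recommended: continue with your last topic."

warm = mock_reply(["User likes cybersecurity articles"])  # memory-informed
cold = mock_reply([])                                     # cold start
gain = len(warm) - len(cold)
print("gain (chars):", gain)  # positive: the warm answer is more specific
```

Length difference is of course a stand-in for a real quality metric; in practice you would score relevance or user satisfaction, but the loop structure is the same.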
mem = MemoryStore(decay_half_life=60)
agent = Agent(mem)

print("=== Demo: teaching the agent about yourself ===")
inputs = [
    "I prefer short answers.",
    "I like writing about RAG and agentic AI.",
    "Topic: cybersecurity, phishing, APTs.",
    "My current project is to build an agentic RAG Q&A system."
]
for inp in inputs:
    agent.perceive(inp)

print("\n=== Now ask the agent something ===")
user_q = "Recommend what to write next in my blog"
ans, ctx = agent.act(user_q)
print("USER:", user_q)
print("AGENT:", ans)
print("USED MEMORY:", ctx)

print("\n=== Evaluate personalisation benefit ===")
p, c, g = evaluate_personalisation(agent)
print("With memory :", p)
print("Cold start  :", c)
print("Personalisation gain (chars):", g)

print("\n=== Current memory snapshot ===")
for it in agent.memory.items:
    print(f"- {it.kind} | {it.content[:60]}... | score~{round(it.score, 2)}")
Finally, we run a full demo to see our agent in action. We feed it user inputs, watch it recommend personalized actions, and inspect its memory snapshot. We witness adaptive behavior evolve, evidence that persistent memory transforms a static script into a learning companion.
In conclusion, we show that adding memory and decay makes our agent more human-like: able to remember preferences, adapt to context, and naturally forget outdated information. We see that even simple mechanisms such as decay and relevance scoring greatly improve the agent's quality and personalization. In the end, persistent memory is a foundation of next-generation agentic AI, which learns continuously, tailors its recommendations intelligently, and maintains focus over time, all with a completely local, offline setup.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the power of artificial intelligence for social good. His most recent endeavor is the AI media platform Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is technically sound and easily understandable to a wide audience. The platform boasts over two million monthly views, illustrating its popularity among readers.



