
How to Use Agentic Memory for Continuous Learning with LLMs

LLMs can transform many tasks, such as research and coding. However, more often than not, you work with an LLM, finish the job, and the next time you start a session, you start over from scratch.

This is a big problem when working with LLMs. We spend a lot of time simply repeating instructions, such as how to format code or how to perform tasks according to our preferences.

This is where agents.md files come in: a way to achieve continuous learning with LLMs, where the LLM learns your patterns and preferences by saving unique information to a separate file. This file is then read every time you start a new task, preventing the cold-start problem and helping you avoid repeating instructions.

In this article, I will give a high-level overview of how I achieve continuous learning with LLMs by continuously updating an agents.md file.

In this article, you will learn how to use continuous learning with LLMs. Image by Gemini.

Why do we need continuous learning?

Getting started with a fresh context takes time. The agent needs to learn what you like, and you have to spend extra time working with it to get it to do exactly what you want.

For example:

  • Telling the agent to use Python 3.13 syntax, instead of 3.12
  • Instructing the agent to always use return type annotations on functions
  • Ensuring the agent never uses the Any type

Many times I had to explicitly tell the agent to use Python 3.13 syntax rather than 3.12 syntax, probably because 3.12 syntax is more prevalent in its training data.

The whole point of using AI agents is to be fast. You therefore don't want to waste time on repeated instructions about which Python version to use, or which patterns the agent should avoid.
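As a sketch, preferences like these could be captured once in an agents.md file. The entries below are illustrative examples, not a prescribed format:

```markdown
# agents.md

## Coding conventions
- Use Python 3.13 syntax; do not fall back to 3.12 idioms.
- Always annotate function return types.
- Never use the Any type; prefer precise type hints.
```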

In addition, an AI agent sometimes spends extra time rediscovering information you already have, for example:

  • The name of your documents table
  • The names of your CloudWatch log groups
  • The names of your S3 buckets

If the agent does not know the name of your documents table, it must:

  1. List all the tables
  2. Find a table whose name sounds like a documents table (several candidates may match)
  3. Either inspect the table to confirm, or ask the user
Agentic memory
This image shows what the agent must do to find the name of your documents table: first list all the database tables, then find matching table names, and finally verify it has the right table by asking the user or performing a lookup. This takes a lot of time. Instead, you can save the documents table name in agents.md and make your coding agent more efficient in future sessions. Image by Gemini.

This takes a lot of time, and it's something we can easily prevent by adding the documents table name, CloudWatch log group names, and S3 bucket names to agents.md.
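To make the cost concrete, here is a minimal Python sketch of the fallback the agent has to perform when the table name is missing. The table names and the "document" keyword are hypothetical, and the real agent would make one or more API calls for step 1:

```python
def find_candidate_tables(all_tables, keyword="document"):
    """Step 2 of the fallback: filter the listed tables by a guessed keyword."""
    return [name for name in all_tables if keyword in name.lower()]

# Step 1 would be an API call such as a list-tables request; we fake its result here.
tables = ["users-prod", "Documents-prod", "orders-prod", "document-archive"]

candidates = find_candidate_tables(tables)
print(candidates)  # ['Documents-prod', 'document-archive']
# Two candidates match, so step 3 (confirm with the user or inspect
# the data) is unavoidable. Storing the name in agents.md skips all of this.
```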

Therefore, the main reason we need continuous learning is that repeated instructions are frustrating and time-consuming, and when working with AI agents, we want to be as efficient as possible.

How to use continuous learning

There are two main methods I have come across regularly, both of which involve heavy use of the agents.md file, which you should have in every codebase you work on:

  1. Whenever the agent makes a mistake, I tell it how to correct the mistake, and to remember the correction in the agents.md file for later
  2. After each thread with the agent, I use the prompt below. This ensures that whatever the agent was told in each thread, or whatever information it uncovered, is saved for later use, which makes sessions more effective over time.
    Generalize the knowledge from this thread, and remember it for later.
    Anything that could be useful to know for a later interaction,
    when doing similar things. Store in agents.md

Applying these two simple concepts will get you 80% of the way to continuous learning with LLMs and make you a much more effective engineer.


The most important point is to always keep agentic memory and agents.md in mind. Whenever an agent does something you don't like, remember to store the correction in agents.md.

You might think you risk bloating the agents.md file, which would make the agent both slower and more expensive. However, this is not really the case. LLMs handle this kind of file-based context well, and even an agents.md file containing thousands of words is not really a problem, either in terms of length or cost.

Frontier LLMs have context windows of hundreds of thousands of tokens, so length is not a problem at all. As for cost, you will probably see the cost of running an LLM go down. The reason is that the agent will use fewer tokens to retrieve information, because that information already exists in agents.md.

Heavy use of agents.md as agentic memory will make working with LLMs faster and reduce costs.

Some additional tips

I would also like to include some additional tips that are useful when dealing with agentic memory.

The first tip is that when working with Claude Code, you can write to the agent's memory by typing "#" followed by what you want it to remember. For example, type this in the terminal while interacting with Claude Code:

# Always use Python 3.13 syntax, avoid 3.12 syntax

You will then be shown a set of options, as you can see in the image below. The first option stores it in user memory, which applies to all your interactions with Claude Code, regardless of which codebase you are in. This is useful for general preferences, such as always annotating a function's return type.

The second and third options save to your current folder or to the root folder of your project. This can be useful for folder-specific information, for example describing a particular service, or for storing information about the codebase in general.

Claude Code memory options
This image highlights the different memory options in Claude Code. You can save to user memory, which applies to all your sessions regardless of repository. You can also save to the folder you are currently in, for example to store information about a particular service. Finally, you can save to the project's root folder, so all work in the repository has that context. Image by the author.

In addition, different coding agents use different memory files:

  • Claude Code uses CLAUDE.md
  • Warp uses WARP.md
  • Cursor uses .cursorrules

However, most coding agents also read AGENTS.md, which is why I recommend keeping your information in that file: your agentic memory stays available regardless of which coding agent you use. Claude Code may be the best today, but another coding agent could take its place one day.
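One common way to reconcile the different filenames, sketched below as an assumption rather than something the article prescribes, is to keep AGENTS.md as the single source of truth and point the agent-specific filenames at it with symlinks:

```python
import os

# Assumption: AGENTS.md is the canonical memory file; agent-specific
# filenames are just symlinks pointing back at it.
open("AGENTS.md", "a").close()            # create the canonical file if missing

for alias in ("CLAUDE.md", "WARP.md"):    # filenames other agents look for
    if not os.path.lexists(alias):
        os.symlink("AGENTS.md", alias)
```

With this layout, an instruction saved once in AGENTS.md is visible to every agent that reads its own memory file.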

AGI and continuous learning

I would also like to add a note on AGI and continuous learning. True continuous learning is sometimes said to be one of the last hurdles to achieving AGI.

Currently, LLMs approximate continuous learning by keeping a record of what they learn in files they read later (such as agents.md). Ideally, however, LLMs would continually update their model weights whenever they learn new information, which is essentially how humans learn naturally.

Unfortunately, true continuous learning isn't available yet, but it's likely a capability we'll see more of in the coming years.

Conclusion

In this article, I discussed how to become a more effective developer by using the agents.md file for continuous learning. With it, your agent will pick up your habits, the mistakes you correct, the information you use often, and many other useful details. This also makes subsequent interactions with your agent more efficient. I believe heavy use of the agents.md file is essential to being a good developer, and something you should strive for.

👉 My free tools

🚀 10x Your Engineering with LLMs (Free 3-Day Course)

📚 Get my FREE ebook on language models

💻 My webinar on vision language models

👉 Find me in the community:

📩 Subscribe to my newsletter

🧑💻 Get in touch

🔗 LinkedIn

🐦 X / Twitter

✍️ Medium
