
How Do I Upgrade My Claude Code?

I have written several articles about the techniques I use to get the most out of Claude Code. However, a topic I have spent less time on is how I improve my use of Claude Code in general: how I improve the way I interact with my Claude Code environments, and how Claude Code itself operates in the code repositories I work in.

In this article, I want to highlight how I continually update both how I interact with Claude Code and how Claude Code itself works, which makes me and my coding agent more effective in the long run.

The concept of continuous learning is incredibly powerful: if you can improve just a few percent every day, the cumulative effect over weeks and months is huge. The result can be far more effective than the out-of-the-box version of Claude Code or any other coding agent.

This simple illustration highlights the main content of this article. I will discuss how you can make Claude Code improve itself by using daily cron jobs and how to improve human interaction with coding agents. Image via ChatGPT

Why should you read further?

I always try to cover why a topic is important, why you should care about it, and how it can help you. The reason to read on is simple: if you're only using the out-of-the-box version of Claude Code, Codex, or any other coding agent, you're missing out. Of course, those models are incredibly powerful, and compared to a few years ago, they already let you accomplish many times more than before.

However, powerful out of the box is not the same as optimal. Applying continuous learning on top of these tools gives you great additional potential for efficiency.

In this article, I will cover one simple method I use to make Claude Code improve itself every day, and give you insight into how I try to improve my own interaction with Claude Code to make working with the coding agent as successful as possible.

Making Claude Code improve itself

I'll start by covering a simple method that you can start using now, which will probably improve the way your Claude Code works.

You can create a skill within Claude Code that goes like this:

Review my interactions with Claude Code from the last 24 hours. 
Look for any problems that I encountered, things that weren't working 
efficiently, and unnecessary tool calling. Look for common mistakes 
Claude Code was making and other things that can be optimized. 
Look thoroughly through all conversations and make a plan for how we 
can optimize our flow in the future, both within each repository and 
across repositories. Also look for insights that would be useful for the 
coding agent to know beforehand, both before entering a repository and 
when working in multiple repositories at the same time. 

Let's call this skill past-performance-review. Now all you have to do is set up a cron job that runs this skill at 2 a.m. every night, or at some other time when you know you are not actively interacting with your agents.
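As a sketch, the crontab entry could look like the following. Note that the skill name, the log path, and the `claude -p` headless invocation are assumptions here; adjust them to match your own installation and how you saved the skill.

```shell
# Hypothetical crontab entry (edit with `crontab -e`).
# Assumes the skill above is saved as "past-performance-review" and that
# the Claude Code CLI is on PATH and supports non-interactive prompts
# via `claude -p`; logs output so you can inspect what the review found.
0 2 * * * cd /path/to/your/repo && claude -p "Run the past-performance-review skill" >> "$HOME/claude-review.log" 2>&1
```

Running it under cron rather than manually is the point: the review happens while you sleep, and you only read the log when you want to audit what changed.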

When you run this skill, Claude will go through all the conversations you have had over the last 24 hours. It will look for patterns. It will see where you got stuck with Claude Code (i.e., where you spent more time than you should have), and it will see where Claude Code itself got stuck: making the wrong tool calls, making the wrong assumptions, or lacking the context it needed to do the job successfully.

It will then develop a plan for preventing these issues in the future and making Claude Code more effective. This typically involves mechanisms like:

  • Adding additional information to agent.md or similar standard markdown files
  • Creating specific skills that the agent can load or run on demand when performing certain tasks
  • Using specific scripts or tools, such as pre-commit hooks, test scripts, and the like, to keep mistakes from happening again
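For the last point, a pre-commit hook is a simple, concrete example of how a recurring mistake gets turned into an automatic guardrail. The sketch below is a minimal shell hook; the gating command is a placeholder for whatever test or lint runner your repository actually uses.

```shell
#!/bin/sh
# Minimal sketch of a .git/hooks/pre-commit script.
# The check command is a placeholder: swap in your own test runner
# (e.g. "make test" or "npm test") via the PRECOMMIT_CHECK variable.
run_precommit_check() {
  # $1: the shell command that gates the commit
  if ! sh -c "$1" >/dev/null 2>&1; then
    echo "pre-commit: check failed, aborting commit" >&2
    return 1
  fi
}

# A non-zero exit here makes git abort the commit.
run_precommit_check "${PRECOMMIT_CHECK:-true}"
```

Once a hook like this is in place, the agent physically cannot commit code that repeats the mistake the check was written to catch, which is far more reliable than reminding it in prose.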

The best part about setting up a cron job to run this skill every day is that you don't need to interact with the agents at all. Claude will reflect, find inefficiencies, fix them, and thus improve Claude Code over time. Another benefit is that Claude Code becomes customized to your use cases: you may have particular preferences when working in your codebases, and this skill will discover those preferences and make the agent follow them as well as possible.

By running this cron job every night, I've unlocked huge efficiency gains; my coding agents have become more capable than before simply because they make fewer mistakes, know the right way to do things, and follow my preferences better.

Improving human interaction with coding agents

Another, more complex aspect is improving the human side of the interaction with coding agents. I spend a lot of time reflecting on how to communicate effectively with my agents to get them to implement the code I want as quickly and efficiently as possible.

Obviously, this is not a solved problem yet, as many different tools and platforms keep coming out to make coding and interacting with agents easier, better, and more efficient. In this section, I will share some of my thoughts on human interaction with coding agents and how I try to optimize it myself.

Note that the techniques I cover were, of course, developed for my own workflow, and I urge you to read about them and think about how they apply to your workflow.

Using 7+ agents simultaneously

I often find myself running multiple agents at the same time, simply because I have multiple tasks to complete and can work on them in parallel. Of course, external factors determine whether it is possible or appropriate to run several agents at once. If the situation allows it and it makes sense to do so, I will run as many agents in parallel as I can.

However, I have found that once I run more than seven agents at once, I start to lose control. I can't properly switch context between them, keep up with what each agent is doing, or respond effectively when an agent asks me a question.

I have tried many different tools and platforms to make this interaction work well. I'm currently using Warp, where I use split panes within a tab when running several agents in the same repository, and open a new tab for each different repository I'm working on. This works relatively well, although, as mentioned, I get stuck when running more than seven agents at once.

I've also tried a lot of IDE-based methods like Conductor or Omnara, but I don't feel like they give me any productivity benefits over what Warp can give me.


My takeaways from this section are a few techniques that let you run as many agents as possible at the same time. First of all, the situation must allow it: you need enough tasks that can be done in parallel, and each agent must be able to run long enough that it is not constantly interrupted. The first step is therefore to pick the right task or tasks to work on.

Second, the most powerful feature when working with multiple agents in parallel is recaps. Claude Code has started providing a recap at the end of a conversation, which is incredibly useful. It gives you a very brief overview of what happened in that conversation, allowing you to quickly regain context when you return to the agent. I urge you to enable recaps and actively use them whenever you need to re-read the context of a particular session.


Finally, I should note that Claude Code, as this article is being written, has just released an agents view. This view is supposed to make it easy to keep an overview of all your agents at the same time. I haven't tried it yet, although it seems to address exactly the problem I'm describing in this section. I will try it and write an article about it in the future.

Let the agent ask you the questions, not the other way around

This section is interesting because the usual way to interact with AI models, at least in the beginning, was to ask them questions and get short answers back. This completely changes when you move to long-running coding sessions. You no longer want to ask the model questions; you want it to work independently for as long as possible and stop only when it has to ask you something.

So this is something I recommend you include in your coding agent's instructions: have it run as long and as independently as possible, and stop only when it has to ask the user a question. This is closely related to another article I wrote on how to let Claude Code validate its own work. To make the agent work for a long time, you need to give it opportunities to validate its work, which I have covered in another article on Towards Data Science. Check it out below:

How to Make Claude Code Validate Its Own Work
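To make the "run long, ask only when blocked" idea concrete, the kind of instruction I mean could look like the following. This is a hypothetical snippet for your agent.md or equivalent instructions file; adapt the wording to your own setup.

Work independently for as long as possible. Do not stop to report 
progress. Only pause to ask the user a question when you are truly 
blocked: a missing credential, an ambiguous requirement, or a 
destructive action that needs approval. Validate your own work 
(run the tests, check the output) before asking anything. 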

Conclusion

In this article, I've covered how I constantly improve my Claude Code setup, both by making Claude Code self-improve every night and by improving the human interaction with Claude Code and other coding agents. I believe both are things you should try as a developer to make your coding more efficient. As an engineer, you should always look for the next bottleneck: what slows you down the most and can unlock the biggest productivity improvement. For me, that was:

  1. Claude Code repeating the same mistakes, addressed in the first section of this article
  2. Human interaction with Claude Code, which I covered in the second part of this article

I urge you to always look for such issues and try to remove them as soon as possible to make your coding efforts as productive as possible.

👋 Get in touch

👉 My Free eBook and Webinar:

🚀 10x Your Engineering with LLMs (Free 3-Day Email Course)

📚 Get my free ebook Vision Language Models

💻 My webinar on Vision Language Models

👉 Find me on social media:

💌 Substack

🔗 LinkedIn

🐦 X / Twitter

