How to Write Solid Code with Claude Code

Coding agents can be used to quickly create new applications. The catch with building a program this quickly, however, is that you typically don't look at the code.
In my opinion, this is mostly fine. Coding agents have become good enough that you generally don't need to review the code yourself unless you're building security-critical applications or something similar.
However, if you don't take some precautions, you will run into stability problems, and your application will be less reliable than if you had written it yourself, thinking carefully about every part of the code. In this article, I'll cover some tricks and techniques I use to make my code as robust as possible when programming with Claude Code without reading the code myself.
Why do you need robust code?
The answer is almost trivial: you want robust code that withstands many different scenarios, because it makes users less likely to hit errors and gives them a better experience with your product. A natural follow-up question is, of course: shouldn't you actually be reading the code to make it more robust?
I have two main responses to this last point:
- You don't really have time to look at all the code yourself if you want to keep a high tempo and develop the product quickly.
- Coding agents have become so good at finding problems and building reliable code, if instructed correctly, that improving code robustness can be automated by coding agents and does not have to be manual work.
So, we come to the main point of this article, which is how to automatically ensure code integrity with coding agents so you don't waste time doing it yourself. I will cover this in the following sections.
How to build robust code from the start
The first part I'll cover is how to build robust code from scratch; the next section covers how to verify the code's integrity and fix it after it's built. I consider these two different problems and use different techniques to solve them, which is why I have divided them into two categories.
Effective use of plan mode
Plan mode is the first technique I'll cover, and I think it's the most important one if you want to get the most out of coding agents. Using plan mode makes the coding agent spend more time planning an implementation instead of just starting it right away. This generally improves the model's ability to see the big picture, and thus avoids bugs caused, for example, by updates to one component changing behavior in other components.
In plan mode, the agent also asks clarifying questions so that any ambiguities are resolved before implementation. Having coding agents ask you questions, instead of you asking the coding agent, is an incredibly powerful feature that I urge you to use more. You want the model to do as much of the thinking as possible and only come back to you when it needs to clarify something or better understand what you want implemented.
It is more powerful for the LLM to ask you questions than for you to ask the LLM questions.
This, in most cases, results in fewer bugs, and the model implements the solution more efficiently. Although plan mode is initially time-consuming, since you have to plan with the agent rather than launching into implementation immediately, it is usually worth it in the long run: you encounter fewer bugs and spend less time iterating with the agent afterwards to get the result you want.
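As a sketch of how this looks in practice with the Claude Code CLI (the `--permission-mode` flag is part of the CLI; the task and question wording are just illustrative):

```shell
# Start Claude Code in plan mode so it proposes a plan before touching files.
# (In an interactive session you can also cycle modes with Shift+Tab.)
claude --permission-mode plan

# Then give it the task and explicitly invite clarifying questions, e.g.:
#   "Add rate limiting to the API. Before proposing a plan, ask me any
#    clarifying questions about limits, storage, and error responses."
```

Only after you approve the plan does the agent start editing code, which is where most of the big-picture bugs get caught early.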
Saving knowledge in Markdown files
The second technique concerns the Markdown files you keep in your repository. Over time, as you write code in the repository, the number of Markdown files should gradually grow, documenting how agents should behave in the repository, previous bugs and how they were fixed, and other problems that have appeared in the repository.
This is very useful for coding agents because they usually have enough context capacity to actively use this information, and it makes them less likely to repeat wrong decisions they made in the past. Since these Markdown files largely document mistakes agents have made before, having more of them helps agents make better decisions.
To create these Markdown files, I urge you, first, to have the agent summarize the learnings from every conversation thread you've had with your coding agent into such a file. This is the first habit that makes coding with agents so effective. The second is that every time you find and fix a bug, you save a description of the problem and how it was solved in a Markdown file.
If you apply these practices every time you code, you will build an incredibly powerful knowledge base inside your repository, and your agents will inevitably improve over time, becoming more efficient and less error-prone, and thus building more robust code.
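As an illustration, a bug-log entry in such a file might look like this (the file path, structure, and the bug itself are hypothetical, just one possible convention):

```markdown
<!-- docs/agent-notes/bug-log.md (hypothetical location) -->
## 2025-01-14 — Stale cache after profile update
- **Symptom:** profile page showed the old avatar after an upload.
- **Root cause:** the cache key did not include the avatar version.
- **Fix:** append the file hash to the cache key in the avatar helper.
- **Lesson for agents:** when changing stored data, search for every
  cache or memo that reads it and invalidate it accordingly.
```

The "lesson" line is the part that pays off: it turns a one-off fix into a rule the agent can apply to future changes.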
Avoid running your agent with a very large context window
Another very common reason I find fragile, vulnerable, or buggy code is that I was running my agent with a very long context. Claude, for example, released a 1 million token context model not too long ago. A 1 million token context window is very long and can hold a lot of information. However, in my experience, model performance degrades significantly beyond 300-400 thousand tokens, which is only 30-40% of the model's maximum context window.
Therefore, unless you really have to include a lot of specific context, I urge you to run agents with a less saturated context window so that they can work effectively.
The reason coding agents degrade with longer context is that they have to consider more of it, and much of that extra context is noise that isn't really relevant to the problem they are working on. It is difficult for models to separate the noise from the truly important information, which makes them perform worse.
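In Claude Code specifically, two built-in slash commands help keep the context window lean (shown here as they are typed inside an interactive session):

```shell
# Inside an interactive Claude Code session:
/compact   # summarize the conversation so far and free up context
/clear     # wipe the conversation when switching to an unrelated task
```

A simple habit is to `/clear` between unrelated tasks and `/compact` during long ones, so the agent never has to wade through hundreds of thousands of stale tokens.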
How to ensure code integrity with coding agents
Of course, building robust code from the start is very important. However, it is inevitable that coding agents make mistakes, because they cannot see the full context of what they are doing, or for other reasons produce error-prone and thus fragile code. In these cases, it is very important to have a safety net that catches error-prone code so you can fix it before a user encounters it.
Code review by coding agents
The first and probably easiest thing you can do to build more robust code is to have coding agents review the code produced by other coding agents. The way you do this is to start a new coding agent with a clean context window, with no prior knowledge of the implementation, and ask it to analyze the code in the pull request and look for any errors.
The information you provide to the reviewing agent can be enriched over time, for example by including past bugs that were observed, how those bugs were caught, and how they were fixed. This makes it more likely that the reviewing agent finds similar bugs.
A pro tip here is to use a different model, or a different coding agent altogether, for code reviews. For example, if you write your code with Claude Code, it can be useful to have at least one other coding agent review it, for example GPT 5.5 or Gemini 3. This is because different coding agents think differently and will thus, in some cases, find different bugs.
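A minimal sketch of such a clean-context review pass, assuming the Claude Code CLI is installed (`-p` runs a single non-interactive prompt reading from stdin; the branch name and prompt wording are illustrative):

```shell
# Review the current branch's changes with a fresh agent that has no
# memory of how the code was written.
git diff main...HEAD | claude -p \
  "Review this diff for bugs, race conditions, and missing error handling. \
   Report each finding as a numbered list item with file and line."
```

Because the reviewer starts from a blank context, it judges only what the diff says, not what the implementing agent intended, which is exactly what you want from a second pair of eyes.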
Pre-commit checks
Pre-commit hooks are a concept where a piece of code runs before every commit to check for static errors. These could, for example, be compilation errors, which many code bases check for with a common pre-commit hook. These hooks work very well and are very useful, because if you forget something the checks can catch, they will let you know before you commit.

Some errors cannot be detected by static pre-commit hooks, and in these cases it can be very useful to have the agent perform a pre-commit check itself. This is where the agent opens the application you just built and checks for potential errors. In many cases this saves me a lot of time, because the code doesn't have to go through code review first and I can fix the errors quickly.
Doing this essentially asks the agent:
Is the generated code actually correct?
This sounds very simple, but it can be really useful, and sometimes helps to find errors in my experience.
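As a concrete example, the static side of this is often wired up with the `pre-commit` framework. A minimal config might look like this (the specific hooks and pinned versions are just common choices, pick whatever matches your stack):

```yaml
# .pre-commit-config.yaml — these checks run before every commit
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: end-of-file-fixer
      - id: check-merge-conflict
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0
    hooks:
      - id: mypy   # catch type errors before the code reaches review
```

After `pre-commit install`, every `git commit` runs these hooks automatically and blocks the commit on failure, so the agent's mistakes are caught at the earliest possible point.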
Conclusion
In this article, I have discussed how to code with coding agents and ensure that they generate solid code. Coding agents have indeed improved significantly since the release of ChatGPT in 2022. However, they still tend to make mistakes, especially if they are not used correctly. I've covered two main areas: how to build robust code initially, and how to validate the code after it's written to look for potential bugs and problems. Overall, I think optimizing your coding agents for reliability will become more important in the future, and many of the techniques I cover in this article will still work even if the general performance of LLMs increases dramatically. I urge you to consider these tips and improve your coding agents.
👋 Get in touch
👉 My Free eBook and Webinar:
🚀 10x Your Engineering with LLMs (Free 3-Day Email Course)
📚 Get my free ebook Vision Language Models
💻 My webinar on Vision Language Models
👉 Find me on social media:
💌 Substack
🐦 X / Twitter



