
How to Write an MCP Server

Creating an MCP server is an opportunity to provide an AI agent with genuinely powerful capabilities. Because of its potential to change the way we work with applications, MCP is a technology I'm more excited about than GenAI in general. I've written about that before, and plenty of other posts have already covered what MCPs are.

While the first POCs indicated that there was huge potential for this technology to multiply the value of our product, it took many iterations and several stumbles to deliver on that promise. In this post, I'll try to capture some of the lessons learned, as I think they can benefit other MCP server developers.

My stack

  • I used Cursor, and from time to time VSCode, as the primary MCP client
  • To implement the MCP server itself, I used the .NET MCP SDK, since I decided to host the server inside another .NET application we already run

Lesson 1: Don't dump all of your data on the agent

In my application, one of the tools returns aggregated information about errors and exceptions. The API is extremely verbose, as it powers sophisticated UI views, and it returns a large amount of deeply detailed data:

  • Error instances
  • Affected methods and endpoints
  • Stack traces
  • Priority and impact metrics
  • Histograms

My first hunch was to simply expose the API as-is as an MCP tool. After all, the agent should be able to make more sense of it than any UI view, and pick out interesting information or correlations between events. There were several scenarios in which I imagined this data would be helpful: the agent could automatically suggest fixes for errors recorded in the production or test environment, alert me about prominent errors, or help me investigate specific issues by uncovering their root causes.

So the baseline was to allow the agent to work its 'magic', with little more than a hunch about which of these scenarios it would excel at. I quickly put a wrapper around our API as an MCP endpoint and decided to start with a basic prompt to see whether everything was working:

Image by author

We can already see that the agent is smart enough to know it needs to call another tool to retrieve the ID of the 'TEST' environment I mentioned. Then, after finding that there were no recent errors in the last 24 hours, it took the initiative to extend the time range, and that is where things hit a little snag:

Image by author

A strange response. The agent queries for errors from the last seven days, gets back clearly relevant results, and yet continues as if it received nothing, completely ignoring the data. It keeps trying to use the tool in different ways and with different parameter combinations, apparently fumbling around, as if the returned data were invisible. Even though errors were sent back, the agent eventually concludes that there are no errors at all. What happened?

Image by author

After some digging, the problem turned out to be simple: we had exceeded the agent's capacity to process large amounts of information in the response.

I was reusing an existing API that was extremely verbose, which at first I considered a benefit. The result, however, was that I somehow managed to overwhelm the model. Overall, the JSON response contained around 360K characters and 16K words, including call stacks, error frames, and references. This should, in theory, fit within the model's context window limit (Claude 3.7 Sonnet is supposed to support 200K tokens), but either way, dumping that much raw data on the agent clearly backfired.

One strategy could be to switch to a model that supports a larger context window. I switched to the Gemini 2.5 Pro model just to test that idea out, since it boasts a staggering limit of one million tokens. Sure enough, the same question now produced a much more intelligent answer:

Image by author

This was great! The agent managed to aggregate the errors and, with some basic reasoning, identify a systemic cause behind many of them. However, we can't rely on every user running a specific model, and we were clearly still scraping against the limits. What if the dataset had been larger?
To solve the issue, I made some basic changes to the way the API responses were structured:

  • Nested hierarchical data: Keep the initial response focused on high-level details and aggregations. Provide a separate API for retrieving the full call stacks on demand.
  • Added pagination: Every query made by the agent now uses a small page size (10). If we want the agent to find the relevant errors, the aggregated view is what matters: the affected endpoints, the error type, and how critical and impactful it is (a sketch of the reorganized tool follows this list).
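To make this concrete, below is a minimal sketch of what the reorganized tool could look like, mimicking the attribute-based registration pattern shown later in this article. IErrorsApi, ErrorSummary, and the field names are hypothetical placeholders for illustration, not the actual API of my product or of the MCP SDK:

using System.Collections.Generic;
using System.ComponentModel;          // [Description]
using System.Linq;
using System.Text.Json;
using System.Threading.Tasks;
using ModelContextProtocol.Server;    // [McpServerTool] attribute from the C# MCP SDK

public static class ErrorTools
{
    // Hypothetical backend API; the ErrorSummary fields are illustrative only.
    public interface IErrorsApi
    {
        Task<IReadOnlyList<ErrorSummary>> GetErrorsAsync(string environmentId, string timeRange);
    }

    public record ErrorSummary(
        string ErrorType, string[] AffectedEndpoints,
        int Occurrences, int Score, string LatestTraceId);

    [McpServerTool,
     Description("Returns a paginated, aggregated list of errors ordered by criticality. " +
                 "Use the GetTrace tool to retrieve the full stack trace of a single error.")]
    public static async Task<string> GetTopErrors(
        IErrorsApi api,
        [Description("The environment id to query")] string environmentId,
        [Description("Relative time range as an ISO 8601 duration, e.g. P7D")] string timeRange,
        [Description("Zero-based page index")] int page = 0)
    {
        const int PageSize = 10; // small pages keep the response digestible for the agent

        var errors = await api.GetErrorsAsync(environmentId, timeRange);

        // Only high-level aggregations are returned here;
        // full stack traces are fetched on demand through a separate tool.
        var pageOfErrors = errors
            .OrderByDescending(e => e.Score)
            .Skip(page * PageSize)
            .Take(PageSize);

        return JsonSerializer.Serialize(new { Page = page, PageSize, Errors = pageOfErrors });
    }
}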

With these changes in place, the tool now analyzes the errors, highlights the important ones, and comes up with fix suggestions. However, there were still a few details I needed to iron out before it was truly usable.

Lesson 2: What time is it?

Image generated by the author with Midjourney

The attentive reader may have noticed that in the previous example, the time range was passed using an ISO 8601 duration format instead of actual dates and times. So instead of the standard 'from' and 'to' datetime values, the AI sent a time period, for example seven days, or 'P7D', to indicate that it wants to look at the errors from the last week.

The reason for this is surprising: the agent does not necessarily know the current date and time! You can verify that yourself by asking the agent that simple question. The response below would have been perfectly reasonable, were it not for the fact that I typed that prompt well past the date it assumed…

Image by author

Using relative time durations for the tool parameters turned out to be an elegant solution that the agent handled very well. Don't forget to document the expected format and provide example values in the tool parameter description, though!
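On the server side, .NET can parse these durations out of the box. Here is a small, illustrative sketch (the tool and parameter names are made up); the relevant detail is that XmlConvert.ToTimeSpan understands ISO 8601 durations such as 'PT24H' or 'P7D':

using System;
using System.ComponentModel;
using System.Xml;                   // XmlConvert parses ISO 8601 durations
using ModelContextProtocol.Server;  // attribute names as used elsewhere in this article

public static class TimeRangeTools
{
    [McpServerTool,
     Description("Lists recent errors for an environment within a relative time range.")]
    public static string GetRecentErrors(
        [Description("The environment id to query")] string environmentId,
        [Description("Relative time range as an ISO 8601 duration, e.g. PT24H, or P7D for the last week")]
        string timeRange)
    {
        // Resolve the relative duration on the server, where the current time is known.
        TimeSpan duration = XmlConvert.ToTimeSpan(timeRange); // "P7D" -> 7 days
        DateTime from = DateTime.UtcNow - duration;

        // ... query errors between 'from' and now (omitted) ...
        return $"Querying errors from {from:O} to {DateTime.UtcNow:O}";
    }
}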

Lesson 3: When the agent makes a mistake, show it how to do better

In the first example, I was impressed by how the agent was able to figure out the dependency between different tool calls in order to provide the right arguments. While studying the MCP contract, it realized it first had to request the environment ID from another tool.

However, when responding to other requests, the agent would sometimes take terms mentioned in the prompt verbatim. For example, when answering the question 'Compare the traces of this method between the test and production environments, is there a big difference?', the agent, depending on its mood, would sometimes use the literal words from the request and send "test" and "production" as the environment IDs.

In my original implementation, my MCP server would fail silently in that case, returning an empty response. The agent, seeing no specific data or error, would simply give up and try to satisfy the request using another strategy. To correct that behavior, I quickly changed my implementation so that, when given invalid values, the JSON response explicitly describes what went wrong and even provides the full list of valid values to save the agent another round trip.
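A rough sketch of that idea is below; the helper and field names are made up for illustration, but the principle is to fail loudly and hand the agent the valid values it should have used:

using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

public static class EnvironmentValidation
{
    // Returns null if the id is valid; otherwise returns a JSON error payload
    // telling the agent exactly what went wrong and which values are allowed.
    public static string? ValidateEnvironmentId(
        string environmentId,
        IReadOnlyDictionary<string, string> knownEnvironments) // display name -> id
    {
        if (knownEnvironments.Values.Contains(environmentId))
            return null; // valid, the caller proceeds with the real query

        return JsonSerializer.Serialize(new
        {
            error = $"'{environmentId}' is not a valid environment id.",
            hint = "Call the tool again with one of the environment ids listed below.",
            validEnvironments = knownEnvironments // e.g. { "TEST": "env-1a2b", ... }
        });
    }
}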

Image by author

This was enough for the agent, which learned from its mistake, repeated the call with the correct values, and even seemed to avoid making the same mistake later on.

Lesson 4: Focus on the user's intent, not the functionality

While it is tempting to simply describe what the API does, sometimes a generic description doesn't allow the agent to recognize the kinds of requests the operation is best suited for.

Let's take a simple example: my MCP server has a tool that, for any method, endpoint, or code location, can show how it is used at runtime. Specifically, it uses tracing data to reveal which request flows reach a specific function or method.

The original documentation simply described this functionality:

[McpServerTool,
Description(
@"For this method, see which runtime flows in the application
(including other microservices and code not in this project)
use this function or method.
This data is based on analyzing distributed tracing.")]
public static async Task GetUsagesForMethod(IMcpService client,
[Description("The environment id to check for usages")]
string environmentId,
[Description("The name of the class. Provide only the class name without the namespace prefix.")]
string codeClass,
[Description("The name of the method to check, must specify a specific method to check")]
string codeMethod)

The above is an accurate explanation of what the tool does, but it isn't clear which kinds of tasks it might be useful for. After noticing that the agent wasn't picking this tool for a variety of prompts where I thought it would be relevant, I decided to rewrite the description, this time emphasizing the use cases:

[McpServerTool,
Description(
@"Find out what is the how a specific code location is being used and by
which other services/code.
Useful in order to detect possible breaking changes, to check whether
the generated code will fit the current usages,
to generate tests based on the runtime usage of this method,
or to check for related issues on the endpoints triggering this code
after any change to ensure it didn't impact it")]

Updating the text helped the agent realize why the information was useful. For example, before making this change, the agent would not even trigger the tool in response to a prompt similar to the one below. Now, it has become completely seamless, without the user having to directly mention that this tool should be used:

Image by author

Lesson 5: Document your JSON responses

The JSON standard, at least officially, does not support comments. That means that if the JSON is all the agent has to go on, it might be missing some clues about the context of the data you’re returning. For example, in my aggregated error response, I returned the following score object:

"Score": {"Score":21,
"ScoreParams":{ "Occurrences":1,
"Trend":0,
"Recent":20,
"Unhandled":0,
"Unexpected":0}}

Without proper documentation, any non-clairvoyant agent would be hard pressed to make sense of what these numbers mean. Thankfully, it is easy to add a comment element at the beginning of the JSON file with additional information about the data provided:

"_comment": "Each error contains a link to the error trace,
which can be retrieved using the GetTrace tool,
information about the affected endpoints, the code, and the
relevant stacktrace.
Each error in the list represents numerous instances
of the same error and is given a score after it's been
prioritized.
The score reflects the criticality of the error.
The number is between 0 and 100 and is comprised of several
parameters, each can contribute to the error criticality,
all are normalized in relation to the system
and the other methods.
The score parameter values represent their contribution to the
overall score, they include:

1. 'Occurrences', representing the number of instances of this error
compared to others.
2. 'Trend' whether this error is escalating in its
frequency.
3. 'Unhandled' represents whether this error is caught
internally or propagates all the way
out of the endpoint scope
4. 'Unexpected' are errors that are in high probability
bugs, for example NullPointerException or
KeyNotFound",
"EnvironmentErrors":[]

This not only allows the agent to explain the score to the user when asked, but also lets it feed that definition into its own reasoning and recommendations.
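Since the responses are built in code rather than loaded from files, embedding such a comment is just a matter of adding one more field before serializing. A minimal, illustrative sketch (the class and payload names are made up):

using System.Text.Json;

public static class ErrorResponseBuilder
{
    private const string ScoreDocumentation =
        "Each error contains a link to the error trace, which can be retrieved using the " +
        "GetTrace tool. The score reflects the criticality of the error (0-100) and is " +
        "composed of the 'Occurrences', 'Trend', 'Unhandled' and 'Unexpected' parameters.";

    // 'environmentErrors' stands in for whatever payload the real API returns.
    public static string Build(object environmentErrors) =>
        JsonSerializer.Serialize(new
        {
            _comment = ScoreDocumentation,       // inline documentation for the agent
            EnvironmentErrors = environmentErrors
        });
}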

Choosing the right architecture: SSE vs STDIO

There are two architectures you can use when developing an MCP server. The most common and widely supported option makes your server available as a command triggered by the MCP client. This can be any command-line invocation; NPX, Docker, and Python are some common examples. In this configuration, all communication is done over the process STDIO, and the process itself runs on the client machine. The client is responsible for starting the MCP server process and managing its lifecycle.

Image by author

This type of architecture has one major drawback in my view: because the MCP server implementation is launched by the client and runs on the user's machine, it becomes much harder to roll out new updates or capabilities. And even if that problem were solved, the tight coupling between the MCP server and the backend APIs it depends on in our application would force us to keep locally installed servers in lockstep with the evolving backend.

For these reasons, I chose the second type of MCP server: an SSE server hosted as part of our application services. This removes any friction involved in running CLI commands, and allows me to continuously update the MCP server along with the application code it uses. In this scenario, the client is simply given the URL of the SSE endpoint. While not all clients support this option, there is a nifty command called supergateway that can be used as a proxy to an SSE server. That means users can still add the MCP server as a STDIO server in clients that only support that mode, and still use the functionality hosted on your SSE backend.
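For reference, hosting the SSE flavor inside an existing ASP.NET Core application looks roughly like the sketch below. This reflects my understanding of the C# MCP SDK's ASP.NET Core extensions (the ModelContextProtocol.AspNetCore package); the exact extension method names may differ between SDK versions, so treat this as an assumption rather than a verbatim recipe:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Register the MCP server alongside the rest of the application's services
// and discover [McpServerTool] methods in this assembly.
builder.Services
    .AddMcpServer()
    .WithHttpTransport()        // exposes the SSE / HTTP endpoint
    .WithToolsFromAssembly();

var app = builder.Build();

// Map the MCP endpoint next to the application's regular APIs.
app.MapMcp();

app.Run();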

Image by author

MCPs are still new

There are many more lessons and nuances to using this deceptively simple technology. I've found that there is a big gap between getting an MCP to work and getting it to actually fit the user's needs and usage scenarios, often in ways you didn't anticipate. Hopefully, as the technology matures, we'll see many more posts about best practices.

Want to connect? You can reach me on Twitter at @Doppleware or via LinkedIn.
Follow my MCP for code analysis using observability.
