Generative AI

How to Create a Custom Model Context Protocol (MCP) Gemini Client

In this lesson, we will build a custom Model Context Protocol (MCP) client using Gemini. By the end of this lesson, you will be able to connect your AI applications to MCP servers, unlocking powerful new capabilities for your projects.

Gemini API

We will be using the Gemini 2.0 Flash model for this lesson.

To get your Gemini API key, visit Google AI Studio and follow the instructions there.

Once you have the key, save it somewhere safe – you will need it later.

Node.js

Many MCP servers need Node.js to work. Download the latest version of Node.js from nodejs.org, then:

  • Run the Installer.
  • Leave all settings as default and complete the installation.

National Park Services API

In this lesson, we will use the National Park Services MCP server with our client. To use the National Park Service API, request an API key by visiting this link and filling in a short form. Once submitted, the API key will be sent to your email.

Make sure to keep this key accessible – we will use it soon.
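As an optional sanity check before wiring the key into the MCP server, you can build a request URL for the NPS parks endpoint. The endpoint path and the `parkCode=yose` example below are assumptions based on the NPS developer documentation, not part of this lesson's code:

```python
from urllib.parse import urlencode

# Build a parks request URL for the NPS API (endpoint assumed from the
# NPS developer docs; replace the placeholder with your real key).
base = "https://developer.nps.gov/api/v1/parks"
query = urlencode({"parkCode": "yose", "api_key": "<YOUR_NPS_API_KEY>"})
url = f"{base}?{query}"
print(url)
```

Opening this URL in a browser (with a real key substituted) should return JSON describing the park.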

Installing Python Libraries

In Command Prompt, run the following command to install the required Python libraries:

pip install mcp python-dotenv google-genai

To create an mcp.json file

Next, create a file named mcp.json.

This file will hold configuration information about the MCP servers your client will connect to.

Once the file is created, add the following initial content:

{
    "mcpServers": {
      "nationalparks": {
        "command": "npx",
        "args": ["-y", "mcp-server-nationalparks"],
        "env": {
            "NPS_API_KEY": <”YOUR_NPS_API_KEY”>
        }
      }
    }
}

Replace <YOUR_NPS_API_KEY> with the key you obtained earlier.
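To double-check the file is valid JSON (a common source of startup errors), you can parse it with the standard library. Here the same configuration is shown as a string so the check is self-contained:

```python
import json

# Parse an mcp.json-style configuration and list the configured servers.
sample = """
{
  "mcpServers": {
    "nationalparks": {
      "command": "npx",
      "args": ["-y", "mcp-server-nationalparks"],
      "env": {"NPS_API_KEY": "<YOUR_NPS_API_KEY>"}
    }
  }
}
"""
config = json.loads(sample)
server_names = list(config["mcpServers"].keys())
print(server_names)  # ['nationalparks']
```

To check your actual file, replace the string with `open('mcp.json').read()`.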

To create a .env file

Create a .env file in the same directory as the mcp.json file and add the following line:

GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>

Replace <YOUR_GEMINI_API_KEY> with the key you obtained earlier.

Now we will build the client. Our MCP client will live in a single Python file. Make sure this file is in the same directory as mcp.json and .env.

Building the Basic Client

We will first import the required libraries and create a basic client class:

import asyncio
import json
import os
from typing import List, Optional
from contextlib import AsyncExitStack
import warnings

from google import genai
from google.genai import types
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from dotenv import load_dotenv

load_dotenv()
warnings.filterwarnings("ignore", category=ResourceWarning)

def clean_schema(schema): # Cleans the schema by keeping only allowed keys
    allowed_keys = {"type", "properties", "required", "description", "title", "default", "enum"}
    return {k: v for k, v in schema.items() if k in allowed_keys}

class MCPGeminiAgent:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.genai_client = genai.Client(api_key=os.getenv("GEMINI_API_KEY"))
        self.model = "gemini-2.0-flash"
        self.tools = None
        self.server_params = None
        self.server_name = None

The __init__ method initializes MCPGeminiAgent by setting up an asynchronous exit stack, creating the Gemini API client, and preparing placeholders for the server configuration, tools, and server details.

It lays the groundwork for managing the server connection and communicating with the Gemini model.
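The clean_schema helper defined above strips JSON Schema keys that Gemini's function declarations do not accept. A quick standalone check of its behavior (the sample schema below is made up for illustration):

```python
# Standalone check of the clean_schema helper from the client code.
def clean_schema(schema):
    allowed_keys = {"type", "properties", "required", "description", "title", "default", "enum"}
    return {k: v for k, v in schema.items() if k in allowed_keys}

raw = {
    "type": "object",
    "properties": {"parkCode": {"type": "string"}},
    "required": ["parkCode"],
    "$schema": "http://json-schema.org/draft-07/schema#",  # dropped
    "additionalProperties": False,                          # dropped
}
cleaned = clean_schema(raw)
print(sorted(cleaned))  # ['properties', 'required', 'type']
```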

Choosing an MCP Server

    async def select_server(self):
        with open('mcp.json', 'r') as f:
            mcp_config = json.load(f)
        servers = mcp_config['mcpServers']
        server_names = list(servers.keys())
        print("Available MCP servers:")
        for idx, name in enumerate(server_names):
            print(f"  {idx+1}. {name}")
        while True:
            try:
                choice = int(input(f"Please select a server by number [1-{len(server_names)}]: "))
                if 1 <= choice <= len(server_names):
                    break
                else:
                    print("That number is not valid. Please try again.")
            except ValueError:
                print("Please enter a valid number.")
        self.server_name = server_names[choice-1]
        server_cfg = servers[self.server_name]
        command = server_cfg['command']
        args = server_cfg.get('args', [])
        env = server_cfg.get('env', None)
        self.server_params = StdioServerParameters(
            command=command,
            args=args,
            env=env
        )

This method prompts the user to select a server from the available options listed in mcp.json, then loads and prepares the connection parameters for the selected server for later use.

Connecting to the MCP server

    async def connect(self):
        await self.select_server()
        self.stdio_transport = await self.exit_stack.enter_async_context(stdio_client(self.server_params))
        self.stdio, self.write = self.stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
        await self.session.initialize()
        print(f"Successfully connected to: {self.server_name}")
        # List available tools for this server
        mcp_tools = await self.session.list_tools()
        print("\nAvailable MCP tools for this server:")
        for tool in mcp_tools.tools:
            print(f"- {tool.name}: {tool.description}")

This establishes an asynchronous connection to the selected MCP server over stdio transport, initializes the MCP session, and lists the tools available on the server.
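The connect/cleanup pair relies on AsyncExitStack: every context entered on the stack is closed together, in reverse order, by a single aclose() call. A minimal sketch of that pattern (the Resource class and names below are made up for illustration):

```python
import asyncio
from contextlib import AsyncExitStack

events = []  # records the order resources open and close

class Resource:
    """Toy async context manager standing in for the transport/session."""
    def __init__(self, name):
        self.name = name
    async def __aenter__(self):
        events.append(f"open {self.name}")
        return self
    async def __aexit__(self, *exc):
        events.append(f"close {self.name}")

async def demo():
    stack = AsyncExitStack()
    await stack.enter_async_context(Resource("transport"))
    await stack.enter_async_context(Resource("session"))
    await stack.aclose()  # closes session first, then transport

asyncio.run(demo())
print(events)
```

This is why cleanup() only needs one call to tear down both the session and the stdio transport safely.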

Handling User Queries and Tool Calls

    async def agent_loop(self, prompt: str) -> str:
        contents = [types.Content(role="user", parts=[types.Part(text=prompt)])]
        mcp_tools = await self.session.list_tools()
        tools = types.Tool(function_declarations=[
            {
                "name": tool.name,
                "description": tool.description,
                "parameters": clean_schema(getattr(tool, "inputSchema", {}))
            }
            for tool in mcp_tools.tools
        ])
        self.tools = tools
        response = await self.genai_client.aio.models.generate_content(
            model=self.model,
            contents=contents,
            config=types.GenerateContentConfig(
                temperature=0,
                tools=[tools],
            ),
        )
        contents.append(response.candidates[0].content)
        turn_count = 0
        max_tool_turns = 5
        while response.function_calls and turn_count < max_tool_turns:
            turn_count += 1
            tool_response_parts: List[types.Part] = []
            for fc_part in response.function_calls:
                tool_name = fc_part.name
                args = fc_part.args or {}
                print(f"Invoking MCP tool '{tool_name}' with arguments: {args}")
                tool_response: dict
                try:
                    tool_result = await self.session.call_tool(tool_name, args)
                    print(f"Tool '{tool_name}' executed.")
                    if tool_result.isError:
                        tool_response = {"error": tool_result.content[0].text}
                    else:
                        tool_response = {"result": tool_result.content[0].text}
                except Exception as e:
                    tool_response = {"error":  f"Tool execution failed: {type(e).__name__}: {e}"}
                tool_response_parts.append(
                    types.Part.from_function_response(
                        name=tool_name, response=tool_response
                    )
                )
            contents.append(types.Content(role="user", parts=tool_response_parts))
            print(f"Added {len(tool_response_parts)} tool response(s) to the conversation.")
            print("Requesting updated response from Gemini...")
            response = await self.genai_client.aio.models.generate_content(
                model=self.model,
                contents=contents,
                config=types.GenerateContentConfig(
                    temperature=1.0,
                    tools=[tools],
                ),
            )
            contents.append(response.candidates[0].content)
        if turn_count >= max_tool_turns and response.function_calls:
            print(f"Stopped after {max_tool_turns} tool calls to avoid infinite loops.")
        print("All tool calls complete. Displaying Gemini's final response.")
        return response

This method sends the user's prompt to Gemini, executes any tool calls the model requests against the MCP server, feeds the results back to the model, and returns the final response. It manages the multi-turn interaction between Gemini and the server's tools.
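The shape of that loop can be seen without Gemini at all. The following is a hypothetical, self-contained mock: mock_model, mock_tool, and the findParks tool name are invented for illustration, but the control flow (call tools while the model requests them, capped at five turns) mirrors agent_loop:

```python
# Mock of the agent_loop flow: the "model" requests a tool until it has a
# result in the history, and tool turns are capped to avoid infinite loops.
def mock_model(history):
    if not any(m.get("role") == "tool" for m in history):
        return {"function_call": {"name": "findParks", "args": {"stateCode": "CA"}}}
    return {"text": "Here are some parks in CA."}

def mock_tool(name, args):
    return {"result": f"{name} called with {args}"}

history = [{"role": "user", "text": "Parks in California?"}]
response = mock_model(history)
turns, max_tool_turns = 0, 5
while "function_call" in response and turns < max_tool_turns:
    turns += 1
    fc = response["function_call"]
    history.append({"role": "tool", "response": mock_tool(fc["name"], fc["args"])})
    response = mock_model(history)

print(turns, response["text"])  # 1 Here are some parks in CA.
```

In the real client, mock_model is a generate_content call and mock_tool is session.call_tool.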

The Interactive Chat Loop

    async def chat(self):
        print(f"\nMCP-Gemini Assistant is ready and connected to: {self.server_name}")
        print("Enter your question below, or type 'quit' to exit.")
        while True:
            try:
                query = input("\nYour query: ").strip()
                if query.lower() == 'quit':
                    print("Session ended. Goodbye!")
                    break
                print(f"Processing your request...")
                res = await self.agent_loop(query)
                print("\nGemini's answer:")
                print(res.text)
            except KeyboardInterrupt:
                print("\nSession interrupted. Goodbye!")
                break
            except Exception as e:
                print(f"\nAn error occurred: {str(e)}")

This provides an interactive interface where users can ask questions and receive answers from Gemini, continuing until they choose to exit.

Cleaning Up Resources

    async def cleanup(self):
        await self.exit_stack.aclose()

This closes the asynchronous context and gracefully cleans up all resources, such as the session and the exit stack.

The Main Entry Point

async def main():
    agent = MCPGeminiAgent()
    try:
        await agent.connect()
        await agent.chat()
    finally:
        await agent.cleanup()

if __name__ == "__main__":
    import sys
    import os
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("Session interrupted. Goodbye!")
    finally:
        # Silence noisy shutdown warnings from async transports on exit
        sys.stderr = open(os.devnull, "w")

This is the main entry point that ties the whole program together.

Apart from main(), all of the methods above are part of the MCPGeminiAgent class. You can find the complete client .py file here.

Run the client file with Python from your terminal to start using it.

The client will:

  • Read the mcp.json file to discover the configured MCP servers.
  • Prompt the user to select one of the listed servers.
  • Connect to the selected MCP server using the specified command and environment settings.
  • Engage the Gemini model in a series of questions and answers.
  • Allow Gemini to invoke tools, pass arguments, and process the results iteratively.
  • Provide a command-line interface for users to interact with the program and receive real-time responses.
  • Ensure proper cleanup of resources after the session ends, to avoid leaks.


I am an engineering student (2022) from Jamia Millia Islamia, New Delhi, and I am deeply interested in data science, especially neural networks and their applications across various domains.
