
Using an LLM Agent to Access Tools via mcp-use

mcp-use is an open source library that lets you connect any LLM to any MCP server, giving it access to tools like web browsing, file operations, and more, all without depending on closed-source clients. In this lesson, we will use langchain-groq and mcp-use's built-in conversation memory to build a simple chatbot that can interact with MCP tools.

Installing the uv package manager

We will begin by setting up our environment, starting with the uv package manager. On Mac or Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

On Windows (PowerShell):

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

Creating a new directory and activating a virtual environment

We will create a new project directory and initialize it with uv:

uv init mcp-use-demo
cd mcp-use-demo

We can now create and activate a virtual environment. On Mac or Linux:

uv venv
source .venv/bin/activate

On Windows:

uv venv
.venv\Scripts\activate

Installing Python dependencies

We will now install the required dependencies:

uv add mcp-use langchain-groq python-dotenv

Groq API Key

To use Groq LLMs:

  1. Visit the Groq Console and generate an API key.
  2. Create a .env file in your project directory and add the following line:

GROQ_API_KEY=<YOUR_GROQ_API_KEY>

Replace <YOUR_GROQ_API_KEY> with your newly generated key.

Brave Search API Key

This lesson uses the Brave Search MCP server.

  1. Get your Brave Search API key from: Brave Search API
  2. Create a file named mcp.json in the project root with the following content:
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-brave-search"
      ],
      "env": {
        "BRAVE_API_KEY": ""
      }
    }
  }
}

Replace the empty string with your real API key.
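A malformed mcp.json is a common source of startup errors. As an optional extra step (not part of the original tutorial), you can sanity-check the structure with a short stdlib-only script; the `list_mcp_servers` helper below is a hypothetical name, not an mcp-use API:

```python
import json

# Sample document matching the mcp.json created above.
SAMPLE = """{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {"BRAVE_API_KEY": ""}
    }
  }
}"""

def list_mcp_servers(text: str) -> list:
    """Return the server names defined in an mcp.json document, after basic checks."""
    config = json.loads(text)  # raises json.JSONDecodeError on malformed JSON
    servers = config.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        raise ValueError("config must define a non-empty 'mcpServers' object")
    for name, spec in servers.items():
        if "command" not in spec:
            raise ValueError(f"server '{name}' is missing a 'command' entry")
    return list(servers)

print(list_mcp_servers(SAMPLE))  # prints ['brave-search']
```

Running this against your own file (`list_mcp_servers(open("mcp.json").read())`) catches missing braces or a misspelled "mcpServers" key before the chatbot starts.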

Node.js

Some MCP servers (including Brave Search) require npx, which comes with Node.js.

  • Download the latest version of Node.js from nodejs.org
  • Run the installer.
  • Leave all settings at their defaults and complete the installation.

Using other servers

If you would like to use a different MCP server, simply replace the mcp.json content with the configuration for that server.
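For example, a hypothetical mcp.json for the official filesystem server might look like the following; the directory path is a placeholder you would replace with a real path on your machine:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/directory"
      ]
    }
  }
}
```

The structure is the same as for Brave Search: a server name, the command to launch it, its arguments, and (if needed) an env block for API keys.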

Create an app.py file in the project directory and add the following content:

Import libraries

from dotenv import load_dotenv
from langchain_groq import ChatGroq
from mcp_use import MCPAgent, MCPClient
import os
import sys
import warnings

warnings.filterwarnings("ignore", category=ResourceWarning)

This section loads environment variables and imports the required modules from LangChain, mcp-use, and Groq. It also suppresses ResourceWarning messages to keep the output clean.
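If you are curious what `load_dotenv()` does under the hood, here is a minimal stdlib-only sketch of the idea; it ignores quoting and other edge cases that the real python-dotenv handles, and `load_env_file` is a hypothetical helper name:

```python
import os

def load_env_file(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    loaded = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, and malformed lines
                key, _, value = line.partition("=")
                loaded[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # like load_dotenv, do nothing if the file is absent
    os.environ.update(loaded)
    return loaded
```

This is why the code can later read the key with `os.getenv("GROQ_API_KEY")`: the file's key-value pairs end up in the process environment.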

Setting up the chatbot

async def run_chatbot():
    """ Running a chat using MCPAgent's built in conversation memory """
    load_dotenv()
    os.environ["GROQ_API_KEY"] = os.getenv("GROQ_API_KEY")

    config_file = "mcp.json"
    print("Starting chatbot...")

    # Create the MCP client and LLM instance
    client = MCPClient.from_config_file(config_file)
    llm = ChatGroq(model="llama-3.1-8b-instant")

    # Creating an agent with memory enabled
    agent = MCPAgent(
        llm=llm,
        client=client,
        max_steps=15,
        memory_enabled=True,
        verbose=False
    )

This section loads the Groq API key from the .env file and initializes the MCP client using the configuration in mcp.json. It then sets up the LangChain Groq LLM and creates a memory-enabled agent to manage the conversation.
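Conceptually, a memory-enabled agent keeps a running list of messages that is fed back into each model call, which is what makes multi-turn conversation and the "clear" command possible. The toy class below sketches that idea with the stdlib only; it is an illustration, not mcp-use's actual implementation:

```python
class ConversationMemory:
    """Toy stand-in for an agent's conversation history."""

    def __init__(self):
        self.messages = []  # list of (role, content) tuples

    def add(self, role: str, content: str) -> None:
        self.messages.append((role, content))

    def clear(self) -> None:
        # Mirrors what agent.clear_conversation_history() achieves.
        self.messages.clear()

    def as_prompt(self) -> str:
        # Flatten the history into one string to prepend to the next LLM call.
        return "\n".join(f"{role}: {content}" for role, content in self.messages)

memory = ConversationMemory()
memory.add("user", "What is MCP?")
memory.add("assistant", "An open protocol for connecting LLMs to tools.")
```

With `memory_enabled=True`, MCPAgent maintains this kind of history for you, so each `agent.run()` call sees the earlier turns.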

Using Chatbot

# Add this in the run_chatbot function
    print("n-----Interactive MCP Chat----")
    print("Type 'exit' or 'quit' to end the conversation")
    print("Type 'clear' to clear conversation history")

    try:
        while True:
            user_input = input("nYou: ")

            if user_input.lower() in ["exit", "quit"]:
                print("Ending conversation....")
                break
           
            if user_input.lower() == "clear":
                agent.clear_conversation_history()
                print("Conversation history cleared....")
                continue
           
            print("nAssistant: ", end="", flush=True)

            try:
                response = await agent.run(user_input)
                print(response)
           
            except Exception as e:
                print(f"nError: {e}")

    finally:
        if client and client.sessions:
            await client.close_all_sessions()

This section implements the interactive conversation loop, letting the user type questions and receive the assistant's answers. It also lets you clear the chat history on request. Responses are printed as they arrive, and the finally block ensures that all MCP sessions are closed cleanly when the conversation ends or is interrupted.
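The loop's command handling can be factored into a small pure function, which makes the exit/clear logic easy to test without running the interactive loop. This refactor is a suggestion, and `classify_input` is a hypothetical helper, not part of the original code:

```python
def classify_input(user_input: str) -> str:
    """Map raw user input to one of three actions: 'exit', 'clear', or 'ask'."""
    command = user_input.strip().lower()
    if command in ("exit", "quit"):
        return "exit"       # end the conversation
    if command == "clear":
        return "clear"      # wipe the agent's conversation history
    return "ask"            # forward the text to agent.run()
```

Inside the while loop you would then branch on `classify_input(user_input)` instead of comparing strings inline.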

Running the app

if __name__ == "__main__":
    import asyncio
    try:
        asyncio.run(run_chatbot())
    except KeyboardInterrupt:
        print("Session interrupted. Goodbye!")
   
    finally:
        sys.stderr = open(os.devnull, "w")

This section runs the asynchronous chatbot loop, managing the ongoing exchange with the user. It also handles keyboard interrupts gracefully, ensuring the program exits without errors when the user ends the session.

You can find the rest of the code here

To run the app, execute the following command:

uv run app.py

This will start the app, and you can chat with the chatbot and use the configured MCP server within the session.


I am a Civil Engineering graduate (2022) from Jamia Millia Islamia, New Delhi, and I have a keen interest in Data Science, especially Neural Networks and their application in various areas.
