
Building a ReAct-Style Agent with Fireworks AI in LangChain That Fetches Web Data, Generates BigQuery SQL, and Maintains Conversational Memory

In this tutorial, we explore how to harness Fireworks AI for building intelligent, tool-enabled agents with LangChain. Starting with installing langchain-fireworks and configuring your Fireworks API key, we set up a ChatFireworks LLM powered by the Llama v3 70B instruct model. Along the way, we build custom tools such as a FetchURLTool that pulls the text of a web page and a GenerateSQLTool that drafts BigQuery SQL. By the end, we have a fully functional ReAct-style agent.

!pip install -qU langchain langchain-fireworks requests beautifulsoup4

We bootstrap the notebook by installing all of the required Python packages, including LangChain, its Fireworks integration, and supporting libraries such as requests and BeautifulSoup. This ensures we have up-to-date versions of every component needed to run the rest of the tutorial end to end.

import requests
from bs4 import BeautifulSoup
from langchain.tools import BaseTool
from langchain.agents import initialize_agent, AgentType
from langchain_fireworks import ChatFireworks
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
from getpass import getpass
import os

We bring in all the necessary imports: the HTTP and parsing utilities (requests, BeautifulSoup), the LangChain agent framework (BaseTool, initialize_agent, AgentType), the ChatFireworks integration, prompt templating and conversation memory, plus the standard getpass and os modules for handling the API key and the environment securely.

os.environ["FIREWORKS_API_KEY"] = getpass("🚀 Enter your Fireworks API key: ")

Here we prompt you for your Fireworks API key via getpass and store it securely in the environment. This step ensures that subsequent calls to the ChatFireworks model are authenticated without exposing your key in plain text.
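If you rerun the notebook or execute it non-interactively, a small optional variation (a sketch, not part of the original flow) is to reuse an already-set FIREWORKS_API_KEY and only fall back to the getpass prompt when it is missing:

import os
from getpass import getpass

# Reuse an existing FIREWORKS_API_KEY if present; otherwise prompt for it.
if not os.environ.get("FIREWORKS_API_KEY"):
    os.environ["FIREWORKS_API_KEY"] = getpass("🚀 Enter your Fireworks API key: ")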

llm = ChatFireworks(
    model="accounts/fireworks/models/llama-v3-70b-instruct",
    temperature=0.6,
    max_tokens=1024,
    stop=["\n\n"]
)

This instantiates the ChatFireworks LLM configured for instruction following, using the llama-v3-70b-instruct model, a moderate temperature (0.6), a 1024-token output cap, and a double-newline stop sequence, so you can start querying the model right away.

prompt = [
    {"role":"system","content":"You are an expert data-scientist assistant."},
    {"role":"user","content":"Analyze the sentiment of this review:nn"
                           ""The new movie was breathtaking, but a bit too long.""}
]
resp = llm.invoke(prompt)
print("Sentiment Analysis →", resp.content)

Next, we run a simple sentiment-analysis example: we format system and user messages as role-based dictionaries, pass them to llm.invoke(), and print the model's response.
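As an optional aside not covered in the original walkthrough, ChatFireworks, like other LangChain chat models, also exposes the standard .stream() interface, so you can print tokens as they arrive rather than waiting for the full reply; a minimal sketch reusing the llm defined above:

# Stream the model's answer chunk by chunk instead of waiting for the full response.
for chunk in llm.stream("In one word, classify the sentiment of: 'The new movie was breathtaking, but a bit too long.'"):
    print(chunk.content, end="", flush=True)
print()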

template = """
You are a data-science assistant. Keep track of the convo:


{history}
User: {input}
Assistant:"""


prompt = PromptTemplate(input_variables=["history","input"], template=template)
memory = ConversationBufferMemory(memory_key="history")


chain = LLMChain(llm=llm, prompt=prompt, memory=memory)


print(chain.run(input="Hey, what can you do?"))
print(chain.run(input="Analyze: 'The product arrived late, but support was helpful.'"))
print(chain.run(input="Based on that, would you recommend the service?"))

We then add conversational memory: a prompt template that replays the prior exchanges via {history}, a ConversationBufferMemory to store them, and an LLMChain to tie everything together. Running a few sample inputs shows how the model retains context across turns.
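To see exactly what the chain feeds back into the prompt, you can inspect the memory object directly; a small illustrative check, assuming the chain above has already processed the three inputs:

# Dump the accumulated conversation history stored by ConversationBufferMemory.
print(memory.load_memory_variables({})["history"])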

class FetchURLTool(BaseTool):
    name: str = "fetch_url"
    description: str = "Fetch the main text (first 500 chars) from a webpage."


    def _run(self, url: str) -> str:
        resp = requests.get(url, timeout=10)
        doc = BeautifulSoup(resp.text, "html.parser")
        paras = [p.get_text() for p in doc.find_all("p")][:5]
        return "nn".join(paras)


    async def _arun(self, url: str) -> str:
        raise NotImplementedError

This defines a custom FetchURLTool by subclassing BaseTool. The tool downloads the first few paragraphs of any URL using requests and BeautifulSoup, making it easy for the agent to retrieve live web content.
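Before wiring the tool into an agent, you can call it directly to sanity-check the scraping logic; a quick sketch using a placeholder URL (example.com stands in for any page you want to fetch):

# Invoke the tool on its own, outside of any agent.
fetcher = FetchURLTool()
print(fetcher.run("https://example.com"))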

class GenerateSQLTool(BaseTool):
    name: str = "generate_sql"
    description: str = "Generate a BigQuery SQL query (with comments) from a text description."


    def _run(self, text: str) -> str:
        prompt = f"""
-- Requirement:
-- {text}


-- Write a BigQuery SQL query (with comments) to satisfy the above.
"""
        return llm.invoke([{"role":"user","content":prompt}]).content


    async def _arun(self, text: str) -> str:
        raise NotImplementedError


tools = [FetchURLTool(), GenerateSQLTool()]


agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)


result = agent.run(
    "Fetch  "
    "and then generate a BigQuery SQL query that counts how many times "
    "the word 'model' appears in the page text."
)


print("n🔍 Generated SQL:n", result)

Finally, GenerateSQLTool is another BaseTool subclass that wraps the LLM to turn plain-English requirements into commented BigQuery SQL. We then register both tools with initialize_agent to create a ReAct-style agent, run an example that fetches a page and drafts a query over its text, and print the resulting SQL.
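You can also exercise GenerateSQLTool on its own to see the kind of commented BigQuery SQL it produces; a minimal sketch with a made-up requirement (the table name events is purely illustrative):

# Call the SQL-generation tool directly with a hypothetical requirement.
sql_tool = GenerateSQLTool()
print(sql_tool.run("Count daily active users per day over the last 30 days from a table named events."))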

In conclusion, pairing Fireworks AI with LangChain's tools and agent framework unlocks a versatile platform for building AI-powered applications. You can extend the agent's capabilities by adding domain-specific tools, customizing prompts, and refining the memory strategy, all while Fireworks serves the underlying model. As next steps, explore advanced features such as function calling, orchestrating multiple agents, or integrating vector-based retrieval for retrieval-augmented generation.

