Generative AI

A Step-by-Step Guide to Combining Dappier AI's Real-Time Search and Recommendation Tools with OpenAI's Chat API

In this tutorial, we will learn how to harness Dappier AI, a suite of real-time search and recommendation tools, to enhance our chat applications. By combining Dappier's DappierRealTimeSearchTool with its DappierAIRecommendationTool, we can query the latest information from across the web and surface personalized article suggestions from custom data models. We walk step by step through setting up our Google Colab environment, installing dependencies, securely loading API keys, and initializing each Dappier module. We will then integrate these tools with an OpenAI chat model (e.g., GPT-3.5-turbo), build a composable prompt chain, and execute end-to-end queries, all within a few concise notebook cells.

!pip install -qU langchain-dappier langchain langchain-openai langchain-community langchain-core openai

We bootstrap our Colab environment by installing the core LangChain libraries, both the Dappier extensions and the community integrations, alongside the official OpenAI client. These packages give us seamless access to Dappier's real-time search and recommendation tools, LangChain's runnables, and the OpenAI API, all in one place.

import os
from getpass import getpass


os.environ["DAPPIER_API_KEY"] = getpass("Enter your Dappier API key: ")


os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")

We securely capture our Dappier and OpenAI API keys at runtime, so we avoid hard-coding sensitive credentials in our notebook. Using getpass ensures the input remains hidden, and storing the keys as environment variables makes them available to all cells without exposing them in logs.
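As a small stdlib-only sketch, we can fail fast if either key did not land in the environment before the later cells run. The helper name `require_env` is our own illustration, not part of Dappier or OpenAI:

```python
import os

def require_env(*names: str) -> None:
    """Raise early if any expected API key is missing from the environment."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

# Placeholder values for illustration; in the notebook, getpass sets these.
os.environ.setdefault("DAPPIER_API_KEY", "dummy-key")
os.environ.setdefault("OPENAI_API_KEY", "dummy-key")
require_env("DAPPIER_API_KEY", "OPENAI_API_KEY")
print("Both API keys are set")
```

Checking up front turns a confusing mid-notebook authentication error into an immediate, readable failure.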

from langchain_dappier import DappierRealTimeSearchTool


search_tool = DappierRealTimeSearchTool()
print("Real-time search tool ready:", search_tool)

We import Dappier's real-time search module and instantiate DappierRealTimeSearchTool, which lets us query up-to-date information from across the web. The print statement confirms the tool is successfully instantiated and ready to handle search requests.

from langchain_dappier import DappierAIRecommendationTool


recommendation_tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",  # custom Dappier data model
    similarity_top_k=3,              # number of similar articles to return
    ref="sportsnaut.com",            # source domain for recommendations
    num_articles_ref=2,              # max articles per reference domain
    search_algorithm="most_recent",  # rank results by recency
)
print("Recommendation tool ready:", recommendation_tool)

We configure the Dappier recommendation engine by specifying our custom data model, the number of similar articles to return, and the source domain for recommendations. The DappierAIRecommendationTool will use the "most_recent" algorithm to pull the top-k similar articles (here, up to two per reference) from our specified source, ready to serve content suggestions.

from langchain.chat_models import init_chat_model


llm = init_chat_model(
    model="gpt-3.5-turbo",
    model_provider="openai",
    temperature=0,
)
llm_with_tools = llm.bind_tools([search_tool])
print("✅ llm_with_tools ready")

We instantiate an OpenAI chat model using GPT-3.5-turbo with temperature=0 to ensure deterministic answers, and bind the real-time search tool to it so the LLM can issue search calls when it needs fresh information. The final print statement confirms that our LLM is ready to call Dappier tools within our conversation flow.
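To make the next steps concrete, it helps to know what a tool request looks like. When a bound model decides to use a tool, LangChain exposes the request on the AI message as a `tool_calls` list of dicts. The stdlib sketch below mimics that documented shape; the tool name string and all values are illustrative placeholders, not real model output:

```python
# Illustrative shape of ai_msg.tool_calls when the model requests a search
# (all values here are invented placeholders):
example_tool_calls = [
    {
        "name": "dappier_real_time_search",   # which bound tool to run
        "args": {"query": "latest AI news"},  # arguments chosen by the model
        "id": "call_abc123",                  # id used to pair results back
        "type": "tool_call",
    }
]

for call in example_tool_calls:
    print(f"{call['name']} <- {call['args']}")
```

Each entry carries everything needed to execute the tool and route its result back to the right request, which is exactly what the chain in the next cells relies on.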

import datetime
from langchain_core.prompts import ChatPromptTemplate


today = datetime.datetime.today().strftime("%Y-%m-%d")
prompt = ChatPromptTemplate([
    ("system", f"You are a helpful assistant. Today is {today}."),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])


llm_chain = prompt | llm_with_tools
print("✅ llm_chain built")

We build a chain by creating a ChatPromptTemplate that embeds the current date in the system prompt and defines slots for the user's input and any prior messages. Piping the template (|) into llm_with_tools produces a runnable sequence that formats the prompt and invokes the tool-aware LLM in one step; the final print confirms the chain is built.
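Conceptually, the template renders an ordered message list that the chain hands to the model. This stdlib sketch (no LangChain required) mimics that rendering; `render_prompt` is our own illustrative helper, not a LangChain API:

```python
import datetime

def render_prompt(user_input, messages=()):
    """Mimic the ChatPromptTemplate: system + human + prior-message slot."""
    today = datetime.datetime.today().strftime("%Y-%m-%d")
    return [
        ("system", f"You are a helpful assistant. Today is {today}."),
        ("human", user_input),
        *messages,  # placeholder slot for earlier AI/tool messages
    ]

msgs = render_prompt("Who won the last Nobel Prize?")
print(msgs[0][1])  # system message with today's date baked in
```

Injecting the date into the system message matters here: it lets the model reason about "last" and "latest" relative to the actual current day rather than its training cutoff.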

from langchain_core.runnables import RunnableConfig, chain


@chain
def tool_chain(user_input: str, config: RunnableConfig):
    ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
    tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke(
        {"user_input": user_input, "messages": [ai_msg, *tool_msgs]},
        config=config
    )


print("✅ tool_chain defined")

We define tool_chain, an end-to-end pipeline that first sends the user's prompt to the LLM (capturing any requested tool calls), then executes those calls via search_tool.batch, and finally feeds both the AI's initial message and the tool outputs back into the LLM to produce the final answer. The @chain decorator turns this function into a single runnable, letting us simply call tool_chain.invoke(...) to handle both reasoning and search in one step.
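The control flow above can be sketched with plain-Python stand-ins, with no API calls involved. `fake_llm` and `fake_search` are invented mocks that exist only to show the two-pass pattern:

```python
def fake_llm(user_input, tool_results=None):
    """Pass 1: emit a tool request. Pass 2: answer using the tool output."""
    if tool_results is None:
        return {"tool_calls": [{"name": "search", "args": {"query": user_input}}]}
    return {"content": f"Answer based on: {tool_results[0]}"}

def fake_search(query):
    return f"search results for '{query}'"

def mock_tool_chain(user_input):
    ai_msg = fake_llm(user_input)                        # first LLM pass
    tool_msgs = [fake_search(c["args"]["query"])         # run each requested call
                 for c in ai_msg["tool_calls"]]
    return fake_llm(user_input, tool_results=tool_msgs)  # second LLM pass

print(mock_tool_chain("Who won the last Nobel Prize?")["content"])
```

The key design point is the same as in the real chain: the model is invoked twice, once to decide what to look up and once to compose an answer grounded in the retrieved results.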

res = search_tool.invoke({"query": "What happened at the last Wrestlemania"})
print("🔍 Search:", res)

We fire a direct query at the real-time search tool, asking "What happened at the last Wrestlemania", and print the structured result. This quickly verifies that search_tool.invoke works on its own before we rely on it inside the chain.

rec = recommendation_tool.invoke({"query": "latest sports news"})
print("📄 Recommendation:", rec)


out = tool_chain.invoke("Who won the last Nobel Prize?")
print("🤖 Chain output:", out)

Finally, we demonstrate both the recommendation tool and the full chain in action. First we call recommendation_tool.invoke to fetch the latest sports articles from our configured source; then we run tool_chain.invoke with a question, letting the LLM decide when to trigger a search and compose the final answer.

In conclusion, we now have a solid foundation for embedding Dappier AI's capabilities in any conversational workflow. We have seen how Dappier's real-time search empowers the LLM to surface fresh facts, while the recommendation tool lets us deliver domain-specific content insights. From here, we can tune the parameters (e.g., the data model ID, search_algorithm, or similarity_top_k) or extend the chain with additional tools.


Check out the Dappier platform and the Notebook here. Also, don't forget to follow us on Twitter and join our Telegram Channel and LinkedIn Group. Don't forget to join our 90k+ ML SubReddit.



Nikhil is an intern consultant at MarkTechPost. He is pursuing an integrated dual degree at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.
