Implementing the Self-Refine Technique with Large Language Models (LLMs)

This tutorial shows how to implement the Self-Refine technique with large language models (LLMs) using Mirascope, a prompt-engineering strategy in which the model evaluates its own response and then improves it based on that feedback. This refinement loop can be repeated to improve the quality and accuracy of the final answer.
Self-Refine is especially effective for tasks involving reasoning, code generation, and content creation, where iterative improvement leads to better results.
Installing the dependencies
!pip install "mirascope[openai]"
OpenAI API key
To get an OpenAI API key, visit the OpenAI platform and generate a new key. If you are a new user, you may need to add payment details and make a minimum payment of $5 to activate API access.
import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')
Basic Self-Refine implementation
We start by implementing the Self-Refine process using Mirascope. The process begins by generating an initial response to the user's query. The model then evaluates its own answer and provides feedback on it. Finally, the model uses this feedback to produce an improved response. The self_refine function lets us repeat this process for a specified number of iterations, improving output quality with each cycle.
from mirascope.core import openai, prompt_template
from mirascope.core.openai import OpenAICallResponse


@openai.call(model="gpt-4o-mini")
def call(query: str) -> str:
    return query


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    Here is a query and a response to the query. Give feedback about the answer,
    noting what was correct and incorrect.
    Query:
    {query}
    Response:
    {response}
    """
)
def evaluate_response(query: str, response: OpenAICallResponse): ...


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}
    Consider the feedback to generate a new response to the query.
    """
)
def generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}


def self_refine(query: str, depth: int) -> str:
    response = call(query)
    for _ in range(depth):
        response = generate_new_response(query, response)
    return response.content
query = "A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"
print(self_refine(query, 1))
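The generate → evaluate → refine loop above is not specific to Mirascope. As a minimal sketch, the same control flow can be expressed with any chat-completion function; here call_llm is a hypothetical stand-in, stubbed out so the example runs offline without an API key:

```python
def self_refine_loop(query, call_llm, depth=1):
    """Generic Self-Refine: generate, critique, then regenerate `depth` times."""
    response = call_llm(query)
    for _ in range(depth):
        # Ask the model to critique its own answer.
        feedback = call_llm(
            f"Give feedback on this answer, noting what is correct and incorrect.\n"
            f"Query: {query}\nResponse: {response}"
        )
        # Ask it to rewrite the answer using that critique.
        response = call_llm(
            f"For this query:\n{query}\nThe response was:\n{response}\n"
            f"Feedback:\n{feedback}\nUse the feedback to write an improved response."
        )
    return response


# Stub LLM so the sketch runs offline: it just numbers each call it receives.
calls = []
def fake_llm(prompt: str) -> str:
    calls.append(prompt)
    return f"answer-{len(calls)}"


final = self_refine_loop("What is 2 + 2?", fake_llm, depth=2)
print(final)        # answer-5: 1 initial call + 2 x (feedback + refine)
print(len(calls))   # 5
```

The stub makes the call pattern visible: one initial generation, then two extra model calls per refinement iteration.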
Enhanced implementation with a response model
In this enhanced version, we define a structured MathSolution response model using Pydantic to capture both the solution steps and the final numerical answer. The enhanced_generate_new_response function incorporates the model-generated feedback and formats the improved answer into this well-defined schema. This approach ensures clarity, consistency, and easier downstream use of the refined output, which is especially valuable for tasks such as mathematical problem solving.
from pydantic import BaseModel, Field


class MathSolution(BaseModel):
    steps: list[str] = Field(..., description="The steps taken to solve the problem")
    final_answer: float = Field(..., description="The final numerical answer")


@openai.call(model="gpt-4o-mini", response_model=MathSolution)
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}
    Consider the feedback to generate a new response to the query.
    Provide the solution steps and the final numerical answer.
    """
)
def enhanced_generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}


def enhanced_self_refine(query: str, depth: int) -> MathSolution:
    response = call(query)
    for _ in range(depth):
        solution = enhanced_generate_new_response(query, response)
        response = f"Steps: {solution.steps}\nFinal Answer: {solution.final_answer}"
    return solution
# Example usage
result = enhanced_self_refine(query, 1)
print(result)
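Because the refined result is a Pydantic model rather than free text, its fields can be validated and consumed programmatically. A small offline sketch of working with the same MathSolution schema (the step text and answer below are illustrative values; the API call produces the real ones):

```python
from pydantic import BaseModel, Field


class MathSolution(BaseModel):
    steps: list[str] = Field(..., description="The steps taken to solve the problem")
    final_answer: float = Field(..., description="The final numerical answer")


# Simulate a refined solution as the model might return it for the train problem.
solution = MathSolution(
    steps=[
        "Let v be the original speed; travel time is 120 / v hours.",
        "At v + 20 km/h, the time is 120 / (v + 20), which is 0.5 h less.",
        "Solve 120/v - 120/(v + 20) = 0.5, i.e. v**2 + 20*v - 4800 = 0.",
        "The positive root is v = 60.",
    ],
    final_answer=60.0,
)

# Structured output can be rendered or post-processed without string parsing.
for i, step in enumerate(solution.steps, 1):
    print(f"{i}. {step}")
print("Answer:", solution.final_answer, "km/h")
```

Pydantic also raises a validation error if the model ever returns a malformed answer (e.g. a non-numeric final_answer), which is exactly the kind of guarantee free-text responses cannot give.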
The enhanced Self-Refine approach proved effective at accurately solving the given problem:
"A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"
Within a single refinement iteration, the model produced a mathematically sound, step-by-step derivation leading to the correct answer of 60 km/h. This demonstrates several key benefits of the Self-Refine approach:
- Improved accuracy through feedback-driven refinement.
- Clearer reasoning steps, including variable definition, equation setup, and solution of the resulting quadratic.
- Greater transparency, making it easier for users to understand and trust the solution.
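The 60 km/h answer the refinement converges on can be checked with plain arithmetic, independent of any model call:

```python
import math

# The train problem: 120/v - 120/(v + 20) = 0.5  (30 minutes = 0.5 h).
# Multiplying both sides by v*(v + 20) gives 2400 = 0.5*v**2 + 10*v,
# i.e. the quadratic v**2 + 20*v - 4800 = 0.
a, b, c = 1, 20, -4800
v = (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)  # positive root
print(v)  # 60.0

# Sanity check: the faster trip really takes exactly 30 minutes less.
assert abs(120 / v - 120 / (v + 20) - 0.5) < 1e-9
```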
In broader applications, this method holds strong promise for tasks that demand accuracy, structure, and iterative improvement, from technical problem solving to content and creative writing. However, users should weigh the trade-off in computational cost and tune the refinement depth and feedback prompts to fit their use case.
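That cost trade-off is easy to quantify: each refinement iteration adds two extra model calls (one for feedback, one for the rewrite), so total calls grow linearly with depth. A quick count:

```python
def calls_per_refinement(depth: int) -> int:
    # 1 initial generation + (feedback + regenerate) per iteration
    return 1 + 2 * depth


for depth in (0, 1, 2, 3):
    print(f"depth={depth}: {calls_per_refinement(depth)} API calls")
# depth=3 already means 7 calls, roughly 7x the latency and token
# cost of a single-shot answer, so shallow depths are usually enough.
```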




