
Real-Time Interactive Sentiment Analysis in Python

What's the best part of being an engineer? You can just build things. One afternoon, I had this clear vision of a text box paired with a smiley face that reacts to whatever you type: the more positive the text, the happier the smiley. There are a couple of interesting ideas to explore here, so let me walk you through how this project works!

Requirements

To follow along, you need the following packages:

  • customtkinter
  • opencv-python
  • torch
  • transformers

Using uv, you can add these dependencies with the following command:

uv add customtkinter opencv-python torch transformers

Note: When using torch with uv, you need to specify the package index explicitly. E.g., if you want to use CUDA, add the following to your pyproject.toml:

[[tool.uv.index]]
name = "pytorch-cu118"
url = "https://download.pytorch.org/whl/cu118"
explicit = true

[tool.uv.sources]
torch = [{ index = "pytorch-cu118" }]
torchvision = [{ index = "pytorch-cu118" }]

UI Layout Skeleton

In projects like this, I always like to start with a quick sketch of the UI layout. In this case the layout is very simple: a single-line text box at the top, and below it an image area that fills the remaining available space. This is where we will draw the smiley face 🙂

Using customtkinter, we can write this layout as follows:

import customtkinter

class App(customtkinter.CTk):
    def __init__(self) -> None:
        super().__init__()

        self.title("Sentiment Analysis")
        self.geometry("800x600")

        self.grid_columnconfigure(0, weight=1)
        self.grid_rowconfigure(0, weight=0)
        self.grid_rowconfigure(1, weight=1)

        self.sentiment_text_var = customtkinter.StringVar(master=self, value="Love")

        self.textbox = customtkinter.CTkEntry(
            master=self,
            corner_radius=10,
            font=("Consolas", 50),
            justify="center",
            placeholder_text="Enter text here...",
            placeholder_text_color="gray",
            textvariable=self.sentiment_text_var,
        )
        self.textbox.grid(row=0, column=0, padx=20, pady=20, sticky="nsew")
        self.textbox.focus()

        self.image_display = CTkImageDisplay(self)
        self.image_display.grid(row=1, column=0, padx=20, pady=20, sticky="nsew")

Unfortunately, there is no good built-in way to display OpenCV images in a UI widget, so I built my own CTkImageDisplay. If you want to read in detail how it works, check out my previous posts. In short, it uses a CTkLabel widget and updates the displayed image from the GUI thread using a synchronized queue.
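I won't reproduce the full component here; the following is a minimal sketch of the approach, not the author's actual implementation: a CTkLabel subclass with a thread-safe queue that is polled on the Tk main loop. The fixed display_size and the 50 ms poll interval are assumptions for illustration.

import queue

import cv2
import customtkinter
import numpy as np
from PIL import Image


class CTkImageDisplay(customtkinter.CTkLabel):
    """Minimal sketch: displays OpenCV BGRA frames pushed from any thread."""

    def __init__(self, master) -> None:
        super().__init__(master, text="")
        self.display_size = (640, 480)  # assumed fixed size for this sketch
        self._frame_queue: queue.Queue[np.ndarray] = queue.Queue(maxsize=1)
        self.after(50, self._poll_queue)

    def update_frame(self, frame: np.ndarray) -> None:
        """Enqueue a new frame; safe to call from a worker thread."""
        try:
            self._frame_queue.put_nowait(frame)
        except queue.Full:
            pass  # drop the frame if the previous one is still pending

    def _poll_queue(self) -> None:
        # Runs on the Tk main loop: pick up the latest frame, if any.
        try:
            frame = self._frame_queue.get_nowait()
        except queue.Empty:
            pass
        else:
            rgba = cv2.cvtColor(frame, cv2.COLOR_BGRA2RGBA)
            pil_image = Image.fromarray(rgba)
            # keep a reference so the image is not garbage-collected
            self._ctk_image = customtkinter.CTkImage(light_image=pil_image, size=pil_image.size)
            self.configure(image=self._ctk_image)
        self.after(50, self._poll_queue)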

Smiley Face

For our smiley face, we could use a set of discrete images to express the different sentiments, for example three stored images for negative, neutral, and positive. However, to get a more fine-grained visualization we would need many more images, which quickly becomes impractical; we also could not smoothly transition between these pictures.

Discrete sentiments represented by three smiley faces

A better approach is to generate the smiley procedurally. To keep it simple, we will only change the background color of the smiley and the curvature of its mouth.

Generating Sentiment Smiley Images

First we need to create an empty canvas image on which we can draw the smiley.

def create_sentiment_image(positivity: float, image_size: tuple[int, int]) -> np.ndarray:
    """
    Generates a sentiment image based on the positivity score.
    This draws a smiley with its expression based on the positivity score.

    Args:
        positivity: A float representing the positivity score in the range [-1, 1].
        image_size: A tuple representing the size of the image (width, height).

    Returns:
        The generated sentiment image as a numpy array of shape (height, width, 4) in BGRA format.
    """
    width, height = image_size
    frame = np.zeros((height, width, 4), dtype=np.uint8)

    # TODO: draw smiley

    return frame

Our image should be transparent outside of the smiley face, so we need 4 color channels, the last one being the alpha channel. Since OpenCV images are represented as numpy arrays of 8-bit unsigned integers, we create the image with the np.uint8 data type. Remember that the arrays are stored y-first, so the height from image_size is passed first when creating the array.
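As a quick illustration of the y-first convention (purely for demonstration):

frame = create_sentiment_image(0.0, (800, 600))
print(frame.shape)  # (600, 800, 4): height first, then width, then 4 channels
print(frame.dtype)  # uint8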

We can define some sizes and colors for our smiley that will be useful while drawing.

    color_outline = (80,) * 3 + (255,)  # gray
    thickness_outline = min(image_size) // 30
    center = (width // 2, height // 2)
    radius = min(image_size) // 2 - thickness_outline

The background color of the smiley should be red for negative sentiment and green for positive sentiment. To get a smooth transition between the two, we can use the HSV color space and simply interpolate the hue between 0 (red) and 1/3 (green).

color_bgr = color_hsv_to_bgr(
    hue=(positivity + 1) / 6, # positivity [-1,1] -> hue [0,1/3]
    saturation=0.5,
    value=1,
)
color_bgra = color_bgr + (255,)
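The color_hsv_to_bgr helper used above is not shown in the post; here is a minimal sketch using the standard library's colorsys module, assuming HSV components in [0, 1] and an 8-bit BGR tuple as the result:

import colorsys


def color_hsv_to_bgr(hue: float, saturation: float, value: float) -> tuple[int, int, int]:
    """Convert HSV components in [0, 1] to an 8-bit BGR tuple as used by OpenCV."""
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return (int(b * 255), int(g * 255), int(r * 255))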

We need to make the fill color fully opaque by setting the alpha value of the fourth channel to 255. Now we can draw the smiley's face as a filled circle with a border.

cv2.circle(frame, center, radius, color_bgra, -1) # Fill
cv2.circle(frame, center, radius, color_outline, thickness_outline) # Border

So far so good; now we can add the eyes. We calculate offsets from the center to the left and right to place the two eyes symmetrically.

# calculate the position of the eyes
eye_radius = radius // 5
eye_offset_x = radius // 3
eye_offset_y = radius // 4
eye_left = (center[0] - eye_offset_x, center[1] - eye_offset_y)
eye_right = (center[0] + eye_offset_x, center[1] - eye_offset_y)

cv2.circle(frame, eye_left, eye_radius, color_outline, -1)
cv2.circle(frame, eye_right, eye_radius, color_outline, -1)

Now for the hard part: the mouth. Its shape will be a parabola scaled appropriately. We can simply scale the standard parabola y=x² by the positivity score.

Finally, the line is drawn using cv2.polylines, which requires a list of xy points. Using np.linspace we generate 100 points on the x-axis and evaluate the polynomial with np.polyval.

# mouth parameters
mouth_width = radius // 2
mouth_height = radius // 3
mouth_offset_y = radius // 3
mouth_center_y = center[1] + mouth_offset_y + positivity * mouth_height // 2
mouth_left = (center[0] - mouth_width, center[1] + mouth_offset_y)
mouth_right = (center[0] + mouth_width, center[1] + mouth_offset_y)

# calculate points of polynomial for the mouth
ply_points_t = np.linspace(-1, 1, 100)
ply_points_y = np.polyval([positivity, 0, 0], ply_points_t) # y=positivity*x²

ply_points = np.array(
    [
        (
            mouth_left[0] + i * (mouth_right[0] - mouth_left[0]) / 100,
            mouth_center_y - ply_points_y[i] * mouth_height,
        )
        for i in range(len(ply_points_y))
    ],
    dtype=np.int32,
)

# draw the mouth
cv2.polylines(
    frame,
    [ply_points],
    isClosed=False,
    color=color_outline,
    thickness=int(thickness_outline * 1.5),
)

Et voilà, we have our sentiment smileys!

For testing, I wrote a quick test case using pytest that generates smiley faces for a range of scores:

from pathlib import Path

import cv2
import numpy as np
import pytest

from sentiment_analysis.utils import create_sentiment_image

IMAGE_SIZE = (512, 512)


@pytest.mark.parametrize(
    "positivity",
    np.linspace(-1, 1, 5),
)
def test_sentiments(visual_output_path: Path, positivity: float) -> None:
    """
    Test the smiley face generation.
    """
    image = create_sentiment_image(positivity, IMAGE_SIZE)

    assert image.shape == (IMAGE_SIZE[1], IMAGE_SIZE[0], 4)

    # assert center pixel is opaque
    assert image[IMAGE_SIZE[1] // 2, IMAGE_SIZE[0] // 2, 3] == 255

    # save the image for visual inspection
    positivity_num_0_100 = int((positivity + 1) * 50)
    image_fn = f"smiley_{positivity_num_0_100}.png"
    cv2.imwrite(str(visual_output_path / image_fn), image)

Sentiment Analysis

To know how happy or sad our smiley should look, we first need to analyze the input text and compute a positivity score. This task is called sentiment analysis. We will use a pre-trained transformer model to predict classification scores for the classes Negative, Neutral, and Positive. We can then combine the confidence scores of these classes to compute a final sentiment score between -1 and +1.

Using the pipeline function from the transformers library, we can define a text-processing pipeline based on a pre-trained model from Hugging Face. With the top_k parameter, we can specify how many of the classification labels should be returned. Since we want all three classes, we set it to 3.

from transformers import pipeline

model_name = "cardiffnlp/twitter-roberta-base-sentiment"

sentiment_pipeline = pipeline(
    task="sentiment-analysis",
    model=model_name,
    top_k=3,
)

To run the sentiment analysis, we call the pipeline with a string argument. This returns a list of results with a single element, so we need to extract the first element.

results = self.sentiment_pipeline(text)

# [
#     [
#         {"label": "LABEL_2", "score": 0.5925878286361694},
#         {"label": "LABEL_1", "score": 0.3553399443626404},
#         {"label": "LABEL_0", "score": 0.05207228660583496},
#     ]
# ]

for label_score_dict in results[0]:
    label: str = label_score_dict["label"]
    score: float = label_score_dict["score"]

We can define a label mapping that determines how much each class score contributes to the final sentiment. Then we compute the positivity score as the weighted sum over all confidence scores.

label_mapping = {"LABEL_0": -1, "LABEL_1": 0, "LABEL_2": 1}

positivity = 0.0
for label_score_dict in results[0]:
    label: str = label_score_dict["label"]
    score: float = label_score_dict["score"]

    if label in label_mapping:
        positivity += label_mapping[label] * score
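For the example output shown earlier, this yields positivity ≈ 1 · 0.593 + 0 · 0.355 + (−1) · 0.052 ≈ 0.54, i.e. a clearly positive sentiment.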

To test our pipeline, we wrap it in a class and write a few tests using pytest. We verify that sentences with a positive sentiment yield a score greater than zero, while sentences with a negative sentiment yield a score below zero.
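The wrapper class itself is not shown in the post; a minimal sketch that is consistent with the snippets above and the tests below could look like this:

from transformers import pipeline


class SentimentAnalysisPipeline:
    """Sketch: maps transformer label scores to a single positivity score in [-1, 1]."""

    def __init__(self, model_name: str, label_mapping: dict[str, float]) -> None:
        self.label_mapping = label_mapping
        self.pipeline = pipeline(
            task="sentiment-analysis",
            model=model_name,
            top_k=3,
        )

    def run(self, text: str) -> float:
        # The pipeline returns a list with one result set per input text.
        results = self.pipeline(text)

        # Combine the class confidences into a single weighted score.
        positivity = 0.0
        for label_score_dict in results[0]:
            label: str = label_score_dict["label"]
            score: float = label_score_dict["score"]
            if label in self.label_mapping:
                positivity += self.label_mapping[label] * score
        return positivity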

import pytest

from sentiment_analysis.sentiment_pipeline import SentimentAnalysisPipeline


@pytest.fixture
def sentiment_pipeline() -> SentimentAnalysisPipeline:
    """
    Fixture to create a SentimentAnalysisPipeline instance.
    """
    return SentimentAnalysisPipeline(
        model_name="cardiffnlp/twitter-roberta-base-sentiment",
        label_mapping={"LABEL_0": -1.0, "LABEL_1": 0.0, "LABEL_2": 1.0},
    )


@pytest.mark.parametrize(
    "text_input",
    [
        "I love this!",
        "This is awesome!",
        "I am so happy!",
        "This is the best day ever!",
        "I am thrilled with the results!",
    ],
)
def test_sentiment_analysis_pipeline_positive(
    sentiment_pipeline: SentimentAnalysisPipeline, text_input: str
) -> None:
    """
    Test the sentiment analysis pipeline with a positive input.
    """
    assert (
        sentiment_pipeline.run(text_input) > 0.0
    ), "Expected positive sentiment score."


@pytest.mark.parametrize(
    "text_input",
    [
        "I hate this!",
        "This is terrible!",
        "I am so sad!",
        "This is the worst day ever!",
        "I am disappointed with the results!",
    ],
)
def test_sentiment_analysis_pipeline_negative(
    sentiment_pipeline: SentimentAnalysisPipeline, text_input: str
) -> None:
    """
    Test the sentiment analysis pipeline with a negative input.
    """
    assert (
        sentiment_pipeline.run(text_input) < 0.0
    ), "Expected negative sentiment score."

Putting It All Together

Now for the last missing piece: we simply need to connect the text box to our sentiment pipeline and update the displayed image accordingly. We can add a trace to the text variable, which will run the sentiment pipeline in a thread pool, preventing the UI from freezing while the pipeline is working.

from multiprocessing.pool import ThreadPool


class App(customtkinter.CTk):
    def __init__(self, sentiment_analysis_pipeline: SentimentAnalysisPipeline) -> None:
        super().__init__()
        self.sentiment_analysis_pipeline = sentiment_analysis_pipeline

        ...

        self.sentiment_image = None

        self.sentiment_text_var = customtkinter.StringVar(master=self, value="Love")
        self.sentiment_text_var.trace_add("write", lambda *_: self.on_sentiment_text_changed())

        ...

        self.update_sentiment_pool = ThreadPool(processes=1)

        self.on_sentiment_text_changed()

    def on_sentiment_text_changed(self) -> None:
        """
        Callback function to handle text changes in the textbox.
        """
        new_text = self.sentiment_text_var.get()

        self.update_sentiment_pool.apply_async(
            self._update_sentiment,
            (new_text,),
        )

    def _update_sentiment(self, new_text: str) -> None:
        """
        Update the sentiment image based on the new text input.
        This function is run in a separate thread to avoid blocking the main thread.

        Args:
            new_text: The new text input from the user.
        """
        positivity = self.sentiment_analysis_pipeline.run(new_text)

        self.sentiment_image = create_sentiment_image(
            positivity,
            self.image_display.display_size,
        )

        self.image_display.update_frame(self.sentiment_image)


def main() -> None:
    # Initialize the sentiment analysis pipeline
    sentiment_analysis = SentimentAnalysisPipeline(
        model_name="cardiffnlp/twitter-roberta-base-sentiment",
        label_mapping={"LABEL_0": -1, "LABEL_1": 0, "LABEL_2": 1},
    )

    app = App(sentiment_analysis)
    app.mainloop()
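To launch the app, call main() from the usual entry-point guard:

if __name__ == "__main__":
    main()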

And finally, the smiley is displayed in the app and updates live with the text input!



For the full implementation and many more details, check out the project repository on GitHub:


All visuals in this post were created by the author.
