Machine Learning

Creating a modern dashboard with Python and Taipy

This is the third article in a short series on developing data dashboards using the latest Python-based development tools: Streamlit, Gradio, and Taipy.

The source dataset for each dashboard is the same, but stored in different formats. As much as possible, I'll also try to make the actual dashboards look alike and have similar functionality.

I have already written the Streamlit and Gradio versions. The Streamlit version gets its source data from a PostgreSQL database. The Gradio and Taipy versions get their data from a CSV file. You can find links to those other articles at the end of this one.

What is Taipy?

Taipy is a Python-based web application framework that has gained prominence over the last few years. According to its website, Taipy is …

An open-source Python library for building production-ready front-ends & back-ends in no time. No knowledge of web development is required!

Taipy's intended audience is data scientists and machine-learning engineers who may not have much front-end development experience but usually know Python. Taipy makes it easier to build front-ends using only Python, so it suits them down to the ground.

You can start using Taipy for free. If you need to use it as part of a business, with dedicated support and scalability, paid plans are available on a monthly or annual basis. Their website provides full details, and I'll link to it at the end of this article.

Why use Taipy over Gradio or Streamlit?

As I showed in my two other articles, you can create very similar results using any of the three frameworks, which rather begs the question of why you would use one over another.

While Gradio excels at quickly creating ML demos and Streamlit shines at rapid, interactive data exploration, both can strain under the weight of your ambition as your application grows. Taipy enters the picture when your project needs to graduate from a simple prototype or short script into something robust, performant, and maintainable.

You should seriously consider choosing Taipy over Streamlit/Gradio if,

  • The performance of your application is important.
  • Your single script file is becoming unwieldy and complex.
  • You need to build multiple pages with complicated navigation.
  • Your application requires “what-if” scenario management or the execution of complicated pipelines.
  • You are building a production tool for business users, not just an internal dashboard.
  • You work in a team and need a clean, maintainable codebase.

In short, choose Gradio for demos. Choose Streamlit for quick, exploratory work. Choose Taipy if you are ready to build larger, scalable, production-grade business data applications.

What We're Building

We are building a data dashboard. Our source data will be a single CSV file containing 100,000 synthetic sales records.

The actual source of the data isn't that important. It could just as easily be stored as a Parquet file, in SQLite or PostgreSQL, or in any database you can connect to.

This is what our final dashboard will look like.

Image by Author

There are four main sections.

  • The top row allows the user to select a start and end date and/or a product category using date pickers and a drop-down list, respectively.
  • The second row, “Key Metrics,” provides a high-level summary of the selected data.
  • The Visualisations section allows the user to choose one of three graphs to display the underlying data.
  • The Raw Data section is exactly what it says: a tabular view of the selected data, effectively a window onto the underlying CSV file.

Using the dashboard is easy. Initially, statistics for the whole dataset are displayed. The user can then narrow the focus of the data using the three filter options at the top of the screen. The graphs, key metrics, and raw data table update dynamically to reflect the user's choices.

The Source Data

As mentioned, the dashboard's source data is contained in a single comma-separated values (CSV) file. It consists of 100,000 synthetic sales-related records. Here are the first ten records of the file.

+----------+------------+------------+----------------+------------+---------------+------------+----------+-------+--------------------+
| order_id | order_date | customer_id| customer_name  | product_id | product_names | categories | quantity | price | total              |
+----------+------------+------------+----------------+------------+---------------+------------+----------+-------+--------------------+
| 0        | 01/08/2022 | 245        | Customer_884   | 201        | Smartphone    | Electronics| 3        | 90.02 | 270.06             |
| 1        | 19/02/2022 | 701        | Customer_1672  | 205        | Printer       | Electronics| 6        | 12.74 | 76.44              |
| 2        | 01/01/2017 | 184        | Customer_21720 | 208        | Notebook      | Stationery | 8        | 48.35 | 386.8              |
| 3        | 09/03/2013 | 275        | Customer_23770 | 200        | Laptop        | Electronics| 3        | 74.85 | 224.55             |
| 4        | 23/04/2022 | 960        | Customer_23790 | 210        | Cabinet       | Office     | 6        | 53.77 | 322.62             |
| 5        | 10/07/2019 | 197        | Customer_25587 | 202        | Desk          | Office     | 3        | 47.17 | 141.51             |
| 6        | 12/11/2014 | 510        | Customer_6912  | 204        | Monitor       | Electronics| 5        | 22.5  | 112.5              |
| 7        | 12/07/2016 | 150        | Customer_17761 | 200        | Laptop        | Electronics| 9        | 49.33 | 443.97             |
| 8        | 12/11/2016 | 997        | Customer_23801 | 209        | Coffee Maker  | Electronics| 7        | 47.22 | 330.54             |
| 9        | 23/01/2017 | 151        | Customer_30325 | 207        | Pen           | Stationery | 6        | 3.5   | 21                 |
+----------+------------+------------+----------------+------------+---------------+------------+----------+-------+--------------------+
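As a quick sanity check on the schema, the total column is simply quantity * price. Here are a few rows from the table above, verified in plain Python:

```python
# Verify that total == quantity * price for sample records
# (quantity, price, total) taken from the first rows of the table above.
rows = [
    (3, 90.02, 270.06),  # order_id 0
    (6, 12.74, 76.44),   # order_id 1
    (8, 48.35, 386.8),   # order_id 2
]
for quantity, price, total in rows:
    assert round(quantity * price, 2) == total
print("totals consistent")
```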

And here is some Python code you can use to generate the dataset. It uses the NumPy and Polars libraries, so make sure both are installed before running the code.

# generate the 100000 record CSV file
#
import polars as pl
import numpy as np
from datetime import datetime, timedelta

def generate(nrows: int, filename: str):
    names = np.asarray(
        [
            "Laptop",
            "Smartphone",
            "Desk",
            "Chair",
            "Monitor",
            "Printer",
            "Paper",
            "Pen",
            "Notebook",
            "Coffee Maker",
            "Cabinet",
            "Plastic Cups",
        ]
    )
    categories = np.asarray(
        [
            "Electronics",
            "Electronics",
            "Office",
            "Office",
            "Electronics",
            "Electronics",
            "Stationery",
            "Stationery",
            "Stationery",
            "Electronics",
            "Office",
            "Sundry",
        ]
    )
    product_id = np.random.randint(len(names), size=nrows)
    quantity = np.random.randint(1, 11, size=nrows)
    price = np.random.randint(199, 10000, size=nrows) / 100
    # Generate random dates between 2010-01-01 and 2023-12-31
    start_date = datetime(2010, 1, 1)
    end_date = datetime(2023, 12, 31)
    date_range = (end_date - start_date).days
    # Create random dates as np.array and convert to string format
    order_dates = np.array([(start_date + timedelta(days=np.random.randint(0, date_range))).strftime('%d/%m/%Y') for _ in range(nrows)])  # day-first format, matching the sample data
    # Define columns
    columns = {
        "order_id": np.arange(nrows),
        "order_date": order_dates,
        "customer_id": np.random.randint(100, 1000, size=nrows),
        "customer_name": [f"Customer_{i}" for i in np.random.randint(2**15, size=nrows)],
        "product_id": product_id + 200,
        "product_names": names[product_id],
        "categories": categories[product_id],
        "quantity": quantity,
        "price": price,
        "total": price * quantity,
    }
    # Create Polars DataFrame and write to CSV with explicit delimiter
    df = pl.DataFrame(columns)
    df.write_csv(filename, separator=',', include_header=True)  # Ensure comma is used as the delimiter
# Generate 100,000 rows of data with random order_date and save to CSV
generate(100_000, "/mnt/d/sales_data/sales_data.csv")
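One detail worth highlighting in the generator: NumPy fancy indexing keeps each product name aligned with its category, because a single array of random indices selects from both lookup arrays. A minimal illustration (with a shortened lookup table for brevity):

```python
import numpy as np

# A single index array selects from both lookup arrays, so
# names[i] and categories[i] always stay paired.
names = np.asarray(["Laptop", "Desk", "Pen"])
categories = np.asarray(["Electronics", "Office", "Stationery"])

product_id = np.array([2, 0, 0, 1])  # e.g. from np.random.randint
print(names[product_id].tolist())       # ['Pen', 'Laptop', 'Laptop', 'Desk']
print(categories[product_id].tolist())  # ['Stationery', 'Electronics', 'Electronics', 'Office']
```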

Installing and Using Taipy

Installing Taipy is easy, but before installing it, it's best practice to set up a separate Python environment for all your work. I use Miniconda for this purpose, but feel free to use whichever method suits your workflow.

If you want to follow the Miniconda route and don't already have it, you'll first need to install Miniconda.

Once the environment is created, switch to it using the 'activate' command, and run 'pip install' to add our required Python libraries.

#create our test environment
(base) C:\Users\thoma>conda create -n taipy_dashboard python=3.12 -y

# Now activate it
(base) C:\Users\thoma>conda activate taipy_dashboard

# Install python libraries, etc ...
(taipy_dashboard) C:\Users\thoma>pip install taipy pandas

The Code

I'll break the code into sections and explain each one as we go.

Section 1

from taipy.gui import Gui
import pandas as pd
import datetime

# Load CSV data
csv_file_path = r"d:\sales_data\sales_data.csv"

try:
    raw_data = pd.read_csv(
        csv_file_path,
        parse_dates=["order_date"],
        dayfirst=True,
        low_memory=False  # Suppress dtype warning
    )
    if "revenue" not in raw_data.columns:
        raw_data["revenue"] = raw_data["quantity"] * raw_data["price"]
    print(f"Data loaded successfully: {raw_data.shape[0]} rows")
except Exception as e:
    print(f"Error loading CSV: {e}")
    raw_data = pd.DataFrame()

categories = ["All Categories"] + raw_data["categories"].dropna().unique().tolist()

# Define the visualization options as a proper list
chart_options = ["Revenue Over Time", "Revenue by Category", "Top Products"]

This section prepares the sales data for use in our Taipy visualisation app. It does the following,

  1. Imports the required external libraries and loads our source data from the CSV file.
  2. Calculates derived metrics such as revenue.
  3. Builds the category filter options.
  4. Defines the available chart options.
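One detail worth noting: the order dates in the file are day-first (e.g. 01/08/2022 is 1 August 2022), which is why the read_csv call passes dayfirst=True. A minimal illustration using only the standard library:

```python
from datetime import datetime

# "01/08/2022" is ambiguous: day-first it is 1 August,
# month-first it would be 8 January.
day_first = datetime.strptime("01/08/2022", "%d/%m/%Y")
month_first = datetime.strptime("01/08/2022", "%m/%d/%Y")
print(day_first.strftime("%B %d"))    # August 01
print(month_first.strftime("%B %d"))  # January 08
```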

Section 2

start_date = raw_data["order_date"].min().date() if not raw_data.empty else datetime.date(2020, 1, 1)
end_date = raw_data["order_date"].max().date() if not raw_data.empty else datetime.date(2023, 12, 31)
selected_category = "All Categories"
selected_tab = "Revenue Over Time"  # Set default selected tab
total_revenue = "$0.00"
total_orders = 0
avg_order_value = "$0.00"
top_category = "N/A"
revenue_data = pd.DataFrame(columns=["order_date", "revenue"])
category_data = pd.DataFrame(columns=["categories", "revenue"])
top_products_data = pd.DataFrame(columns=["product_names", "revenue"])

def apply_changes(state):
    filtered_data = raw_data[
        (raw_data["order_date"] >= pd.to_datetime(state.start_date)) &
        (raw_data["order_date"] <= pd.to_datetime(state.end_date))
    ]
    if state.selected_category != "All Categories":
        filtered_data = filtered_data[filtered_data["categories"] == state.selected_category]

    state.revenue_data = filtered_data.groupby("order_date")["revenue"].sum().reset_index()
    state.revenue_data.columns = ["order_date", "revenue"]
    print("Revenue Data:")
    print(state.revenue_data.head())

    state.category_data = filtered_data.groupby("categories")["revenue"].sum().reset_index()
    state.category_data.columns = ["categories", "revenue"]
    print("Category Data:")
    print(state.category_data.head())

    state.top_products_data = (
        filtered_data.groupby("product_names")["revenue"]
        .sum()
        .sort_values(ascending=False)
        .head(10)
        .reset_index()
    )
    state.top_products_data.columns = ["product_names", "revenue"]
    print("Top Products Data:")
    print(state.top_products_data.head())

    state.raw_data = filtered_data
    state.total_revenue = f"${filtered_data['revenue'].sum():,.2f}"
    state.total_orders = filtered_data["order_id"].nunique()
    state.avg_order_value = f"${filtered_data['revenue'].sum() / max(filtered_data['order_id'].nunique(), 1):,.2f}"
    state.top_category = (
        filtered_data.groupby("categories")["revenue"].sum().idxmax()
        if not filtered_data.empty else "N/A"
    )

def on_change(state, var_name, var_value):
    if var_name in {"start_date", "end_date", "selected_category", "selected_tab"}:
        print(f"State change detected: {var_name} = {var_value}")  # Debugging
        apply_changes(state)

def on_init(state):
    apply_changes(state)

import taipy.gui.builder as tgb

def get_partial_visibility(tab_name, selected_tab):
    return "block" if tab_name == selected_tab else "none"

Sets the default start and end dates and the initial category selection. The first chart displayed defaults to Revenue Over Time. Placeholder values for the key metrics are set up as follows: –

  • total_revenue. Set to "$0.00".
  • total_orders. Set to 0.
  • avg_order_value. Set to "$0.00".
  • top_category. Set to "N/A".

Empty DataFrames are also set up: –

  • revenue_data. Columns ["order_date", "revenue"].
  • category_data. Columns ["categories", "revenue"].
  • top_products_data. Columns ["product_names", "revenue"].

The apply_changes function is defined next. It is triggered to refresh the application state whenever the filters (such as the date range or category) change. It updates the following: –

  • The time-series revenue data.
  • The revenue distribution across categories.
  • The top 10 products by revenue.
  • The key metrics (total revenue, total orders, average order value, top category).

The on_change function fires whenever any of the user-editable components changes.

The on_init function fires when the app first starts up.

The get_partial_visibility function determines the CSS display property of UI components based on the selected tab.
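The heart of apply_changes is ordinary pandas: filter on the date range, then group and sum. Here is a stripped-down sketch of that logic on a toy DataFrame (the data is invented purely for illustration):

```python
import pandas as pd

# Toy data standing in for raw_data (invented for illustration).
df = pd.DataFrame({
    "order_date": pd.to_datetime(["2022-01-05", "2022-01-05", "2022-02-10"]),
    "categories": ["Electronics", "Office", "Electronics"],
    "revenue": [100.0, 40.0, 60.0],
})

# Same shape of logic as apply_changes: filter on the date range,
# then aggregate revenue per category and pick the top one.
mask = (df["order_date"] >= "2022-01-01") & (df["order_date"] <= "2022-01-31")
filtered = df[mask]
category_data = filtered.groupby("categories")["revenue"].sum().reset_index()
top_category = category_data.set_index("categories")["revenue"].idxmax()
print(category_data)
print(top_category)  # Electronics
```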

Section 3

with tgb.Page() as page:
    tgb.text("# Sales Performance Dashboard", mode="md")
    
    # Filters section
    with tgb.part(class_name="card"):
        with tgb.layout(columns="1 1 2"):  # Arrange elements in 3 columns
            with tgb.part():
                tgb.text("Filter From:")
                tgb.date("{start_date}")
            with tgb.part():
                tgb.text("To:")
                tgb.date("{end_date}")
            with tgb.part():
                tgb.text("Filter by Category:")
                tgb.selector(
                    value="{selected_category}",
                    lov=categories,
                    dropdown=True,
                    width="300px"
                )
   
    # Metrics section
    tgb.text("## Key Metrics", mode="md")
    with tgb.layout(columns="1 1 1 1"):
        with tgb.part(class_name="metric-card"):
            tgb.text("### Total Revenue", mode="md")
            tgb.text("{total_revenue}")
        with tgb.part(class_name="metric-card"):
            tgb.text("### Total Orders", mode="md")
            tgb.text("{total_orders}")
        with tgb.part(class_name="metric-card"):
            tgb.text("### Average Order Value", mode="md")
            tgb.text("{avg_order_value}")
        with tgb.part(class_name="metric-card"):
            tgb.text("### Top Category", mode="md")
            tgb.text("{top_category}")

    tgb.text("## Visualizations", mode="md")
    # Selector for visualizations with reduced width
    with tgb.part(style="width: 50%;"):  # Reduce width of the dropdown
        tgb.selector(
            value="{selected_tab}",
            lov=["Revenue Over Time", "Revenue by Category", "Top Products"],
            dropdown=True,
            width="360px",  # Reduce width of the dropdown
        )

    # Conditional rendering of charts based on selected_tab
    with tgb.part(render="{selected_tab == 'Revenue Over Time'}"):
        tgb.chart(
            data="{revenue_data}",
            x="order_date",
            y="revenue",
            type="line",
            title="Revenue Over Time",
        )

    with tgb.part(render="{selected_tab == 'Revenue by Category'}"):
        tgb.chart(
            data="{category_data}",
            x="categories",
            y="revenue",
            type="bar",
            title="Revenue by Category",
        )

    with tgb.part(render="{selected_tab == 'Top Products'}"):
        tgb.chart(
            data="{top_products_data}",
            x="product_names",
            y="revenue",
            type="bar",
            title="Top Products",
        )

    # Raw Data Table
    tgb.text("## Raw Data", mode="md")
    tgb.table(data="{raw_data}")

This part of the code defines the layout of the whole page and is divided into the few sections below.

Main page

tgb.Page() represents the main dashboard container, defining the structure and components of the page.

Dashboard structure

  • Displays the title, “Sales Performance Dashboard”, in Markdown (mode="md").

Filters section

  • Placed within a card-styled part that uses a three-column layout, tgb.layout(columns="1 1 2"), to arrange the filters.

Filter components

  1. Start date. A date picker, tgb.date("{start_date}"), for choosing the start of the date range.
  2. End date. A date picker, tgb.date("{end_date}"), for choosing the end of the date range.
  3. Category filter.
  • A drop-down tgb.selector for filtering the data by category.
  • Populated using categories, i.e. "All Categories" plus the categories found in the dataset.

Metrics section

Displays four summary statistics in metric cards arranged in a four-column layout:

  • Total Revenue. Shows the total_revenue value.
  • Total Orders. Displays the number of distinct orders (total_orders).
  • Average Order Value. Shows avg_order_value.
  • Top Category. Displays the name of the category generating the most revenue.

Visualisations section

  • A drop-down selector lets users switch between the different views (e.g. Revenue Over Time, Revenue by Category, Top Products).
  • Its width is reduced for a tidier UI.

Conditional rendering of the charts

  • Revenue Over Time. Displays a line chart of revenue_data showing the revenue trend over time.
  • Revenue by Category. Displays a bar chart of category_data visualising revenue by category.
  • Top Products. Displays a bar chart of top_products_data showing the top 10 products by revenue.

Raw data table

  • Displays the raw dataset in tabular format.
  • Updates dynamically when the user changes the filters (e.g. date range, category).

Section 4

Gui(page).run(
    title="Sales Dashboard",
    dark_mode=False,
    debug=True,
    port="auto",
    allow_unsafe_werkzeug=True,
    async_mode="threading"
)

This final, short section renders the page in the browser.

Running the Code

Gather up all the code sections above and save them to a file, e.g. taipy-app.py. Make sure your source data file is in the correct place and referenced correctly in your code. Then run the module like any other Python script by typing this at the command line.

python taipy-app.py

After a second or two, you should see a browser window open with your data app displayed.

Summary

In this article, I have tried to provide a comprehensive guide to building a sales performance dashboard with Taipy, using a CSV file as its source data.

I explained that Taipy is an open-source, Python-based framework that simplifies the creation of data-driven dashboards and applications. I also gave some pointers on why you might want to use Taipy over the other two popular frameworks, Gradio and Streamlit.

The dashboard I developed allows users to filter data by date range and product category, review key metrics such as the top-performing category and top-selling products, and explore the raw data in an interactive table.

The guide provides a comprehensive implementation, covering the whole process from creating sample data, through writing Python functions to query it, to building the user interface. This step-by-step approach demonstrates how to use Taipy's capabilities to create straightforward, dynamic dashboards, making it a good fit for data engineers and scientists who want to build effective data apps.

Although I used a CSV file as my data source, adapting the code to use another data source, such as a relational database like SQLite, should be straightforward.
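For instance, swapping the CSV load for SQLite only changes the loading step; everything downstream of raw_data stays the same. Here is a hedged sketch using an in-memory database (the table and column names are my assumptions, not part of the original app):

```python
import sqlite3
import pandas as pd

# Build a tiny in-memory sales table for demonstration; in practice you
# would connect to an existing file, e.g. sqlite3.connect("sales_data.db").
conn = sqlite3.connect(":memory:")
pd.DataFrame({
    "order_date": ["2022-08-01", "2022-02-19"],
    "quantity": [3, 6],
    "price": [90.02, 12.74],
}).to_sql("sales", conn, index=False)

# Drop-in replacement for pd.read_csv: the rest of the app is unchanged.
raw_data = pd.read_sql("SELECT * FROM sales", conn, parse_dates=["order_date"])
raw_data["revenue"] = raw_data["quantity"] * raw_data["price"]
print(raw_data.shape)  # (2, 4)
```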

For more information on Taipy, see their website.

To view my other TDS articles on developing data dashboards using Gradio and Streamlit, click the links below.

Gradio Dashboard

Streamlit Dashboard
