Project Setup for Python 2026: uv + Ruff + Ty + Polars

# Introduction
Setting up a Python project used to mean making a dozen small decisions before you wrote your first useful line of code. Which environment manager? Which dependency tool? Which formatter? Which linter? Which test framework? And if your project touched data, did you start with pandas, DuckDB, or something newer?
By 2026, that setup has become a lot simpler.
For most new projects, a clean default stack is:
- uv for Python installation, environments, dependency management, locking, and command execution.
- Ruff for linting and formatting.
- Ty for type checking.
- Polars for dataframe work.
This stack is fast, modern, and remarkably cohesive. Three of the four tools (uv, Ruff, and Ty) come from the same company, Astral, which means they integrate seamlessly with each other and with your pyproject.toml.
# Why This Stack Works
The old default setup usually looked like this:
pyenv + pip + venv + pip-tools or Poetry + Black + isort + Flake8 + mypy + pandas
This worked, but it created tool sprawl, inconsistencies, and slow feedback. You had different tools for Python versions, dependency locking, formatting, import sorting, linting, and type checking. Every new project started with a combinatorial explosion of choices. The 2026 default stack collapses all of that. The result is fewer tools, fewer configuration files, and less friction when onboarding contributors or setting up continuous integration (CI). Before jumping into the setup, let's take a quick look at what each tool in the 2026 stack does:
- uv: This is the foundation of your project setup. It creates projects, manages Python versions, handles dependencies, and runs your code. Instead of manually setting up virtual environments and installing packages, uv does the heavy lifting. It keeps your environment consistent using a lock file and makes sure everything is in sync before running any command.
- Ruff: This is an all-in-one tool for code quality. It is extremely fast, checks for problems, fixes many of them automatically, and even formats your code. It replaces tools like Black, isort, and Flake8.
- Ty: This is a new type checker. It catches errors by checking your code's types and integrates with editors. While newer than tools like mypy or Pyright, it is optimized for modern workflows.
- Polars: This is a modern library for working with dataframes. It focuses on efficient data processing using lazy execution, which means it optimizes queries before running them. This makes it faster and more memory-efficient than pandas, especially for large data operations.
# Prerequisites
Setup is easy. Here are the few things you need to get started:
- Terminal: macOS Terminal, Windows PowerShell, or any Linux shell.
- Internet connection: Required for the one-time uv installation and for package downloads.
- Code editor: VS Code is recommended because it works well with Ruff and Ty, but any editor is fine.
- Git: Required for version control; note that uv initializes a Git repository by default.
That's it. You do not need Python pre-installed. You do not need pip, venv, pyenv, or conda. uv handles installing Python and managing your environment.
# Step 1: Installing uv
uv provides a standalone installer that runs on macOS, Linux, and Windows without requiring Python or Rust to be present on your device.
macOS and Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows PowerShell:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
After installation, restart your terminal and verify the install:
uv --version
Output:
uv 0.8.0 (Homebrew 2025-07-17)
This single binary now replaces pyenv, pip, venv, pip-tools, and the Poetry project management layer.
# Step 2: Creating a New Project
Navigate to your projects directory and initialize a new one:
uv init my-project
cd my-project
uv creates a clean initial structure:
my-project/
├── .python-version
├── pyproject.toml
├── README.md
└── main.py
Switch it to a src/ layout, which improves imports, packaging, test discovery, and type checker configuration:
mkdir -p src/my_project tests data/raw data/processed
mv main.py src/my_project/main.py
touch src/my_project/__init__.py tests/test_main.py
Your layout should now look like this:
my-project/
├── .python-version
├── README.md
├── pyproject.toml
├── uv.lock
├── src/
│   └── my_project/
│       ├── __init__.py
│       └── main.py
├── tests/
│   └── test_main.py
└── data/
    ├── raw/
    └── processed/
If you need a specific Python version (e.g., 3.12), uv can install it and pin it:
uv python install 3.12
uv python pin 3.12
The pin command writes the version to .python-version, ensuring that all team members use the same interpreter.
# Step 3: Adding Dependencies
Adding a dependency is a single command that resolves, installs, and locks in one step:
uv add polars
uv automatically creates a virtual environment (.venv/) if one is not present, resolves the dependency tree, installs the packages, and updates uv.lock with exact, pinned versions.
For tools that are only needed during development, use the --dev flag:
uv add --dev ruff ty pytest
This places them in a separate [dependency-groups] section in pyproject.toml, keeping production dependencies lean. You never have to run source .venv/bin/activate; when you use uv run, the correct environment is activated automatically.
# Step 4: Setting Up Ruff (Linting and Formatting)
Ruff is configured right inside your pyproject.toml. Add the following sections:
[tool.ruff]
line-length = 100
target-version = "py312"
[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]
[tool.ruff.format]
docstring-code-format = true
quote-style = "double"
A line length of 100 characters is a good compromise for modern screens. The rule groups flake8-bugbear (B), isort (I), and pyupgrade (UP) add real value without generating much new noise.
Running Ruff:
# Lint your code
uv run ruff check .
# Auto-fix issues where possible
uv run ruff check --fix .
# Format your code
uv run ruff format .
Notice the pattern: uv run <tool>. You don't install tools globally or activate environments manually.
# Step 5: Setting Up Ty for Type Checking
Ty is also configured in pyproject.toml. Add these sections:
[tool.ty.environment]
root = ["./src"]
[tool.ty.rules]
all = "warn"
[[tool.ty.overrides]]
include = ["src/**"]
[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"
[tool.ty.terminal]
error-on-warning = false
output-format = "full"
This configuration starts Ty in warn mode so you can discover issues gradually: fix the obvious problems first, then promote rules to errors over time. Restricting the override to src/** keeps directories like data/** out of scope and prevents type checker noise from non-code files.
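As a sketch of what the escalated rule catches, consider this invented function: when `flag` is false, `result` is never bound before use, which is the class of bug the possibly-unresolved-reference rule reports at error severity under the configuration above.

```python
# Hypothetical bug that a possibly-unresolved-reference check flags:
def parse_score(flag: bool) -> int:
    if flag:
        result = 42
    # `result` may be unbound here when flag is False -> flagged as an error.
    return result


# A fixed version that binds the name on every code path:
def parse_score_fixed(flag: bool) -> int:
    result = 42 if flag else 0
    return result
```

At runtime the buggy version raises UnboundLocalError only on the bad path; the type checker reports it before the code ever runs.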
# Step 6: Configuring pytest
Add a pytest section:
[tool.pytest.ini_options]
testpaths = ["tests"]
Run the test suite with:
uv run pytest
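To give pytest something to collect, here is a minimal sketch of tests/test_main.py. The helper duplicates the per-user revenue arithmetic from the walkthrough so the file stays self-contained; in a real project you would import from my_project.main instead:

```python
# tests/test_main.py -- minimal sketch; plain asserts are all pytest needs.


def revenue_per_user(revenue: float, users: int) -> float:
    """Mirror of the rpu calculation used in the sample analysis."""
    return round(revenue / users, 2)


def test_revenue_per_user() -> None:
    assert revenue_per_user(12000, 120) == 100.0
    assert revenue_per_user(3500, 70) == 50.0
```

pytest discovers any `test_*` function in the `tests/` directory configured above, so no extra registration is needed.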
# Step 7: Reviewing the Complete pyproject.toml
Here's what your final configuration looks like with everything wired up: one file, all tools configured, no scattered configuration files:
[project]
name = "my-project"
version = "0.1.0"
description = "Modern Python project with uv, Ruff, Ty, and Polars"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"polars>=1.39.3",
]
[dependency-groups]
dev = [
"pytest>=9.0.2",
"ruff>=0.15.8",
"ty>=0.0.26",
]
[tool.ruff]
line-length = 100
target-version = "py312"
[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]
[tool.ruff.format]
docstring-code-format = true
quote-style = "double"
[tool.ty.environment]
root = ["./src"]
[tool.ty.rules]
all = "warn"
[[tool.ty.overrides]]
include = ["src/**"]
[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"
[tool.ty.terminal]
error-on-warning = false
output-format = "full"
[tool.pytest.ini_options]
testpaths = ["tests"]
# Step 8: Writing Code with Polars
Replace the contents of src/my_project/main.py with code that exercises the Polars side of the stack:
"""Sample data analysis with Polars."""

import polars as pl


def build_report(path: str) -> pl.DataFrame:
    """Build a revenue summary from raw data using the lazy API."""
    q = (
        pl.scan_csv(path)
        .filter(pl.col("status") == "active")
        .with_columns(
            (pl.col("revenue") / pl.col("users")).alias("rpu")
        )
        .group_by("segment")
        .agg(
            pl.len().alias("rows"),
            pl.col("revenue").sum().alias("revenue"),
            pl.col("rpu").mean().alias("avg_rpu"),
        )
        .sort("revenue", descending=True)
    )
    return q.collect()


def main() -> None:
    """Entry point with sample in-memory data."""
    df = pl.DataFrame(
        {
            "segment": ["Enterprise", "SMB", "Enterprise", "SMB", "Enterprise"],
            "status": ["active", "active", "churned", "active", "active"],
            "revenue": [12000, 3500, 8000, 4200, 15000],
            "users": [120, 70, 80, 84, 150],
        }
    )
    summary = (
        df.lazy()
        .filter(pl.col("status") == "active")
        .with_columns(
            (pl.col("revenue") / pl.col("users")).round(2).alias("rpu")
        )
        .group_by("segment")
        .agg(
            pl.len().alias("rows"),
            pl.col("revenue").sum().alias("total_revenue"),
            pl.col("rpu").mean().round(2).alias("avg_rpu"),
        )
        .sort("total_revenue", descending=True)
        .collect()
    )
    print("Revenue Summary:")
    print(summary)


if __name__ == "__main__":
    main()
Before running it, you need a build system in pyproject.toml so that uv installs your project as a package. We will use Hatchling:
cat >> pyproject.toml << 'EOF'
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["src/my_project"]
EOF
Then sync and run:
uv sync
uv run python -m my_project.main
You should see a formatted Polars table:
Revenue Summary:
shape: (2, 4)
┌────────────┬──────┬───────────────┬─────────┐
│ segment    ┆ rows ┆ total_revenue ┆ avg_rpu │
│ ---        ┆ ---  ┆ ---           ┆ ---     │
│ str        ┆ u32  ┆ i64           ┆ f64     │
╞════════════╪══════╪═══════════════╪═════════╡
│ Enterprise ┆ 2    ┆ 27000         ┆ 100.0   │
│ SMB        ┆ 2    ┆ 7700          ┆ 50.0    │
└────────────┴──────┴───────────────┴─────────┘
# Managing the Daily Workflow
Once the project is set up, the daily loop is straightforward:
# Pull latest, sync dependencies
git pull
uv sync
# Write code...
# Before committing: lint, format, type-check, test
uv run ruff check --fix .
uv run ruff format .
uv run ty check
uv run pytest
# Commit
git add .
git commit -m "feat: add revenue report module"
# Changing the Way You Write Python with Polars
The biggest mindset shift in this stack is on the data side. With Polars, your defaults should be:
- Expressions over row-wise Python logic. Polars expressions let the engine parallelize and optimize work. Avoid user-defined functions (UDFs) unless there is no native alternative, as UDFs are much slower.
- Lazy execution over eager loading. Use scan_csv() instead of read_csv(). This creates a LazyFrame that builds a query plan, allowing the optimizer to push filters down and drop unused columns.
- Parquet-first workflows over heavy CSV pipelines. A good pattern for internal data preparation is to scan the raw CSV once and write the cleaned result to Parquet.
# Where This Setup Is Not the Best Fit
You may want to make a different choice if:
- Your team has a mature Poetry workflow that works well.
- Your codebase depends heavily on pandas-specific APIs or ecosystem libraries.
- Your organization is standardized on Pyright.
- You are working in a legacy environment where changing tools would cause more disruption than value.
# Expert Tips
- Never activate virtual environments manually. Use uv run to ensure you are always using the correct environment.
- Always commit uv.lock to version control. This ensures the project runs the same on every machine.
- Use --frozen in CI. This installs dependencies exactly as pinned in the lock file for fast, reliable builds.
- Use uvx for one-off tools. It runs tools without installing them in your project.
- Use Ruff's --fix flag freely. It can automatically fix unused imports, outdated syntax, and more.
- Choose the lazy API by default. Use scan_csv() and call .collect() only at the end.
- Centralize configuration. Use pyproject.toml as the single source of truth for all tools.
# Concluding Thoughts
The 2026 Python stack reduces setup effort and promotes better practices: locked environments, a single configuration file, faster feedback, and more efficient data pipelines. Try it on your next project; once you experience the speed and reproducibility, you will understand why developers are switching.
Kanwal Mehreen is a machine learning engineer and technical writer with a deep passion for data science and the intersection of AI and medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She has also been recognized as a Teradata Diversity in Tech Scholar, a Mitacs Globalink Research Scholar, and a Harvard WeCode Scholar. Kanwal is a passionate advocate for change, having founded FEMCodes to empower women in STEM fields.



