
A Coding Implementation of Advanced Pytest: Building Custom Plugins, Fixtures, Markers, and JSON Reporting

In this tutorial, we explore the advanced capabilities of Pytest, one of Python's most powerful testing frameworks. We build a complete mini-project from scratch that demonstrates fixtures, markers, plugins, parametrization, and custom configuration. We focus on showing how Pytest can evolve from a simple test runner into a robust, extensible testing system for real-world projects. By the end, we understand not only how to write tests, but how to control and customize Pytest to fit any project's requirements.

import sys, subprocess, os, textwrap, pathlib, json


subprocess.run([sys.executable, "-m", "pip", "install", "-q", "pytest>=8.0"], check=True)


root = pathlib.Path("pytest_advanced_tutorial").absolute()
if root.exists():
   import shutil; shutil.rmtree(root)
(root / "calc").mkdir(parents=True)
(root / "app").mkdir()
(root / "tests").mkdir()

We start by setting up our environment, importing the essential Python libraries for file handling and subprocess management. We install the latest Pytest version to ensure compatibility and create a clean project structure with separate folders for our core code, app modules, and tests. This gives us a solid foundation for organizing everything well before writing a single test.
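As a quick optional check (our addition, not part of the original script), we can confirm that the installation succeeded before going further:

# Optional sanity check: print the installed pytest version
subprocess.run([sys.executable, "-m", "pytest", "--version"], text=True)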

(root / "pytest.ini").write_text(textwrap.dedent("""
[pytest]
addopts = -q -ra --maxfail=1
testpaths = tests
markers =
   slow: slow tests (use --runslow to run)
   io: tests hitting the file system
   api: tests patching external calls
""").strip()+"n")


(root / "conftest.py").write_text(textwrap.dedent(r'''
import time, pytest

# Shared between hooks: TestReport objects have no .config attribute,
# so we keep the running tally at module level instead.
_SUMMARY = {"passed": 0, "failed": 0, "skipped": 0, "slow_ran": 0}

def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true", help="run slow tests")

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: slow tests")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return
    skip = pytest.mark.skip(reason="need --runslow to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip)

def pytest_runtest_logreport(report):
    if report.when == "call":
        if report.passed:
            _SUMMARY["passed"] += 1
            if "slow" in report.keywords:
                _SUMMARY["slow_ran"] += 1
        elif report.failed:
            _SUMMARY["failed"] += 1
        elif report.skipped:
            _SUMMARY["skipped"] += 1
    elif report.when == "setup" and report.skipped:
        # marker-based skips are reported in the setup phase, not the call phase
        _SUMMARY["skipped"] += 1

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    s = _SUMMARY
    terminalreporter.write_sep("=", "SESSION SUMMARY (custom plugin)")
    terminalreporter.write_line(f"Passed: {s['passed']} | Failed: {s['failed']} | Skipped: {s['skipped']}")
    terminalreporter.write_line(f"Slow tests run: {s['slow_ran']}")
    terminalreporter.write_line("PyTest finished successfully ✅" if s["failed"] == 0 else "Some tests failed ❌")


@pytest.fixture(scope="session")
def settings(): return {"env":"prod","max_retries":2}
@pytest.fixture(scope="function")
def event_log(): logs=[]; yield logs; print("\nEVENT LOG:", logs)
@pytest.fixture
def temp_json_file(tmp_path):
   p=tmp_path/"data.json"; p.write_text('{"msg":"hi"}'); return p
@pytest.fixture
def fake_clock(monkeypatch):
   t={"now":1000.0}; monkeypatch.setattr(time,"time",lambda: t["now"]); return t
'''))

We now create our Pytest configuration and plugin files. In pytest.ini, we define markers, default options, and test paths to control how tests are discovered and run. In conftest.py, we implement a custom plugin that counts passed, failed, and skipped tests, adds a --runslow command-line option, and provides reusable fixtures for common test resources. This demonstrates how to extend Pytest's core behavior while keeping our setup clean and tidy.
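As an optional aside (not part of the main flow), we can ask Pytest to list the markers it now recognizes, confirming that the pytest.ini and conftest.py registrations are picked up:

# Optional: list the markers registered for this project (slow, io, api, ...)
subprocess.run([sys.executable, "-m", "pytest", str(root), "--markers"], text=True)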

(root/"calc"/"__init__.py").write_text(textwrap.dedent('''
from .vector import Vector
def add(a,b): return a+b
def div(a,b):
   if b==0: raise ZeroDivisionError("division by zero")
   return a/b
def moving_avg(xs,k):
   if k<=0 or k>len(xs): raise ValueError("bad window")
   out=[]; s=sum(xs[:k]); out.append(s/k)
   for i in range(k,len(xs)):
       s+=xs[i]-xs[i-k]; out.append(s/k)
   return out
'''))


(root/"calc"/"vector.py").write_text(textwrap.dedent('''
class Vector:
   __slots__=("x","y","z")
   def __init__(self,x=0,y=0,z=0): self.x,self.y,self.z=float(x),float(y),float(z)
   def __add__(self,o): return Vector(self.x+o.x,self.y+o.y,self.z+o.z)
   def __sub__(self,o): return Vector(self.x-o.x,self.y-o.y,self.z-o.z)
   def __mul__(self,s): return Vector(self.x*s,self.y*s,self.z*s)
   __rmul__=__mul__
   def norm(self): return (self.x**2+self.y**2+self.z**2)**0.5
   def __eq__(self,o): return abs(self.x-o.x)<1e-9 and abs(self.y-o.y)<1e-9 and abs(self.z-o.z)<1e-9
   def __repr__(self): return f"Vector({self.x:.2f},{self.y:.2f},{self.z:.2f})"
'''))

We now develop the core computational modules of our project. The calc package exposes simple numeric utilities, including addition, division with error handling, and a moving average, to exercise logic-heavy tests. Alongside it, we build a Vector class that supports arithmetic operations, approximate equality, and a norm computation, a complete example for testing operator overloading and comparisons with Pytest.
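Before wiring these modules into tests, a small optional sanity check (our addition) shows what the helpers return; we put the project root on sys.path so the freshly written packages are importable:

# Optional sanity check of the calc package written above
sys.path.insert(0, str(root))
from calc import moving_avg
from calc.vector import Vector
print(moving_avg([1, 2, 3, 4, 5], 3))     # [2.0, 3.0, 4.0]
print(Vector(1, 2, 3) + Vector(4, 5, 6))  # Vector(5.00,7.00,9.00)
print(2 * Vector(1, 0, 0))                # __rmul__ -> Vector(2.00,0.00,0.00)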

(root/"app"/"io_utils.py").write_text(textwrap.dedent('''
import json, pathlib, time
def save_json(path,obj):
   path=pathlib.Path(path); path.write_text(json.dumps(obj)); return path
def load_json(path): return json.loads(pathlib.Path(path).read_text())
def timed_operation(fn,*a,**kw):
   t0=time.time(); out=fn(*a,**kw); t1=time.time(); return out,t1-t0
'''))
(root/"app"/"api.py").write_text(textwrap.dedent('''
import os, time, random
def fetch_username(uid):
   if os.environ.get("API_MODE")=="offline": return f"cached_{uid}"
   time.sleep(0.001); return f"user_{uid}_{random.randint(100,999)}"
'''))


(root/"tests"/"test_calc.py").write_text(textwrap.dedent('''
import pytest, math
from calc import add,div,moving_avg
from calc.vector import Vector
@pytest.mark.parametrize("a,b,exp",[(1,2,3),(0,0,0),(-1,1,0)])
def test_add(a,b,exp): assert add(a,b)==exp
@pytest.mark.parametrize("a,b,exp",[(6,3,2),(8,2,4)])
def test_div(a,b,exp): assert div(a,b)==exp
@pytest.mark.xfail(raises=ZeroDivisionError)
def test_div_zero(): div(1,0)
def test_avg(): assert moving_avg([1,2,3,4,5],3)==[2,3,4]
def test_vector_ops(): v=Vector(1,2,3)+Vector(4,5,6); assert v==Vector(5,7,9)
'''))


(root/"tests"/"test_io_api.py").write_text(textwrap.dedent('''
import pytest, os
from app.io_utils import save_json,load_json,timed_operation
from app.api import fetch_username
@pytest.mark.io
def test_io(temp_json_file,tmp_path):
   d={"x":5}; p=tmp_path/"a.json"; save_json(p,d); assert load_json(p)==d
   assert load_json(temp_json_file)=={"msg":"hi"}
def test_timed(capsys):
   val,dt=timed_operation(lambda x:x*3,7); print("dt=",dt); out=capsys.readouterr().out
   assert "dt=" in out and val==21
@pytest.mark.api
def test_api(monkeypatch):
   monkeypatch.setenv("API_MODE","offline")
   assert fetch_username(9)=="cached_9"
'''))


(root/"tests"/"test_slow.py").write_text(textwrap.dedent('''
import time, pytest
@pytest.mark.slow
def test_slow(event_log,fake_clock):
   event_log.append(f"start@{fake_clock['now']}")
   fake_clock["now"]+=3.0
   event_log.append(f"end@{fake_clock['now']}")
   assert len(event_log)==2
'''))

We add utility functions for JSON I/O and a fake API that mimics real behavior without calling external services. We then write focused tests that use parametrization, xfail, markers, tmp_path, capsys, and monkeypatch to verify both positive and negative cases. We also include a slow test wired to our event_log and fake_clock fixtures to demonstrate controlled, deterministic timing.
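Because the I/O and API tests carry markers, we can also select them on demand. The two runs below are hypothetical extras (not required by the tutorial) that reuse the same subprocess pattern as the main runs:

# Hypothetical marker-based selection: io-only, then api-only
subprocess.run([sys.executable, "-m", "pytest", str(root), "-m", "io"], text=True)
subprocess.run([sys.executable, "-m", "pytest", str(root), "-m", "api"], text=True)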

print("📦 Project created at:", root)
print("n▶️ RUN #1 (default, skips @slow)n")
r1=subprocess.run([sys.executable,"-m","pytest",str(root)],text=True)
print("n▶️ RUN #2 (--runslow)n")
r2=subprocess.run([sys.executable,"-m","pytest",str(root),"--runslow"],text=True)


summary_file=root/"summary.json"
summary={
   "total_tests":sum("test_" in str(p) for p in root.rglob("test_*.py")),
   "runs": ["default","--runslow"],
   "results": ["success" if r1.returncode==0 else "fail",
               "success" if r2.returncode==0 else "fail"],
   "contains_slow_tests": True,
   "example_event_log":["[email protected]","[email protected]"]
}
summary_file.write_text(json.dumps(summary,indent=2))
print("n📊 FINAL SUMMARY")
print(json.dumps(summary,indent=2))
print("n✅ Tutorial completed — all tests & summary generated successfully.")

After the runs complete, we generate a JSON summary containing the result of each run, the number of test files, and a sample event log. This final summary gives us a clear overview of the whole testing project and confirms that everything works correctly from start to finish.
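As a small hypothetical extension, a CI step could consume summary.json and gate the pipeline on it; the failure policy below is our own assumption, not part of the original script:

# Hypothetical CI gate: fail the pipeline if any recorded run failed
ci_summary = json.loads(summary_file.read_text())
if "fail" in ci_summary["results"]:
    raise SystemExit("❌ CI gate: at least one pytest run failed")
print("✅ CI gate: both pytest runs succeeded")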

In conclusion, we see how Pytest helps us test smarter, not harder. We designed a custom plugin that reports results, used fixtures to manage resources, and controlled slow tests with a custom command-line option, all while keeping the workflow clean and maintainable. We finished by generating a detailed JSON summary, showing how naturally Pytest fits into modern CI and analytics pipelines. With this foundation, we are now confident extending Pytest further, integrating coverage, monitoring, or larger and more demanding test suites.

