A hands-on guide to the new structured outputs feature

Anthropic has announced structured outputs for its top models in its API, a new feature designed to ensure that the output produced by the model conforms exactly to the JSON schemas provided by developers.
This solves a problem many developers face: when a downstream program or process consumes LLM output, it is important for that program to "know" what to expect as its input so that it can process it accordingly.
Similarly, when displaying model output to the user, you want this to be in the same format each time.
Until now, it has been a pain to ensure consistent output formats from Anthropic models. However, Anthropic has now solved this problem for its latest models. From their announcement (linked at the end of the article), they say,
The Claude Developer Platform now supports structured outputs for Claude Sonnet 4.5 and Opus 4.1. Available in public beta, this feature ensures API responses always conform to your JSON schemas or tool definitions.
Now, one thing to remember before we look at some example code: Anthropic guarantees that the output of the model will follow a specified format, not that any output will be 100% accurate. Models can and do hallucinate from time to time.
So you might still get wrong answers, just in the right format!
Setting up our Dev environment
Before we look at some sample Python code, it is best to create a separate development environment where you can install any required software and test the code. Anything you do in this environment is isolated and won't affect your other projects.
I'll be using miniconda for this, but you can use whatever method you're most familiar with.
If you want to follow along and don't already have Miniconda, you'll need to install it first. You can find it using this link:
To follow along with my examples, you will also need an Anthropic API key and some credit in your account. For reference, it cost me about 12 cents to run the code in this article. If you already have an Anthropic account, you can get an API key using the Anthropic console at
1 / Create our new Dev environment and install the required libraries
I'm running on WSL2 Ubuntu for Windows.
(base) $ conda create -n anth_test python=3.13 -y
(base) $ conda activate anth_test
(anth_test) $ pip install anthropic beautifulsoup4 requests
(anth_test) $ pip install httpx jupyter
2 / Start Jupyter
Now type jupyter notebook into your command prompt, and you should see a Jupyter notebook open in your browser. If that doesn't happen automatically, you will likely see a screenful of information after running the command. Near the bottom, you will find a URL to copy and paste into your browser. It will look something like this: http://127.0.0.1:8888/tree?token=<your-token>
Code examples
For our two coding examples, we'll use the new output_format parameter available in the beta API. When specifying a structured output, we can use two different styles.
1. A raw JSON Schema.
As the name suggests, the structure is defined by a JSON Schema block that is passed directly in the output_format definition.
2. A Pydantic model.
This uses a standard Python class, a Pydantic BaseModel, that specifies the data we want the model to output. It's a much more Pythonic way of defining structure than a raw JSON schema.
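To illustrate the Pydantic style, here is a minimal sketch (the ScientistSummary class and its fields are hypothetical, chosen just for this illustration). In Pydantic v2, setting extra="forbid" makes model_json_schema() emit "additionalProperties": false, which the structured outputs beta expects.

```python
from pydantic import BaseModel, ConfigDict, Field

# Hypothetical model for illustration only; the article's real
# schemas are defined later.
class ScientistSummary(BaseModel):
    model_config = ConfigDict(extra="forbid")  # -> "additionalProperties": false
    name: str = Field(description="The name of the scientist")
    prize: int = Field(description="Year of Nobel Prize, 0 if none")

# Pydantic generates the JSON Schema for us.
schema = ScientistSummary.model_json_schema()
print(schema["required"])              # ['name', 'prize']
print(schema["additionalProperties"])  # False
```

The generated dictionary can be passed straight to the API's output_format parameter, which is exactly what the second example later in this article does.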
Example code 1 – Document Summary
This is useful if you have a bunch of different documents that you want to summarize, but you want the summaries to have the same structure. In this example, we will analyze the Wikipedia entries of some famous scientists and retrieve some key facts about them in a very structured way.
In our summary, we want to extract the following structure for each scientist:
- The name of the scientist
- When and where they were born
- Their main claim to fame
- The year they won the Nobel Prize
- When and where they died
NOTE: Most of the text on Wikipedia, except for quotations, is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA) and the GNU Free Documentation License (GFDL). In short, you are free to:
Share – copy and redistribute the material in any medium or format
Adapt – remix, transform, and build upon the material
for any purpose, even commercially.
Let's break the code down into manageable chunks, each with a description.
First, we import the required external libraries and set up a connection to Anthropic using our API key.
import anthropic
import httpx
import requests
import json
import os
from bs4 import BeautifulSoup
http_client = httpx.Client()
api_key = 'YOUR_API_KEY'
client = anthropic.Anthropic(
    api_key=api_key,
    http_client=http_client
)
This function scrapes the Wikipedia articles for us.
def get_article_content(url):
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
        response = requests.get(url, headers=headers)
        soup = BeautifulSoup(response.content, "html.parser")
        article = soup.find("div", class_="mw-body-content")
        if article:
            content = "\n".join(p.text for p in article.find_all("p"))
            return content[:15000]
        else:
            return ""
    except Exception as e:
        print(f"Error scraping {url}: {e}")
        return ""
Next, we define our JSON Schema, which specifies the exact format of the model's output.
summary_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The name of the Scientist"},
        "born": {"type": "string", "description": "When and where the scientist was born"},
        "fame": {"type": "string", "description": "A summary of what their main claim to fame is"},
        "prize": {"type": "integer", "description": "The year they won the Nobel Prize. 0 if none."},
        "death": {"type": "string", "description": "When and where they died. 'Still alive' if living."}
    },
    "required": ["name", "born", "fame", "prize", "death"],
    "additionalProperties": False
}
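To make the guarantee concrete, here is a hypothetical example of the kind of schema-conforming string the API would return (the values are illustrative, not taken from the article's run). Because every field is marked required, downstream code can index the keys directly without defensive checks.

```python
import json

# An illustrative response body of the shape summary_schema guarantees.
raw = ('{"name": "Marie Curie", "born": "7 November 1867 in Warsaw", '
       '"fame": "Pioneering research on radioactivity", "prize": 1903, '
       '"death": "4 July 1934 in Passy, France"}')

summary = json.loads(raw)

# All required keys are present and correctly typed, so no try/except
# or .get() fallbacks are needed here.
required = ["name", "born", "fame", "prize", "death"]
assert all(key in summary for key in required)
assert isinstance(summary["prize"], int)
print(summary["name"], summary["prize"])  # Marie Curie 1903
```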
This function acts as the bridge between our Python script and the Anthropic API. Its main goal is to take unstructured text (the article) and force the model to return a structured JSON object containing specific fields, such as the scientist's name, date of birth, and Nobel Prize year.
The function calls client.messages.create to send the request to the model. It sets the temperature to 0.2, which lowers the randomness of the model to keep the output factual and consistent. By passing the anthropic-beta header with the value structured-outputs-2025-11-13, the code tells the API to enable structured outputs for this particular request, forcing it to generate valid JSON.
Because the output_format parameter is used, the model returns a raw string that is guaranteed to be valid JSON. The line json.loads(response.content[0].text) parses this string into a native Python dictionary, making the data ready for use.
def get_article_summary(text: str):
    if not text:
        return None
    try:
        response = client.messages.create(
            model="claude-sonnet-4-5",  # Use the latest available model
            max_tokens=1024,
            temperature=0.2,
            messages=[
                {"role": "user", "content": f"Summarize this article:\n\n{text}"}
            ],
            # Enable the beta feature
            extra_headers={
                "anthropic-beta": "structured-outputs-2025-11-13"
            },
            # Pass the new parameter here
            extra_body={
                "output_format": {
                    "type": "json_schema",
                    "schema": summary_schema
                }
            }
        )
        # The API returns the JSON directly in the text content
        return json.loads(response.content[0].text)
    except anthropic.BadRequestError as e:
        print(f"API Error: {e}")
        return None
    except Exception as e:
        print(f"Error: {e}")
        return None
This is where we pull everything together. We define the various Wikipedia URLs we want to scrape, pass their content to the model for processing, and then display the final results.
urls = [
    "https://en.wikipedia.org/wiki/Albert_Einstein",
    "https://en.wikipedia.org/wiki/Richard_Feynman",
    "https://en.wikipedia.org/wiki/James_Clerk_Maxwell",
    "https://en.wikipedia.org/wiki/Alan_Guth"
]
print("Scraping and analyzing articles...")

for i, url in enumerate(urls):
    print(f"\n--- Processing Article {i+1} ---")
    content = get_article_content(url)
    if content:
        summary = get_article_summary(content)
        if summary:
            print(f"Scientist: {summary.get('name')}")
            print(f"Born: {summary.get('born')}")
            print(f"Fame: {summary.get('fame')}")
            print(f"Nobel: {summary.get('prize')}")
            print(f"Died: {summary.get('death')}")
        else:
            print("Failed to generate summary.")
    else:
        print("Skipping (No content)")

print("\nDone.")
When I ran the above code, I got this result.
Scraping and analyzing articles...
--- Processing Article 1 ---
Scientist: Albert Einstein
Born: 14 March 1879 in Ulm, Kingdom of Württemberg, German Empire
Fame: Developing the theory of relativity and the mass-energy equivalence formula E = mc2, plus contributions to quantum theory including the photoelectric effect
Nobel: 1921
Died: 18 April 1955
--- Processing Article 2 ---
Scientist: Richard Phillips Feynman
Born: May 11, 1918, in New York City
Fame: Path integral formulation of quantum mechanics, quantum electrodynamics, Feynman diagrams, and contributions to particle physics including the parton model
Nobel: 1965
Died: February 15, 1988
--- Processing Article 3 ---
Scientist: James Clerk Maxwell
Born: 13 June 1831 in Edinburgh, Scotland
Fame: Developed the classical theory of electromagnetic radiation, unifying electricity, magnetism, and light through Maxwell's equations. Also key contributions to statistical mechanics, color theory, and numerous other fields of physics and mathematics.
Nobel: 0
Died: 5 November 1879
--- Processing Article 4 ---
Scientist: Alan Harvey Guth
Born: February 27, 1947 in New Brunswick, New Jersey
Fame: Pioneering the theory of cosmic inflation, which proposes that the early universe underwent a phase of exponential expansion driven by positive vacuum energy density
Nobel: 0
Died: Still alive
Done.
Not too shabby! Alan Guth will be glad to hear he's still alive, though alas, he hasn't won a Nobel Prize. Also, note that Maxwell died before the Nobel Prize was first awarded.
Example code 2 – An Automated Code Security and Analysis Agent
Here is a completely different use case, and a very useful one for software engineering. Usually, when you ask an LLM to "fix the code," it gives you a chatty response mixed with code blocks. This makes it difficult to integrate into a CI/CD pipeline or IDE plugin.
Using structured outputs, we can force the model to return clean code, a list of the specific bugs found, and a security risk assessment in a single, machine-readable JSON object.
The scenario
We will feed the model a Python function that contains a dangerous SQL injection vulnerability and some bad coding practices. The model should identify the specific errors and safely rewrite the code.
import anthropic
import httpx
import os
import json
from pydantic import BaseModel, Field, ConfigDict
from typing import List, Literal

# --- SETUP ---
http_client = httpx.Client()
api_key = 'YOUR_API_KEY'
client = anthropic.Anthropic(api_key=api_key, http_client=http_client)

# Intentionally bad code
bad_code_snippet = """
import sqlite3

def get_user(u):
    conn = sqlite3.connect('app.db')
    c = conn.cursor()
    # DANGER: Direct string concatenation
    query = "SELECT * FROM users WHERE username = '" + u + "'"
    c.execute(query)
    return c.fetchall()
"""

# --- DEFINE SCHEMA WITH STRICT CONFIG ---
# We add model_config = ConfigDict(extra="forbid") to ensure
# "additionalProperties": false is generated in the schema.
class BugReport(BaseModel):
    model_config = ConfigDict(extra="forbid")
    severity: Literal["Low", "Medium", "High", "Critical"]
    line_number_approx: int = Field(description="The approximate line number where the issue exists.")
    issue_type: str = Field(description="e.g., 'Security', 'Performance', 'Style'")
    description: str = Field(description="Short explanation of the bug.")

class CodeReviewResult(BaseModel):
    model_config = ConfigDict(extra="forbid")
    is_safe_to_run: bool = Field(description="True only if no Critical/High security risks exist.")
    detected_bugs: List[BugReport]
    refactored_code: str = Field(description="The complete, fixed Python code string.")
    explanation: str = Field(description="A brief summary of changes made.")

# --- API CALL ---
try:
    print("Analyzing code for security vulnerabilities...\n")
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=2048,
        temperature=0.0,
        messages=[
            {
                "role": "user",
                "content": f"Review and refactor this Python code:\n\n{bad_code_snippet}"
            }
        ],
        extra_headers={
            "anthropic-beta": "structured-outputs-2025-11-13"
        },
        extra_body={
            "output_format": {
                "type": "json_schema",
                "schema": CodeReviewResult.model_json_schema()
            }
        }
    )

    # Parse Result
    result = json.loads(response.content[0].text)

    # --- DISPLAY OUTPUT ---
    print(f"Safe to Run: {result['is_safe_to_run']}")
    print("-" * 40)
    print("BUGS DETECTED:")
    for bug in result['detected_bugs']:
        # Color code the severity (Red for Critical)
        prefix = "🔴" if bug['severity'] in ["Critical", "High"] else "🟡"
        print(f"{prefix} [{bug['severity']}] Line {bug['line_number_approx']}: {bug['description']}")
    print("-" * 40)
    print("REFACTORED CODE:")
    print(result['refactored_code'])

except anthropic.BadRequestError as e:
    print(f"API Schema Error: {e}")
except Exception as e:
    print(f"Error: {e}")
This code acts as an automated security auditor. Instead of letting the AI "chat" about the code, it forces the AI to fill out a strict digital form containing specific information about bugs and security risks.
Here's how it works in three easy steps.
- First, the code defines exactly what the answer must look like, using Python classes in conjunction with Pydantic. It tells the AI: "Give me a JSON object that contains a list of bugs, a severity rating (such as 'Critical' or 'Low') for each, and a string containing the fixed code."
- When sending the vulnerable code to the API, it passes the Pydantic schema using the output_format parameter. This strictly constrains the model, preventing it from chatting or inserting conversational filler. It must return valid data matching your blueprint.
- The script receives the AI's response, which is guaranteed to be machine-readable JSON. It then uses this data to automatically display a clean report, flagging the SQL injection as a "Critical" problem, for example, and printing a safe, fixed version of the code.
Here is the output I got after running the code.
Analyzing code for security vulnerabilities...
Safe to Run: False
----------------------------------------
BUGS DETECTED:
🔴 [Critical] Line 7: SQL injection vulnerability due to direct string concatenation in query construction. Attacker can inject malicious SQL code through the username parameter.
🟡 [Medium] Line 4: Database connection and cursor are not properly closed, leading to potential resource leaks.
🟡 [Low] Line 1: Function parameter name 'u' is not descriptive. Should use meaningful variable names.
----------------------------------------
REFACTORED CODE:
import sqlite3
from contextlib import closing

def get_user(username):
    """
    Retrieve user information from the database by username.

    Args:
        username (str): The username to search for

    Returns:
        list: List of tuples containing user data, or empty list if not found
    """
    with sqlite3.connect('app.db') as conn:
        with closing(conn.cursor()) as cursor:
            # Use parameterized query to prevent SQL injection
            query = "SELECT * FROM users WHERE username = ?"
            cursor.execute(query, (username,))
            return cursor.fetchall()
Why is this powerful?
Integration – This is CI/CD-ready. You can run this script in a GitHub Action; if is_safe_to_run is false, you can automatically block the pull request.
Separation of concerns – You get the metadata (the bug list) separate from the content (the code). You don't have to use regex to strip "Here's your fixed code..." out of the response.
Strong typing – The severity field is constrained to specific enum values (Critical, High, etc.).
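As a sketch of the CI/CD integration point, here is one hypothetical way to turn the structured review into a pipeline gate (the gate_pull_request helper and the sample payload are my own illustration, not part of the article's code):

```python
import json

def gate_pull_request(review_json: str) -> int:
    """Turn a structured review result into a CI exit code.

    Assumes the CodeReviewResult shape: a non-zero return
    (which a CI runner treats as failure) blocks the merge.
    """
    result = json.loads(review_json)
    if not result["is_safe_to_run"]:
        for bug in result["detected_bugs"]:
            print(f"[{bug['severity']}] {bug['description']}")
        return 1  # non-zero exit code fails the pipeline step
    return 0

# Illustrative payload mimicking a model response.
sample = json.dumps({
    "is_safe_to_run": False,
    "detected_bugs": [{"severity": "Critical",
                       "line_number_approx": 7,
                       "issue_type": "Security",
                       "description": "SQL injection"}],
    "refactored_code": "...",
    "explanation": "..."
})

exit_code = gate_pull_request(sample)
print("exit code:", exit_code)  # exit code: 1
```

In a real GitHub Action you would call sys.exit(exit_code) so the workflow step fails whenever the model flags the code as unsafe.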
To put it briefly
Anthropic's release of structured outputs is a game changer for developers who need reliability, not just conversation. By strictly enforcing JSON schemas, we can now treat large language models less like chatbots and more like deterministic software components.
In this article, I've shown how to use this new beta feature to streamline data extraction and build an automated code-review workflow that integrates seamlessly with Python. If you're an Anthropic API user, the days of writing flimsy regex to parse AI responses are finally coming to an end.
For more information on this new beta feature, click the link below to visit Anthropic's official website.



