Anthropic recently introduced Structured Outputs for its top models in its API, a new feature designed to ensure that model-generated outputs exactly match the JSON Schemas supplied by developers.
This solves a problem many developers face when a system or process consumes an LLM's output for further processing. It's essential for that system to "know" what to expect as its input so it can process it accordingly.
Similarly, when displaying model output to a user, you want it in the same format every time.
Until now, it has been a pain to ensure consistent output formats from Anthropic models. However, it seems that Anthropic has now solved this problem, for its top models at least. From their announcement (linked at the end of the article), they say,
The Claude Developer Platform now supports structured outputs for Claude Sonnet 4.5 and Opus 4.1. Available in public beta, this feature ensures API responses always match your specified JSON schemas or tool definitions.
One thing to remember before we look at some example code: Anthropic guarantees that the model's output will adhere to a specified format, not that the output will be 100% accurate. The models can and do hallucinate occasionally.
So you can get perfectly formatted incorrect answers!
Setting up our dev environment
Before we look at some sample Python code, it's best practice to create a separate development environment where you can install any necessary software and experiment with coding. Anything you do in this environment will be siloed and won't affect your other projects.
I'll be using Miniconda for this, but you can use whatever method you're most familiar with.
If you want to go down the Miniconda route and don't already have it, you'll need to install it first. Get it using this link:
https://docs.anaconda.com/miniconda/
To follow along with my examples, you'll also need an Anthropic API key and some credit on your account. For reference, I used 12 cents to run the code in this article. If you already have an Anthropic account, you can get an API key using the Anthropic console at https://console.anthropic.com/settings/keys.
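In the snippets below I'll paste the key straight into the script as a placeholder, for brevity. If you'd rather keep it out of your source code, here's a minimal sketch of reading it from an environment variable instead (it assumes you've exported ANTHROPIC_API_KEY in your shell; the placeholder fallback is just so the script still runs without one):

```python
import os

# Assumes you've run something like:
#   export ANTHROPIC_API_KEY="sk-ant-..."
# Falls back to a placeholder so the script still starts;
# real API calls will fail until a genuine key is set.
api_key = os.environ.get("ANTHROPIC_API_KEY", "YOUR_API_KEY")
```

This keeps the key out of version control, which matters as soon as you push these scripts anywhere public.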
1/ Create our new dev environment and install the required libraries
I ran this on WSL2 Ubuntu for Windows.
(base) $ conda create -n anth_test python=3.13 -y
(base) $ conda activate anth_test
(anth_test) $ pip install anthropic beautifulsoup4 requests
(anth_test) $ pip install httpx jupyter
2/ Start Jupyter
Now type 'jupyter notebook' into your command prompt. You should see a Jupyter notebook open in your browser. If that doesn't happen automatically, you'll likely see a screenful of information after running the command. Near the bottom, you'll find a URL to copy and paste into your browser. It'll look similar to this:
http://127.0.0.1:8888/tree?token=3b9f7bd07b6966b41b68e2350721b2d0b6f388d248cc69
Code Examples
In our two coding examples, we will use the new output_format parameter available in the beta API. When specifying the structured output, we can use two different styles.
1. Raw JSON Schema.
As the name suggests, the structure is defined by a JSON schema block passed directly to the output format definition.
2. A Pydantic model class.
This is a regular Python class using Pydantic's BaseModel that specifies the data we want the model to output. It's a much more compact way to define a structure than a JSON schema.
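To see how the two styles relate, here's a minimal sketch (the Person class and its fields are just for illustration, not part of either example below). Pydantic's model_json_schema() method turns a model class into exactly the kind of JSON schema dictionary you'd otherwise write by hand:

```python
from pydantic import BaseModel, ConfigDict

class Person(BaseModel):
    # extra="forbid" puts "additionalProperties": false into the schema
    model_config = ConfigDict(extra="forbid")
    name: str
    age: int

schema = Person.model_json_schema()
print(schema["type"])                  # object
print(sorted(schema["properties"]))    # ['age', 'name']
print(schema["additionalProperties"])  # False
```

So whichever style you write, the API ultimately receives a JSON schema; the Pydantic route just generates it for you.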
Example code 1 — Text summarisation
This is useful if you have a bunch of different texts you want to summarise, but want the summaries to share the same structure. In this example, we'll process the Wikipedia entries for some famous scientists and retrieve specific key facts about them in a highly organised way.
In our summary, we want to output the following structure for each scientist,
- The name of the scientist
- When and where they were born
- Their main claim to fame
- The year they won the Nobel Prize
- When and where they died
Note: Most text on Wikipedia, excluding quotations, has been released under the Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA) and the GNU Free Documentation License (GFDL). In short, this means that you are free:
to Share — copy and redistribute the material in any medium or format
to Adapt — remix, transform, and build upon the material
for any purpose, even commercially.
Let's break the code into manageable sections, each with an explanation.
First, we import the required third-party libraries and set up a connection to Anthropic using our API key.
import anthropic
import httpx
import requests
import json
import os
from bs4 import BeautifulSoup

http_client = httpx.Client()
api_key = 'YOUR_API_KEY'

client = anthropic.Anthropic(
    api_key=api_key,
    http_client=http_client
)
This is the function that will scrape Wikipedia for us.
def get_article_content(url):
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
        response = requests.get(url, headers=headers)
        soup = BeautifulSoup(response.content, "html.parser")
        article = soup.find("div", class_="mw-body-content")
        if article:
            content = "\n".join(p.text for p in article.find_all("p"))
            return content[:15000]
        else:
            return ""
    except Exception as e:
        print(f"Error scraping {url}: {e}")
        return ""
Next, we define our JSON schema, which specifies the exact format for the model's output.
summary_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The name of the scientist"},
        "born": {"type": "string", "description": "When and where the scientist was born"},
        "fame": {"type": "string", "description": "A summary of what their main claim to fame is"},
        "prize": {"type": "integer", "description": "The year they won the Nobel Prize. 0 if none."},
        "death": {"type": "string", "description": "When and where they died. 'Still alive' if living."}
    },
    "required": ["name", "born", "fame", "prize", "death"],
    "additionalProperties": False
}
This function serves as the interface between our Python script and the Anthropic API. Its primary goal is to take unstructured text (an article) and force the AI to return a structured data object (JSON) containing specific fields, such as the scientist's name, birth date, and Nobel Prize details.
The function calls client.messages.create to send a request to the model. It sets the temperature to 0.2, which lowers the model's creativity to help ensure the extracted data is factual and precise. The extra_headers parameter enables a specific beta feature that isn't yet standard. By passing the anthropic-beta header with the value structured-outputs-2025-11-13, the code tells the API to activate the Structured Outputs logic for this particular request, forcing it to produce valid JSON that matches your defined structure.
Because the output_format parameter is used, the model returns a raw string that is guaranteed to be valid JSON. The line json.loads(response.content[0].text) parses this string into a native Python dictionary, making the data immediately ready for programmatic use.
def get_article_summary(text: str):
    if not text: return None
    try:
        response = client.messages.create(
            model="claude-sonnet-4-5",  # Use the latest available model
            max_tokens=1024,
            temperature=0.2,
            messages=[
                {"role": "user", "content": f"Summarize this article:\n\n{text}"}
            ],
            # Enable the beta feature
            extra_headers={
                "anthropic-beta": "structured-outputs-2025-11-13"
            },
            # Pass the new parameter here
            extra_body={
                "output_format": {
                    "type": "json_schema",
                    "schema": summary_schema
                }
            }
        )
        # The API returns the JSON directly in the text content
        return json.loads(response.content[0].text)
    except anthropic.BadRequestError as e:
        print(f"API Error: {e}")
        return None
    except Exception as e:
        print(f"Error: {e}")
        return None
This is where we pull everything together. The various URLs we want to scrape are defined, and their contents are passed to the model for processing before the end results are displayed.
urls = [
    "https://en.wikipedia.org/wiki/Albert_Einstein",
    "https://en.wikipedia.org/wiki/Richard_Feynman",
    "https://en.wikipedia.org/wiki/James_Clerk_Maxwell",
    "https://en.wikipedia.org/wiki/Alan_Guth"
]

print("Scraping and analyzing articles...")

for i, url in enumerate(urls):
    print(f"\n--- Processing Article {i+1} ---")
    content = get_article_content(url)
    if content:
        summary = get_article_summary(content)
        if summary:
            print(f"Scientist: {summary.get('name')}")
            print(f"Born: {summary.get('born')}")
            print(f"Fame: {summary.get('fame')}")
            print(f"Nobel: {summary.get('prize')}")
            print(f"Died: {summary.get('death')}")
        else:
            print("Failed to generate summary.")
    else:
        print("Skipping (No content)")

print("\nDone.")
When I ran the above code, I got this output.
Scraping and analyzing articles...

--- Processing Article 1 ---
Scientist: Albert Einstein
Born: 14 March 1879 in Ulm, Kingdom of Württemberg, German Empire
Fame: Developing the theory of relativity and the mass-energy equivalence formula E = mc2, plus contributions to quantum theory including the photoelectric effect
Nobel: 1921
Died: 18 April 1955

--- Processing Article 2 ---
Scientist: Richard Phillips Feynman
Born: May 11, 1918, in New York City
Fame: Path integral formulation of quantum mechanics, quantum electrodynamics, Feynman diagrams, and contributions to particle physics including the parton model
Nobel: 1965
Died: February 15, 1988

--- Processing Article 3 ---
Scientist: James Clerk Maxwell
Born: 13 June 1831 in Edinburgh, Scotland
Fame: Developed the classical theory of electromagnetic radiation, unifying electricity, magnetism, and light through Maxwell's equations. Also made key contributions to statistical mechanics, colour theory, and numerous other fields of physics and mathematics.
Nobel: 0
Died: 5 November 1879

--- Processing Article 4 ---
Scientist: Alan Harvey Guth
Born: February 27, 1947 in New Brunswick, New Jersey
Fame: Pioneering the theory of cosmic inflation, which proposes that the early universe underwent a phase of exponential expansion driven by positive vacuum energy density
Nobel: 0
Died: Still alive

Done.
Not too shabby! Alan Guth will be delighted to hear that he's still alive, but alas, he hasn't yet won a Nobel Prize. Also, note that James Clerk Maxwell died before the Nobel Prize existed.
Example code 2 — Automated Code Security & Refactoring Agent
Here is a completely different use case and a very practical example for software engineering. Usually, when you ask an LLM to "fix code," it gives you a conversational response mixed with code blocks. This makes it hard to integrate into a CI/CD pipeline or an IDE plugin.
By using Structured Outputs, we can force the model to return the clean code, a list of specific bugs found, and a security risk assessment in a single, machine-readable JSON object.
The Scenario
We'll feed the model a Python function containing a dangerous SQL injection vulnerability and poor coding practices. The model must identify the exact flaws and rewrite the code securely.
import anthropic
import httpx
import os
import json
from pydantic import BaseModel, Field, ConfigDict
from typing import List, Literal

# --- SETUP ---
http_client = httpx.Client()
api_key = 'YOUR_API_KEY'
client = anthropic.Anthropic(api_key=api_key, http_client=http_client)

# Deliberately bad code
bad_code_snippet = """
import sqlite3

def get_user(u):
    conn = sqlite3.connect('app.db')
    c = conn.cursor()
    # DANGER: Direct string concatenation
    query = "SELECT * FROM users WHERE username = '" + u + "'"
    c.execute(query)
    return c.fetchall()
"""

# --- DEFINE SCHEMA WITH STRICT CONFIG ---
# We add model_config = ConfigDict(extra="forbid") to ensure
# "additionalProperties": false is generated in the schema.

class BugReport(BaseModel):
    model_config = ConfigDict(extra="forbid")
    severity: Literal["Low", "Medium", "High", "Critical"]
    line_number_approx: int = Field(description="The approximate line number where the issue exists.")
    issue_type: str = Field(description="e.g., 'Security', 'Performance', 'Style'")
    description: str = Field(description="Short explanation of the bug.")

class CodeReviewResult(BaseModel):
    model_config = ConfigDict(extra="forbid")
    is_safe_to_run: bool = Field(description="True only if no Critical/High security risks exist.")
    detected_bugs: List[BugReport]
    refactored_code: str = Field(description="The complete, fixed Python code string.")
    explanation: str = Field(description="A brief summary of the changes made.")

# --- API CALL ---
try:
    print("Analyzing code for security vulnerabilities...\n")
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=2048,
        temperature=0.0,
        messages=[
            {
                "role": "user",
                "content": f"Review and refactor this Python code:\n\n{bad_code_snippet}"
            }
        ],
        extra_headers={
            "anthropic-beta": "structured-outputs-2025-11-13"
        },
        extra_body={
            "output_format": {
                "type": "json_schema",
                "schema": CodeReviewResult.model_json_schema()
            }
        }
    )

    # Parse the result
    result = json.loads(response.content[0].text)

    # --- DISPLAY OUTPUT ---
    print(f"Safe to Run: {result['is_safe_to_run']}")
    print("-" * 40)
    print("BUGS DETECTED:")
    for bug in result['detected_bugs']:
        # Colour-code the severity (red for Critical/High)
        prefix = "🔴" if bug['severity'] in ["Critical", "High"] else "🟡"
        print(f"{prefix} [{bug['severity']}] Line {bug['line_number_approx']}: {bug['description']}")
    print("-" * 40)
    print("REFACTORED CODE:")
    print(result['refactored_code'])

except anthropic.BadRequestError as e:
    print(f"API Schema Error: {e}")
except Exception as e:
    print(f"Error: {e}")
This code acts as an automated security auditor. Instead of asking the AI to "chat" about the code, it forces the AI to fill out a strict digital form containing specific details about bugs and security risks.
Here is how it works in three simple steps.
- First, the code defines exactly what the answer must look like, using Python classes together with Pydantic. It tells the AI: "Give me a JSON object containing a list of bugs, a severity rating (like 'Critical' or 'Low') for each one, and the fixed code string."
- When sending the vulnerable code to the API, it passes the Pydantic blueprint using the output_format parameter. This strictly constrains the model, stopping it from hallucinating extra fields or adding conversational filler. It must return valid data matching your blueprint.
- The script receives the AI's response, which is guaranteed to be machine-readable JSON. It then automatically parses this data to display a clean report, flagging the SQL injection as a "Critical" issue, for example, and printing the secure, refactored version of the code.
Here is the output I received after running the code.
Analyzing code for security vulnerabilities...

Safe to Run: False
----------------------------------------
BUGS DETECTED:
🔴 [Critical] Line 7: SQL injection vulnerability due to direct string concatenation in query construction. An attacker can inject malicious SQL code through the username parameter.
🟡 [Medium] Line 4: Database connection and cursor are not properly closed, leading to potential resource leaks.
🟡 [Low] Line 1: Function parameter name 'u' is not descriptive. Should use meaningful variable names.
----------------------------------------
REFACTORED CODE:
import sqlite3
from contextlib import closing

def get_user(username):
    """
    Retrieve user information from the database by username.

    Args:
        username (str): The username to search for

    Returns:
        list: List of tuples containing user data, or an empty list if not found
    """
    with sqlite3.connect('app.db') as conn:
        with closing(conn.cursor()) as cursor:
            # Use a parameterized query to prevent SQL injection
            query = "SELECT * FROM users WHERE username = ?"
            cursor.execute(query, (username,))
            return cursor.fetchall()
Why is this powerful?
Integration-ready. You can run this script in a GitHub Action. If is_safe_to_run is False, you can automatically block a pull request.
Separation of concerns. You get the metadata (the bugs and their severities) separate from the content (the code). You don't have to use regex to strip "Here is your fixed code" text out of the response.
Strict typing. The severity field is constrained to specific enum values (Critical, High, etc.), guaranteeing your downstream logic doesn't break when the model returns, say, "Severe" instead of "Critical".
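To sketch that integration idea, here's a hypothetical CI gate (the gate_review helper and the hand-written sample payload are my own illustrations, not part of Anthropic's SDK) that consumes a dictionary in the same shape our CodeReviewResult schema enforces and decides whether to fail the build:

```python
def gate_review(review: dict) -> int:
    """Return a process exit code: 0 = allow merge, 1 = block the PR."""
    blockers = [b for b in review.get("detected_bugs", [])
                if b["severity"] in ("Critical", "High")]
    if not review.get("is_safe_to_run", False) or blockers:
        for bug in blockers:
            print(f"Blocking merge: [{bug['severity']}] {bug['description']}")
        return 1
    return 0

# A hand-written payload in the shape the schema guarantees
sample = {
    "is_safe_to_run": False,
    "detected_bugs": [{"severity": "Critical", "line_number_approx": 7,
                       "issue_type": "Security",
                       "description": "SQL injection via string concatenation"}],
    "refactored_code": "...",
    "explanation": "...",
}

exit_code = gate_review(sample)
print(exit_code)  # 1 -> in a GitHub Action you'd call sys.exit(exit_code)
```

Because the schema guarantees the field names and severity values, this gate needs no defensive parsing of free-form model chatter.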
Summary
Anthropic's release of native Structured Outputs is a game-changer for developers who need reliability, not just conversation. By enforcing strict JSON schemas, we can now treat Large Language Models less like chatbots and more like deterministic software components.
In this article, I demonstrated how to use this new beta feature to streamline data extraction and output, and to build automated workflows that integrate seamlessly with Python code. If you're a user of Anthropic's API, the days of writing fragile regex to parse AI responses are finally over.
For more information about this new beta feature, click the link below to visit Anthropic's official documentation page.
https://platform.claude.com/docs/en/build-with-claude/structured-outputs

