Install this specific skill from the multi-skill repository:

```shell
npx skills add itechmeat/llm-code --skill "pydantic-ai"
```
# Description
Build production AI agents with Pydantic AI: type-safe tools, structured output, embeddings, MCP, 30+ model providers, evals, graphs, and observability.
# SKILL.md

```yaml
name: pydantic-ai
description: "Build production AI agents with Pydantic AI: type-safe tools, structured output, embeddings, MCP, 30+ model providers, evals, graphs, and observability."
version: "1.47.0"
release_date: "2026-01-23"
```
# Pydantic AI

Python agent framework for building production-grade GenAI applications with the "FastAPI feeling".

## Quick Navigation
| Topic | Reference |
|---|---|
| Agents | agents.md |
| Tools | tools.md |
| Models | models.md |
| Embeddings | embeddings.md |
| Evals | evals.md |
| Integrations | integrations.md |
| Graphs | graphs.md |
| UI Streams | ui.md |
## When to Use
- Building AI agents with structured output
- Need type-safe, IDE-friendly agent development
- Require dependency injection for tools
- Multi-model support (OpenAI, Anthropic, Gemini, etc.)
- Production observability with Logfire
- Complex workflows with graphs
## Installation

Requires Python 3.10+.

```shell
# Full install (all model dependencies)
pip install pydantic-ai

# With examples
pip install "pydantic-ai[examples]"
```

### Slim Install

Use `pydantic-ai-slim` for minimal dependencies:

```shell
# Single model
pip install "pydantic-ai-slim[openai]"

# Multiple models
pip install "pydantic-ai-slim[openai,anthropic,logfire]"
```
**Optional Groups:**

| Group | Dependency |
|---|---|
| `openai` | OpenAI models & embeddings |
| `anthropic` | Anthropic Claude |
| `google` | Google Gemini & embeddings |
| `xai` | xAI Grok (native SDK) |
| `groq` | Groq models |
| `mistral` | Mistral models |
| `bedrock` | AWS Bedrock |
| `vertexai` | Google Vertex AI |
| `cohere` | Cohere models & embeddings |
| `huggingface` | Hugging Face Inference |
| `voyageai` | VoyageAI embeddings |
| `sentence-transformers` | Local embeddings |
| `logfire` | Pydantic Logfire |
| `evals` | Pydantic Evals |
| `mcp` | MCP protocol |
| `fastmcp` | FastMCP |
| `a2a` | Agent-to-Agent |
| `tavily` | Tavily search |
| `duckduckgo` | DuckDuckGo search |
| `exa` | Exa neural search |
| `cli` | CLI tools |
| `dbos` | DBOS durable execution |
| `prefect` | Prefect durable execution |
## Quick Start

### Basic Agent

```python
from pydantic_ai import Agent

agent = Agent(
    'openai:gpt-4o',
    instructions='Be concise, reply with one sentence.',
)

result = agent.run_sync('Where does "hello world" come from?')
print(result.output)
```
### With Structured Output

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

agent = Agent('openai:gpt-4o', output_type=CityInfo)
result = agent.run_sync('Tell me about Paris')
print(result.output)  # CityInfo(name='Paris', country='France', population=2161000)
```
### With Tools and Dependencies

```python
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    user_id: int

agent = Agent('openai:gpt-4o', deps_type=Deps)

@agent.tool
async def get_user_name(ctx: RunContext[Deps]) -> str:
    """Get the current user's name."""
    return f"User #{ctx.deps.user_id}"

result = agent.run_sync('What is my name?', deps=Deps(user_id=123))
```
## Key Features
| Feature | Description |
|---|---|
| Type-safe | Full IDE support, type checking |
| Model-agnostic | 30+ providers supported |
| Dependency Injection | Pass context to tools |
| Structured Output | Pydantic model validation |
| Embeddings | Multi-provider vector support |
| Logfire Integration | Built-in observability |
| MCP Support | External tools and data |
| Evals | Systematic testing |
| Graphs | Complex workflow support |
## Supported Models

| Provider | Models |
|---|---|
| OpenAI | GPT-4o, GPT-4, o1, o3 |
| Anthropic | Claude 4, Claude 3.5 |
| Google | Gemini 2.0, Gemini 1.5 |
| xAI | Grok-4 (native SDK) |
| Groq | Llama, Mixtral |
| Mistral | Mistral Large, Codestral |
| Azure | Azure OpenAI |
| Bedrock | AWS Bedrock + Nova 2.0 |
| SambaNova | SambaNova models |
| Ollama | Local models |
## Best Practices

- Use type hints to enable IDE support and validation
- Define output types to guarantee structured responses
- Use dependencies to inject context into tools
- Add tool docstrings; the LLM uses them as descriptions
- Enable Logfire for production observability
- Use `run_sync` for simple cases, `run` for async
- Override deps for testing with `agent.override(deps=...)`
- Set usage limits with `UsageLimits` to prevent infinite loops
## Prohibitions

- Do not expose API keys in code
- Do not skip output validation in production
- Do not ignore tool errors
- Do not use `run_stream` without handling partial outputs
- Do not forget to close MCP connections (`async with agent`)
## Common Patterns

### Streaming Response

```python
async with agent.run_stream('Query') as response:
    async for text in response.stream_text():
        print(text, end='')
```
### Fallback Models

```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.models.fallback import FallbackModel
from pydantic_ai.models.openai import OpenAIModel

# Try OpenAI first; fall back to Anthropic on failure.
fallback = FallbackModel(OpenAIModel('gpt-4o'), AnthropicModel('claude-3-5-sonnet-latest'))
agent = Agent(fallback)
```
### MCP Integration

```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

server = MCPServerStdio('python', args=['mcp_server.py'])
agent = Agent('openai:gpt-4o', toolsets=[server])
```
### Testing with TestModel

```python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent(model=TestModel())
result = agent.run_sync('test')  # Deterministic output
```
## Embeddings

```python
from pydantic_ai import Embedder

embedder = Embedder('openai:text-embedding-3-small')

async def embed_examples():
    # Embed a search query
    query_result = await embedder.embed_query('What is ML?')

    # Embed documents for indexing
    docs = ['Doc 1', 'Doc 2', 'Doc 3']
    docs_result = await embedder.embed_documents(docs)
```

See embeddings.md for providers and settings.
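A typical next step after embedding is ranking documents by similarity to the query vector. A plain-Python cosine-similarity helper (independent of the `Embedder` API, so it works with vectors from any provider; the toy 2-D vectors below stand in for real embeddings):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document ids most similar to the query vector."""
    return sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)[:k]

# Toy 2-D vectors in place of real embedding output.
docs = {'ml': [0.9, 0.1], 'cooking': [0.1, 0.9]}
print(top_k([1.0, 0.0], docs, k=1))  # → ['ml']
```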
## xAI Provider

```python
from pydantic_ai import Agent

agent = Agent('xai:grok-4-1-fast-non-reasoning')
```

See models.md for configuration details.
## Exa Neural Search

```python
import os

from pydantic_ai import Agent
from pydantic_ai.common_tools.exa import ExaToolset

api_key = os.getenv('EXA_API_KEY')
toolset = ExaToolset(api_key, num_results=5, include_search=True)
agent = Agent('openai:gpt-4o', toolsets=[toolset])
```

See tools.md for all Exa tools.
# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents. Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.