Insights: pydantic/pydantic-ai
Overview
2 Releases published by 1 person
- v0.3.5 (2025-06-30), published Jun 30, 2025
- v0.3.6 (2025-07-04), published Jul 4, 2025
15 Pull requests merged by 12 people
- Fix GitHub Models docs links (#2135, merged Jul 4, 2025)
- Add GitHub Models provider (#2114, merged Jul 4, 2025)
- Add `model_request_stream_sync` to direct API (#2116, merged Jul 4, 2025; see the sketch after this list)
- Revert "Use contextvars for tracking the MCP sampling model" (#2132, merged Jul 4, 2025)
- simplify weather example (#2129, merged Jul 4, 2025)
- Fix model parameters not being customized in fallback model request stream (#2120, merged Jul 3, 2025)
- Use contextvars for tracking the MCP sampling model (#2117, merged Jul 2, 2025)
- Use contextvars for agent overriding, rather than a local attribute (#2118, merged Jul 2, 2025)
- Record tool response in tool run span (#2109, merged Jul 1, 2025)
- Add support for predicted outputs in OpenAIModelSettings (#2106, merged Jul 1, 2025)
- Update client.md - Typo (#2105, merged Jul 1, 2025)
- Update starlette subdomain in docs (#2099, merged Jul 1, 2025)
- Deprecate `{FunctionToolCallEvent,FunctionToolResultEvent}.call_id` in favor of `tool_call_id` (#2028, merged Jul 1, 2025)
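
For context on #2116: the direct API already provides async helpers such as `model_request_stream`, and `model_request_stream_sync` is its synchronous counterpart. The snippet below is a minimal sketch only; the call shape is an assumption modeled on the existing async helper, and the model name is a placeholder.

```python
from pydantic_ai.direct import model_request_stream_sync
from pydantic_ai.messages import ModelRequest

# Assumed usage, mirroring the async model_request_stream helper: the sync
# variant is expected to return a context manager whose stream can be
# iterated with a plain for-loop instead of async for.
messages = [ModelRequest.user_text_prompt('Who was Albert Einstein?')]

with model_request_stream_sync('openai:gpt-4o-mini', messages) as stream:  # placeholder model name
    for event in stream:
        print(event)
```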
12 Pull requests opened by 12 people
- Bugfix: avoid race condition when refreshing google token (#2100, opened Jun 30, 2025)
- feat: AG-UI adapter (toolsets) (#2101, opened Jun 30, 2025)
- Builtin tool (#2102, opened Jun 30, 2025)
- Upgrade fasta2a to A2A v0.2.3 and Enable Dependency Injection (#2103, opened Jul 1, 2025)
- Added support for google specific arguments for video analysis (#2110, opened Jul 1, 2025)
- feat: add HistoryProcessors wrapper (#2124, opened Jul 3, 2025)
- Add temporal as a dependency group (#2126, opened Jul 3, 2025)
- Add context dependencies to Tool.from_schema() (#2130, opened Jul 4, 2025)
- Add: structured output support via StructuredOutput type (#2133, opened Jul 4, 2025)
- Add `settings` to Model base class (#2136, opened Jul 5, 2025)
- Adding CountToken to Gemini (#2137, opened Jul 5, 2025)
- Fix/cli refactor (#2138, opened Jul 6, 2025)
8 Issues closed by 5 people
- sync method for direct api `model_request_stream` (#2096, closed Jul 4, 2025)
- Automatically retry Gemini requests that fail with a 500 INTERNAL (#2115, closed Jul 3, 2025)
- How to disable thinking for Qwen/Qwen3-32B (deployed by vllm) in Pydantic AI? (#2121, closed Jul 3, 2025)
- LLM misinterprets "Plain text responses are not permitted" message as coming from the user (#1993, closed Jul 2, 2025)
- Show tool response in the running tool span (#2004, closed Jul 1, 2025)
- Best Practice for Streaming Output from Nested Agents (#2107, closed Jul 1, 2025)
- Add support for predicted outputs in OpenAIModelSettings (#2098, closed Jul 1, 2025)
16 Issues opened by 14 people
- Support more audio format in Gemini models (#2143, opened Jul 7, 2025)
- Making File Downloading Model Profile Compatible (#2142, opened Jul 7, 2025)
- Greater introspection into the successful model when using FallbackModel (#2141, opened Jul 7, 2025)
- ModelResponsePart should support multimodal data (#2140, opened Jul 7, 2025)
- Add optional src or metadata field to BinaryContent (#2139, opened Jul 7, 2025)
- LLM thinking blocks displayed in Logfire message view (#2131, opened Jul 4, 2025)
- Support Anthropic tool result citations (#2128, opened Jul 3, 2025)
- Support OpenAI encrypted thinking tokens (#2127, opened Jul 3, 2025)
- [Suggestion] Documenting how to implement 12 Factor Agents (#2125, opened Jul 3, 2025)
- Agent tool calling works with mistral but fails with llama models (on AWS Bedrock) (#2123, opened Jul 3, 2025)
- Get dependencies in function Tool.from_schema() (#2122, opened Jul 3, 2025)
- FallbackModel to allow model_settings specific to each model (#2119, opened Jul 2, 2025; see the sketch after this list)
- Feature Request: Realtime/Live API integration (#2113, opened Jul 2, 2025)
- Feature Request: Enhancements for Test Orchestration, Non-Determinism, and Extensibility (#2112, opened Jul 2, 2025)
- Support for Long running / Background Agents (#2111, opened Jul 1, 2025)
- Show output functions span in logfire (#2108, opened Jul 1, 2025)
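
On #2119: `FallbackModel` wraps several models and tries them in order, while `model_settings` is supplied per run (or per agent) rather than per wrapped model, which is the gap the issue describes. A minimal sketch of current usage, with placeholder model names:

```python
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.models.fallback import FallbackModel
from pydantic_ai.models.openai import OpenAIModel

# FallbackModel tries each wrapped model in order until one succeeds.
fallback = FallbackModel(
    OpenAIModel('gpt-4o'),                       # placeholder model names
    AnthropicModel('claude-3-5-sonnet-latest'),
)

agent = Agent(fallback)

# model_settings here applies to whichever wrapped model ends up handling
# the request; per-wrapped-model settings are what #2119 asks for.
result = agent.run_sync(
    'What is the capital of France?',
    model_settings={'temperature': 0.0},
)
print(result.output)
```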
38 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- Toolsets (#2024, commented on Jul 4, 2025; 41 new comments)
- Implemented a convenient way to use ACI.dev Tools in PydanticAI (#2093, commented on Jul 4, 2025; 22 new comments)
- Update tests to be compatible with new OpenAI, MistralAI and MCP versions (#2094, commented on Jul 5, 2025; 6 new comments)
- Add Hugging Face as a provider (#1911, commented on Jul 2, 2025; 5 new comments)
- Anthropic extended thinking: invalid request error when using tools (#2040, commented on Jul 3, 2025; 0 new comments)
- Pre-validation tool response modifications (#1985, commented on Jul 3, 2025; 0 new comments)
- Add option to filter reasoning content from message history (#1157, commented on Jul 3, 2025; 0 new comments)
- Remove deprecated preview models from LatestGeminiModelNames (#2016, commented on Jul 4, 2025; 0 new comments)
- Extracting `new_messages()` in the middle of an `Agent.iter` run to support user-in-the-loop approval of tool calls (#1995, commented on Jul 4, 2025; 0 new comments)
- Gemini unable to stream structured output (#1237, commented on Jul 5, 2025; 0 new comments)
- Native temporal support (#1975, commented on Jul 5, 2025; 0 new comments)
- Save `history_processors`'s result for next round model request (#2095, commented on Jul 5, 2025; 0 new comments)
- Add Resources, Prompts to MCP Client (#1558, commented on Jul 5, 2025; 0 new comments)
- Adding LiteLLM as model wrap just like how google-adk does it (#1496, commented on Jul 5, 2025; 0 new comments)
- Handoffs / sub-agent delegation (#1978, commented on Jul 6, 2025; 0 new comments)
- Add a setting to remove prompts and completions from tracing (#1571, commented on Jul 7, 2025; 0 new comments)
- Support image output (#1130, commented on Jun 30, 2025; 0 new comments)
- Add `builtin_tools` to `Agent` (#1722, commented on Jul 1, 2025; 0 new comments)
- Dockerise the MCP Run Python server (#1837) (#2090, commented on Jun 30, 2025; 0 new comments)
- BinaryContent is naively parsed when include_input is used in LLMJudge (#2089, commented on Jun 30, 2025; 0 new comments)
- Feature Request: Support Both JSON & YAML as Tool Output Formats (#2074, commented on Jun 30, 2025; 0 new comments)
- Running many models on Bedrock with `output_type` fails with "This model doesn't support the toolConfig.toolChoice.any field." (#2091, commented on Jun 30, 2025; 0 new comments)
- MALFORMED_FUNCTION finishReason in Gemini candidate (#631, commented on Jun 30, 2025; 0 new comments)
- [OpenAI] Error Handling for Exceeding Maximum Token Limits (#1098, commented on Jun 30, 2025; 0 new comments)
- Messages passed to history_processor functions include current run messages (#2050, commented on Jun 30, 2025; 0 new comments)
- add the ability to set a list of allowed tools on an mcp server (#1219, commented on Jun 30, 2025; 0 new comments)
- For list-like `result_models`, LLM tries to call `final_result` multiple times (#1429, commented on Jul 1, 2025; 0 new comments)
- Feature Request: mcp-run-python support streamable http (#2059, commented on Jul 1, 2025; 0 new comments)
- Gemini causes 'Event loop is closed' when running inside an async context (#748, commented on Jul 1, 2025; 0 new comments)
- run_stream not working properly with tools in streaming mode (#1007, commented on Jul 1, 2025; 0 new comments)
- Gemini 2.5 Pro Streamed response has no content field (just streamed thinking parts) (#2097, commented on Jul 1, 2025; 0 new comments)
- Support `o3-deep-research` & `o4-mini-deep-research` (#2086, commented on Jul 1, 2025; 0 new comments)
- Response parts are not timestamped (#1364, commented on Jul 1, 2025; 0 new comments)
- Add media processing settings to GoogleModelSettings (#2017, commented on Jul 2, 2025; 0 new comments)
- Azure OpenAI API Streaming Response Causes AttributeError in pydantic-ai (#797, commented on Jul 2, 2025; 0 new comments)
- [Feature Request] Stateful Python Execution Environment for Agents in run_python MCP server (Jupyter-like Workspace) (#2070, commented on Jul 3, 2025; 0 new comments)
- Feature Request: Error handling multiple MCP Servers (#2083, commented on Jul 3, 2025; 0 new comments)
- support batch processing (#1771, commented on Jul 3, 2025; 0 new comments)