Replace TypedDict with Pydantic BaseModel #3974
Conversation
I don't think we want to make this change. People use different versions of pydantic and run into random compatibility issues, so I feel more comfortable using it less.
agreed, i don't think it's necessary
@hinthornw actually i think it should be fine, will double check that this doesn't cause any issues
TypedDict produces some warnings and eventually a resource-exhausted error when used with Gemini: ResourceExhausted: 429 Resource has been exhausted (e.g. check quota). With a Pydantic BaseModel there are no warnings and the resource exhaustion doesn't occur. TypedDict also works fine with other models.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/429 @SandaruRF You should try switching back and forth. It's unlikely that the 429 is related to TypedDict vs. Pydantic model unless it's formulating an incorrect payload and failing with some 4xx (and we're automatically retrying in the tool node @vbarda). @SandaruRF could you confirm that you can repeat the failure by changing from TypedDict to Pydantic and back?
@eyurtsev I ran tests comparing TypedDict and Pydantic BaseModel. When using TypedDict I get a resource-exhausted error on the first run, but when I use Pydantic it runs successfully. I think there is a possible issue with request formatting, retries, or payload handling. This happens when using Gemini.
@SandaruRF can you please share some details about the model you are instantiating? e.g., if you are using |
@ccurme I am using langchain-google-genai with the gemini-2.0-flash model. The versions of the langchain packages are,
@SandaruRF could you open an issue with a minimal reproducible example? I'm unable to reproduce using latest:

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.prebuilt import create_react_agent
from typing_extensions import TypedDict

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash-001")

def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."

class ResponseFormat(TypedDict):
    """Respond to the user in this format."""
    my_special_output: str

graph = create_react_agent(
    llm,
    tools=[get_weather],
    response_format=ResponseFormat,
)

state = graph.invoke({"messages": "What's the weather in Boston?"})
state["structured_response"]  # {'my_special_output': "It's sunny in Boston"}
```
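Since the discussion above turns on whether the two schema formulations produce different payloads, here is a small, model-free sketch comparing the JSON schema each form yields. This is an illustration, not code from this PR; it assumes pydantic v2 (`TypeAdapter` for the TypedDict, `model_json_schema()` for the BaseModel), and the class names are hypothetical:

```python
from typing_extensions import TypedDict
from pydantic import BaseModel, TypeAdapter

class ResponseFormatTD(TypedDict):
    """Respond to the user in this format."""
    my_special_output: str

class ResponseFormatBM(BaseModel):
    """Respond to the user in this format."""
    my_special_output: str

# Derive the JSON schema each formulation would contribute to a request payload.
td_schema = TypeAdapter(ResponseFormatTD).json_schema()
bm_schema = ResponseFormatBM.model_json_schema()

# The field definitions and required lists come out the same, so any
# provider-specific 429 is unlikely to stem from the schema shape alone.
print(td_schema["properties"])
print(bm_schema["properties"])
```

If the two schemas match, the difference SandaruRF observed more likely lies in how the integration serializes or retries the request than in the schema itself.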
Sure
Switched TypedDict to Pydantic BaseModel for structured output.
This improves validation and provides better type enforcement.
TypedDict gives errors when used with the Gemini-2.0-flash model.
The change is compatible with Anthropic, OpenAI, and Gemini models.
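As a hedged illustration of the validation point above (not code from this PR, and assuming pydantic v2 semantics): a TypedDict carries type hints only, so bad data passes silently at runtime, while a BaseModel validates at construction time:

```python
from typing_extensions import TypedDict
from pydantic import BaseModel, ValidationError

class ResponseDict(TypedDict):
    """Structured output as a TypedDict: hints only, no runtime checks."""
    my_special_output: str

class ResponseModel(BaseModel):
    """Structured output as a BaseModel: validated at construction time."""
    my_special_output: str

# A TypedDict accepts any dict at runtime; only a static type checker complains.
bad: ResponseDict = {"my_special_output": 123}  # type: ignore[typeddict-item]

# A BaseModel accepts valid input and rejects the same bad input with an error.
ok = ResponseModel(my_special_output="It's sunny.")
try:
    ResponseModel(my_special_output=123)  # pydantic v2 does not coerce int -> str
    caught = False
except ValidationError:
    caught = True
```

This runtime check is what "improves validation" refers to; whether it also explains the Gemini-specific 429 behavior is the open question in the thread above.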