Does Letta support specifying json_schema in the create message api #2508

Open
tbashor opened this issue Mar 20, 2025 · 3 comments

@tbashor

tbashor commented Mar 20, 2025

I can't figure out how to get structured output when using an agent that uses an OpenAI GPT-4o model, for example. The OpenAI API has you add a response_format key to the chat completions call, but the Letta API doesn't mention this option.
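
For reference, this is the kind of call I mean on the OpenAI side (the model name and schema here are just illustrative):

from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize the meeting."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "meeting_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
                "additionalProperties": False,
            },
        },
    },
)
print(completion.choices[0].message.content)  # a JSON string matching the schema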

@sarahwooders
Collaborator

What we recommend for this is to create a tool that has the schema you want - for example:

def respond_to_user(response: MyPydanticModel):
    """Respond to the user with a structured response"""
    return response.model_dump()
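
A self-contained version of that sketch, with MyPydanticModel filled in as an illustrative placeholder schema (the field names are examples, not anything Letta prescribes):

from pydantic import BaseModel

class MyPydanticModel(BaseModel):
    # Placeholder fields; replace with the schema you want enforced
    answer: str
    confidence: float

def respond_to_user(response: MyPydanticModel) -> dict:
    """Respond to the user with a structured response"""
    # Serialize the validated model back to a plain dict
    return response.model_dump()

The idea is that the tool's argument schema, generated from the Pydantic model, is what constrains the model's output.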

@tbashor
Author

tbashor commented Mar 20, 2025

Thanks, I'll give that a shot.

@tbashor tbashor closed this as completed Mar 20, 2025
@github-project-automation github-project-automation bot moved this from To triage to Done in 🐛 MemGPT issue tracker Mar 20, 2025
@tbashor
Author

tbashor commented Apr 11, 2025

I've run into an issue setting up the custom tool. Here's what I've done so far.

  1. I extended the Letta Docker image by installing the pydantic dependency.
  2. I created the following analyze_meeting.py custom tool directly in the ADE.
from typing import List, Optional
from pydantic import BaseModel, Field

class ActionItem(BaseModel):
    description: str = Field(..., description="The task or action item content")
    owner: Optional[str] = Field(None, description="Person responsible for the task")
    due_date: Optional[str] = Field(None, description="Due date for the task")

class KeyQuestion(BaseModel):
    question: str = Field(..., description="The content of the question, concern, or objection")

class KeyRequirement(BaseModel):
    requirement: str = Field(..., description="The business, technical, or functional requirement")

class MeetingAnalysisInput(BaseModel):
    transcript: str = Field(..., description="The meeting transcript text to analyze")

class MeetingAnalysisOutput(BaseModel):
    summary: str = Field(..., description="A short paragraph (3-5 sentences) summarizing the overall meeting")
    action_items: List[ActionItem] = Field(..., description="List of tasks with owners and due dates")
    key_questions_or_objections: List[KeyQuestion] = Field(..., description="List of important questions, concerns, or objections raised")
    key_requirements: List[KeyRequirement] = Field(..., description="List of business, technical, or functional requirements")
    
def respond_to_user(response: MeetingAnalysisOutput):
    """Respond to the user with a structured response"""
    return response.model_dump()

def analyze_meeting(transcript: str) -> MeetingAnalysisOutput:
    """
    You are an expert meeting assistant.

    Given the following transcript, generate a structured JSON response with the following fields:

    1. "summary" – A short paragraph (3–5 sentences) summarizing the overall meeting.
    2. "action_items" – A list of tasks with "description", "owner" (person responsible, or null if unknown), and "due_date" (as a string or null).
    3. "key_questions_or_objections" – A list of important questions, concerns, or objections raised.
    4. "key_requirements" – A list of business, technical, or functional requirements mentioned.
    
    Only return a valid JSON object in this format. Do not include any commentary, markdown, or explanation.

    Args:
        transcript (str): The meeting transcript text to analyze

    Returns:
        MeetingAnalysisOutput: Structured analysis of the meeting
    """
    # Placeholder implementation - replace with actual analysis logic
    return MeetingAnalysisOutput(
        summary="Meeting summary placeholder",
        action_items=[],
        key_questions_or_objections=[],
        key_requirements=[]
    )
  3. I attached the analyze_meeting tool to the agent.
  4. I set a tool rule to Exit loop when using analyze_meeting (I kept the exit-loop rule for send_message).
  5. I provided a partial transcript in the chat.

I got the following error after the agent used the analyze_meeting tool:

Traceback (most recent call last):
  File "/app/letta/services/tool_execution_sandbox.py", line 247, in run_local_dir_sandbox_runpy
    func_return, agent_state = self.parse_best_effort(func_result)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/letta/services/tool_execution_sandbox.py", line 379, in parse_best_effort
    result = pickle.loads(base64.b64decode(text))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named '<run_path>'

Can you help me understand what is wrong with the structure of the custom tool code?
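
For what it's worth, I can reproduce a ModuleNotFoundError of the same shape outside Letta. It seems to appear whenever pickle loads an instance of a class that was defined in a module executed under a temporary name, which is what runpy.run_path does with its default run_name of '<run_path>' (this is my guess at the mechanism, not something taken from the Letta source):

import pickle
import sys
import types

# Stand-in for a module executed via runpy.run_path, which registers
# the running code in sys.modules under the name "<run_path>"
mod = types.ModuleType("<run_path>")
exec("class FuncReturn:\n    pass", mod.__dict__)
sys.modules["<run_path>"] = mod

# Pickling succeeds while the temporary module is still registered
payload = pickle.dumps(mod.FuncReturn())

# Once the module is gone (e.g. unpickling happens in another process),
# pickle cannot re-import the class it needs
del sys.modules["<run_path>"]
pickle.loads(payload)  # ModuleNotFoundError: No module named '<run_path>'

If that is what is happening here, the problem would be the return value (an instance of MeetingAnalysisOutput, a class defined inside the tool file) rather than the structure of the code, and returning a plain dict such as response.model_dump() should avoid pickling the custom class.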

@tbashor tbashor reopened this Apr 11, 2025
@github-project-automation github-project-automation bot moved this from Done to Backlog in 🐛 MemGPT issue tracker Apr 11, 2025