Google Models - 'role' Key Error on LLM Response #2534

Open · 2 tasks
RaspberryPicardBox opened this issue Apr 1, 2025 · 1 comment
@RaspberryPicardBox

Describe the bug
At seemingly random points when a heartbeat message is issued, the LLM response is malformed and does not include a 'role' key.

I suspect this happens because Gemini returns an unexpected response once the free plan's rate limit is exceeded.
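
To illustrate where the failure could be caught, a defensive check along these lines would turn the bare KeyError into a diagnosable error (a minimal sketch in Python; the function name and response shape are assumptions for illustration, not Letta's actual parsing code):

```python
def extract_message(response: dict) -> dict:
    """Return the assistant message, failing loudly if it is malformed."""
    choices = response.get("choices") or []
    if not choices:
        # An empty choices list can happen when the provider rejects or
        # truncates the request, e.g. on a rate-limit error.
        raise ValueError(f"LLM returned no choices: {response!r}")

    message = choices[0].get("message") or {}
    if "role" not in message:
        # Surfacing the raw payload instead of a bare KeyError('role')
        # makes rate-limit-induced blank messages easier to diagnose.
        raise ValueError(f"LLM message missing 'role' key: {message!r}")
    return message
```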

Please describe your setup

  • How did you install letta?
    • Official Docker image.
  • Describe your setup
    • macOS Sequoia 15.3.1
    • Letta runs as a Docker container with a Google Gemini API key and PostgreSQL persistence.

Screenshots

[Two screenshots attached]

Additional context
Unfortunately, I am unable to see the generated message. It may not have been generated at all due to rate limits, in which case the 'role' key is missing because the message is blank. This is the likely cause, but I cannot debug further at the moment.
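
If rate limiting is indeed the trigger, wrapping the Gemini call in a retry with exponential backoff would be one way to rule it out (a sketch assuming the google-generativeai client library; the wrapper function itself is hypothetical, not part of Letta):

```python
import time

import google.generativeai as genai
from google.api_core.exceptions import ResourceExhausted

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-001")

def generate_with_backoff(prompt: str, max_retries: int = 5):
    """Retry on 429s so a rate-limited call is not mistaken for a blank message."""
    for attempt in range(max_retries):
        try:
            return model.generate_content(prompt)
        except ResourceExhausted:
            # Free-tier rate limit hit (HTTP 429); back off before retrying.
            time.sleep(2 ** attempt)
    raise RuntimeError("Gemini rate limit persisted after retries")
```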

LLM details

Running Google's gemini-2.0-flash-001 model and the letta-free embedding model.

@sarahwooders
Collaborator

Thanks for reporting this - looking into it!
