Describe the bug
Seemingly at random points when a heartbeat message is issued, the LLM response is malformed and does not include a 'role' key.
I suspect this is caused by Gemini returning an unexpected response once the rate limit on the free plan has been exceeded. A rough sketch of the failure mode is below.
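For illustration only, this is the kind of defensive check that would surface the condition cleanly instead of failing on the missing key. The function and field names here are hypothetical and not Letta's actual internals:

```python
# Hypothetical sketch: guard against a Gemini response message that is
# missing the 'role' key (e.g. an empty or rate-limited completion).
# Names are illustrative, not Letta's real parsing code.
def parse_chat_message(message: dict) -> dict:
    role = message.get("role")
    content = message.get("content", "")

    if role is None:
        # Rate-limited or otherwise malformed responses may come back
        # without a role; fail loudly with the raw payload so it can be
        # debugged instead of raising a KeyError deeper in the stack.
        raise ValueError(f"LLM response missing 'role' key: {message!r}")

    return {"role": role, "content": content}
```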
Please describe your setup
How did you install letta?
Official Docker image.
Describe your setup
MacOS Sequoia 15.3.1
Letta is run as a Docker container, configured with a Google Gemini API key and PostgreSQL persistence.
Screenshots
Additional context
Unfortunately, I am unable to see the generated message. It could be that the message was never generated at all due to rate limiting, and the 'role' key is missing simply because the message is empty.
This is my best guess at the cause, but I am unable to debug further at the moment. A sketch of how the rate-limit theory could be checked directly is below.
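If anyone wants to confirm the rate-limit theory, a rough way to do it is to call the Gemini REST endpoint directly and inspect the raw body returned after the limit is tripped. This uses the standard generateContent route; the loop count and prompt are arbitrary:

```python
# Sketch: hammer the Gemini endpoint until the free-tier rate limit is
# exceeded, then dump the raw error body. That body is presumably what
# Letta ends up trying to parse when the 'role' key goes missing.
import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]
URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-2.0-flash-001:generateContent?key=" + API_KEY
)

payload = {"contents": [{"parts": [{"text": "ping"}]}]}

for i in range(30):  # enough requests to trip the free-tier limit
    resp = requests.post(URL, json=payload, timeout=30)
    print(i, resp.status_code)
    if resp.status_code != 200:
        # Raw error payload from Gemini (e.g. a 429 with no candidates).
        print(resp.text)
        break
```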
LLM details
Running with Google's gemini-2.0-flash-001 model and the letta-free embedding model.