Describe the bug
LLMCallEvent fails to log "tools" passed to BaseOpenAIChatCompletionClient in autogen_ext.models.openai._openai_client.BaseOpenAIChatCompletionClient
```python
class BaseOpenAIChatCompletionClient(ChatCompletionClient):
    ...

    async def create(
        self,
        messages: Sequence[LLMMessage],
        *,
        tools: Sequence[Tool | ToolSchema] = [],  # tools are not being logged in the LLMCallEvent below
        json_output: Optional[bool | type[BaseModel]] = None,
        extra_create_args: Mapping[str, Any] = {},
        cancellation_token: Optional[CancellationToken] = None,
    ) -> CreateResult:
        ...
        logger.info(
            LLMCallEvent(
                messages=cast(List[Dict[str, Any]], create_params.messages),
                response=result.model_dump(),
                prompt_tokens=usage.prompt_tokens,
                completion_tokens=usage.completion_tokens,
                # where are the tools to be chosen from being logged?
            )
        )
```
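The shape of the fix can be sketched with a minimal stand-in. This is not autogen's actual `LLMCallEvent` (which lives in `autogen_core` and may have a different signature); it is a hypothetical dataclass illustrating the idea of carrying the converted tool schemas in the event alongside the messages and response:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

# Hypothetical stand-in for autogen's LLMCallEvent, extended with a
# `tools` field; the real class's fields and types may differ.
@dataclass
class LLMCallEvent:
    messages: List[Dict[str, Any]]
    response: Dict[str, Any]
    prompt_tokens: int
    completion_tokens: int
    tools: List[Dict[str, Any]] = field(default_factory=list)  # newly logged

# The essence of the fix: pass the tool schemas into the event so they
# appear in the log record next to messages and response.
event = LLMCallEvent(
    messages=[{"role": "user", "content": "What is the weather?"}],
    response={"content": "..."},
    prompt_tokens=12,
    completion_tokens=3,
    tools=[{"name": "get_weather", "description": "Look up the weather"}],
)
print(event.tools[0]["name"])
```

With the extra field populated, anyone replaying the logs can see exactly which tools the model had to choose from when it made (or skipped) a tool call.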
Expected behavior
Logs should include the tools available to the LLM to choose from, along with the messages and response.
Which packages was the bug in?
Python Extensions (autogen-ext)
AutoGen library version.
Python dev (main branch)
Other library version.
No response
Model used
No response
Model provider
OpenAI
Other model provider
No response
Python version
None
.NET version
None
Operating system
None
Fix for LLMCallEvent failing to log "tools" passed to BaseOpenAIChatCompletionClient in autogen_ext.models.openai._openai_client.BaseOpenAIChatCompletionClient
microsoft#6531
This bug makes it hard to inspect why a certain tool was or was not selected by the LLM, since the list of tools available to the LLM is not present in the logs.
## Why are these changes needed?
Added "tools" to the LLMCallEvent to log the tools available to the LLM. These were previously missing, which made debugging LLM tool calls difficult.
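A test for a change like this can capture emitted log records and assert that the event carries the tools list. The sketch below uses only the standard library; the logger name and the event payload shape are illustrative stand-ins, not autogen's actual conventions:

```python
import logging

# A handler that collects LogRecords in memory so a test can inspect them.
class ListHandler(logging.Handler):
    def __init__(self) -> None:
        super().__init__()
        self.records: list[logging.LogRecord] = []

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append(record)

# Stand-in logger name; autogen publishes events under its own logger.
logger = logging.getLogger("example.events")
logger.setLevel(logging.INFO)
handler = ListHandler()
logger.addHandler(handler)

# A client would log an event object; here a plain dict stands in for it.
logger.info({"messages": [], "tools": [{"name": "get_weather"}]})

# The captured record's msg is the event; the tools list is now visible.
event = handler.records[0].msg
print(event["tools"][0]["name"])
```

Passing the event object itself (rather than a formatted string) as the log message is what lets structured consumers pull fields like `tools` back out of the record.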
## Related issue number
Closes #6531
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>