
Commit 1eb7f93

Authored by ChrisBlaa (Chris Wieczorek), SongChiYoung, ekzhu, and mehrsa
add tool_call_summary_msg_format_fct and test (microsoft#6460)
## Why are these changes needed?

This change introduces support for dynamic formatting of tool call summary messages by allowing a user-defined `tool_call_summary_format_fct`. Instead of relying solely on a static string template, this function enables runtime generation of summary messages based on the specific tool call and its result. This provides greater flexibility and cleaner integration without introducing any breaking changes.

### My Use Case / Problem

In my use case, I needed concise summaries for successful tool calls and detailed messages for failures. The existing static summary string didn't allow conditional formatting, which led to overly verbose success messages or inconsistent failure outputs. This change allows customizing summaries per result type, solving that limitation cleanly.

## Related issue number

Closes microsoft#6426

## Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Chris Wieczorek <Chris.Wieczorek@iav.de>
Co-authored-by: EeS <chiyoung.song@motov.co.kr>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Mehrsa Golestaneh <mehrsa.golestaneh@gmail.com>
Co-authored-by: Mehrsa Golestaneh <mgolestaneh@microsoft.com>
Co-authored-by: Zhenyu <81767213+Dormiveglia-elf@users.noreply.github.com>
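To illustrate the contract the new `tool_call_summary_formatter` follows, here is a minimal, self-contained sketch. The `FunctionCall` and `FunctionExecutionResult` dataclasses are hypothetical stand-ins for the `autogen_core` types of the same names (only the fields used below are modeled), and `render` mimics what the agent does with the returned template: the formatter returns a *format string*, which is then filled with the `{tool_name}`, `{arguments}`, `{result}`, and `{is_error}` placeholders.

```python
from dataclasses import dataclass


@dataclass
class FunctionCall:  # stand-in for autogen_core.FunctionCall
    id: str
    name: str
    arguments: str


@dataclass
class FunctionExecutionResult:  # stand-in for the autogen_core result type
    content: str
    is_error: bool


def conditional_templates(call: FunctionCall, result: FunctionExecutionResult) -> str:
    # Concise on success, verbose on failure -- the use case from the PR description.
    return "FAILURE: {result}" if result.is_error else "SUCCESS: {tool_name}"


def render(call: FunctionCall, result: FunctionExecutionResult) -> str:
    # What the agent does with the template the formatter returns.
    return conditional_templates(call, result).format(
        tool_name=call.name,
        arguments=call.arguments,
        result=result.content,
        is_error=result.is_error,
    )


ok = render(
    FunctionCall("1", "get_weather", '{"city": "Berlin"}'),
    FunctionExecutionResult("13 C, cloudy", is_error=False),
)
err = render(
    FunctionCall("2", "get_weather", '{"city": "Atlantis"}'),
    FunctionExecutionResult("ValueError: unknown city", is_error=True),
)
print(ok)   # SUCCESS: get_weather
print(err)  # FAILURE: ValueError: unknown city
```

In the real API the formatter is passed per executed tool call, so each call can get its own template.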
1 parent aa22b62 commit 1eb7f93

File tree

2 files changed, +111 -21 lines


python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py

Lines changed: 52 additions & 20 deletions
@@ -131,17 +131,26 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
 
     * If the model returns no tool call, then the response is immediately returned as a :class:`~autogen_agentchat.messages.TextMessage` or a :class:`~autogen_agentchat.messages.StructuredMessage` (when using structured output) in :attr:`~autogen_agentchat.base.Response.chat_message`.
     * When the model returns tool calls, they will be executed right away:
-        - When `reflect_on_tool_use` is False, the tool call results are returned as a :class:`~autogen_agentchat.messages.ToolCallSummaryMessage` in :attr:`~autogen_agentchat.base.Response.chat_message`. `tool_call_summary_format` can be used to customize the tool call summary.
+        - When `reflect_on_tool_use` is False, the tool call results are returned as a :class:`~autogen_agentchat.messages.ToolCallSummaryMessage` in :attr:`~autogen_agentchat.base.Response.chat_message`. You can customise the summary with either a static format string (`tool_call_summary_format`) **or** a callable (`tool_call_summary_formatter`); the callable is evaluated once per tool call.
         - When `reflect_on_tool_use` is True, another model inference is made using the tool calls and results, and the final response is returned as a :class:`~autogen_agentchat.messages.TextMessage` or a :class:`~autogen_agentchat.messages.StructuredMessage` (when using structured output) in :attr:`~autogen_agentchat.base.Response.chat_message`.
         - `reflect_on_tool_use` is set to `True` by default when `output_content_type` is set.
         - `reflect_on_tool_use` is set to `False` by default when `output_content_type` is not set.
     * If the model returns multiple tool calls, they will be executed concurrently. To disable parallel tool calls you need to configure the model client. For example, set `parallel_tool_calls=False` for :class:`~autogen_ext.models.openai.OpenAIChatCompletionClient` and :class:`~autogen_ext.models.openai.AzureOpenAIChatCompletionClient`.
 
     .. tip::
-        By default, the tool call results are returned as response when tool calls are made.
-        So it is recommended to pay attention to the formatting of the tools return values,
-        especially if another agent is expecting them in a specific format.
-        Use `tool_call_summary_format` to customize the tool call summary, if needed.
+
+        By default, the tool call results are returned as the response when tool
+        calls are made, so pay close attention to how the tools’ return values
+        are formatted—especially if another agent expects a specific schema.
+
+        * Use **`tool_call_summary_format`** for a simple static template.
+        * Use **`tool_call_summary_formatter`** for full programmatic control
+          (e.g., “hide large success payloads, show full details on error”).
+
+        *Note*: `tool_call_summary_formatter` is **not serializable** and will
+        be ignored when an agent is loaded from, or exported to, YAML/JSON
+        configuration files.
+
 
     **Hand off behavior:**
@@ -199,13 +208,22 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
             If this is set, the agent will respond with a :class:`~autogen_agentchat.messages.StructuredMessage` instead of a :class:`~autogen_agentchat.messages.TextMessage`
             in the final response, unless `reflect_on_tool_use` is `False` and a tool call is made.
         output_content_type_format (str | None, optional): (Experimental) The format string used for the content of a :class:`~autogen_agentchat.messages.StructuredMessage` response.
-        tool_call_summary_format (str, optional): The format string used to create the content for a :class:`~autogen_agentchat.messages.ToolCallSummaryMessage` response.
-            The format string is used to format the tool call summary for every tool call result.
-            Defaults to "{result}".
-            When `reflect_on_tool_use` is `False`, a concatenation of all the tool call summaries, separated by a new line character ('\\n')
-            will be returned as the response.
-            Available variables: `{tool_name}`, `{arguments}`, `{result}`.
-            For example, `"{tool_name}: {result}"` will create a summary like `"tool_name: result"`.
+        tool_call_summary_format (str, optional): Static format string applied to each tool call result when composing the :class:`~autogen_agentchat.messages.ToolCallSummaryMessage`.
+            Defaults to ``"{result}"``. Ignored if `tool_call_summary_formatter` is provided. When `reflect_on_tool_use` is ``False``, the summaries for all tool
+            calls are concatenated with a newline ('\\n') and returned as the response. Placeholders available in the template:
+            `{tool_name}`, `{arguments}`, `{result}`, `{is_error}`.
+        tool_call_summary_formatter (Callable[[FunctionCall, FunctionExecutionResult], str] | None, optional):
+            Callable that receives the ``FunctionCall`` and its ``FunctionExecutionResult`` and returns the summary string.
+            Overrides `tool_call_summary_format` when supplied and allows conditional logic — for example, emitting a static string like
+            ``"Tool FooBar executed successfully."`` on success and a full payload (including all passed arguments etc.) only on failure.
+
+            **Limitation**: The callable is *not serializable*; values provided via YAML/JSON configs are ignored.
+
+            .. note::
+
+                `tool_call_summary_formatter` is intended for in-code use only. It cannot currently be saved or restored via
+                configuration files.
+
         memory (Sequence[Memory] | None, optional): The memory store to use for the agent. Defaults to `None`.
         metadata (Dict[str, str] | None, optional): Optional metadata for tracking.
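The placeholders documented for `tool_call_summary_format` behave like ordinary `str.format` fields. A quick illustration with made-up values (braces inside a substituted *value*, such as the JSON arguments string, are not re-interpreted):

```python
# Filling the four documented placeholders the way the agent does via str.format.
summary = "{tool_name}({arguments}) -> {result} (error={is_error})".format(
    tool_name="get_weather",                 # hypothetical tool name
    arguments='{"city": "Berlin"}',          # arguments arrive as a JSON string
    result="13 C, cloudy",
    is_error=False,
)
print(summary)  # get_weather({"city": "Berlin"}) -> 13 C, cloudy (error=False)
```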
@@ -652,6 +670,7 @@ def __init__(
         model_client_stream: bool = False,
         reflect_on_tool_use: bool | None = None,
         tool_call_summary_format: str = "{result}",
+        tool_call_summary_formatter: Callable[[FunctionCall, FunctionExecutionResult], str] | None = None,
         output_content_type: type[BaseModel] | None = None,
         output_content_type_format: str | None = None,
         memory: Sequence[Memory] | None = None,
@@ -756,6 +775,7 @@ def __init__(
                 stacklevel=2,
             )
         self._tool_call_summary_format = tool_call_summary_format
+        self._tool_call_summary_formatter = tool_call_summary_formatter
         self._is_running = False
 
     @property
@@ -803,6 +823,7 @@ async def on_messages_stream(
         model_client_stream = self._model_client_stream
         reflect_on_tool_use = self._reflect_on_tool_use
         tool_call_summary_format = self._tool_call_summary_format
+        tool_call_summary_formatter = self._tool_call_summary_formatter
         output_content_type = self._output_content_type
         format_string = self._output_content_type_format
 
@@ -873,6 +894,7 @@ async def on_messages_stream(
             model_client_stream=model_client_stream,
             reflect_on_tool_use=reflect_on_tool_use,
             tool_call_summary_format=tool_call_summary_format,
+            tool_call_summary_formatter=tool_call_summary_formatter,
             output_content_type=output_content_type,
             format_string=format_string,
         ):
@@ -976,6 +998,7 @@ async def _process_model_result(
         model_client_stream: bool,
         reflect_on_tool_use: bool,
         tool_call_summary_format: str,
+        tool_call_summary_formatter: Callable[[FunctionCall, FunctionExecutionResult], str] | None,
         output_content_type: type[BaseModel] | None,
         format_string: str | None = None,
     ) -> AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None]:
@@ -1078,6 +1101,7 @@ async def _process_model_result(
                 inner_messages=inner_messages,
                 handoffs=handoffs,
                 tool_call_summary_format=tool_call_summary_format,
+                tool_call_summary_formatter=tool_call_summary_formatter,
                 agent_name=agent_name,
             )
@@ -1232,22 +1256,30 @@ def _summarize_tool_use(
         inner_messages: List[BaseAgentEvent | BaseChatMessage],
         handoffs: Dict[str, HandoffBase],
         tool_call_summary_format: str,
+        tool_call_summary_formatter: Callable[[FunctionCall, FunctionExecutionResult], str] | None,
         agent_name: str,
     ) -> Response:
         """
         If reflect_on_tool_use=False, create a summary message of all tool calls.
         """
         # Filter out calls which were actually handoffs
         normal_tool_calls = [(call, result) for call, result in executed_calls_and_results if call.name not in handoffs]
-        tool_call_summaries: List[str] = []
-        for tool_call, tool_call_result in normal_tool_calls:
-            tool_call_summaries.append(
-                tool_call_summary_format.format(
-                    tool_name=tool_call.name,
-                    arguments=tool_call.arguments,
-                    result=tool_call_result.content,
-                )
+
+        def default_tool_call_summary_formatter(call: FunctionCall, result: FunctionExecutionResult) -> str:
+            return tool_call_summary_format
+
+        summary_formatter = tool_call_summary_formatter or default_tool_call_summary_formatter
+
+        tool_call_summaries = [
+            summary_formatter(call, result).format(
+                tool_name=call.name,
+                arguments=call.arguments,
+                result=result.content,
+                is_error=result.is_error,
             )
+            for call, result in normal_tool_calls
+        ]
+
         tool_call_summary = "\n".join(tool_call_summaries)
         return Response(
             chat_message=ToolCallSummaryMessage(
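The fallback logic in this hunk can be sketched standalone. The `summarize` function below is a hypothetical reimplementation with stand-in dataclasses (not the real `autogen_core` types or the agent's method): a formatter, when supplied, overrides the static template; otherwise a default formatter returns the template, so both paths flow through the same `.format` call, and per-call summaries are joined with newlines.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class FunctionCall:  # stand-in for autogen_core.FunctionCall
    name: str
    arguments: str


@dataclass
class FunctionExecutionResult:  # stand-in for the autogen_core result type
    content: str
    is_error: bool


def summarize(
    calls_and_results: List[Tuple[FunctionCall, FunctionExecutionResult]],
    tool_call_summary_format: str = "{result}",
    tool_call_summary_formatter: Optional[
        Callable[[FunctionCall, FunctionExecutionResult], str]
    ] = None,
) -> str:
    # The formatter, if given, wins; otherwise fall back to the static template.
    def default_formatter(call: FunctionCall, result: FunctionExecutionResult) -> str:
        return tool_call_summary_format

    formatter = tool_call_summary_formatter or default_formatter
    summaries = [
        formatter(call, result).format(
            tool_name=call.name,
            arguments=call.arguments,
            result=result.content,
            is_error=result.is_error,
        )
        for call, result in calls_and_results
    ]
    # Summaries for all tool calls are joined with a newline, as documented.
    return "\n".join(summaries)


pairs = [
    (FunctionCall("ok_tool", "{}"), FunctionExecutionResult("42", False)),
    (FunctionCall("bad_tool", "{}"), FunctionExecutionResult("boom", True)),
]
# Static template path:
print(summarize(pairs, tool_call_summary_format="{tool_name}: {result}"))
# Formatter path overrides the template:
print(summarize(pairs, tool_call_summary_formatter=lambda c, r: "ERR {result}" if r.is_error else "OK {tool_name}"))
```

Note that because the formatter's return value is itself run through `.format`, literal braces in a returned string would need to be doubled.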

python/packages/autogen-agentchat/tests/test_assistant_agent.py

Lines changed: 59 additions & 1 deletion
@@ -32,7 +32,7 @@
     SystemMessage,
     UserMessage,
 )
-from autogen_core.models._model_client import ModelFamily
+from autogen_core.models._model_client import ModelFamily, ModelInfo
 from autogen_core.tools import BaseTool, FunctionTool, StaticWorkbench
 from autogen_ext.models.openai import OpenAIChatCompletionClient
 from autogen_ext.models.replay import ReplayChatCompletionClient
@@ -56,10 +56,68 @@ async def _fail_function(input: str) -> str:
     return "fail"
 
 
+async def _throw_function(input: str) -> str:
+    raise ValueError("Helpful debugging information what went wrong.")
+
+
 async def _echo_function(input: str) -> str:
     return input
 
 
+@pytest.fixture
+def model_info_all_capabilities() -> ModelInfo:
+    return {
+        "function_calling": True,
+        "vision": True,
+        "json_output": True,
+        "family": ModelFamily.GPT_4O,
+        "structured_output": True,
+    }
+
+
+@pytest.mark.asyncio
+async def test_run_with_tool_call_summary_format_function(model_info_all_capabilities: ModelInfo) -> None:
+    model_client = ReplayChatCompletionClient(
+        [
+            CreateResult(
+                finish_reason="function_calls",
+                content=[
+                    FunctionCall(id="1", arguments=json.dumps({"input": "task"}), name="_pass_function"),
+                    FunctionCall(id="2", arguments=json.dumps({"input": "task"}), name="_throw_function"),
+                ],
+                usage=RequestUsage(prompt_tokens=10, completion_tokens=5),
+                thought="Calling pass and fail function",
+                cached=False,
+            ),
+        ],
+        model_info=model_info_all_capabilities,
+    )
+
+    def conditional_string_templates(function_call: FunctionCall, function_call_result: FunctionExecutionResult) -> str:
+        if not function_call_result.is_error:
+            return "SUCCESS: {tool_name} with {arguments}"
+        else:
+            return "FAILURE: {result}"
+
+    agent = AssistantAgent(
+        "tool_use_agent",
+        model_client=model_client,
+        tools=[_pass_function, _throw_function],
+        tool_call_summary_formatter=conditional_string_templates,
+    )
+    result = await agent.run(task="task")
+
+    first_tool_call_summary = next((x for x in result.messages if isinstance(x, ToolCallSummaryMessage)), None)
+    if first_tool_call_summary is None:
+        raise AssertionError("Expected a ToolCallSummaryMessage but found none")
+
+    assert (
+        first_tool_call_summary.content
+        == 'SUCCESS: _pass_function with {"input": "task"}\nFAILURE: Helpful debugging information what went wrong.'
+    )
+
+
 @pytest.mark.asyncio
 async def test_run_with_tools(monkeypatch: pytest.MonkeyPatch) -> None:
     model_client = ReplayChatCompletionClient(
