
Add chat history logging to Prototyper #981


Draft
myanvoos wants to merge 3 commits into main

Conversation

@myanvoos (Contributor) commented on Apr 11, 2025

Related issue #969

This PR adds chat history logging to the Prototyper agent (and to the Enhancer as well, since Enhancer inherits from Prototyper). Unlike the initial suggestion in the issue, I thought it would be more appropriate to put the chat-history logging in execute rather than in _container_handle_conclusion, since execute is the main orchestration method.
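Roughly, the placement looks like the self-contained sketch below. All names here are stand-ins for illustration (the real agent builds a build_result object, talks to an actual LLM, and hands the response to _container_handle_conclusion); the two chat_history updates are the part this PR adds.

class FakeLLM:
  """Stand-in for the real LLM client; only here to make the sketch runnable."""

  def ask(self, prompt: str) -> str:
    return f'(model reply to: {prompt})'


class PrototyperSketch:
  """Simplified stand-in for the Prototyper agent."""

  name = 'Prototyper'
  max_round = 3

  def __init__(self):
    self.llm = FakeLLM()
    self.chat_history = {self.name: ''}  # the real code stores this on build_result

  def execute(self) -> None:
    """Main orchestration loop; the chat-history logging lives here."""
    prompt = 'Write a fuzz target.'
    for cur_round in range(1, self.max_round + 1):
      # Added by this PR: log the prompt before querying the LLM.
      self.chat_history[self.name] += (
          f'<PROMPT:ROUND {cur_round:02d}>{prompt}</PROMPT:ROUND {cur_round:02d}>\n')
      response = self.llm.ask(prompt)
      # Added by this PR: log the raw response as well.
      self.chat_history[self.name] += (
          f'<RESPONSE:ROUND {cur_round:02d}>{response}</RESPONSE:ROUND {cur_round:02d}>\n')
      # The real agent would now hand the response to _container_handle_conclusion
      # to decide the next prompt; here we just fabricate one.
      prompt = f'Refine the target after round {cur_round}.'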

@DonggeLiu (Collaborator) left a comment

Thanks @myanvoos for addressing this issue!
I'm not sure whether this is ready for review yet, but I took the liberty of leaving a few comments to share some thoughts.
Hope you don't mind : )


build_result.chat_history[self.name] += (
    f'Step #{cur_round} - "agent-step": '
    f'<CHAT PROMPT:ROUND {cur_round:02d}>{prompt.get()}'
    f'</CHAT PROMPT:ROUND {cur_round:02d}>\n')
@DonggeLiu (Collaborator) commented on Apr 11, 2025

Some minor notes:

  1. We don't have to follow this format (e.g., Step #). It used to look like this because the format was hardcoded by Cloud Build; now that we are adding it ourselves, we can be more flexible : )
  2. Originally, Step # was not followed by {cur_round} but by the step number in Cloud Build, e.g.,
    # Step 5: Run the Python script with the dill files.

    We don't have to worry about this since we construct this part ourselves.
  3. With that said, I do think we can consider encoding cur_round here for our own needs, e.g. to present the round number in the report or to other agents in the workflow.
  4. If we add these new chat_history lines here (which is preferred), we will need to remove this line to avoid duplicated chat history in cloud experiments:
    result.chat_history = {agent.name: cloud_build_log}
  5. I would suggest constructing the line in a function hosted by base_agent, because:
    • It will likely be used in multiple places by multiple agents.
    • It's critical to keep the line's format consistent, because we will need to parse it to generate the report (see the parsing sketch after this list).
    • We can extend it to log more than LLM-tool interactions, e.g. text to present in reports or to share with other agents later in the workflow.
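To illustrate the parsing concern in the last point, here is a minimal, self-contained sketch of a report-side parser. It is not part of this PR's diff, and it assumes the Round NN: <AGENT KIND:ROUND NN> tag format that ends up being proposed further down this thread.

import re

# Hypothetical report-side parser for a tag-delimited chat history blob.
CHAT_LINE_RE = re.compile(
    r'Round (?P<round>\d{2}): <(?P<agent>\w+) (?P<kind>PROMPT|RESPONSE):ROUND \d{2}>\n'
    r'(?P<content>.*?)\n'
    r'</(?P=agent) (?P=kind):ROUND \d{2}>\n',
    re.DOTALL)

def parse_chat_history(raw: str):
  """Yields (round, agent, kind, content) tuples from one agent's chat history."""
  for match in CHAT_LINE_RE.finditer(raw):
    yield (int(match.group('round')), match.group('agent'),
           match.group('kind'), match.group('content'))

A consistent format is what makes a single-regex parser like this possible; if each agent formatted its own lines, the report generator would need per-agent parsing logic.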

@DonggeLiu marked this pull request as draft on April 11, 2025 05:39
@myanvoos (Contributor, Author) commented on Apr 11, 2025

Hey @DonggeLiu, thanks for the comments! And no worries, I did mean this to be a draft PR for more discussion (I'll mark such PRs as drafts properly in the future) ^^

What I've done is add this function to base_agent.py:

def _format_chat_history_line(self, cur_round: int, prompt_or_response: str,
                              agent_type: str, content: str) -> str:
    """Formats a chat history line."""
    return (f'Round {cur_round:02d}: '
            f'<{agent_type.upper()} {prompt_or_response.upper()}:ROUND {cur_round:02d}>\n'
            f'{content}\n'
            f'</{agent_type.upper()} {prompt_or_response.upper()}:ROUND {cur_round:02d}>\n')

So that in prototyper.py we can call

build_result.chat_history[self.name] += self._format_chat_history_line(
            cur_round, 'PROMPT', self.name, prompt.get())

build_result.chat_history[self.name] += self._format_chat_history_line(
            cur_round, 'RESPONSE', self.name, response)

And deleted the redundant line in cloud_builder.py.

I included self.name in the line to make it easier to tell which agent a given prompt or response belongs to when reading the logs, e.g. <PROTOTYPER PROMPT:ROUND 02>.
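For concreteness (the prompt text here is invented), a round-2 Prototyper prompt logged through that helper comes out as:

Round 02: <PROTOTYPER PROMPT:ROUND 02>
Fix the build error so the fuzz target compiles.
</PROTOTYPER PROMPT:ROUND 02>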

Is this kind of what you were thinking of?
