Fix VertexAILLM #342


Open
wants to merge 8 commits into main

Conversation


@stellasia stellasia commented May 20, 2025

Description

The main reason for this PR is that, even though Google's GenerativeModel accepts a list of tools as a parameter, this is not how multiple tools should be provided: we should instead provide a single tool containing multiple function declarations. This PR fixes that behavior.
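The difference can be sketched without the SDK. Below, plain dicts stand in for the real `Tool`/`FunctionDeclaration` types, and `to_single_vertexai_tool` is a hypothetical helper for illustration, not code from this PR:

```python
# Illustrative only: the Vertex AI API expects ONE tool carrying several
# function declarations, not one tool per function.

def to_single_vertexai_tool(function_declarations):
    """Wrap all function declarations in a single tool, as Vertex AI expects."""
    return {"function_declarations": list(function_declarations)}

decls = [
    {"name": "get_weather", "description": "Get the weather for a city"},
    {"name": "get_time", "description": "Get the current time for a city"},
]

# Wrong shape: [{"function_declarations": [d]} for d in decls]  (one tool each)
# Right shape: one tool, two function declarations
tool = to_single_vertexai_tool(decls)
assert len(tool["function_declarations"]) == 2
```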

It also addresses a few other problems related to how GenerativeModel is instantiated. The intended usage is:

    # config for chat mode (aka invoke)
    generation_config = GenerationConfig(
        temperature=0.0,
        response_mime_type="application/json",
    )
    # config for tool mode (aka invoke_with_tools)
    # optional
    tool_config = ToolConfig(
        function_calling_config=ToolConfig.FunctionCallingConfig(
            mode=ToolConfig.FunctionCallingConfig.Mode.ANY
        ),
    )

    llm = VertexAILLM(
        model_name="gemini-2.0-flash-001",
        generation_config=generation_config,
        tool_config=tool_config,
    )

Then, internally:

  • in invoke mode, we drop the tool config (it is not useful without tools)
  • in invoke_with_tools mode, we drop the generation config (otherwise fields like response_mime_type/response_schema raise errors)
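This dispatch can be sketched as follows (`build_call_kwargs` is a hypothetical helper to illustrate the rule, not the PR's actual code):

```python
# Hedged sketch of the behavior described above: invoke keeps only the
# generation config, invoke_with_tools keeps only the tool config.

def build_call_kwargs(mode, generation_config=None, tool_config=None):
    if mode == "invoke":
        # tool_config is dropped: it is not useful without tools
        return {"generation_config": generation_config}
    if mode == "invoke_with_tools":
        # generation_config is dropped: fields like response_mime_type /
        # response_schema raise errors when combined with tool calling
        return {"tool_config": tool_config}
    raise ValueError(f"unknown mode: {mode}")

assert build_call_kwargs("invoke", generation_config="gc", tool_config="tc") == {
    "generation_config": "gc"
}
assert build_call_kwargs("invoke_with_tools", tool_config="tc") == {
    "tool_config": "tc"
}
```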

Drawback of this approach:

  • We can't have a tool_config per call, which means we can't configure the allowed_function_names parameter per call.

We can't move these parameters to the invoke* methods, because they would then become provider-dependent and LLMInterface implementations would no longer be interchangeable, which defeats the whole point of the interface.

Type of Change

  • New feature
  • Bug fix
  • Breaking change
  • Documentation update
  • Project configuration change

Complexity

Complexity: ?

How Has This Been Tested?

  • Unit tests
  • E2E tests
  • Manual tests

Checklist

The following requirements should have been met (depending on the changes in the branch):

  • Documentation has been updated
  • Unit tests have been updated
  • E2E tests have been updated
  • Examples have been updated
  • New files have copyright header
  • CLA (https://neo4j.com/developer/cla/) has been signed
  • CHANGELOG.md updated if appropriate

@stellasia stellasia force-pushed the fix/vertexai-tools branch from 7164e26 to cc74102 Compare May 21, 2025 12:35
@stellasia stellasia marked this pull request as ready for review May 21, 2025 13:46
@stellasia stellasia requested a review from a team as a code owner May 21, 2025 13:46
    try:
        if isinstance(message_history, MessageHistory):
            message_history = message_history.messages
        messages = self.get_messages(input, message_history)
    -   response = self.model.generate_content(messages, **self.model_params)
    +   response = model.generate_content(messages)
        return LLMResponse(content=response.text)
Contributor:
Why are we removing model_params here?
