
Can't consume OpenAI via requests #6538

Open

javixeneize opened this issue May 15, 2025 · 5 comments

Comments

@javixeneize

What happened?

Hi

I am consuming an OpenAI instance via requests because I need to reach it through an intermediate endpoint. It looks like the OpenAI implementation in AutoGen only supports direct calls using the openai library, and I can't add my custom endpoint.

Is there any way to consume my custom OpenAI endpoint in AutoGen?

Thanks

Which package was the bug in?

AutoGen Studio (autogenstudio)

AutoGen library version.

Python dev (main branch)

Other library version.

No response

Model used

No response

Model provider

None

Other model provider

No response

Python version

None

.NET version

None

Operating system

None

@jackgerrits
Member

A different base_url can be specified. See here: https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.openai.html#autogen_ext.models.openai.OpenAIClientConfigurationConfigModel

To do so, pass base_url when creating the client:

```
from autogen_ext.models.openai import OpenAIChatCompletionClient

openai_client = OpenAIChatCompletionClient(
    model="gpt-4o-2024-08-06",
    base_url="...",
    # api_key="sk-...", # Optional if you have an OPENAI_API_KEY environment variable set.
)
```

@javixeneize
Author

Yeah, but it would still not work, as that intermediate instance does not propagate the headers properly; this would still use the openai library instead of requests. Even outside of AutoGen, I can only consume this endpoint via requests; if I consume it with the library, it does not work.

I have seen that you can create custom models (https://microsoft.github.io/autogen/0.2/blog/2024/01/26/Custom-Models/), but that doesn't seem to let you define the API (request/response structure) of the request that is sent, does it?

Moreover, what I need is the ability to create a custom model where I provide the endpoint, the keys, the request structure, and the response structure.

Thanks

@jackgerrits
Member

jackgerrits commented May 19, 2025

If the existing model clients don't fit your needs, can you suggest how they need to change? It is not clear to me from your issue.

Alternatively, you can create a custom model by implementing the ChatCompletionClient interface.

The link you sent doesn't apply to the current codebase; it is for the old 0.2 version.

@javixeneize
Author

Basically, I am calling my LLM with this:

```
import json

import requests

# openai_url, headers, llm_prompt, and userprompt are defined elsewhere.
payload = json.dumps({
    "model": "gpt-4o-mini",
    "inputs": {
        "system": {"model_id": "any", "prompt_id": "base", "args": {"prompt": llm_prompt}},
        "parameters": {"max_tokens": 4000},
        "messages": userprompt,
    },
})

response = requests.post(openai_url, headers=headers, data=payload)
```
So I need to know how I can use AutoGen to integrate with this model.

Thanks

@jackgerrits
Member

There is no model client that works like this today; you will need to create a custom model by implementing the ChatCompletionClient interface.
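
For anyone landing here, below is a minimal sketch of such a custom client, assuming the ChatCompletionClient interface from autogen_core.models on a recent main branch (the exact method set may differ between versions). The class name RequestsChatCompletionClient, the payload layout (copied from the snippet above), and the "output" response field are assumptions you would adapt to your endpoint; streaming and token counting are stubbed out.

```
import json
from typing import Any, Mapping, Optional, Sequence

import requests
from autogen_core import CancellationToken
from autogen_core.models import (
    ChatCompletionClient,
    CreateResult,
    LLMMessage,
    ModelInfo,
    RequestUsage,
    SystemMessage,
)


class RequestsChatCompletionClient(ChatCompletionClient):
    """Hypothetical client that forwards chat requests through an intermediate endpoint via requests."""

    def __init__(self, url: str, headers: Mapping[str, str]) -> None:
        self._url = url
        self._headers = dict(headers)
        self._total_usage = RequestUsage(prompt_tokens=0, completion_tokens=0)

    async def create(
        self,
        messages: Sequence[LLMMessage],
        *,
        tools: Sequence[Any] = [],
        json_output: Optional[bool] = None,
        extra_create_args: Mapping[str, Any] = {},
        cancellation_token: Optional[CancellationToken] = None,
    ) -> CreateResult:
        # Split the system prompt from the other turns and build the custom
        # payload the intermediate endpoint expects (mirrors the snippet above).
        system = next((m.content for m in messages if isinstance(m, SystemMessage)), "")
        turns = [m.content for m in messages if not isinstance(m, SystemMessage)]
        payload = json.dumps({
            "model": "gpt-4o-mini",
            "inputs": {
                "system": {"model_id": "any", "prompt_id": "base", "args": {"prompt": system}},
                "parameters": {"max_tokens": 4000},
                "messages": turns,
            },
        })
        response = requests.post(self._url, headers=self._headers, data=payload)
        response.raise_for_status()
        data = response.json()
        return CreateResult(
            finish_reason="stop",
            content=data["output"],  # "output" is a hypothetical response field; adapt it
            usage=RequestUsage(prompt_tokens=0, completion_tokens=0),
            cached=False,
        )

    def create_stream(self, *args: Any, **kwargs: Any):
        raise NotImplementedError("Streaming is not supported by this sketch.")

    async def close(self) -> None:
        pass  # requests has no persistent connection to close here

    def actual_usage(self) -> RequestUsage:
        return self._total_usage

    def total_usage(self) -> RequestUsage:
        return self._total_usage

    def count_tokens(self, messages: Sequence[LLMMessage], **kwargs: Any) -> int:
        return 0  # no tokenizer available for the intermediate endpoint

    def remaining_tokens(self, messages: Sequence[LLMMessage], **kwargs: Any) -> int:
        return 4000

    @property
    def capabilities(self) -> ModelInfo:
        return self.model_info  # deprecated alias kept for older versions

    @property
    def model_info(self) -> ModelInfo:
        return ModelInfo(
            vision=False,
            function_calling=False,
            json_output=False,
            structured_output=False,
            family="unknown",
        )
```

An instance can then be passed wherever AutoGen expects a ChatCompletionClient, for example as the model_client of an AssistantAgent.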
