Can't consume OpenAI via requests #6538
Comments
A different base_url can be specified. See here: https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.openai.html#autogen_ext.models.openai.OpenAIClientConfigurationConfigModel

To do so you would pass:

```python
openai_client = OpenAIChatCompletionClient(
    model="gpt-4o-2024-08-06",
    base_url="...",
    # api_key="sk-...",  # Optional if you have an OPENAI_API_KEY environment variable set.
)
```

---
Yeah, but it would still not work, as that intermediate instance is not propagating the headers properly; this would still use the openai library instead of requests. Even outside autogen, I can only consume this endpoint via requests; if I consume it with the library, it does not work.

I have seen that you can create custom models (https://microsoft.github.io/autogen/0.2/blog/2024/01/26/Custom-Models/), but it doesn't seem to let you define the API (request/response structure) of the request sent, does it?

Moreover, what I need is the ability to create a custom model where I specify the endpoint, the keys, the request structure, and the response structure.

Thanks

---
If the existing model clients don't fit your needs, can you suggest how they need to change? It is not clear to me from your issue. Alternatively, you can create a custom model by implementing the ChatCompletionClient interface.

The link you sent doesn't apply to the codebase now; it is for the old 0.2 version.

---
Basically I am calling my LLM with this:
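(Schematically; the endpoint URL, header names, and payload fields below stand in for the real ones:)

```python
# Illustrative placeholders only: the real gateway URL, auth headers, and
# request/response shapes are specific to the intermediate service.
import requests

url = "https://my-gateway.example.com/llm/chat"  # hypothetical intermediate endpoint
headers = {
    "Authorization": "Bearer <token>",  # whatever auth the gateway expects
    "X-Custom-Header": "<value>",       # extra headers the gateway requires
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Hello"}],
}

response = requests.post(url, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json())
```

---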
There is no model client that works like this today; you will need to create a custom model by implementing the ChatCompletionClient interface.
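A rough skeleton of such a client might look like the following (a sketch, not a definitive implementation: the gateway URL, headers, and response field are placeholders, and the autogen_core.models names are assumed to match current main):

```python
# Sketch only: assumes autogen_core.models exposes ChatCompletionClient,
# CreateResult, and RequestUsage as on current main, and that create()
# has the signature below. A complete client must also implement the
# interface's other abstract methods (streaming, token counting, etc.);
# check the installed version.
import requests

from autogen_core.models import ChatCompletionClient, CreateResult, RequestUsage


class RequestsChatCompletionClient(ChatCompletionClient):
    """Calls an intermediate endpoint with plain HTTP requests."""

    def __init__(self, endpoint: str, headers: dict[str, str]) -> None:
        self._endpoint = endpoint  # hypothetical gateway URL
        self._headers = headers    # whatever headers the gateway needs
        self._usage = RequestUsage(prompt_tokens=0, completion_tokens=0)

    async def create(self, messages, *, tools=[], json_output=None,
                     extra_create_args={}, cancellation_token=None) -> CreateResult:
        # Translate autogen messages into the gateway's request structure,
        # e.g. UserMessage -> "user", SystemMessage -> "system".
        payload = {
            "messages": [
                {"role": type(m).__name__.removesuffix("Message").lower(),
                 "content": m.content}
                for m in messages
            ]
        }
        # Blocking call kept short for the sketch; real code should run it
        # off the event loop (e.g. via asyncio.to_thread).
        resp = requests.post(self._endpoint, headers=self._headers,
                             json=payload, timeout=60)
        resp.raise_for_status()
        data = resp.json()
        return CreateResult(
            finish_reason="stop",
            content=data["text"],  # hypothetical field in the gateway's response
            usage=self._usage,
            cached=False,
        )
```

---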
What happened?
Hi,

I am consuming an OpenAI instance via requests, as I need to go through an intermediate endpoint. It looks like the OpenAI implementation in autogen only supports direct calls using the library, and I can't add my custom endpoint.

Is there any way to consume my custom OpenAI endpoint in autogen?

Thanks
Which packages were the bug in?
AutoGen Studio (autogenstudio)
AutoGen library version.
Python dev (main branch)
Other library version.
No response
Model used
No response
Model provider
None
Other model provider
No response
Python version
None
.NET version
None
Operating system
None