Now we make it compatible with the official API clients (Python, Rust, PHP, etc.). You can run it as a proxy server between your client and the OpenAI endpoint while keeping its ability to distribute requests across keys for faster batched completions!
First, start the proxy:

```bash
python -m openai_manager.serving --port 8000 --host localhost --api_key [your custom key]
```

Then point the official Python `openai` package at it:
```python
import openai

# Route requests through the local proxy instead of api.openai.com
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "[your custom key]"

# Use the client as usual; the proxy fans the batch out across your keys
prompt = ["Once upon a time, "] * 10
response = openai.Completion.create(
    model="code-davinci-002",
    prompt=prompt,
    max_tokens=20,
)
# One choice comes back per prompt in the batch
for choice in response["choices"]:
    print(choice["text"])
```
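Because the proxy mirrors the official REST interface, any language that can POST JSON can use it; no OpenAI SDK is required. Below is a minimal raw-HTTP sketch using only the Python standard library (the endpoint path `/v1/completions` follows the official API; the actual network call is commented out so it only runs against a live proxy):

```python
import json
import urllib.request

# Same request the openai package would send, built by hand
payload = {
    "model": "code-davinci-002",
    "prompt": ["Once upon a time, "] * 10,
    "max_tokens": 20,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer [your custom key]",
    },
)

# Uncomment with the proxy running:
# with urllib.request.urlopen(req) as resp:
#     for choice in json.load(resp)["choices"]:
#         print(choice["text"])
```

This is handy for smoke-testing the proxy from cron jobs or non-Python services before wiring up a full client library.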