[Bug]: LightRAG does not work with gpt-4o-mini #1348
Comments
Check your .env settings for Azure OpenAI. You'd better try the LightRAG Server first.
I tried the server. It kept binding to Ollama. Then, when I removed all Ollama references, I got an error saying that it fails to bind with gpt-4o-mini.
It should work. I've been using Azure 4o-mini and 4o with LightRAG for months now, and I've just deployed o3-mini in Azure and am testing it now. Make sure these .env parameters are properly set: AZURE_EMBEDDING_DEPLOYMENT=text-embedding-3-large. If the .env settings are correct, then you might want to check your network.
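For reference, a minimal sketch of the Azure-related .env parameters, following the shape of the LightRAG API README linked below. The version strings and endpoint are placeholders; check them against your own Azure deployment:

```
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_DEPLOYMENT=gpt-4o-mini
AZURE_OPENAI_API_VERSION=2024-08-01-preview

AZURE_EMBEDDING_DEPLOYMENT=text-embedding-3-large
AZURE_EMBEDDING_API_VERSION=2023-05-15
```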
What version are you using? I have tried 3 or 4 times with 1.3.1 and 1.3. It's not the network, because using the same keys I was able to use GraphRAG from Microsoft.
https://github.com/HKUDS/LightRAG/blob/main/lightrag/api/README.md#for-azure-openai-backend
I used it under 1.2.6 and just today updated to 1.3.1, testing both o3-mini and gpt-4o-mini to regenerate my database. I also used lightrag_azure_openai_demo.py to kick off my test under 1.3.1. When this line is executed, does your console display a response, something like: *I slightly modified the prompt and text inside the test_funcs function
Here is my .env file (note: I deployed the gpt-4o-mini model with the deployment name "gpt-4o-mini" in Azure):
The LLM_BINDING environment variable controls the LLM API mode. For Azure OpenAI, you should set this:
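Presumably something like the following, per the binding options listed in the LightRAG API README:

```
LLM_BINDING=azure_openai
```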
Hmm, I tried running with that parameter disabled (commented out), but both my tests still run.
The Ollama problem is because the default embedding binding is Ollama; you should also set it like:
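Again presumably, mirroring the LLM binding setting above (name taken from the LightRAG API README):

```
EMBEDDING_BINDING=azure_openai
```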
It seems I was mistaken. I reused my terminal environment when running the tests, so the values never got overwritten.
load_dotenv() ensures the OS environment variables take precedence over the .env file configuration.
Actually, no. It loads the .env into the OS environment, and if you set the "override" parameter to true, any existing value in the OS environment for the same parameter will be overwritten with what is inside the .env.
The load_dotenv() function preserves existing OS environment variables by design, which explains why your .env file modifications aren't being applied. |
They are preserved because the "override" parameter is False by default.
If you deploy LightRAG in Docker, not overriding is a must.
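To make the precedence rule concrete, here is a small stdlib-only sketch that mimics python-dotenv's override behavior (a simplified stand-in for load_dotenv, not the real implementation):

```python
import os

def load_env_file(lines, override=False):
    """Mimic python-dotenv's precedence rule: with override=False
    (the default), keys already present in os.environ are preserved;
    with override=True, the .env value wins."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if override or key.strip() not in os.environ:
            os.environ[key.strip()] = value.strip()

# Simulate a stale terminal environment plus a fresh .env file.
os.environ["LLM_BINDING"] = "ollama"          # leftover shell value
dotenv_lines = ["LLM_BINDING=azure_openai"]   # what the .env says

load_env_file(dotenv_lines)                   # override=False: shell value kept
print(os.environ["LLM_BINDING"])              # -> ollama

load_env_file(dotenv_lines, override=True)    # override=True: .env wins
print(os.environ["LLM_BINDING"])              # -> azure_openai
```

This is why reusing a terminal where the variables were already exported hides any edits made to the .env file afterwards.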
Thank you both @danielaskdd and @BireleyX for the comments. test_funcs() seems to work, but when it starts processing the document it just goes on forever. Here are the metrics. Here is the terminal output (I stopped it after a while). Here is my .env:
I get the same problem when I try with gpt-4o-mini. Here is my Azure demo file:
It appears the LLM request failed for some reason. Please verify your implementation of llm_model_func by testing it separately. |
@JoedNgangmeni did you get it working already? Here's my llm_func that works on both o3-mini and gpt-4o-mini:
I can't find any obvious problem with your code. How about you try maxing out your rate limit settings like I did:
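Since the original llm_func wasn't reproduced in this thread, here is a hypothetical sketch of what an Azure LLM function in the shape LightRAG's demos expect might look like. The signature, env-var names, and the use of the `openai` (>=1.0) SDK's AsyncAzureOpenAI client are all assumptions; adapt them to your deployment:

```python
import os

async def llm_model_func(prompt, system_prompt=None, history_messages=None, **kwargs):
    # Hypothetical sketch: assumes the `openai` package (>=1.0) and the
    # AZURE_* variables shown earlier in this thread are set.
    from openai import AsyncAzureOpenAI  # imported lazily so the module loads without the SDK

    client = AsyncAzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version=os.environ["AZURE_OPENAI_API_VERSION"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    )

    # Build the chat history: optional system prompt, prior turns, then the new prompt.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history_messages or [])
    messages.append({"role": "user", "content": prompt})

    response = await client.chat.completions.create(
        model=os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o-mini"),
        messages=messages,
        **kwargs,
    )
    return response.choices[0].message.content
```

Calling a function like this directly (outside LightRAG) is a quick way to confirm the Azure credentials and deployment name work before blaming the indexing loop.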
Yes. It finally works. I have no idea why though. |
Do you need to file an issue?
Describe the bug
When I run the code against Azure OpenAI gpt-4o-mini, it either gives me an error saying that binding failed (LightRAG API) or never comes out of some loop while it's indexing a file (azure_open_ai_demo code).
Steps to reproduce
No response
Expected Behavior
No response
LightRAG Config Used
Paste your config here
Logs and screenshots
No response
Additional Information