README.md (5 additions, 8 deletions)
@@ -268,16 +268,13 @@ If QodeAssist is having problems connecting to the LLM provider, please check the following:
    - For Ollama, the default is usually http://localhost:11434
    - For LM Studio, the default is usually http://localhost:1234
 
-2. Check the endpoint:
+2. Confirm that the selected model and template are compatible:
 
-   Make sure the endpoint in the settings matches the one required by your provider
-   - For Ollama, it should be /api/generate
-   - For LM Studio and OpenAI-compatible providers, it's usually /v1/chat/completions
+   Ensure you've chosen the correct model in the "Select Models" option
+   Verify that the selected prompt template matches the model you're using
 
-3. Confirm that the selected model and template are compatible:
-
-   Ensure you've chosen the correct model in the "Select Models" option
-   Verify that the selected prompt template matches the model you're using
+3. On Linux, the prebuilt binaries support only Ubuntu 22.04+ or a similar OS.
+   If you need compatibility with another OS, you have to build manually. You can check our experiments and resolution here: https://github.com/Palm1r/QodeAssist/issues/48
 
 If you're still experiencing issues with QodeAssist, you can try resetting the settings to their default values:
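
The diff keeps the README's default provider URLs (Ollama on http://localhost:11434, LM Studio on http://localhost:1234). As a quick sanity check outside of Qt Creator, a small script like the sketch below can confirm the local server is actually listening before you debug QodeAssist settings. This is only an illustration: the `/api/tags` (Ollama) and `/v1/models` (LM Studio / OpenAI-compatible) paths are the providers' model-listing endpoints, not something QodeAssist itself is documented to call, and the ports assume you kept the defaults.

```python
# Minimal sketch: check whether a local LLM provider is reachable on its default port.
# Assumes the default URLs mentioned in the README; adjust if you changed them.
import json
import urllib.error
import urllib.request

ENDPOINTS = {
    "Ollama": "http://localhost:11434/api/tags",      # lists installed models
    "LM Studio": "http://localhost:1234/v1/models",   # OpenAI-compatible model list
}

def check(name: str, url: str) -> None:
    """Print whether the provider answers with JSON at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            json.loads(resp.read().decode("utf-8"))
            print(f"{name}: reachable at {url}")
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError) as exc:
        print(f"{name}: NOT reachable at {url} ({exc})")

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        check(name, url)
```

If a provider reports as not reachable here, fix the server or the URL in the QodeAssist settings before looking at model and template compatibility.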