The embedding model I'm using is bge-m3, downloaded from https://huggingface.co/gpustack/bge-m3-GGUF/tree/main.
I want to load this embedding model in koboldcpp_nocuda.exe and use it through the Ollama API it provides.
Error log:
src/llama.cpp:8702: GGML_ASSERT(strcmp(res->name, "result_output") == 0 && "missing result_output tensor") failed
Is it possible to add an embedding option?
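For reference, this is the kind of call I'm hoping to make. It is only a minimal sketch: the /api/embeddings path, the default port 5001, and the model name are assumptions on my part, modeled on Ollama's embeddings endpoint rather than anything koboldcpp documents.

```python
# Minimal sketch, not working code: assumes koboldcpp_nocuda.exe is
# listening on its default port (5001) and exposes an Ollama-style
# /api/embeddings endpoint. Endpoint path, port, and model name are
# all assumptions for illustration.
import requests

resp = requests.post(
    "http://localhost:5001/api/embeddings",
    json={"model": "bge-m3", "prompt": "example text to embed"},
    timeout=30,
)
resp.raise_for_status()
# Ollama's embeddings endpoint returns {"embedding": [floats]}
print(resp.json()["embedding"])
```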
Sorry, embedding models are not supported in koboldcpp at this time.