It seems that llama.cpp's iq1s_grid table now has 2048 entries (https://github.com/ggml-org/llama.cpp/blob/ee02ad02c56ff36a5edd22d8617ab3f9546ce7fe/ggml/src/ggml-common.h#L1075), while this project still uses a 512-entry table. This causes IQ1_S dequantization, and therefore the vector-matrix multiply results, to be wrong.

See llama-cpp-torch/llamacpp_kernel.cu, line 743 at commit f1da76e.

A correct implementation is available in https://github.com/Isotr0py/ggml-libtorch/blob/main/ggml-cuda/gguf_kernel.cu.
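For context, here is a minimal sketch of why the table size matters under the current llama.cpp IQ1_S layout: each group of 8 weights stores 8 low index bits in `qs` plus 3 high bits packed in `qh`, so the grid index spans 0..2047. The helper name `iq1s_lookup` and its signature are illustrative only, not the actual kernel code:

```c
#include <stdint.h>

#define NGRID_IQ1S 2048  /* current size in ggml-common.h; was 512 in older revisions */
extern const uint64_t iq1s_grid[NGRID_IQ1S];

/* `qs` holds the 8 low index bits for one group of 8 weights; `qh` packs
 * 3 extra high bits per group (group = 0..3). With the old 512-entry
 * table, any index >= 512 reads out of bounds and silently produces
 * garbage values instead of crashing. */
static inline uint64_t iq1s_lookup(uint8_t qs, uint16_t qh, int group) {
    const uint32_t idx = (uint32_t)qs | (((uint32_t)(qh >> (3 * group)) & 7u) << 8);
    return iq1s_grid[idx];
}
```

This would also explain the symptom: an out-of-bounds read on the device typically does not fault, so the kernel runs to completion and just returns incorrect dequantized values.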