Replies: 3 comments 2 replies
-
You might want to quantize it as well. Follow the link below and let me know if it helps.
-
Hi, did you find a way to convert the model into a bin file? The link that was shared did not help me.
-
After downloading the .pth (PyTorch) checkpoint file (assuming you downloaded it from Meta), you need to convert it to a Q4_0-quantized GGML model.
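A minimal sketch of that conversion flow using llama.cpp's own tooling. The `models/7B/` layout is an assumption, and the script names varied across llama.cpp versions (older checkouts shipped `convert.py` and a `quantize` binary; later ones renamed both), so check the README of the revision you clone:

```shell
# Build llama.cpp, which ships the conversion and quantization tools.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Put consolidated.00.pth, checklist.chk, and tokenizer.model under
# models/7B/ (assumed layout), then convert the PyTorch checkpoint
# to an f16 GGML .bin file:
python3 convert.py models/7B/

# Quantize the f16 model down to Q4_0 (much smaller, CPU-friendly):
./quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin q4_0
```

The two-step design (convert first, quantize second) keeps a full-precision f16 copy around, so you can re-quantize to other formats later without redoing the slow conversion.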
-
I downloaded the Llama 2 7B files (consolidated.00.pth, checklist.chk, tokenizer.model). I want to load this model using llama-cpp, but first I need to convert it into a bin file. What should I do?
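Once a quantized `.bin` exists, loading it is the easy part. A sketch using the llama-cpp-python bindings (the model path is an assumption; it requires an actual converted model file on disk to run):

```python
# Sketch: load a converted GGML/quantized model with llama-cpp-python
# and run a short completion. The path below is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")
out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```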