Can axolotl generate LoRAs for QuIP# models? #1006
-
Hi. Like many people, I have 24GB of VRAM. I've read good things about QuIP# and its ability to load a 70B model onto a 24GB card. Is it possible to generate a LoRA for these models using axolotl? I couldn't see anything that looked appropriate in the examples directory. Is support for these models planned? Thanks.
Answered by NanoCode012 on Feb 23, 2024
-
I'm not sure which model you mean, but if it's this one, you can see from its config that it's just a Mistral model: https://huggingface.co/abhishek/quip-v1/blob/main/config.json
You can use the Mistral examples.
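
As a starting point, a minimal LoRA config in the style of axolotl's `examples/mistral/lora.yml` might look like the sketch below. This is only an illustration: the `base_model` and dataset `path` are placeholders to swap for your own, and the hyperparameters are the usual example defaults, not a recommendation.

```yaml
# Sketch of an axolotl LoRA config for a Mistral-architecture model.
# base_model and datasets.path below are placeholders.
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

# Quantize the base model with bitsandbytes so it fits comfortably in VRAM.
load_in_8bit: true

adapter: lora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

datasets:
  - path: mhenrichsen/alpaca_2k_test   # placeholder dataset
    type: alpaca
val_set_size: 0.1
output_dir: ./lora-out

sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
```

Training is then launched the usual way, e.g. `accelerate launch -m axolotl.cli.train lora.yml`.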
Answer selected by araleza