
Is there any way for using LoftQ to GPTQ or AWQ model? #18

Open
WoosungMyung opened this issue Mar 7, 2024 · 2 comments

Comments

@WoosungMyung

I want to use LoftQ initialization for GPTQ or AWQ baseline model.

Is that possible?

@yxli2123
Owner

Hi @LameloBally, thanks for your interest in our work.

Unfortunately, the answer is no. LoftQ aims to provide a data-free initialization for all downstream task fine-tuning, so quantization with data calibration is out of our scope. However, you are welcome to explore extensions of our method: you can obtain a quantized backbone Q from GPTQ or AWQ and then initialize the LoRA adapters, A and B, from SVD(W - Q), where W is the full-precision pre-trained weight.
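The idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not part of the LoftQ codebase: it assumes you have already dequantized the GPTQ/AWQ backbone back to a dense matrix `Q`, and the helper name `lora_init_from_residual` is hypothetical. The rank-`r` truncated SVD of the residual `W - Q` gives factors `B @ A` such that `Q + B @ A` is the best rank-`r` correction toward `W`.

```python
import numpy as np

def lora_init_from_residual(W, Q, rank):
    """Initialize LoRA factors (A, B) from the quantization residual W - Q
    via truncated SVD, so that Q + B @ A approximates W."""
    R = W - Q  # residual between full-precision and quantized weights
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    sqrt_S = np.sqrt(S[:rank])          # split singular values between the two factors
    B = U[:, :rank] * sqrt_S            # shape: (out_features, rank)
    A = sqrt_S[:, None] * Vt[:rank]     # shape: (rank, in_features)
    return A, B

# Toy check: the rank-r correction shrinks the approximation error.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))      # stand-in for a full-precision weight
Q = np.round(W * 4) / 4                # stand-in for a dequantized GPTQ/AWQ weight
A, B = lora_init_from_residual(W, Q, rank=8)
err_before = np.linalg.norm(W - Q)
err_after = np.linalg.norm(W - (Q + B @ A))
assert err_after < err_before
```

In a real model you would run this per linear layer, copy `A` and `B` into the layer's LoRA adapters, and keep `Q` frozen as the quantized backbone during fine-tuning.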

Let me know if you have further questions.

@WoosungMyung
Author

@yxli2123 Thanks for your kind answer. It was very helpful!
But I have another question: how can I initialize the LoRA adapters from SVD(W - Q), where Q is the GPTQ-quantized model and W is the full-precision model?
I think LoftQ is a very innovative idea, so I want to apply it to other cases and develop it further.
It would be very helpful if you could point me to a guide or explain how I can do this.

Thanks!
