Hi @LameloBally, thanks for your interest in our work.
Unfortunately the answer is no. LoftQ aims to provide a data-free initialization for all downstream-task fine-tuning, so quantization with data calibration is out of our scope. However, we welcome you to explore more possibilities with our method: you can obtain a quantized backbone Q from GPTQ or AWQ and then initialize the LoRA adapters A and B by SVD(W - Q), where W is the full-precision pre-trained weight.
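As a minimal sketch (not the official LoftQ code), the rank-r SVD initialization of the LoRA factors from the residual W - Q could look like this in PyTorch. It assumes Q has already been dequantized back to a dense tensor with the same shape as W; the function name `lora_init_from_residual` and the `rank` parameter are hypothetical.

```python
import torch

def lora_init_from_residual(W: torch.Tensor, Q: torch.Tensor, rank: int):
    """Return LoRA factors (B, A) such that B @ A approximates W - Q.

    W: full-precision pre-trained weight, shape (out_features, in_features)
    Q: dequantized backbone weight (e.g. from GPTQ/AWQ), same shape as W
    """
    residual = (W - Q).float()
    # Thin SVD of the quantization residual: residual = U @ diag(S) @ Vh
    U, S, Vh = torch.linalg.svd(residual, full_matrices=False)
    # Split the top-r singular values evenly between the two factors
    sqrt_S = torch.sqrt(S[:rank])
    B = U[:, :rank] * sqrt_S              # (out_features, rank)
    A = sqrt_S.unsqueeze(1) * Vh[:rank, :]  # (rank, in_features)
    return B, A
```

With this initialization, Q + B @ A is the best rank-r correction of the quantization error in W - Q (by Eckart–Young), so the adapted model starts closer to the full-precision weights than a zero-initialized LoRA would. Note this is a single SVD step; LoftQ itself alternates quantization and SVD, which is not directly applicable when Q is fixed by GPTQ/AWQ.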
@yxli2123 Thanks for your kind answer. It was very helpful!
But I have another question: how can I initialize the LoRA adapters by SVD(W - Q), where Q is the GPTQ-quantized weight and W is the full-precision weight?
I think LoftQ is a very innovative idea, so I want to apply it to other cases and develop it further.
It would be very helpful if you could point me to a guide or explain how I can do this.
I want to use LoftQ initialization with a GPTQ or AWQ baseline model.
Is that possible?