Question about the option llm_int8_has_fp16_weight in BitsAndBytesConfig #983
As can be seen here, axolotl uses `llm_int8_has_fp16_weight=False`, which aligns with the example code in the QLoRA repo. But another training framework, xllm, defaults to `llm_int8_has_fp16_weight=True` (code). Also, according to the FAQ of bitsandbytes, […]. So, which one should I use? Or maybe it should be exposed as an option in the YAML configuration?
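
For concreteness, here is a minimal sketch of the two settings being compared, written against the transformers `BitsAndBytesConfig` API. The model name is only a placeholder, and the comments reflect how the transformers documentation describes the flag:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# axolotl / QLoRA-style: int8 main weights only (lower memory use).
config_int8_only = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_has_fp16_weight=False,
)

# xllm-style: run LLM.int8() with 16-bit main weights, which the
# transformers docs describe as useful for fine-tuning, since the
# weights do not have to be converted back and forth for the
# backward pass.
config_fp16_main_weights = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_has_fp16_weight=True,
)

# Example usage; "facebook/opt-350m" is just a placeholder model.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=config_int8_only,
    torch_dtype=torch.float16,
)
```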
Replies: 1 comment

Hey, this has been exposed recently. You can pass …
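
The reply above is truncated, so the exact mechanism isn't visible here. As a purely hypothetical illustration of what exposing the flag in an axolotl YAML config could look like, the `llm_int8_has_fp16_weight` key below is an assumption that simply mirrors the `BitsAndBytesConfig` argument name:

```yaml
# Hypothetical axolotl config snippet; whether axolotl accepts
# llm_int8_has_fp16_weight as a top-level key is an assumption.
base_model: facebook/opt-350m     # placeholder model
load_in_8bit: true
llm_int8_has_fp16_weight: false   # axolotl/QLoRA default; xllm defaults to true
```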