Commit 2eda9e0

committed
fix typo
1 parent 78b9efb commit 2eda9e0

File tree

1 file changed: +1 −1


src/axolotl/utils/models.py

+1 −1
@@ -333,7 +333,7 @@ def load_model(
         model, use_gradient_checkpointing=cfg.gradient_checkpointing
     )

-    # LlamaRMSNorm layers are in fp32 after kit call, so we need to
+    # LlamaRMSNorm layers are in fp32 after kbit_training, so we need to
     # convert them back to fp16/bf16 for flash-attn compatibility.
     if cfg.flash_attention and cfg.is_llama_derived_model:
        for name, module in model.named_modules():
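The comment this commit corrects describes the cast-back step after k-bit preparation: PEFT's `prepare_model_for_kbit_training` leaves norm layers in fp32 for numerical stability, but flash-attn requires fp16/bf16 activations, so matching norm modules are cast back down. A minimal sketch of that pattern, assuming a PyTorch environment; `TinyRMSNorm` is a hypothetical stand-in for `LlamaRMSNorm`, not axolotl's actual code:

```python
import torch
import torch.nn as nn


class TinyRMSNorm(nn.Module):
    """Stand-in for LlamaRMSNorm: RMS-normalize, then scale by a learned weight."""

    def __init__(self, dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        norm = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + 1e-6)
        return self.weight * norm


# Toy model standing in for a Llama-derived model after k-bit preparation,
# where norm layers were upcast to fp32 for stability.
model = nn.Sequential(nn.Linear(8, 8), TinyRMSNorm(8))

# Mirror the loop the diff's comment documents: walk all submodules and cast
# the norm layers back to bf16 so flash-attn sees a consistent low-precision
# dtype. (Axolotl matches on the real LlamaRMSNorm class / module names.)
for name, module in model.named_modules():
    if isinstance(module, TinyRMSNorm) or "norm" in name.lower():
        module.to(torch.bfloat16)
```

Only the norm parameters change dtype; the surrounding layers keep whatever precision the quantized setup gave them.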

0 commit comments