
Commit cf5ae6b

Feat(readme): improve docs on multi-gpu
1 parent 168a7a0 commit cf5ae6b

File tree

1 file changed: +19 −2 lines changed


Diff for: README.md (+19 −2)
````diff
@@ -36,8 +36,6 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl
 pip3 install -e .
 pip3 install -U git+https://github.com/huggingface/peft.git
 
-accelerate config
-
 # finetune lora
 accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml
 
````
````diff
@@ -525,6 +523,21 @@ Run
 accelerate launch scripts/finetune.py configs/your_config.yml
 ```
 
+#### Multi-GPU Config
+
+- llama FSDP
+```yaml
+fsdp:
+  - full_shard
+  - auto_wrap
+fsdp_config:
+  fsdp_offload_params: true
+  fsdp_state_dict_type: FULL_STATE_DICT
+  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
+```
+
+- llama Deepspeed: prepend `ACCELERATE_USE_DEEPSPEED=true` to the finetune command
+
 ### Inference
 
 Pass the appropriate flag to the train command:
````
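As a minimal sketch, the DeepSpeed option added in the hunk above would be invoked like this, reusing the README's placeholder config path `configs/your_config.yml` (adjust to your own config):

```shell
# Enable Accelerate's DeepSpeed integration via the environment variable,
# then launch finetuning as usual. The config path is the README's placeholder.
ACCELERATE_USE_DEEPSPEED=true accelerate launch scripts/finetune.py configs/your_config.yml
```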
````diff
@@ -575,6 +588,10 @@ Try set `fp16: true`
 
 Try to turn off xformers.
 
+> Message about accelerate config missing
+
+It's safe to ignore it.
+
 ## Need help? 🙋♂️
 
 Join our [Discord server](https://discord.gg/HhrNrHJPRb) where we can help you
````
