[docs] refactor docs for easier info parsing #175
Conversation
@@ -128,540 +125,24 @@ eval $cmd
echo -ne "-------------------- Finished executing script --------------------\n\n"
```
### Inference:
Moved to the specific model docs.
> [!NOTE]
> To lower memory requirements:
> - Use a DeepSpeed config to launch training (refer to [`accelerate_configs/deepspeed.yaml`](./accelerate_configs/deepspeed.yaml) as an example).
> - Pass `--precompute_conditions` when launching training.
> - Pass `--gradient_checkpointing` when launching training.
> - Pass `--use_8bit_bnb` when launching training. Note that this is only applicable to Adam and AdamW optimizers.
> - Do not perform validation/testing. This saves a significant amount of memory, which can be used to focus solely on training if you're on smaller VRAM GPUs.
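The flags in the note above can be combined in a single launch command. A minimal sketch, assuming an Accelerate-based training script; the script name `train.py` and the dataset path are placeholders, not taken from the repo:

```shell
# Hedged sketch: combining the memory-saving options listed above.
# "train.py" and "/path/to/dataset" are placeholders/assumptions.
accelerate launch --config_file accelerate_configs/deepspeed.yaml train.py \
  --precompute_conditions \
  --gradient_checkpointing \
  --use_8bit_bnb \
  --data_root /path/to/dataset
```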
In docs/training/optimizations.md.
> - Pass `--use_8bit_bnb` when launching training. Note that this is only applicable to Adam and AdamW optimizers.
> - Do not perform validation/testing. This saves a significant amount of memory, which can be used to focus solely on training if you're on smaller VRAM GPUs.

## Memory requirements
We don't need it here, as we already have it in:
https://github.com/a-r-r-o-w/finetrainers/tree/main/training
Thanks for the incredible refactor!
).frames[0]
export_to_video(output, "output.mp4", fps=15)
```
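For context, the truncated snippet above appears to be the tail of a diffusers inference example. A hedged sketch of one plausible completion, assuming LTX-Video via diffusers' `LTXPipeline` and a CUDA GPU; the model id, prompt, and generation parameters are assumptions — only the `.frames[0]` / `export_to_video(...)` lines appear in the diff itself:

```python
# Hedged sketch, not the PR's actual snippet: model id, prompt, and
# parameters below are assumptions; only the last two lines are quoted.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

output = pipe(
    prompt="A chimpanzee typing on an old typewriter",
    num_inference_steps=50,
).frames[0]
export_to_video(output, "output.mp4", fps=15)
```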
| Model Name | Tasks | Ckpts Tested | Min. GPU<br>VRAM | Comments |
Would remove the comments column here tbh. All models are supported with multi-resolution and multi-frames so far.
LTX is fast to train but it is extremely hard to teach it new styles tbh. I have had very limited success getting a good lora. Have not figured out the best settings yet but continuing to try.
Oh, I kept the comments column for noting anything that couldn't fit in the other columns — anything out of the ordinary that's absolutely important for users to know.
WDYT?
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
@a-r-r-o-w thanks for the reviews! In the latest commits:
thanks!
Approach taken: