
[docs] refactor docs for easier info parsing #175

Merged
merged 19 commits into main from docs on Jan 5, 2025

Conversation

sayakpaul
Collaborator

Approach taken:

  • Keep the README as uncluttered as possible.
  • Add docs for each model. Only once we have more training algorithms should we include separate sections within the model docs.
  • Once we have I2V support for a model, it should go into the respective model doc accordingly.
  • Have a central doc for memory optimizations.

@sayakpaul sayakpaul requested a review from a-r-r-o-w January 3, 2025 08:34
@@ -128,540 +125,24 @@ eval $cmd
echo -ne "-------------------- Finished executing script --------------------\n\n"
```

### Inference:
Collaborator Author

Went to specific model docs.

Comment on lines -392 to -398
> [!NOTE]
> To lower memory requirements:
> - Use a DeepSpeed config to launch training (refer to [`accelerate_configs/deepspeed.yaml`](./accelerate_configs/deepspeed.yaml) as an example).
> - Pass `--precompute_conditions` when launching training.
> - Pass `--gradient_checkpointing` when launching training.
> - Pass `--use_8bit_bnb` when launching training. Note that this is only applicable to Adam and AdamW optimizers.
> - Do not perform validation/testing. This saves a significant amount of memory, which can be used to focus solely on training if you're on smaller VRAM GPUs.
Collaborator Author

In docs/training/optimizations.md.
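
For reference, a minimal sketch of how these flags could be combined in a single launch command. The `train.py` entrypoint name is an illustrative placeholder (the actual training scripts live under the repo's training/ directory); the DeepSpeed config path is the one referenced in the note above.

```
# Illustrative sketch only: combines the memory-saving options listed above.
# `train.py` is a placeholder entrypoint; see the repo's training scripts
# for the real invocation.
accelerate launch --config_file accelerate_configs/deepspeed.yaml train.py \
  --precompute_conditions \
  --gradient_checkpointing \
  --use_8bit_bnb
```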

> - Pass `--use_8bit_bnb` when launching training. Note that this is only applicable to Adam and AdamW optimizers.
> - Do not perform validation/testing. This saves a significant amount of memory, which can be used to focus solely on training if you're on smaller VRAM GPUs.

## Memory requirements
Collaborator Author

We don't need this as we have it here already:
https://github.com/a-r-r-o-w/finetrainers/tree/main/training

Owner

@a-r-r-o-w a-r-r-o-w left a comment

Thanks for the incredible refactor!

).frames[0]
export_to_video(output, "output.mp4", fps=15)
```
| Model Name | Tasks | Ckpts Tested | Min. GPU<br>VRAM | Comments |
Owner

Would remove the comments column here tbh. All models are supported with multi-resolution and multi-frames so far.

LTX is fast to train but it is extremely hard to teach it new styles tbh. I have had very limited success getting a good LoRA. Haven't figured out the best settings yet but continuing to try.

Collaborator Author

Oh, I kept the comments column to include anything we should note that doesn't fit in the other columns: anything out of the ordinary that's absolutely important for users to know.

WDYT?

@sayakpaul
Collaborator Author

@a-r-r-o-w thanks for the reviews! In the latest commits:

  • I have addressed all your comments and left a question for you here.
  • Introduced an ultra-short section in docs/training/README.md to reference the model-specific docs for easier navigation.

@sayakpaul sayakpaul requested a review from a-r-r-o-w January 5, 2025 13:03
Owner

@a-r-r-o-w a-r-r-o-w left a comment

thanks!

@sayakpaul sayakpaul merged commit eeb4dd7 into main Jan 5, 2025
1 check passed
@sayakpaul sayakpaul deleted the docs branch January 5, 2025 15:20