
Commit 9485e84: Update README.md
1 parent c6bb2d0


README.md (+15 -15)
@@ -268,21 +268,21 @@ python3 sample_image2video.py \
We list some more useful configurations for easy usage:

-| Argument | Default | Description |
-|:----------------------:|:----------------------------:|:------------------------------------------------------------------------------------------------:|
-| `--prompt` | None | The text prompt for video generation. |
-| `--model` | HYVideo-T/2-cfgdistill | Here we use HYVideo-T/2 for I2V, HYVideo-T/2-cfgdistill is used for T2V mode. |
-| `--i2v-mode` | False | Whether to open i2v mode. |
-| `--i2v-image-path` | ./assets/demo/i2v/imgs/0.png | The reference image for video generation. |
-| `--i2v-resolution` | 720p | The resolution for the generated video. |
-| `--i2v-stability` | False | Whether to use stable mode for i2v inference. |
-| `--video-length` | 129 | The length of the generated video. |
-| `--infer-steps` | 50 | The number of steps for sampling. |
-| `--flow-shift` | 7.0 | Shift factor for flow matching schedulers . |
-| `--flow-reverse` | False | If reverse, learning/sampling from t=1 -> t=0. |
-| `--seed` | None | The random seed for generating video, if None, we init a random seed. |
-| `--use-cpu-offload` | False | Use CPU offload for the model load to save more memory, necessary for high-res video generation. |
-| `--save-path` | ./results | Path to save the generated video. |
+| Argument | Default | Description |
+|:----------------------:|:----------------------------:|:------------------------------------------------------------------------------------------------:|
+| `--prompt` | None | The text prompt for video generation. |
+| `--model` | HYVideo-T/2-cfgdistill | Here we use HYVideo-T/2 for I2V, HYVideo-T/2-cfgdistill is used for T2V mode. |
+| `--i2v-mode` | False | Whether to open i2v mode. |
+| `--i2v-image-path` | ./assets/demo/i2v/imgs/0.png | The reference image for video generation. |
+| `--i2v-resolution` | 720p | The resolution for the generated video. |
+| `--i2v-stability` | False | Whether to use stable mode for i2v inference. |
+| `--video-length` | 129 | The length of the generated video. |
+| `--infer-steps` | 50 | The number of steps for sampling. |
+| `--flow-shift` | 7.0 | Shift factor for flow matching schedulers. We recommend 7 with `--i2v-stability` switch on for more stable video, 17 with `--i2v-stability` switch off for more dynamic video |
+| `--flow-reverse` | False | If reverse, learning/sampling from t=1 -> t=0. |
+| `--seed` | None | The random seed for generating video, if None, we init a random seed. |
+| `--use-cpu-offload` | False | Use CPU offload for the model load to save more memory, necessary for high-res video generation. |
+| `--save-path` | ./results | Path to save the generated video. |
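
For reference, below is a minimal invocation sketch combining the flags documented in the updated table. It assumes the boolean options (`--i2v-mode`, `--i2v-stability`, `--flow-reverse`, `--use-cpu-offload`) are plain switches that take no value; the prompt text is a placeholder rather than a value taken from this commit.

python3 sample_image2video.py \
    --model HYVideo-T/2 \
    --i2v-mode \
    --i2v-image-path ./assets/demo/i2v/imgs/0.png \
    --i2v-resolution 720p \
    --i2v-stability \
    --prompt "A placeholder prompt describing the desired motion." \
    --video-length 129 \
    --infer-steps 50 \
    --flow-shift 7.0 \
    --flow-reverse \
    --use-cpu-offload \
    --save-path ./results

Per the updated `--flow-shift` description, 7.0 pairs with `--i2v-stability` for more stable output; with `--i2v-stability` off, a value of 17 is suggested for more dynamic motion.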