
how to generate long videos? #270

Open
2016110071 opened this issue Mar 24, 2025 · 7 comments

Comments

@2016110071

With limited VRAM, how can I generate minute-long videos? If frame_num is too large, I run out of VRAM. Are there other methods, such as segmented inference?

@able2608

You can consider using the last frame as the image condition for the i2v model. A high-quality input usually yields good results. However, i2v tends to amplify defects in the input (artifacts or blurry objects), so careful selection might be needed to achieve better results.
As a side note, from my limited testing, i2v might shift the color composition of the input, and additional color matching or fixing might be needed.
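The color shift mentioned above can be roughly corrected with a channel-wise statistics transfer. This is only a sketch of the idea, not code from the Wan repo; `match_color` is a hypothetical helper that shifts each RGB channel of the new clip's frame toward the statistics of the original conditioning image:

```python
import numpy as np

def match_color(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift each RGB channel of `frame` so its mean/std match `reference`.

    A crude stand-in for proper color grading. Both inputs are float
    arrays of shape (H, W, 3) with values in [0, 1].
    """
    out = frame.astype(np.float64).copy()
    ref = reference.astype(np.float64)
    for c in range(3):
        f_mean, f_std = out[..., c].mean(), out[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        if f_std > 1e-8:
            out[..., c] = (out[..., c] - f_mean) / f_std * r_std + r_mean
        else:
            out[..., c] += r_mean - f_mean
    return np.clip(out, 0.0, 1.0)
```

Applying this to the last frame of each segment before feeding it back as the i2v condition may reduce cumulative color drift across segments; for stronger correction, a full histogram match (e.g. scikit-image's `match_histograms`) is another option.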

@2016110071

> You can consider using the last frame as the image condition for the i2v model. A high-quality input usually yields good results. However, i2v tends to amplify defects in the input (artifacts or blurry objects), so careful selection might be needed to achieve better results. As a side note, from my limited testing, i2v might shift the color composition of the input, and additional color matching or fixing might be needed.

Thanks for your suggestion, I will try it.
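The chaining approach discussed above can be sketched as follows. The assumption here (my own, not stated in the repo) is that each i2v segment is conditioned on, and therefore begins with, the previous segment's last frame, so stitching should drop that duplicated frame to avoid a stutter at the seam:

```python
import numpy as np

def stitch_clips(clips):
    """Concatenate chained i2v clips along the time axis.

    Each clip is a (T, H, W, 3) array. Clips after the first are
    assumed to start with the frame they were conditioned on (the
    previous clip's last frame), so that frame is dropped.
    """
    parts = [clips[0]]
    for clip in clips[1:]:
        parts.append(clip[1:])  # drop the duplicated conditioning frame
    return np.concatenate(parts, axis=0)
```

In a full pipeline, the loop would alternate between extracting the last frame, running the i2v model on it, and appending the new segment; only the stitching step is shown here because the model call depends on the specific setup.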

@pftq

pftq commented Mar 26, 2025

The repo here has a bug that hard-codes the 81-frame count in the video. I fixed it in a forked repo:
https://github.com/pftq/Wan2.1-Fixes

In addition, you might check out the Riflex extension setting in ComfyUI, which lets you go longer if you notice looping at around the 10-second mark.

@Oruli

Oruli commented Mar 27, 2025

> You can consider using the last frame as the image condition for the i2v model. A high-quality input usually yields good results. However, i2v tends to amplify defects in the input (artifacts or blurry objects), so careful selection might be needed to achieve better results. As a side note, from my limited testing, i2v might shift the color composition of the input, and additional color matching or fixing might be needed.

Honestly, I've been messing with this for some time already, and it seems WAN I2V prompt adherence is quite poor compared to T2V, which is excellent. I'm trying to string together T2V > I2V > I2V and so on, but it ignores my prompts and breaks the scene.

Does anyone have any clue how to improve this?

@Oruli

Oruli commented Mar 27, 2025

> The repo here has a bug that hard codes the 81-frame count in the video. I fixed here in a forked repo. https://github.com/pftq/Wan2.1-Fixes

> In addition, you might check out the Riflex extension setting on ComfyUI that lets you go longer if you notice looping at 10 sec or so.

Can you explain? I thought it was model training that dictated the max frame length before looping, not a simple hard-coded figure?

@able2608

> The repo here has a bug that hard codes the 81-frame count in the video. I fixed here in a forked repo. https://github.com/pftq/Wan2.1-Fixes
> In addition, you might check out the Riflex extension setting on ComfyUI that lets you go longer if you notice looping at 10 sec or so.

> Can you explain? I thought it was model training that dictated the max frame length before looping, not a simple hard-coded figure?

The code contains hard-coded values that cause errors if you set the frame length to anything other than 81 frames. With the patch it can run for an arbitrary number of frames, but whether the video loops or not is a separate issue.
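For context on the 81-frame figure: Wan2.1's video VAE compresses the time axis 4x, so valid frame counts have the form 4k + 1 (81 frames → 21 latent frames). A small validity check along those lines (my own sketch, not code from the repo or the fork):

```python
def check_frame_num(frame_num: int) -> int:
    """Validate a frame count for a Wan2.1-style temporal VAE.

    The VAE downsamples time by 4x, so frame_num must be of the
    form 4k + 1 (e.g. 81 -> 21 latent frames). Returns the latent
    length, or raises ValueError for an invalid count.
    """
    if frame_num < 1 or (frame_num - 1) % 4 != 0:
        raise ValueError(f"frame_num must be of the form 4k+1, got {frame_num}")
    return (frame_num - 1) // 4 + 1
```

So even with the patch, frame counts like 80 would still be rejected by the VAE's shape constraints; 85, 89, etc. are the next steps up from 81.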

@m-richa

m-richa commented Apr 1, 2025

There is something called Streamer that they mention in their paper, but I don't think they have released the code for it.
