how to generate long videos? #270
Comments
You can consider using the last frame as the image condition for the i2v model. A high-quality input usually yields good results. However, i2v tends to amplify defects in the input (artifacts or blurry objects), so careful selection may be needed to achieve better results.
Thanks for your suggestion, I will try it.
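To chain segments this way you need the last frame of each generated clip. One common approach (an illustration, not part of this repo) is to extract it with ffmpeg by seeking relative to the end of the file; the helper below only builds the command, and the filenames are placeholders:

```python
import shlex

def last_frame_cmd(video_path: str, image_path: str) -> list[str]:
    # Seek ~0.1 s before end-of-file, then keep overwriting the output
    # image until the stream ends, so the final write is the last frame.
    return [
        "ffmpeg", "-y",
        "-sseof", "-0.1",   # seek relative to end of file
        "-i", video_path,
        "-update", "1",     # write a single, repeatedly-updated image
        "-q:v", "1",        # highest JPEG quality
        image_path,
    ]

print(shlex.join(last_frame_cmd("segment_001.mp4", "last_frame.jpg")))
```

The resulting image can then be passed as the i2v conditioning input for the next segment.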
The repo here has a bug that hard-codes the frame count at 81. I fixed it in a forked repo. In addition, you might check out the RIFLEx extension setting in ComfyUI, which lets you go longer if you notice looping at around 10 seconds.
Honestly, I've been messing with this for some time already, and it seems WAN i2v prompt adherence is quite poor compared to t2v, which is excellent. I'm trying to string together t2v > i2v > i2v and so on, but it doesn't follow the prompt at all, which breaks the scene. Does anyone have any clue how to improve this?
Can you explain? I thought it was the model training that dictated the max frame length before looping, not a simple hard-coded figure?
The code contains hard-coded values that cause an error if you set the frame length to anything other than 81 frames. With the patch it can run for an arbitrary number of frames, but whether the video will loop is a separate issue.
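One likely reason lengths cluster around values like 81 is the VAE's temporal compression: assuming a temporal downsampling stride of 4 (an assumption here, not taken from the repo), valid frame counts have the form 4k + 1. A small sketch of that check:

```python
TEMPORAL_STRIDE = 4  # assumed VAE temporal downsampling factor

def is_valid_frame_num(n: int) -> bool:
    # Valid lengths are of the form 4k + 1 (e.g. 77, 81, 85, ...).
    return n >= 1 and (n - 1) % TEMPORAL_STRIDE == 0

def nearest_valid_frame_num(n: int) -> int:
    # Snap an arbitrary request to the closest 4k + 1 value.
    return round((n - 1) / TEMPORAL_STRIDE) * TEMPORAL_STRIDE + 1

print(is_valid_frame_num(81), nearest_valid_frame_num(100))
```

So rather than patching in any arbitrary number, it may be safer to snap requested lengths to the nearest valid value.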
There is something called Streamer that they mention in their paper. But I don't think they have released the code for that. |
With limited VRAM, how can one generate minute-long videos? If frame_num is too large, memory runs out. Is there any other approach, such as segmented (chunked) inference?
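The segmented idea discussed above can be sketched as a loop: generate a short segment, take its last frame, and use it as the i2v condition for the next segment. The sketch below uses a toy stand-in for the model (random frames) just to show the bookkeeping; `fake_i2v` is hypothetical and would be replaced by the repo's actual i2v call:

```python
import numpy as np

def fake_i2v(cond_frame, num_frames, rng):
    # Stand-in for the real i2v model: the first frame is the
    # conditioning image, the rest are random placeholders.
    frames = rng.random((num_frames,) + cond_frame.shape)
    frames[0] = cond_frame
    return frames

def generate_long(first_frame, seg_len, num_segments, seed=0):
    # Chain segments: each one is conditioned on the previous
    # segment's last frame; drop the duplicated boundary frame.
    rng = np.random.default_rng(seed)
    cond = first_frame
    all_frames = [first_frame[None]]
    for _ in range(num_segments):
        seg = fake_i2v(cond, seg_len, rng)
        all_frames.append(seg[1:])  # skip duplicated first frame
        cond = seg[-1]              # conditions the next segment
    return np.concatenate(all_frames, axis=0)
```

Each segment only needs `seg_len` frames of VRAM at a time, at the cost of possible drift across segment boundaries, which is exactly the defect-amplification issue mentioned earlier in this thread.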