## News

**April 4, 2025**
- We are releasing **[Stable Video 4D 2.0 (SV4D 2.0)](https://huggingface.co/stabilityai/sv4d2.0)**, an enhanced video-to-4D diffusion model for high-fidelity novel-view video synthesis and 4D asset generation. For research purposes:
  - **SV4D 2.0** was trained to generate 48 frames (12 video frames x 4 camera views) at 576x576 resolution, given a 12-frame input video of the same size, ideally consisting of white-background images of a moving object.
  - Compared to our previous 4D model [SV4D](https://huggingface.co/stabilityai/sv4d), **SV4D 2.0** generates videos with higher fidelity, sharper details during motion, and better spatio-temporal consistency. It also generalizes much better to real-world videos. Moreover, it does not rely on reference multi-views of the first frame generated by SV3D, making it more robust to self-occlusions.
  - To generate longer novel-view videos, we autoregressively generate 12 frames at a time and use the previous generation as conditioning views for the remaining frames (see the sketch after this list).
  - Please check our [project page](https://sv4d20.github.io), [arXiv paper](https://arxiv.org/pdf/2503.16396), and [video summary](https://www.youtube.com/watch?v=dtqj-s50ynU) for more details.
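
For intuition, the autoregressive scheme above can be sketched as follows. This is a minimal illustration only, not code from this repository: `sample_sv4d2_chunk` is a hypothetical placeholder standing in for one SV4D 2.0 sampling call.

```python
# Minimal sketch of the autoregressive long-video scheme described above.
# `sample_sv4d2_chunk` is a hypothetical placeholder, NOT this repo's API.
from typing import List

CHUNK_SIZE = 12  # video frames generated per sampling call (x 4 camera views)


def sample_sv4d2_chunk(input_frames: List[str], cond_views: List[str]) -> List[str]:
    """Placeholder for one diffusion sampling pass: returns novel-view frames
    for `input_frames`, conditioned on previously generated views (if any)."""
    return [f"novel_views({frame})" for frame in input_frames]


def generate_long_novel_view_video(input_frames: List[str]) -> List[str]:
    """Cover an arbitrarily long input video 12 frames at a time, reusing the
    most recent generation as conditioning views for later chunks."""
    outputs: List[str] = []
    for start in range(0, len(input_frames), CHUNK_SIZE):
        chunk = input_frames[start:start + CHUNK_SIZE]
        cond_views = outputs[-CHUNK_SIZE:]  # empty for the first chunk
        outputs.extend(sample_sv4d2_chunk(chunk, cond_views))
    return outputs


if __name__ == "__main__":
    frames = [f"frame_{i:02d}" for i in range(21)]  # e.g. a 21-frame input video
    print(len(generate_long_novel_view_video(frames)))  # -> 21
```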

**QUICKSTART**:
- `python scripts/sampling/simple_video_sample_4d2.py --input_path assets/sv4d_videos/camel.gif --output_folder outputs` (after downloading [sv4d2.safetensors](https://huggingface.co/stabilityai/sv4d2.0) from Hugging Face into `checkpoints/`)

To run **SV4D 2.0** on a single input video of 21 frames:
- Download the SV4D 2.0 model (`sv4d2.safetensors`) from [here](https://huggingface.co/stabilityai/sv4d2.0) to `checkpoints/`: `huggingface-cli download stabilityai/sv4d2.0 sv4d2.safetensors --local-dir checkpoints`
- Run inference: `python scripts/sampling/simple_video_sample_4d2.py --input_path <path/to/video>`
  - `input_path` : The input video `<path/to/video>` can be
    - a single video file in `gif` or `mp4` format, such as `assets/sv4d_videos/camel.gif`, or
    - a folder containing images of video frames in `.jpg`, `.jpeg`, or `.png` format, or
    - a file name pattern matching images of video frames.
  - `num_steps` : default is 50; decrease it to shorten sampling time.
  - `elevations_deg` : specified elevations (relative to the input view); default is 0.0 (same as the input view).
  - **Background removal** : For input videos with a plain background, (optionally) use [rembg](https://github.com/danielgatis/rembg) to remove the background and crop video frames by setting `--remove_bg=True`. To obtain higher-quality outputs on real-world input videos with a noisy background, try segmenting the foreground object using [Clipdrop](https://clipdrop.co/) or [SAM2](https://github.com/facebookresearch/segment-anything-2) before running SV4D.
  - **Low VRAM environment** : To run on GPUs with low VRAM, try setting `--encoding_t=1` (number of frames encoded at a time) and `--decoding_t=1` (number of frames decoded at a time), or use a lower video resolution such as `--img_size=512`.
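  - **Example** : A full run combining these options might look like the following (illustrative values; flag syntax assumed to match the options above): `python scripts/sampling/simple_video_sample_4d2.py --input_path assets/sv4d_videos/camel.gif --output_folder outputs --num_steps=20 --elevations_deg=10.0 --remove_bg=True --encoding_t=1 --decoding_t=1 --img_size=512`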

Notes:
- We also train an 8-view model that generates 5 frames x 8 views at a time (same as SV4D).
  - Download the model from Hugging Face: `huggingface-cli download stabilityai/sv4d2.0 sv4d2_8views.safetensors --local-dir checkpoints`
  - Run inference: `python scripts/sampling/simple_video_sample_4d2.py --model_path checkpoints/sv4d2_8views.safetensors --input_path assets/sv4d_videos/chest.gif --output_folder outputs`
  - The 5x8 model takes 5 frames of input at a time, but the inference scripts for both models take a 21-frame video as input by default (same as SV3D and SV4D); we run the model autoregressively until all 21 frames are generated.
- Install dependencies before running:
```
python3.10 -m venv .generativemodels
source .generativemodels/bin/activate
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 # check CUDA version
pip3 install -r requirements/pt2.txt
pip3 install .
pip3 install -e git+https://github.com/Stability-AI/datapipelines.git@main#egg=sdata
```

  

**July 24, 2024**
- We are releasing **[Stable Video 4D (SV4D)](https://huggingface.co/stabilityai/sv4d)**, a video-to-4D diffusion model for novel-view video synthesis. For research purposes:
  - **SV4D** was trained to generate 40 frames (5 video frames x 8 camera views) at 576x576 resolution, given 5 context frames (the input video) and 8 reference views (synthesised from the first frame of the input video, using a multi-view diffusion model like SV3D) of the same size, ideally white-background images with one object.

[...]

This is assuming you have navigated to the `generative-models` root after cloning.

```
# install required packages from pypi
python3 -m venv .pt2
source .pt2/bin/activate
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip3 install -r requirements/pt2.txt
```