Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment

[arXiv] [Project Page]

Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment
Lijie Liu*, Tianxiang Ma*, Bingchuan Li*†, Zhuowei Chen*, Jiawei Liu, Qian He, Xinglong Wu
* Equal contribution, † Project lead
Intelligent Creation Team, ByteDance

Overview

Phantom is a unified video generation framework that supports both single- and multi-subject references, built on existing text-to-video and image-to-video architectures. It achieves cross-modal alignment by training on text-image-video triplet data with a redesigned joint text-image injection model. In addition, it emphasizes subject consistency in human generation, enhancing ID-preserving video generation.
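The joint text-image injection described above can be illustrated with a minimal sketch: reference-image features are projected into the text-token space and concatenated into one conditioning sequence, which the video latents then attend over via cross-attention. All names, dimensions, and the single-head attention below are illustrative assumptions, not the actual Phantom implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_condition(text_tokens, image_tokens, w_img):
    """Hypothetical injection scheme: project reference-image tokens into the
    text embedding space and concatenate them with the text tokens so a single
    cross-attention context carries both modalities."""
    img_proj = image_tokens @ w_img          # (n_img, d_img) @ (d_img, d) -> (n_img, d)
    return np.concatenate([text_tokens, img_proj], axis=0)

def cross_attention(queries, context, w_q, w_k, w_v):
    """Single-head scaled dot-product cross-attention: video latents (queries)
    attend over the joint text-image context."""
    q, k, v = queries @ w_q, context @ w_k, context @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy dimensions: 8 text tokens (d=16), 4 reference-image tokens (d=32),
# 20 video latent tokens (d=16).
text = rng.standard_normal((8, 16))
image = rng.standard_normal((4, 32))
latents = rng.standard_normal((20, 16))
w_img = rng.standard_normal((32, 16))
w_q = w_k = w_v = rng.standard_normal((16, 16))

context = joint_condition(text, image, w_img)
out = cross_attention(latents, context, w_q, w_k, w_v)
print(context.shape, out.shape)  # (12, 16) (20, 16)
```

The key design point this sketch captures is that subject identity enters through the same conditioning pathway as the text prompt, which is what allows a single model to serve both single- and multi-reference generation (extra references simply append more tokens to the context).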

Comparative Results 🆚

  • Identity-Preserving Video Generation
  • Single-Reference Subject-to-Video Generation
  • Multi-Reference Subject-to-Video Generation

Acknowledgements

We would like to express our gratitude to the SEED team for their support. Special thanks to Lu Jiang, Haoyuan Guo, Zhibei Ma, and Sen Wang for their assistance with the model and data. In addition, we are also very grateful to Siying Chen, Qingyang Li, and Wei Han for their help with the evaluation.

BibTeX

@article{liu2025phantom,
  title={Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment},
  author={Liu, Lijie and Ma, Tianxiang and Li, Bingchuan and Chen, Zhuowei and Liu, Jiawei and He, Qian and Wu, Xinglong},
  journal={arXiv preprint arXiv:2502.11079},
  year={2025}
}