Stable Virtual Camera (Seva) is a 1.3B generalist diffusion model for Novel View Synthesis (NVS), generating 3D-consistent novel views of a scene given any number of input views and target cameras.
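To give a rough sense of what "input views and target cameras" look like in practice, the sketch below builds a set of target camera poses (camera-to-world matrices) and pinhole intrinsics along an orbit around a scene. This is generic NVS bookkeeping for illustration only, not Seva's actual API; the helper name, camera-axis convention, and intrinsic values are all assumptions.

```python
import numpy as np

def orbit_camera_poses(num_views: int, radius: float = 2.0, height: float = 0.5):
    """Build 4x4 camera-to-world matrices for target viewpoints on a circular orbit.

    Columns hold the camera's right/up/forward axes and its center (a common NVS
    convention; the convention Seva actually expects may differ).
    """
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, num_views, endpoint=False):
        center = np.array([radius * np.cos(theta), height, radius * np.sin(theta)])
        forward = -center / np.linalg.norm(center)           # look at the origin
        right = np.cross(np.array([0.0, 1.0, 0.0]), forward)
        right /= np.linalg.norm(right)
        up = np.cross(forward, right)
        pose = np.eye(4)
        pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = right, up, forward, center
        poses.append(pose)
    return np.stack(poses)                                    # (num_views, 4, 4)

# Illustrative pinhole intrinsics for a 576x576 render (values are placeholders).
K = np.array([[500.0,   0.0, 288.0],
              [  0.0, 500.0, 288.0],
              [  0.0,   0.0,   1.0]])

target_c2w = orbit_camera_poses(num_views=8)  # 8 target cameras around the scene
```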
- March 2025: Stable Virtual Camera is out everywhere.
To set up the virtual environment and install all necessary model dependencies, simply run:

```bash
pip install -e .
```

Check INSTALL.md for other dependencies if you want to use our demos or develop from this repo.
We provide two demos for you to interact with Stable Virtual Camera.
This Gradio demo is a GUI that requires no expert knowledge and is suitable for general users. Simply run

```bash
python demo_gr.py
```

For a more detailed guide, follow GR_USAGE.md.
This CLI demo lets you pass in more options and control the model in a fine-grained way, making it suitable for power users and academic researchers. An example command line looks as simple as

```bash
python demo.py --data_path <data_path> [additional arguments]
```

For a more detailed guide, follow CLI_USAGE.md.
For users interested in benchmarking NVS models from the command line, check `benchmark`, which contains details about the scenes, splits, and input/target views we reported in the paper.
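As a minimal sketch of what evaluating against held-out target views involves, here is a PSNR computation between a generated view and its ground-truth target. The metric itself is standard; the paths and `load_image` helper in the usage comments are hypothetical placeholders, and the actual evaluation protocol is the one described in the paper and the `benchmark` folder.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical usage: compare a generated novel view against the held-out
# target view for the same camera from a benchmark split.
# pred = load_image("outputs/scene/target_000.png")      # placeholder paths
# target = load_image("benchmark/scene/target_000.png")
# print(f"PSNR: {psnr(pred, target):.2f} dB")
```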
If you find this repository useful, please consider giving it a star ⭐ and a citation.
```bibtex
@article{zhou2025stable,
  title={Stable Virtual Camera: Generative View Synthesis with Diffusion Models},
  author={Jensen (Jinghao) Zhou and Hang Gao and Vikram Voleti and Aaryaman Vasishta and Chun-Han Yao and Mark Boss and Philip Torr and Christian Rupprecht and Varun Jampani},
  journal={arXiv preprint},
  year={2025}
}
```