Forked from CompVis/EDGS.

A PyTorch implementation of the paper "EDGS: Eliminating Densification for Efficient Convergence of 3DGS"


EDGS: Eliminating Densification for Efficient Convergence of 3DGS

Dmytro Kotovenko* · Olga Grebenkova* · Björn Ommer

CompVis @ LMU Munich · Munich Center for Machine Learning (MCML)

* equal contribution

Project Page · Paper · Open In Colab · Hugging Face

3DGS initializes with a sparse set of Gaussians and progressively adds more in under-reconstructed regions. In contrast, EDGS starts with a dense initialization from triangulated 2D correspondences across training image pairs, requiring only minimal refinement. This leads to faster convergence and higher rendering quality. Our method reaches the original 3DGS LPIPS score in just 25% of the training time and uses only 60% of the splats. Renderings become nearly indistinguishable from ground truth after only 3,000 steps — without any densification.

3D scene reconstruction using our method in 11 seconds.

3D Reconstruction Demo


🚀 Quickstart

The fastest way to try our model is through the Hugging Face demo, which lets you upload images or a video and interactively rotate the resulting 3D scene. For broad accessibility, we currently support only forward-facing scenes.

Steps:

  1. Upload a list of photos or a single video.
  2. Click 📸 Preprocess Input to estimate 3D positions using COLMAP.
  3. Click 🚀 Start Reconstruction to run the model.

You can also explore the reconstructed scene in 3D directly in the browser.

⚡ Runtime: EDGS typically takes just 10–20 seconds, plus 5–10 seconds for COLMAP processing. Additional time may be needed to save outputs (model, video, 3D preview).

You can also run the same app locally on your machine:

CUDA_VISIBLE_DEVICES=0 python gradio_demo.py --port 7862 --no_share

Without the --no_share flag, Gradio prints a shareable address for the app, allowing others to process their data on your server.

Alternatively, check our Colab notebook.

🛠️ Installation

You can install everything with:

docker compose up -d

Alternatively, you can install by running script/install.sh.
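To verify that the environment can see the GPU, here is a minimal sanity check; it assumes the image ships a CUDA-enabled PyTorch build (run it inside the container, e.g. after docker compose exec edgs-app bash):

# Minimal sanity check; assumes a CUDA-enabled PyTorch build is installed.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

If "CUDA available" prints False, check your NVIDIA driver and the Docker GPU runtime (nvidia-container-toolkit).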

📦 Data

We evaluated on the following datasets:

  • MipNeRF360 — download here. Unzip "Dataset Pt. 1" and "Dataset Pt. 2", then merge scenes.
  • Tanks & Temples + Deep Blending — from the original 3DGS repo.

Using Your Own Dataset

Option A

Use the Gradio demo. After running docker compose up -d, start it inside the container:

docker compose exec edgs-app bash
python script/gradio_demo.py --port 7862

Option B

From the command line:

docker compose exec edgs-app bash
python script/fit_model_to_scene_full.py --video_path <your mp4 video> [--processed_scenes_dir <output directory>]

Option C

Using JupyterLab:

docker compose exec edgs-app bash

Then, in the terminal inside the Docker container, run:

jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root --notebook-dir=notebooks

After JupyterLab starts, it prints URLs to the terminal. Look for one containing a token, like: http://127.0.0.1:8888/lab?token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx. Open http://localhost:8888 (or http://127.0.0.1:8888) in your host browser. When prompted for a "Password or token", paste the token part of that URL (everything after token=) and log in. Alternatively, paste the full tokenized URL directly into your browser.

Option D

You can use the same data format as the 3DGS project. Please follow their guide to prepare your scene.

Expected folder structure:

scene_folder
|---images
|   |---<image 0>
|   |---<image 1>
|   |---...
|---sparse
    |---0
        |---cameras.bin
        |---images.bin
        |---points3D.bin
docker compose exec edgs-app bash

Then run the training command as described in the Training section below.
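If you want to double-check the layout before training, here is a minimal sketch (scene_folder below is a placeholder for your own directory):

# Minimal layout check for a COLMAP-format scene; scene_folder is a placeholder path.
from pathlib import Path

scene = Path("scene_folder")
assert (scene / "images").is_dir(), "missing images/ directory"
for name in ["cameras.bin", "images.bin", "points3D.bin"]:
    assert (scene / "sparse" / "0" / name).is_file(), f"missing sparse/0/{name}"
print("Scene layout looks OK,", len(list((scene / "images").iterdir())), "images found")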

The NeRF synthetic format is also accepted.

You can also use functions provided in our code to convert a collection of images or a single video into the expected format; a rough sketch of the idea is shown below. However, this may require some tweaking, and processing time can be long for large image collections with little overlap.
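As an illustration, here is the first step only, extracting frames from a video before running COLMAP on the resulting images folder. OpenCV (cv2) and the paths are assumptions for this sketch, not necessarily what our conversion utilities use:

# Hedged sketch: dump every Nth frame of a video into images/ for later COLMAP processing.
# OpenCV (cv2) and the paths are illustrative; the repository's own utilities may differ.
import cv2
from pathlib import Path

video_path = "my_video.mp4"            # placeholder input video
out_dir = Path("scene_folder/images")  # matches the expected folder layout above
out_dir.mkdir(parents=True, exist_ok=True)

cap = cv2.VideoCapture(video_path)
frame_idx, saved, every_nth = 0, 0, 10
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % every_nth == 0:
        cv2.imwrite(str(out_dir / f"{saved:05d}.jpg"), frame)
        saved += 1
    frame_idx += 1
cap.release()
print(f"Saved {saved} frames to {out_dir}")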

🏋️ Training

To optimize a single scene in COLMAP format, run:

python script/train.py \
  train.gs_epochs=30000 \
  train.no_densify=True \
  gs.dataset.source_path=<scene folder> \
  gs.dataset.model_path=<output folder> \
  init_wC.matches_per_ref=20000 \
  init_wC.nns_per_ref=3 \
  init_wC.num_refs=180
Command Line Arguments for train.py
  • train.gs_epochs Number of training iterations (steps) for Gaussian Splatting.

  • train.no_densify Disables densification. True by default.

  • gs.dataset.source_path Path to your input dataset directory. This should follow the same format as the original 3DGS dataset structure.

  • gs.dataset.model_path Output directory where the trained model, logs, and renderings will be saved.

  • init_wC.matches_per_ref Number of 2D feature correspondences to extract per reference view for initialization. More matches produce more Gaussians.

  • init_wC.nns_per_ref Number of nearest neighbor images used per reference during matching.

  • init_wC.num_refs Total number of reference views sampled.

  • wandb.mode Specifies how Weights & Biases (W&B) logging is handled.

    • Default: "disabled"
    • Options:
      • "online" — log to the W&B server in real-time
      • "offline" — save logs locally to sync later
      • "disabled" — turn off W&B logging entirely

    If you want to enable W&B logging, make sure to also configure:

    • wandb.project — the name of your W&B project
    • wandb.entity — your W&B username or team name

Example override:

wandb.mode=online wandb.project=EDGS wandb.entity=your_username train.gs_epochs=15_000 init_wC.matches_per_ref=15_000

To run full evaluation on all datasets:

python full_eval.py -m360 <mipnerf360 folder> -tat <tanks and temples folder> -db <deep blending folder>

🏗️ Reusing Our Model

Our model is essentially a better initialization module for Gaussian Splatting. You can integrate it into your pipeline by calling:

source.corr_init.init_gaussians_with_corr(...)

Input arguments (a usage sketch follows this list):

  • A GaussianModel and Scene instance
  • A configuration namespace cfg.init_wC to specify parameters like the number of matches, neighbors, and reference views
  • A RoMA model (automatically instantiated if not provided)
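A minimal sketch of such a call is given below; the exact signature and argument order are assumptions based on the description above, so check source/corr_init.py for the real interface:

# Hedged sketch: the argument order below is assumed, not the verified signature.
from source.corr_init import init_gaussians_with_corr


def initialize_with_edgs(gaussians, scene, cfg, roma_model=None):
    """Swap the usual sparse-SfM initialization for EDGS dense correspondences.

    gaussians : GaussianModel already constructed by your pipeline
    scene     : Scene instance holding the training cameras and images
    cfg       : config whose cfg.init_wC holds matches_per_ref, nns_per_ref, num_refs
    roma_model: optional RoMA matcher, instantiated automatically when None
    """
    return init_gaussians_with_corr(gaussians, scene, cfg.init_wC, roma_model)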

📄 Citation

@misc{kotovenko2025edgseliminatingdensificationefficient,
      title={EDGS: Eliminating Densification for Efficient Convergence of 3DGS}, 
      author={Dmytro Kotovenko and Olga Grebenkova and Björn Ommer},
      year={2025},
      eprint={2504.13204},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2504.13204}, 
}

TODO:

  • Code for training and processing forward-facing scenes.
  • More data examples
