This repository contains scripts for evaluating localisation, 3D reconstruction and radiance field methods on the Oxford Spires Dataset, accepted to the International Journal of Robotics Research (IJRR).
This is a pre-release of the software, and the codebase will be refactored in the near future. Please feel free to ask questions about the dataset and report bugs in the GitHub Issues.
You can download the dataset from HuggingFace with the script below. Define which folder to download by changing the example_pattern. We have also defined a list of core sequences, which can also be found in the same file.
python scripts/dataset_download.py
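For reference, here is a minimal sketch of a pattern-based download using huggingface_hub; the repository ID and folder pattern below are placeholders, so take the actual values from scripts/dataset_download.py.

```python
# Minimal sketch of a pattern-based HuggingFace download.
# The repo_id and allow_patterns values are placeholders; the actual values
# are defined in scripts/dataset_download.py.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ori-drs/oxford_spires_dataset",        # placeholder dataset repo ID
    repo_type="dataset",
    allow_patterns=["example-sequence-folder/*"],   # example_pattern: folders to fetch
    local_dir="data/oxford_spires",
)
```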
You can also download the dataset from Google Drive.
oxspires_tools provides Python tools for using the dataset. Install it by running
pip install .
To use the C++/Python bindings, you need to install PCL and Octomap. You can either build the Docker container:
docker compose -f .docker/oxspires/docker-compose.yml run --build oxspires_utils
Or install the dependencies manually and then run
BUILD_CPP=1 pip install .
The following script downloads synchronised images and LiDAR scans from a sequence on HuggingFace, and generates depth images, LiDAR points overlaid on camera images, and surface normal images.
python scripts/generate_depth.py
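To illustrate the kind of output generate_depth.py produces, the sketch below projects a LiDAR point cloud into a pinhole camera to build a depth image. The intrinsics and point cloud are placeholders, not the dataset's actual calibration or data.

```python
# Illustrative sketch: project LiDAR points into a pinhole camera to form a
# depth image. Intrinsics and the example points are placeholders.
import numpy as np

def lidar_to_depth_image(points_cam, K, width, height):
    """points_cam: (N, 3) LiDAR points already in the camera frame (z forward)."""
    # Keep only points in front of the camera.
    points_cam = points_cam[points_cam[:, 2] > 0.1]
    # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
    uv = (K @ points_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    depth = points_cam[:, 2]

    depth_image = np.zeros((height, width), dtype=np.float32)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    # Keep the closest point per pixel by writing far-to-near.
    order = np.argsort(-depth[valid])
    depth_image[v[valid][order], u[valid][order]] = depth[valid][order]
    return depth_image

# Placeholder intrinsics and random points, for illustration only.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
points = np.random.uniform([-5, -5, 1], [5, 5, 30], size=(10000, 3))
depth = lidar_to_depth_image(points, K, width=640, height=480)
```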
The localisation benchmark runs LiDAR SLAM methods (Fast-LIO-SLAM, SC-LIO-SAM, ImMesh), a LIVO method (Fast-LIVO2) and a LiDAR bundle adjustment method (HBA). The resultant trajectories are evaluated against the ground truth trajectory using evo.
Each link provided for the methods above is a fork containing a branch, config-used-OSD, with the configurations used for the evaluation.
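For context, a trajectory evaluation with evo can be reproduced along the following lines; the trajectory file names are placeholders, and the benchmark scripts below perform this evaluation for you.

```python
# Hedged sketch of absolute trajectory error (ATE) evaluation with evo.
# File paths are placeholders.
from evo.core import metrics, sync
from evo.tools import file_interface

# Load trajectories in TUM format (timestamp tx ty tz qx qy qz qw).
ref = file_interface.read_tum_trajectory_file("ground_truth.txt")   # placeholder
est = file_interface.read_tum_trajectory_file("slam_estimate.txt")  # placeholder

# Associate poses by timestamp and align the estimate to the reference.
ref, est = sync.associate_trajectories(ref, est)
est.align(ref)

# Absolute trajectory error on the translation part.
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((ref, est))
print("ATE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))
```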
Build the Docker container and run the methods:
git clone https://github.com/ori-drs/oxford_spires_dataset.git
cd oxford_spires_dataset
docker compose -f .docker/loc/docker-compose.yml run --build oxspires_loc
# in the docker
python scripts/localisation_benchmark/colmap.py
python scripts/localisation_benchmark/fast_lio_slam.py
python scripts/localisation_benchmark/immesh.py
python scripts/localisation_benchmark/vilens_hba.py
The reconstruction benchmark runs Structure-from-Motion (COLMAP), Multi-View Stereo (OpenMVS) and radiance field methods (Nerfstudio's Nerfacto and Splatfacto), and generates 3D point cloud reconstructions, which are evaluated against the TLS-captured ground-truth 3D point cloud.
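To give an idea of the cloud-to-cloud comparison involved, here is a simplified sketch of accuracy/completeness metrics computed with a KD-tree; it is an illustration under assumed thresholds, not the benchmark's exact evaluation code.

```python
# Simplified illustration of point-cloud accuracy/completeness against a
# TLS ground-truth cloud; not the benchmark's exact evaluation code.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(recon_pts, gt_pts, threshold=0.1):
    """recon_pts, gt_pts: (N, 3) arrays; threshold in metres (assumed value)."""
    gt_tree = cKDTree(gt_pts)
    recon_tree = cKDTree(recon_pts)
    # Accuracy: distance from each reconstructed point to the nearest GT point.
    d_recon_to_gt, _ = gt_tree.query(recon_pts)
    # Completeness: fraction of GT points with a reconstructed point nearby.
    d_gt_to_recon, _ = recon_tree.query(gt_pts)
    return d_recon_to_gt.mean(), (d_gt_to_recon < threshold).mean()

# Random placeholder clouds, for illustration only.
recon = np.random.rand(5000, 3) * 10.0
gt = np.random.rand(8000, 3) * 10.0
acc, comp = accuracy_completeness(recon, gt)
print(f"accuracy: {acc:.3f} m, completeness: {comp:.2%}")
```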
Build the Docker container and run the methods:
docker compose -f .docker/recon/docker-compose.yml run --build oxspires_recon
# inside the docker
python scripts/reconstruction_benchmark/main.py --config-file config/recon_benchmark.yaml
# This will download data from Hugging Face first
python scripts/reconstruction_benchmark/nvs_benchmark.py
The NVS benchmark is also included in the reconstruction benchmark script, since it builds upon the COLMAP output.
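As a reference for the kind of image metric used in novel view synthesis evaluation, here is a minimal PSNR computation; PSNR is a standard NVS metric (commonly reported alongside SSIM and LPIPS), and this sketch is for illustration rather than the benchmark's exact code.

```python
# Minimal PSNR computation between a rendered and a ground-truth image;
# a standard novel-view-synthesis metric, shown for illustration only.
import numpy as np

def psnr(rendered, ground_truth, max_val=1.0):
    """rendered, ground_truth: float arrays in [0, max_val] with the same shape."""
    mse = np.mean((rendered - ground_truth) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Placeholder images, for illustration only.
gt = np.random.rand(480, 640, 3)
render = np.clip(gt + np.random.normal(scale=0.05, size=gt.shape), 0.0, 1.0)
print(f"PSNR: {psnr(render, gt):.2f} dB")
```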
Please refer to the contributing page.
@article{tao2025spires,
title={The Oxford Spires Dataset: Benchmarking Large-Scale LiDAR-Visual Localisation, Reconstruction and Radiance Field Methods},
author={Tao, Yifu and Mu{\~n}oz-Ba{\~n}{\'o}n, Miguel {\'A}ngel and Zhang, Lintong and Wang, Jiahao and Fu, Lanke Frank Tarimo and Fallon, Maurice},
journal={International Journal of Robotics Research},
year={2025},
}