DOC-Depth: A novel approach for dense depth ground truth generation

IEEE IV 2025 Oral Presentation

1Mines Paris - PSL University, 2Valeo, 3Exwayz Research

Abstract

Accurate depth information is essential for many computer vision applications. Yet, no existing dataset recording method allows for fully dense, accurate depth estimation in large-scale dynamic environments. In this paper, we introduce DOC-Depth, a novel, efficient and easy-to-deploy approach for dense depth generation from any LiDAR sensor. After reconstructing a consistent dense 3D environment using LiDAR odometry, we automatically address occlusions caused by dynamic objects thanks to DOC, our state-of-the-art dynamic object classification method. Additionally, DOC-Depth is fast and scalable, allowing for the creation of datasets unbounded in size and duration. We demonstrate the effectiveness of our approach on the KITTI dataset, improving its annotation density from 16.1% to 71.2%, and release this new fully dense depth annotation to facilitate future research in the domain. We also showcase results using various LiDAR sensors and in multiple environments.
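To illustrate the core idea of densifying depth from an aggregated LiDAR map, the sketch below projects a point cloud (already registered into a common world frame, e.g. by LiDAR odometry) into a camera view and keeps the nearest depth per pixel via a z-buffer. This is only a minimal sketch of the projection step under assumed names and conventions; it does not reproduce the paper's full pipeline (odometry, DOC dynamic object classification, or occlusion handling).

```python
import numpy as np

def render_dense_depth(points_world, T_world_cam, K, image_size):
    """Project a world-frame point cloud into a pinhole camera and z-buffer it.

    Illustrative sketch only: all names and parameters are assumptions,
    not the authors' implementation.

    points_world : (N, 3) points in the world frame
    T_world_cam  : (4, 4) camera-to-world pose
    K            : (3, 3) camera intrinsics
    image_size   : (height, width)
    Returns an (H, W) depth map; 0 marks pixels with no measurement.
    """
    h, w = image_size
    # Transform world points into the camera frame.
    T_cam_world = np.linalg.inv(T_world_cam)
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Z-buffer: the nearest point wins at each pixel.
    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (v[valid], u[valid]), z[valid])
    depth[np.isinf(depth)] = 0.0
    return depth
```

Aggregating many registered scans before projection is what raises density well beyond a single sweep; without dynamic object handling, however, moving objects would leave ghost trails in such a map, which is the occlusion problem DOC addresses.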

Video Presentation

Qualitative results on KITTI Dataset


Qualitative results on novel Datasets


Fully Dense Depth KITTI annotations


Dense depth annotations for the full KITTI Depth dataset and the KITTI Odometry training set can be downloaded using this link:


If you use this dataset in your research, please cite our article.

BibTeX

@inproceedings{deMoreau2024doc,
  title = {DOC-Depth: A novel approach for dense depth ground truth generation},
  author = {De Moreau, Simon and Corsia, Mathias and Bouchiba, Hassan and Almehio, Yasser and Bursuc, Andrei and El-Idrissi, Hafid and Moutarde, Fabien},
  booktitle = {2025 IEEE Intelligent Vehicles Symposium (IV)},
  year = {2025},
}