DOC-Depth: A novel approach for dense depth ground truth generation

¹Mines Paris - PSL University, ²Valeo, ³Exwayz Research

Overview

Abstract

Accurate depth information is essential for many computer vision applications. Yet, no available dataset recording method allows for fully dense, accurate depth estimation in a large-scale dynamic environment. In this paper, we introduce DOC-Depth, a novel, efficient and easy-to-deploy approach for dense depth generation from any LiDAR sensor. After reconstructing a consistent dense 3D environment using LiDAR odometry, we automatically handle occlusions caused by dynamic objects thanks to DOC, our state-of-the-art dynamic object classification method. Additionally, DOC-Depth is fast and scalable, allowing for the creation of datasets unbounded in size and duration. We demonstrate the effectiveness of our approach on the KITTI dataset, improving its depth density from 16.1% to 71.2%, and release this new fully dense depth annotation to facilitate future research in the domain. We also showcase results using various LiDAR sensors and in multiple environments.
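The core idea above (aggregating LiDAR points into a camera view and measuring the resulting depth density) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `render_depth` and `density` are hypothetical helpers, and the pinhole projection with a simple z-buffer stands in for the paper's full rendering and dynamic-object handling pipeline.

```python
import numpy as np

def render_depth(points_cam, K, height, width):
    """Project 3D points (in the camera frame) with intrinsics K into a
    depth map, keeping the nearest point per pixel (z-buffering).
    Hypothetical sketch; the actual method also removes dynamic-object
    occlusions before rendering."""
    depth = np.full((height, width), np.inf)
    pts = points_cam[points_cam[:, 2] > 0]      # keep points in front of camera
    uv = (K @ pts.T).T                           # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]                  # u = fx*X/Z + cx, v = fy*Y/Z + cy
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for x, y, z in zip(u[inside], v[inside], pts[inside, 2]):
        depth[y, x] = min(depth[y, x], z)        # nearest surface wins
    depth[np.isinf(depth)] = 0.0                 # 0 marks pixels without ground truth
    return depth

def density(depth):
    """Fraction of pixels with valid depth (the density metric above)."""
    return float((depth > 0).mean())

# Toy example: three points, two of which land on the same pixel.
K = np.array([[100., 0., 32.],
              [0., 100., 24.],
              [0., 0., 1.]])
pts = np.array([[0.0, 0.0, 5.0],   # occluded by the nearer point below
                [0.1, 0.0, 5.0],
                [0.0, 0.0, 2.0]])  # nearest point at the principal pixel
d = render_depth(pts, K, 48, 64)
```

Aggregating points from many registered scans before projection is what raises the density far beyond a single sweep's coverage.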

Video Presentation

Qualitative results on KITTI Dataset


KITTI experiment

Qualitative results on novel Datasets


Novel datasets experiment

Fully Dense Depth KITTI annotations


Download links for the full KITTI depth completion and KITTI odometry datasets will be added soon. To download the evaluation set of the KITTI depth completion dataset, please use this link:


If you use this dataset in your research, please cite our article.

BibTeX

TBA