Depth Completion Evaluation

The depth completion and depth prediction evaluation is related to our work published in Sparsity Invariant CNNs (3DV 2017). The dataset
contains over 93,000 depth maps with corresponding raw LiDAR scans and RGB images, aligned with the "raw data" of the KITTI dataset.
Given the large amount of training data, this dataset should enable the training of complex deep learning models for the tasks of depth completion
and single-image depth prediction. In addition, we provide manually selected images with unpublished depth maps to serve as a benchmark for these
two challenging tasks.

Make sure to unzip the annotated depth maps and the raw LiDAR scans into the same directory so that all corresponding files end up in the same folder
structure. The structure of all provided depth maps matches the structure of our raw data, making it easy to find the corresponding left and right
images or other provided information.
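The provided depth maps are stored as 16-bit PNGs in which each value encodes depth in metres multiplied by 256, with a value of 0 marking pixels without ground truth (following the convention described in the development kit). A minimal sketch of the conversion, where the array stands in for a PNG loaded e.g. via PIL:

```python
import numpy as np

def depth_read(raw):
    """Convert a raw uint16 KITTI depth-map array to metres.

    Values are stored scaled by 256; 0 marks a pixel with no
    ground truth (no LiDAR return). Invalid pixels are flagged
    with -1 so they can be masked out during evaluation.
    """
    assert raw.dtype == np.uint16
    depth = raw.astype(np.float32) / 256.0
    depth[raw == 0] = -1.0  # mark invalid pixels
    return depth

# toy 2x2 "depth map": 256 -> 1 m, 512 -> 2 m, 0 -> invalid, 1280 -> 5 m
raw = np.array([[256, 512], [0, 1280]], dtype=np.uint16)
d = depth_read(raw)
```

In practice `raw` would come from `np.array(Image.open(path), dtype=np.uint16)`; the toy array above just illustrates the scaling.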

Note: On 12.04.2018 we fixed a small error in the file; please download it again if you have an older version.

Our evaluation table ranks all methods according to the root mean squared error (RMSE) of the depth maps. All methods providing less than 100% density have been densified using simple background interpolation, as explained in the corresponding header file in the development kit. Legend:

  • iRMSE:  Root mean squared error of the inverse depth [1/km]
  • iMAE:    Mean absolute error of the inverse depth [1/km]
  • RMSE:   Root mean squared error [mm]
  • MAE:     Mean absolute error [mm]
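The four metrics above can be computed over the valid ground-truth pixels as follows; this is a sketch consistent with the leaderboard units (mm for depth errors, 1/km for inverse-depth errors), not the official evaluation code:

```python
import numpy as np

def depth_errors(gt, pred):
    """Compute RMSE/MAE [mm] and iRMSE/iMAE [1/km].

    gt, pred: depth maps in metres; gt <= 0 marks pixels without
    ground truth, which are excluded from the evaluation.
    """
    mask = gt > 0
    g, p = gt[mask], pred[mask]
    err_mm = (g - p) * 1000.0                   # metres -> millimetres
    inv_err_km = (1.0 / g - 1.0 / p) * 1000.0   # 1/m -> 1/km
    return {
        "RMSE": float(np.sqrt(np.mean(err_mm ** 2))),
        "MAE": float(np.mean(np.abs(err_mm))),
        "iRMSE": float(np.sqrt(np.mean(inv_err_km ** 2))),
        "iMAE": float(np.mean(np.abs(inv_err_km))),
    }

# toy example: one pixel is off by 0.5 m, one has no ground truth
gt = np.array([[2.0, 4.0], [0.0, 5.0]])
pred = np.array([[2.5, 4.0], [1.0, 5.0]])
m = depth_errors(gt, pred)
```

Note that the invalid pixel's wrong prediction (1.0 vs. no ground truth) does not contribute to any metric.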

Additional information used by the methods
  • Additional training data: Use of additional data sources for training (see details)
  • RGB image: Use of RGB images for depth completion
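Before evaluation, submissions with less than 100% density are filled in by the background interpolation mentioned above. The devkit's exact routine is not reproduced here; the following crude nearest-neighbour style fill is only a sketch that conveys the idea of propagating valid depths into holes:

```python
import numpy as np

def fill_background(depth, invalid=0.0, max_iters=100):
    """Densify a sparse depth map by repeatedly copying each invalid
    pixel from a valid 4-neighbour.

    This is a stand-in for the devkit's background interpolation,
    not the official algorithm. Note that np.roll wraps around the
    image borders, which is acceptable only for this illustration.
    """
    d = depth.astype(np.float32).copy()
    for _ in range(max_iters):
        holes = d == invalid
        if not holes.any():
            break
        # try to copy from each of the four axis-aligned neighbours
        for shifted in (np.roll(d, 1, 0), np.roll(d, -1, 0),
                        np.roll(d, 1, 1), np.roll(d, -1, 1)):
            take = holes & (shifted != invalid)
            d[take] = shifted[take]
            holes = d == invalid
    return d

# toy example: a single valid pixel propagates into all holes
sparse = np.array([[3.0, 0.0], [0.0, 0.0]], dtype=np.float32)
dense = fill_background(sparse)
```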

      Method             Setting  Code  iRMSE  iMAE   RMSE     MAE     Runtime  Environment
 1    HMS-Net_v2                        3.90   1.90    911.49  310.14  0.02 s   GPU @ 2.5 GHz (Python + C/C++)
 2    Sparse-to-Dense-2                 3.21   1.35    954.36  288.64  0.07 s   GPU @ 1.5 GHz (Python)
 3    HMS-Net                           3.25   1.27    976.22  283.76  0.02 s   GPU @ 2.5 GHz (Python + C/C++)
 4    Morph-Net                         3.84   1.57   1045.45  310.49  0.17 s   GPU @ 1.5 GHz (Matlab + C/C++)
 5    IP-Basic                    code  3.78   1.29   1288.46  302.60  0.011 s  1 core @ >3.5 GHz (Python)
      J. Ku, A. Harakeh and S. Waslander: In Defense of Classical Image Processing: Fast Depth Completion on the CPU. arXiv preprint arXiv:1802.00036, 2018.
 6    ADNN                        code  59.39  3.19   1325.37  439.48  0.04 s   GPU @ 2.5 GHz (Python)
 7    NN+CNN                            3.25   1.29   1419.75  416.14  0.02 s   GPU
      J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. International Conference on 3D Vision (3DV), 2017.
 8    SparseConvs                 code  4.94   1.78   1601.33  481.27  0.01 s   GPU
      J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. International Conference on 3D Vision (3DV), 2017.
 9    NadarayaW                         6.34   1.84   1852.60  416.77  0.05 s   1 core @ 2.5 GHz (Python)
      J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox and A. Geiger: Sparsity Invariant CNNs. International Conference on 3D Vision (3DV), 2017.
10    SGDU                              7.38   2.05   2312.57  605.47  0.2 s    4 cores @ 2.5 GHz (C/C++)
      N. Schneider, L. Schneider, P. Pinggera, U. Franke, M. Pollefeys and C. Stiller: Semantically Guided Depth Upsampling. German Conference on Pattern Recognition, 2016.
11    NiN CNN                           4.60   2.15   2378.79  685.53  0.01 s   GPU
12    NiN+Mask CNN                      4.63   2.40   2534.26  848.25  0.01 s   GPU @ 2.5 GHz (C/C++)

Related Datasets

  • SYNTHIA Dataset: SYNTHIA is a collection of photo-realistic frames rendered from a virtual city and comes with precise pixel-level semantic annotations as well as pixel-wise depth information. The dataset consists of over 200,000 HD images from video streams and over 20,000 HD images from independent snapshots.
  • Middlebury Stereo Evaluation: The classic stereo evaluation benchmark, featuring four test images in version 2 of the benchmark, with very accurate ground truth from a structured light system. 38 image pairs are provided in total.
  • Make3D Range Image Data: Images with small-resolution ground truth used to learn and evaluate depth from single monocular images.
  • Virtual KITTI Dataset: Virtual KITTI contains 50 high-resolution monocular videos (21,260 frames) generated from five different virtual worlds in urban settings under different imaging and weather conditions.
  • Scene Flow Dataset: The Freiburg Scene Flow Dataset collection has been used to train convolutional networks for disparity, optical flow, and scene flow estimation. The collection contains more than 39,000 stereo frames at 960x540 pixel resolution, rendered from various synthetic sequences.


When using this dataset in your research, we will be happy if you cite us:

@inproceedings{Uhrig2017THREEDV,
  author = {Jonas Uhrig and Nick Schneider and Lukas Schneider and Uwe Franke and Thomas Brox and Andreas Geiger},
  title = {Sparsity Invariant CNNs},
  booktitle = {International Conference on 3D Vision (3DV)},
  year = {2017}
}
