Method

Ego-Motion Estimation and Depth Refinement from Sparse, Noisy Depth Inputs [DFineNet]
https://github.com/Ougui9/DFineNet

Submitted on 2 Mar. 2019 07:09 by
Yilun Zhang (University of Pennsylvania)

Running time: 0.02 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
Depth estimation is an important capability for autonomous vehicles to understand and reconstruct the 3D environment and to avoid obstacles during operation. Accurate depth sensors such as LiDARs are often heavy and expensive and provide only sparse depth, while lighter depth sensors such as stereo cameras are noisier in comparison. We propose an end-to-end learning algorithm that uses sparse, noisy input depth for refinement and depth completion. Our model also produces the camera pose as a byproduct, making it well suited for autonomous systems. We evaluate our approach on both indoor and outdoor datasets. Empirical results show that our method performs well on the KITTI~\cite{kitti_geiger2012we} dataset compared to other competing methods, and that it handles sparse, noisy input depth better on the TUM~\cite{sturm12iros} dataset.
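For illustration only, the sketch below shows the input/output interface this description implies: an RGB image and a sparse, noisy depth map go in, a refined dense depth map and a 6-DoF camera pose come out. It is not the DFineNet architecture; the class name DepthPoseSketch and all layer sizes are placeholder assumptions (see the repository linked above for the actual network).

import torch
import torch.nn as nn

class DepthPoseSketch(nn.Module):
    """Toy stand-in with the same interface: RGB + sparse depth -> dense depth + pose."""
    def __init__(self):
        super().__init__()
        # Shared encoder over concatenated RGB (3 ch) and sparse depth (1 ch).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth head: upsample back to input resolution, 1-channel refined depth.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Pose head: global pooling followed by 6-DoF (translation + rotation) regression.
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 6)
        )

    def forward(self, rgb, sparse_depth):
        feat = self.encoder(torch.cat([rgb, sparse_depth], dim=1))
        return self.depth_head(feat), self.pose_head(feat)

# Example call with a KITTI-sized crop (352 x 1216).
rgb = torch.rand(1, 3, 352, 1216)
sparse_depth = torch.rand(1, 1, 352, 1216)
dense_depth, pose = DepthPoseSketch()(rgb, sparse_depth)
print(dense_depth.shape, pose.shape)  # torch.Size([1, 1, 352, 1216]) torch.Size([1, 6])
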
Parameters:
Our algorithm is implemented in PyTorch. We use the ADAM optimizer with a weight decay of 0.0003 and parameters $\alpha = 0.0001$, $\beta_1 = 0.9$ (momentum) and $\beta_2 = 0.999$. Two Tesla V100 GPUs with 32 GB of memory each are used for training with a batch size of 8; 15 epochs take around 12 hours.
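As a reference for reproducing this training setup, the snippet below configures the ADAM optimizer with the hyperparameters stated above; the `model` object is a placeholder assumption, since the actual network definition lives in the linked repository.

import torch

model = torch.nn.Linear(1, 1)  # placeholder; substitute the actual DFineNet model
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,              # alpha = 0.0001
    betas=(0.9, 0.999),   # beta_1 = 0.9 (momentum), beta_2 = 0.999
    weight_decay=3e-4,    # weight decay = 0.0003
)
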
LaTeX BibTeX:
@article{Zhang2019DFineNetEE,
  title={DFineNet: Ego-Motion Estimation and Depth Refinement from Sparse, Noisy Depth Input with RGB Guidance},
  author={Yilun Zhang and Ty Nguyen and Ian D. Miller and Shreyas S. Shivakumar and Steven W. Chen and Camillo J. Taylor and Vijay Kumar},
  journal={CoRR},
  volume={abs/1903.06397},
  year={2019}
}

Detailed Results

This page provides detailed results for the method(s) selected. The first table reports the average over the full test set, followed by per-image results for the first 20 test images, using the KITTI depth completion metrics iRMSE [1/km], iMAE [1/km], RMSE [mm] and MAE [mm]. Underneath each table, the input image, the estimated depth (D1 Result) and the corresponding error map (D1 Error) are shown; the error map uses the log-color scale described in Sparsity Invariant CNNs (THREEDV 2017), depicting small errors in blue and large errors in red color tones.
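For reference, the four metrics in the tables below are the standard KITTI depth completion metrics. With $d_v$ the predicted depth at a valid ground-truth pixel $v \in V$ and $d_v^{gt}$ the ground truth, they are defined (following the KITTI depth completion devkit) as
$$
\begin{aligned}
\text{RMSE [mm]} &= \sqrt{\tfrac{1}{|V|}\sum_{v \in V} \left(d_v - d_v^{gt}\right)^2}, &
\text{MAE [mm]} &= \tfrac{1}{|V|}\sum_{v \in V} \left|d_v - d_v^{gt}\right|, \\
\text{iRMSE [1/km]} &= \sqrt{\tfrac{1}{|V|}\sum_{v \in V} \left(\tfrac{1}{d_v} - \tfrac{1}{d_v^{gt}}\right)^2}, &
\text{iMAE [1/km]} &= \tfrac{1}{|V|}\sum_{v \in V} \left|\tfrac{1}{d_v} - \tfrac{1}{d_v^{gt}}\right|.
\end{aligned}
$$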

Test Set Average

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 3.21 1.39 943.89 304.17

Test Image 0

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 4.56 1.25 1001.09 253.51

Input Image, D1 Result, D1 Error (images not shown)


Test Image 1

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 5.43 1.78 879.23 157.16

Input Image, D1 Result, D1 Error (images not shown)


Test Image 2

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 2.76 1.89 1459.51 603.23

Input Image, D1 Result, D1 Error (images not shown)


Test Image 3

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 4.26 2.19 822.85 340.29

Input Image, D1 Result, D1 Error (images not shown)


Test Image 4

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 4.09 1.86 694.88 277.68

Input Image, D1 Result, D1 Error (images not shown)


Test Image 5

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 5.58 1.51 875.65 250.39

Input Image, D1 Result, D1 Error (images not shown)


Test Image 6

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 5.21 1.68 611.18 267.72

Input Image, D1 Result, D1 Error (images not shown)


Test Image 7

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 5.88 2.06 776.36 199.25

Input Image, D1 Result, D1 Error (images not shown)


Test Image 8

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 7.29 1.43 1585.68 338.22

Input Image, D1 Result, D1 Error (images not shown)


Test Image 9

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 2.25 1.36 993.44 308.23

Input Image, D1 Result, D1 Error (images not shown)


Test Image 10

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 2.13 1.52 985.81 503.38

Input Image, D1 Result, D1 Error (images not shown)


Test Image 11

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 3.04 1.45 1376.24 530.74

Input Image, D1 Result, D1 Error (images not shown)


Test Image 12

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 6.23 2.81 1159.71 372.56

Input Image, D1 Result, D1 Error (images not shown)


Test Image 13

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 1.69 1.14 863.56 309.01

Input Image, D1 Result, D1 Error (images not shown)


Test Image 14

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 3.63 1.36 705.34 235.15

Input Image, D1 Result, D1 Error (images not shown)


Test Image 15

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 4.32 1.99 634.71 244.52

Input Image, D1 Result, D1 Error (images not shown)


Test Image 16

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 1.59 0.99 664.61 244.93

Input Image, D1 Result, D1 Error (images not shown)


Test Image 17

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 1.84 1.01 927.17 275.77

Input Image, D1 Result, D1 Error (images not shown)


Test Image 18

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 2.50 1.17 885.37 344.85

Input Image, D1 Result, D1 Error (images not shown)


Test Image 19

iRMSE [1/km] iMAE [1/km] RMSE [mm] MAE [mm]
Error 1.59 1.12 845.13 324.15

Input Image, D1 Result, D1 Error (images not shown)



