Stereo Evaluation 2015


The stereo 2015 / flow 2015 / scene flow 2015 benchmark consists of 200 training scenes and 200 test scenes (4 color images per scene, saved in lossless PNG format). Compared to the stereo 2012 and flow 2012 benchmarks, it comprises dynamic scenes for which the ground truth has been established in a semi-automatic process. Our evaluation server computes the percentage of bad pixels averaged over all ground truth pixels of all 200 test images. For this benchmark, we consider a pixel to be correctly estimated if the disparity or flow end-point error is <3px or <5% (for scene flow, this criterion needs to be fulfilled for both disparity maps and the flow map). We require that all methods use the same parameter set for all test pairs. Our development kit provides details about the data format as well as MATLAB / C++ utility functions for reading and writing disparity maps and flow fields. More details can be found in Object Scene Flow for Autonomous Vehicles (CVPR 2015).
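To illustrate the outlier criterion, the following is a minimal sketch (not the official devkit code) of computing the D1 outlier rate for a single image pair, assuming KITTI's uint16 PNG disparity encoding (disparity = pixel value / 256.0, value 0 = no ground truth); the file names in the usage comment are hypothetical.

```python
# Minimal sketch, not the official devkit: D1 outlier rate for one image pair.
import numpy as np
from PIL import Image

def load_disparity(path):
    """Read a KITTI-style uint16 disparity PNG; returns (disparity, valid_mask)."""
    raw = np.array(Image.open(path), dtype=np.float32)
    valid = raw > 0              # 0 encodes "no ground truth / invalid"
    return raw / 256.0, valid

def d1_outlier_rate(gt_path, est_path):
    """Fraction of ground-truth pixels whose error is >=3 px AND >=5% of the GT disparity."""
    gt, valid = load_disparity(gt_path)
    est, _ = load_disparity(est_path)
    err = np.abs(gt - est)[valid]
    gt_v = gt[valid]
    outlier = (err >= 3.0) & (err >= 0.05 * gt_v)
    return outlier.mean()

# Hypothetical usage:
# print(d1_outlier_rate("training/disp_occ_0/000000_10.png", "results/000000_10.png"))
```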

Our evaluation table ranks all methods according to the number of erroneous pixels. All methods providing less than 100% density have been interpolated using simple background interpolation, as explained in the corresponding header file in the development kit (a code sketch of this interpolation is given after the legend below). Legend:

  • D1: Percentage of stereo disparity outliers in first frame
  • D2: Percentage of stereo disparity outliers in second frame
  • Fl: Percentage of optical flow outliers
  • SF: Percentage of scene flow outliers (= outliers in either D1, D2, or Fl)
  • bg: Percentage of outliers averaged only over background regions
  • fg: Percentage of outliers averaged only over foreground regions
  • all: Percentage of outliers averaged over all ground truth pixels
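The sketch below shows one way such background interpolation can be done; it approximates, but is not, the official devkit routine (see the devkit header file for the authoritative version). Invalid pixels are assumed to be marked with 0.

```python
# Minimal sketch of row-wise background interpolation for a sparse disparity map.
import numpy as np

def interpolate_background(disp):
    """Fill invalid (0) pixels per row with the smaller (background) of the
    nearest valid disparities to the left and right."""
    disp = disp.copy()
    h, w = disp.shape
    for y in range(h):
        row = disp[y]
        valid_x = np.flatnonzero(row > 0)
        if valid_x.size == 0:
            continue                       # nothing to interpolate from in this row
        for x in np.flatnonzero(row == 0):
            left = valid_x[valid_x < x]
            right = valid_x[valid_x > x]
            candidates = []
            if left.size:
                candidates.append(row[left[-1]])
            if right.size:
                candidates.append(row[right[0]])
            row[x] = min(candidates)       # background = smaller disparity
    return disp
```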


Note: On 13.03.2017 we fixed several small errors in the flow (noc+occ) ground truth of the dynamic foreground objects and manually verified all images for correctness by warping them according to the ground truth. As a consequence, all error numbers have decreased slightly. If you downloaded the files prior to 13.03.2017, please download the devkit and the annotations with the improved ground truth for the training set again, and consider reporting these new numbers in all future publications. The last leaderboards before these corrections can be found here (optical flow 2015) and here (scene flow 2015). The leaderboards for the KITTI 2015 stereo benchmarks did not change.
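The verification idea mentioned above can be sketched as follows: warp the second frame back to the first using the ground-truth flow and visually compare the result with the first frame. This is a simplified nearest-neighbour sketch, not the tool actually used; it assumes dense flow in (u, v) pixel units.

```python
# Minimal sketch of flow-based verification by backward warping (assumption: dense flow).
import numpy as np

def backward_warp(img2, flow_u, flow_v):
    """Sample img2 at (x + u, y + v) for every pixel (x, y) of frame 1,
    nearest-neighbour lookup; out-of-bounds pixels are left at 0."""
    h, w = flow_u.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xq = np.rint(xs + flow_u).astype(int)
    yq = np.rint(ys + flow_v).astype(int)
    inside = (xq >= 0) & (xq < w) & (yq >= 0) & (yq < h)
    warped = np.zeros_like(img2)
    warped[ys[inside], xs[inside]] = img2[yq[inside], xq[inside]]
    return warped   # should closely resemble frame 1 where the flow is correct
```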

Important Policy Update: As more and more non-published work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms or student research projects are not allowed. Such work must be evaluated on a split of the training set. To ensure that our policy is adopted, new users must detail their status, describe their work and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are 6 months old but are still anonymous or do not have a paper associated with them. For conferences, 6 months are sufficient to determine whether a paper has been accepted and to add the bibliography information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Flow: Method uses optical flow (2 temporally adjacent images)
  • Multiview: Method uses more than 2 temporally adjacent images
  • Motion stereo: Method uses epipolar geometry for computing optical flow
  • Additional training data: Use of additional data sources for training (see details)


Rank Method Setting Code D1-bg D1-fg D1-all Density Runtime Environment
1 M2S_CSPN 1.51 % 2.88 % 1.74 % 100.00 % 0.5 s GPU @ 2.5 Ghz (C/C++)
X. Cheng, P. Wang and R. Yang: Learning Depth with Convolutional Spatial Propagation Network. arXiv preprint arXiv:1810.02695 2018.
2 AMNet 1.53 % 3.43 % 1.84 % 100.00 % 0.9 s GPU @ 2.5 Ghz (Python)
X. Du, M. El-Khamy and J. Lee: AMNet: Deep Atrous Multiscale Stereo Disparity Estimation Networks. 2019.
3 AcfNet 1.51 % 3.80 % 1.89 % 100.00 % 0.48 s GPU @ 2.5 Ghz (Python)
4 Samsung_System_LSI 1.56 % 3.56 % 1.90 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python)
5 RawStereoNet 1.57 % 3.56 % 1.90 % 100.00 % 0.43 s NVIDIA TITAN X Pascal (PyTorch)
6 ASNet_s 1.54 % 3.88 % 1.93 % 100.00 % 1.5 s GPU @ 2.5 Ghz (Python)
7 MS_CSPN 1.56 % 3.78 % 1.93 % 100.00 % 0.5 s GPU @ 2.5 Ghz (C/C++)
X. Cheng, P. Wang and R. Yang: Learning Depth with Convolutional Spatial Propagation Network. arXiv preprint arXiv:1810.02695 2018.
8 GANet-15 1.55 % 3.82 % 1.93 % 100.00 % 0.36 s GPU (Pytorch)
F. Zhang, V. Prisacariu, R. Yang and P. Torr: GA-Net: Guided Aggregation Net for End-to-end Stereo Matching. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
9 NCA-Net 1.68 % 3.28 % 1.94 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
10 APMNet 1.67 % 3.35 % 1.95 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
11 ASONet 1.57 % 3.97 % 1.97 % 100.00 % 1.5 s GPU @ 2.5 Ghz (Python)
12 PSMNet_R 1.62 % 3.79 % 1.98 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
13 ASNet_t 1.57 % 4.18 % 2.00 % 100.00 % 1.5 s GPU @ 2.5 Ghz (C/C++)
14 HD^3-Stereo code 1.70 % 3.63 % 2.02 % 100.00 % 0.14 s NVIDIA Pascal Titan XP
Z. Yin, T. Darrell and F. Yu: Hierarchical Discrete Distribution Decomposition for Match Density Estimation. CVPR 2019.
15 EdgeStereo-V2 1.84 % 3.30 % 2.08 % 100.00 % 0.32s Nvidia GTX Titan Xp
X. Song, X. Zhao, L. Fang and H. Hu: EdgeStereo: An Effective Multi-Task Learning Network for Stereo Matching and Edge Detection. arXiv preprint arXiv:1903.01700 2019.
16 DSHNet 1.65 % 4.29 % 2.09 % 100.00 % 0.7 s Nvidia GTX Titan Xp
17 EMCUA 1.66 % 4.27 % 2.09 % 100.00 % 0.9 s 1 core @ 2.5 Ghz (C/C++)
G. Nie, M. Cheng, Y. Liu, Z. Liang, D. Fan, Y. Liu and Y. Wang: Multi-Level Context Ultra-Aggregation for Stereo Matching. IEEE CVPR 2019.
18 KesonStereo_V1 1.77 % 3.74 % 2.09 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python)
19 open-depth 1.76 % 3.84 % 2.10 % 100.00 % 0.51 s NVIDIA TITAN Xp (PyTorch 0.4.0)
20 GwcNet-g code 1.74 % 3.93 % 2.11 % 100.00 % 0.32 s GPU @ 2.0 Ghz (Python + C/C++)
X. Guo, K. Yang, W. Yang, X. Wang and H. Li: Group-wise correlation stereo network. CVPR 2019.
21 SSPCV-Net 1.75 % 3.89 % 2.11 % 100.00 % 0.9 s GPU @ 2.5 Ghz (Python)
22 MS-Net 1.72 % 4.08 % 2.11 % 100.00 % 0.75 s 1 core @ 2.5 Ghz (C/C++)
23 PANet 1.79 % 3.75 % 2.12 % 100.00 % 0.6 s GPU @ 2.5 Ghz (Python)
24 IPSMNet 1.72 % 4.11 % 2.12 % 100.00 % 0.5 s GPU @ 2.5 Ghz (python)
25 DM-Net 1.69 % 4.29 % 2.12 % 100.00 % 0.9s 1 core @ 2.5 Ghz (Python)
26 oos 1.70 % 4.33 % 2.14 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python + C/C++)
27 sceneflow1.0
This method uses optical flow information.
1.70 % 4.33 % 2.14 % 100.00 % 5 s GPU @ 2.5 Ghz (Python + C/C++)
28 MCUA 1.69 % 4.38 % 2.14 % 100.00 % 0.40s Titan XP
G. Nie, M. Cheng, Y. Liu, Z. Liang, D. Fan, Y. Liu and Y. Wang: Multi-Level Context Ultra-Aggregation for Stereo Matching. IEEE CVPR 2019.
29 HSM-1.8x 1.80 % 3.85 % 2.14 % 100.00 % 0.14 s GPU @ 2.5 Ghz (Python)
30 Stereo-fusion-SJTU 1.87 % 3.61 % 2.16 % 100.00 % 0.7 s Nvidia GTX Titan Xp
X. Song, X. Zhao, H. Hu and L. Fang: EdgeStereo: A Context Integrated Residual Pyramid Network for Stereo Matching. Asian Conference on Computer Vision 2018.
31 RECV 1.74 % 4.34 % 2.18 % 100.00 % 0.6 s GPU @ 2.5 Ghz (Python)
32 AutoDispNet-CSS 1.94 % 3.37 % 2.18 % 100.00 % 0.9 s 1 core @ 2.5 Ghz (C/C++)
33 PMA 1.75 % 4.59 % 2.22 % 100.00 % 0.65 s GPU @ 2.5 Ghz (Python)
34 SENSE
This method uses optical flow information.
2.07 % 3.01 % 2.22 % 100.00 % 0.35 s GPU, GTX 1080Ti
35 SLED-Net 1.85 % 4.15 % 2.23 % 100.00 % 0.75 s 1 core @ 2.5 Ghz (C/C++)
36 TinyStereo_V2 1.93 % 3.76 % 2.24 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python)
37 SegStereo code 1.88 % 4.07 % 2.25 % 100.00 % 0.6 s Nvidia GTX Titan Xp
G. Yang, H. Zhao, J. Shi, Z. Deng and J. Jia: SegStereo: Exploiting Semantic Information for Disparity Estimation. ECCV 2018.
38 NLCA-Net 1.91 % 3.94 % 2.25 % 100.00 % 0.6 s 1 core @ 2.5 Ghz (C/C++)
39 HDU-LJJ-Group 1.82 % 4.42 % 2.25 % 100.00 % 0.47 s GPU @ 1.5 Ghz (Python)
40 Stereo-DRNet 1.72 % 4.95 % 2.26 % 100.00 % 0.23 s GPU @ 2.5 Ghz
41 PASM 1.78 % 4.64 % 2.26 % 100.00 % 0.52 s 1 core @ 2.5 Ghz (C/C++)
42 MSDC-Net 1.96 % 3.77 % 2.26 % 100.00 % 0.6 s GPU @ 2.5 Ghz (Python)
Z. Rao, M. He, Y. Dai, Z. Zhu, B. Li and R. He: MSDC-Net: Multi-Scale Dense and Contextual Networks for Automated Disparity Map for Stereo Matching. arXiv preprint arXiv:1904.12658 2019.
43 MCV-MFC 1.95 % 3.84 % 2.27 % 100.00 % 0.35 s 1 core @ 2.5 Ghz (C/C++)
44 HSM-1.5x 1.95 % 3.93 % 2.28 % 100.00 % 0.085 s GPU @ 2.5 Ghz (Python)
45 DSM 1.83 % 4.56 % 2.28 % 100.00 % 0.4 s 1 core @ 2.5 Ghz (Python)
46 TinyStereo 1.92 % 4.13 % 2.28 % 100.00 % 0.39 s 1 core @ 2.5 Ghz (C/C++)
47 Sparse2Dense_D1 1.82 % 4.74 % 2.31 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (Python)
48 CFP-Net code 1.90 % 4.39 % 2.31 % 100.00 % 0.9 s 8 cores @ 2.5 Ghz (Python)
Z. Zhu, M. He, Y. Dai, Z. Rao and B. Li: Multi-scale Cross-form Pyramid Network for Stereo Matching. arXiv preprint 2019.
49 PSMNet code 1.86 % 4.62 % 2.32 % 100.00 % 0.41 s Nvidia GTX Titan Xp
J. Chang and Y. Chen: Pyramid Stereo Matching Network. arXiv preprint arXiv:1803.08669 2018.
50 CAR 1.92 % 4.43 % 2.34 % 100.00 % 0.39 s GPU @ 2.5 Ghz (Python)
51 SE-PSM 1.90 % 4.59 % 2.34 % 100.00 % 0.85 s GPU @ 3.0 Ghz (Python)
52 disparity stereo 1.85 % 4.86 % 2.35 % 100.00 % 0.5 s GPU @ 1.5 Ghz (Python)
53 SWNet 1.92 % 4.60 % 2.36 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python)
54 RawStereoNet-r 1.87 % 4.86 % 2.37 % 100.00 % 0.43 s NVIDIA TITAN X Pascal (PyTorch)
55 DeepStereo_V2 2.00 % 4.21 % 2.37 % 100.00 % 0.4 s 1 core @ 2.5 Ghz (C/C++)
56 SMAR-Net 1.95 % 4.57 % 2.38 % 100.00 % 0.7 s GPU @ 2.5 Ghz (Python)
57 Sparse2Dense
This method makes use of multiple (>2) views.
1.85 % 5.08 % 2.39 % 100.00 % 0.5 s 8 cores @ >3.5 Ghz (Python)
58 L2-method 1.91 % 4.90 % 2.40 % 100.00 % 0.35 s GPU @ 2.5 Ghz (Python + C/C++)
59 PSM+NN 1.95 % 4.85 % 2.43 % 100.00 % 1 s GPU @ 2.5 Ghz (Python + C/C++)
60 LWSM2 1.87 % 5.23 % 2.43 % 100.00 % 0.24 s GPU @ 2.5 Ghz (Python)
61 LWSM 1.86 % 5.35 % 2.44 % 100.00 % 0.24 s GPU @ 2.5 Ghz (Python)
62 HcNet 2.03 % 4.61 % 2.46 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
63 RAP 2.00 % 4.83 % 2.47 % 100.00 % 0.54 s 1 core @ 2.5 Ghz (C/C++)
64 msc 2.02 % 4.73 % 2.47 % 100.00 % 0.03 s GPU @ 1.5 Ghz (Python)
65 ABNet 2.01 % 4.81 % 2.48 % 100.00 % 0.03 s GPU @ 1.5 Ghz (Python)
66 NNet 1.95 % 5.32 % 2.51 % 100.00 % 0.69 s GPU @ 2.5 Ghz (Python + C/C++)
67 CAR 2.05 % 4.81 % 2.51 % 100.00 % 0.3 s 1 core @ 2.5 Ghz (C/C++)
68 Sparse2Dense_K1 2.09 % 4.66 % 2.52 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (Python)
69 X_ASPP 2.13 % 4.57 % 2.54 % 100.00 % 0.88 s GPU @ 2.5 Ghz (Python)
70 MSFnet 1.96 % 5.50 % 2.55 % 100.00 % 0.6 s GPU @ 2.5 Ghz (Python)
71 UberATG-DRISF
This method uses optical flow information.
2.16 % 4.49 % 2.55 % 100.00 % 0.75 s CPU+GPU @ 2.5 Ghz (Python)
W. Ma, S. Wang, R. Hu, Y. Xiong and R. Urtasun: Deep Rigid Instance Scene Flow. CVPR 2019.
72 FBW-Net 2.08 % 4.98 % 2.56 % 100.00 % 2 s GPU @ 2.5 Ghz (Python)
73 PDSNet 2.29 % 4.05 % 2.58 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (Python)
S. Tulyakov, A. Ivanov and F. Fleuret: Practical Deep Stereo (PDS): Toward applications-friendly deep stereo matching. Proceedings of the international conference on Neural Information Processing Systems (NIPS) 2018.
74 DeepStereo 2.16 % 4.72 % 2.59 % 100.00 % 0.9 s Titan X
75 SCV code 2.22 % 4.53 % 2.61 % 100.00 % 0.36 s Nvidia GTX 1080 Ti
C. Lu, H. Uchiyama, D. Thomas, A. Shimada and R. Taniguchi: Sparse Cost Volume for Efficient Stereo Matching. Remote Sensing 2018.
76 MRFnet 1.97 % 5.81 % 2.61 % 100.00 % 0.24 s GPU @ 2.5 Ghz (Python + C/C++)
77 DG-Net 2.06 % 5.47 % 2.63 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (C/C++)
78 WSMCnet-C4S2 code 2.21 % 4.75 % 2.63 % 100.00 % 0.41 s Nvidia GTX 1070 (Python)
79 CooperativeStereo 2.09 % 5.38 % 2.64 % 100.00 % 0.9 s GPU @ 2.5 Ghz (Python + C/C++)
80 HTC
This method uses optical flow information.
2.12 % 5.40 % 2.67 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (C/C++)
81 CRL code 2.48 % 3.59 % 2.67 % 100.00 % 0.47 s Nvidia GTX 1080
J. Pang, W. Sun, J. Ren, C. Yang and Q. Yan: Cascade residual learning: A two-stage convolutional neural network for stereo matching. ICCV Workshop on Geometry Meets Deep Learning 2017.
82 CCFP-Net 2.11 % 5.53 % 2.68 % 100.00 % 0.5 s 8 cores @ 2.5 Ghz (Python)
83 NCCL2 2.11 % 5.59 % 2.69 % 100.00 % 0.61 s GPU @ 2.5 Ghz (Python + C/C++)
84 MFS-NET 2.22 % 5.09 % 2.70 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
85 GHSM-NET2 code 2.43 % 4.08 % 2.70 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
86 oosf
This method uses optical flow information.
2.15 % 5.54 % 2.72 % 100.00 % 5 s GPU @ 2.5 Ghz (Python + C/C++)
87 ABN 2.20 % 5.35 % 2.73 % 100.00 % 0.08 s GPU @ 2.5 Ghz (Java)
88 CMF 2.29 % 4.93 % 2.73 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
89 GHSM-NET 2.48 % 4.29 % 2.78 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
90 GC-NET 2.21 % 6.16 % 2.87 % 100.00 % 0.9 s Nvidia GTX Titan X
A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach and A. Bry: End-to-End Learning of Geometry and Context for Deep Stereo Regression. Proceedings of the International Conference on Computer Vision (ICCV) 2017.
91 CAR 2.35 % 5.53 % 2.88 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (C/C++)
92 PSM-Cross 2.45 % 5.14 % 2.90 % 100.00 % 0.45 s GPU @ 2.5 Ghz (Python)
93 DWARF
This method uses optical flow information.
2.81 % 3.41 % 2.91 % 100.00 % 0.09s - 1.43s TitanXP - JetsonTX2
94 Ours 2.39 % 5.57 % 2.92 % 100.00 % 0.03 s GPU @ 2.5 Ghz (Python)
95 DPSM-Net 2.53 % 4.84 % 2.92 % 100.00 % 0.35 s GPU @ 2.5 Ghz (Python)
96 SemanStereo 2.36 % 5.72 % 2.92 % 100.00 % 60 s 1 core @ 2.5 Ghz (Python)
97 psm-i2 2.46 % 5.51 % 2.97 % 100.00 % 0.48 s 1 core @ 2.5 Ghz (Python)
98 FBW_ROB 2.35 % 6.20 % 2.99 % 100.00 % 2 s GPU @ 2.5 Ghz (Python)
99 SPF-Net 2.60 % 4.97 % 2.99 % 100.00 % 0.16 s GPU @ 2.0 Ghz (Python + C/C++)
100 MCANet 2.82 % 3.90 % 3.00 % 100.00 % 0.33 s 1 core @ 2.5 Ghz (C/C++)
101 X_ASPP2 2.49 % 5.58 % 3.00 % 100.00 % 0.88 s GPU @ 2.5 Ghz (Python)
102 LRCR 2.55 % 5.42 % 3.03 % 100.00 % 49.2 s Nvidia GTX Titan X
Z. Jie, P. Wang, Y. Ling, B. Zhao, Y. Wei, J. Feng and W. Liu: Left-Right Comparative Recurrent Model for Stereo Matching. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
103 CFCNet 2.47 % 5.90 % 3.04 % 100.00 % 0.47 s GPU @ 3.0 Ghz (Python)
104 anta-test-1 2.55 % 5.65 % 3.06 % 100.00 % 0.5 s GPU @ 1.5 Ghz (Python)
105 Fast DS-CS 2.83 % 4.31 % 3.08 % 100.00 % 0.02 s GPU @ 2.0 Ghz (Python + C/C++)
106 RecResNet code 2.46 % 6.30 % 3.10 % 100.00 % 0.3 s GPU @ NVIDIA TITAN X (Tensorflow)
K. Batsos and P. Mordohai: RecResNet: A Recurrent Residual CNN Architecture for Disparity Map Enhancement. In International Conference on 3D Vision (3DV) 2018.
107 NVStereoNet code 2.62 % 5.69 % 3.13 % 100.00 % 0.6 s NVIDIA Titan Xp
N. Smolyanskiy, A. Kamenev and S. Birchfield: On the Importance of Stereo for Accurate Depth Estimation: An Efficient Semi-Supervised Deep Neural Network Approach. arXiv preprint arXiv:1803.09719 2018.
108 NVStereoNet_ROB 2.62 % 5.69 % 3.13 % 100.00 % 0.6 s NVIDIA Titan Xp
109 DRR 2.58 % 6.04 % 3.16 % 100.00 % 0.4 s Nvidia GTX Titan X
S. Gidaris and N. Komodakis: Detect, Replace, Refine: Deep Structured Prediction For Pixel Wise Labeling. arXiv preprint arXiv:1612.04770 2016.
110 MFMNet_s 2.97 % 4.20 % 3.17 % 100.00 % 0.36 s GPU @ 2.5 Ghz (Python + C/C++)
111 SCBNet 2.56 % 6.35 % 3.19 % 100.00 % 0.19 s 1 core @ 2.5 Ghz (Python)
112 MFMNert 3.05 % 4.48 % 3.29 % 100.00 % 0.36 s GPU @ 2.5 Ghz (Python + C/C++)
113 CS2D 2.72 % 6.30 % 3.31 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
114 SsSMnet 2.70 % 6.92 % 3.40 % 100.00 % 0.8 s P100
Y. Zhong, Y. Dai and H. Li: Self-Supervised Learning for Stereo Matching with Self-Improving Ability. arXiv:1709.00930 2017.
115 RTSnet 2.86 % 6.19 % 3.41 % 100.00 % 0.02 s P100 (pytorch)
116 L-ResMatch code 2.72 % 6.95 % 3.42 % 100.00 % 48 s 1 core @ 2.5 Ghz (C/C++)
A. Shaked and L. Wolf: Improved Stereo Matching with Constant Highway Networks and Reflective Loss. arXiv preprint arxiv:1701.00165 2016.
117 Displets v2 code 3.00 % 5.56 % 3.43 % 100.00 % 265 s >8 cores @ 3.0 Ghz (Matlab + C/C++)
F. Guney and A. Geiger: Displets: Resolving Stereo Ambiguities using Object Knowledge. Conference on Computer Vision and Pattern Recognition (CVPR) 2015.
118 anta-test-2 2.88 % 6.70 % 3.51 % 100.00 % 0.5 s GPU @ 1.5 Ghz (C/C++)
119 CNNF+SGM 2.78 % 7.69 % 3.60 % 100.00 % 71 s TESLA K40C
F. Zhang and B. Wah: Fundamental Principles on Learning New Features for Effective Dense Matching. IEEE Transactions on Image Processing 2018.
120 DH-SF
This method uses optical flow information.
2.70 % 8.07 % 3.60 % 100.00 % 350 s 1 core @ 2.5 Ghz (Matlab + C/C++)
121 PBCP 2.58 % 8.74 % 3.61 % 100.00 % 68 s Nvidia GTX Titan X
A. Seki and M. Pollefeys: Patch Based Confidence Prediction for Dense Disparity Map. British Machine Vision Conference (BMVC) 2016.
122 SGM-Net 2.66 % 8.64 % 3.66 % 100.00 % 67 s Titan X
A. Seki and M. Pollefeys: SGM-Nets: Semi-Global Matching With Neural Networks. CVPR 2017.
123 DSS 3.23 % 6.70 % 3.80 % 100.00 % 0.05 s GPU @ 2.5 Ghz (Python)
124 Dense-CNN 2.90 % 8.79 % 3.88 % 100.00 % 53 s 1 core @ 2.5 Ghz (C/C++)
125 MC-CNN-acrt code 2.89 % 8.88 % 3.89 % 100.00 % 67 s Nvidia GTX Titan X (CUDA, Lua/Torch7)
J. Zbontar and Y. LeCun: Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches. Submitted to JMLR.
126 ESM 3.33 % 6.73 % 3.90 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (Python)
127 RGL 4.22 % 4.02 % 4.19 % 100.00 % 0.1 s 1 core @ 2.5 Ghz (C/C++)
128 PRSM
This method uses optical flow information.
This method makes use of multiple (>2) views.
code 3.02 % 10.52 % 4.27 % 99.99 % 300 s 1 core @ 2.5 Ghz (C/C++)
C. Vogel, K. Schindler and S. Roth: 3D Scene Flow Estimation with a Piecewise Rigid Scene Model. ijcv 2015.
129 DispNetC code 4.32 % 4.41 % 4.34 % 100.00 % 0.06 s Nvidia GTX Titan X (Caffe)
N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy and T. Brox: A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation. CVPR 2016.
130 SGM-Forest 3.11 % 10.74 % 4.38 % 99.92 % 6 seconds 1 core @ 3.0 Ghz (Python/C/C++)
J. Schönberger, S. Sinha and M. Pollefeys: Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching. European Conference on Computer Vision (ECCV) 2018.
131 SSF
This method uses optical flow information.
3.55 % 8.75 % 4.42 % 100.00 % 5 min 1 core @ 2.5 Ghz (Matlab + C/C++)
Z. Ren, D. Sun, J. Kautz and E. Sudderth: Cascaded Scene Flow Prediction using Semantic Segmentation. International Conference on 3D Vision (3DV) 2017.
132 ISF
This method uses optical flow information.
4.12 % 6.17 % 4.46 % 100.00 % 10 min 1 core @ 3 Ghz (C/C++)
A. Behl, O. Jafari, S. Mustikovela, H. Alhaija, C. Rother and A. Geiger: Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?. International Conference on Computer Vision (ICCV) 2017.
133 MSFG-Net 3.62 % 8.90 % 4.50 % 100.00 % 0.6 s 1 core @ 2.5 Ghz (Python)
134 Content-CNN 3.73 % 8.58 % 4.54 % 100.00 % 1 s Nvidia GTX Titan X (Torch)
W. Luo, A. Schwing and R. Urtasun: Efficient Deep Learning for Stereo Matching. CVPR 2016.
135 MADnet code 3.75 % 9.20 % 4.66 % 100.00 % 0.02 s GPU @ 2.5 Ghz (Python)
A. Tonioni, F. Tosi, M. Poggi, S. Mattoccia and L. Di Stefano: Real-Time self-adaptive deep stereo. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
136 MSFG-Net 3.81 % 9.62 % 4.77 % 100.00 % 0.6 s GPU @ 2.5 Ghz (C/C++)
137 FastStereov2 3.91 % 9.19 % 4.79 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (C/C++)
138 VN 4.29 % 7.65 % 4.85 % 100.00 % 0.5 s GPU @ 3.5 Ghz (Python + C/C++)
139 FastStereo 4.07 % 8.88 % 4.87 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (C/C++)
140 MC-CNN-WS code 3.78 % 10.93 % 4.97 % 100.00 % 1.35 s 1 core 2.5 Ghz + K40 NVIDIA, Lua-Torch
S. Tulyakov, A. Ivanov and F. Fleuret: Weakly supervised learning of deep metrics for stereo reconstruction. ICCV 2017.
141 3DMST 3.36 % 13.03 % 4.97 % 100.00 % 93 s 1 core @ >3.5 Ghz (C/C++)
L. Li and L. Zhang: 3D Cost Aggregation with Multiple Minimum Spanning Trees for Stereo Matching. Submitted to Applied Optics.
142 CBMV_ROB code 3.55 % 12.09 % 4.97 % 100.00 % 250 s 6 core @ 3.0 Ghz (Python + C/C++)
K. Batsos, C. Cai and P. Mordohai: CBMV: A Coalesced Bidirectional Matching Volume for Disparity Estimation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
143 SPOSF
This method uses optical flow information.
4.12 % 9.49 % 5.01 % 99.96 % 10 min 1 core @ 3.5 Ghz (Matlab + C/C++)
144 OSF+TC
This method uses optical flow information.
This method makes use of multiple (>2) views.
4.11 % 9.64 % 5.03 % 100.00 % 50 min 1 core @ 2.5 Ghz (C/C++)
M. Neoral and J. Šochman: Object Scene Flow with Temporal Consistency. 22nd Computer Vision Winter Workshop (CVWW) 2017.
145 CBMV code 4.17 % 9.53 % 5.06 % 100.00 % 250 s 6 cores @ 3.0 Ghz (Python,C/C++,CUDA Nvidia TitanX)
K. Batsos, C. Cai and P. Mordohai: CBMV: A Coalesced Bidirectional Matching Volume for Disparity Estimation. 2018.
146 PWOC-3D
This method uses optical flow information.
4.19 % 9.82 % 5.13 % 100.00 % 0.13 s GTX 1080 Ti
R. Saxena, R. Schuster, O. Wasenmüller and D. Stricker: PWOC-3D: Deep Occlusion-Aware End-to-End Scene Flow Estimation. Intelligent Vehicles Symposium (IV) 2019.
147 SS-SF
This method uses optical flow information.
3.59 % 13.11 % 5.18 % 100.00 % 3 min 1 core @ 2.5 Ghz (Matlab + C/C++)
148 OSF 2018
This method uses optical flow information.
code 4.11 % 11.12 % 5.28 % 100.00 % 390 s 1 core @ 2.5 Ghz (Matlab + C/C++)
M. Menze, C. Heipke and A. Geiger: Object Scene Flow. ISPRS Journal of Photogrammetry and Remote Sensing (JPRS) 2018.
149 SPS-St code 3.84 % 12.67 % 5.31 % 100.00 % 2 s 1 core @ 3.5 Ghz (C/C++)
K. Yamaguchi, D. McAllester and R. Urtasun: Efficient Joint Segmentation, Occlusion Labeling, Stereo and Flow Estimation. ECCV 2014.
150 MDP
This method uses stereo information.
4.19 % 11.25 % 5.36 % 100.00 % 11.4 s 4 cores @ 3.5 Ghz (Matlab + C/C++)
A. Li, D. Chen, Y. Liu and Z. Yuan: Coordinating Multiple Disparity Proposals for Stereo Computation. IEEE Conference on Computer Vision and Pattern Recognition 2016.
151 WDMC 4.35 % 10.78 % 5.42 % 100.00 % 1 min 8 cores @ 3.5 Ghz (Python)
152 DC-NET 4.31 % 11.52 % 5.51 % 100.00 % 0.53 s >8 cores @ 3.5 Ghz (C/C++)
153 SFF++
This method uses optical flow information.
This method makes use of multiple (>2) views.
4.27 % 12.38 % 5.62 % 100.00 % 78 s 4 cores @ 3.5 Ghz (C/C++)
154 OSF
This method uses optical flow information.
code 4.54 % 12.03 % 5.79 % 100.00 % 50 min 1 core @ 2.5 Ghz (C/C++)
M. Menze and A. Geiger: Object Scene Flow for Autonomous Vehicles. Conference on Computer Vision and Pattern Recognition (CVPR) 2015.
155 SDR code 4.51 % 12.64 % 5.86 % 100.00 % 4.2 s 1 core @ 2.5 Ghz (C/C++)
156 cpSGM-ADC 4.78 % 11.85 % 5.96 % 100.00 % 9 s 4 cores @ 3.5 Ghz (C/C++)
157 pSGM 4.84 % 11.64 % 5.97 % 100.00 % 7.77 s 4 cores @ 3.5 Ghz (C/C++)
Y. Lee, M. Park, Y. Hwang, Y. Shin and C. Kyung: Memory-Efficient Parametric Semiglobal Matching. IEEE Signal Processing Letters 2018.
158 CSF
This method uses optical flow information.
4.57 % 13.04 % 5.98 % 99.99 % 80 s 1 core @ 2.5 Ghz (C/C++)
Z. Lv, C. Beall, P. Alcantarilla, F. Li, Z. Kira and F. Dellaert: A Continuous Optimization Approach for Efficient and Accurate Scene Flow. European Conf. on Computer Vision (ECCV) 2016.
159 MBM 4.69 % 13.05 % 6.08 % 100.00 % 0.13 s 1 core @ 3.0 Ghz (C/C++)
N. Einecke and J. Eggert: A Multi-Block-Matching Approach for Stereo. IV 2015.
160 PR-Sceneflow
This method uses optical flow information.
code 4.74 % 13.74 % 6.24 % 100.00 % 150 s 4 core @ 3.0 Ghz (Matlab + C/C++)
C. Vogel, K. Schindler and S. Roth: Piecewise Rigid Scene Flow. ICCV 2013.
161 SGM+DAISY code 4.86 % 13.42 % 6.29 % 95.26 % 5 s 1 core @ 2.5 Ghz (C/C++)
162 DispSegNet 4.20 % 16.97 % 6.33 % 100.00 % 0.9 s GPU @ 2.5 Ghz (Python)
J. Zhang, K. Skinner, R. Vasudevan and M. Johnson-Roberson: DispSegNet: Leveraging Semantics for End- to-End Learning of Disparity Estimation From Stereo Imagery. IEEE Robotics and Automation Letters 2019.
163 DeepCostAggr code 5.34 % 11.35 % 6.34 % 99.98 % 0.03 s GPU @ 2.5 Ghz (C/C++)
A. Kuzmin, D. Mikushin and V. Lempitsky: End-to-end Learning of Cost-Volume Aggregation for Real-time Dense Stereo. 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP) 2017.
164 SGM_ROB 5.06 % 13.00 % 6.38 % 100.00 % 0.11 s Nvidia GTX 980
H. Hirschm\"uller: Stereo Processing by Semi-Global Matching and Mutual Information. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008.
165 SceneFFields
This method uses optical flow information.
5.12 % 13.83 % 6.57 % 100.00 % 65 s 4 cores @ 3.7 Ghz (C/C++)
R. Schuster, O. Wasenmüller, G. Kuschk, C. Bailer and D. Stricker: SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences. IEEE Winter Conference on Applications of Computer Vision (WACV) 2018.
166 SPS+FF++
This method uses optical flow information.
code 5.47 % 12.19 % 6.59 % 100.00 % 36 s 1 core @ 3.5 Ghz (C/C++)
R. Schuster, O. Wasenmüller and D. Stricker: Dense Scene Flow from Stereo Disparity and Optical Flow. ACM Computer Science in Cars Symposium (CSCS) 2018.
167 UnOS(Full)
This method uses optical flow information.
5.10 % 14.55 % 6.67 % 100.00 % 0.08 s 1 core @ 2.5 Ghz (C/C++)
168 FSF+MS
This method uses optical flow information.
This method makes use of the epipolar geometry.
This method makes use of multiple (>2) views.
5.72 % 11.84 % 6.74 % 100.00 % 2.7 s 4 cores @ 3.5 Ghz (C/C++)
T. Taniai, S. Sinha and Y. Sato: Fast Multi-frame Stereo Scene Flow with Motion Segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) 2017.
169 AABM 4.88 % 16.07 % 6.74 % 100.00 % 0.08 s 1 core @ 3.0 Ghz (C/C++)
N. Einecke and J. Eggert: Stereo Image Warping for Improved Depth Estimation of Road Surfaces. IV 2013.
170 DLM-Net 5.04 % 15.76 % 6.83 % 100.00 % 0.68 s 1 core @ 2.5 Ghz (Python)
171 SGM+C+NL
This method uses optical flow information.
code 5.15 % 15.29 % 6.84 % 100.00 % 4.5 min 1 core @ 2.5 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
D. Sun, S. Roth and M. Black: A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them. IJCV 2013.
172 SGM+LDOF
This method uses optical flow information.
code 5.15 % 15.29 % 6.84 % 100.00 % 86 s 1 core @ 2.5 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
T. Brox and J. Malik: Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation. PAMI 2011.
173 SGM+SF
This method uses optical flow information.
5.15 % 15.29 % 6.84 % 100.00 % 45 min 16 core @ 3.2 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
M. Hornacek, A. Fitzgibbon and C. Rother: SphereFlow: 6 DoF Scene Flow from RGB-D Pairs. CVPR 2014.
174 SMV 5.03 % 16.34 % 6.91 % 99.99 % 1.6 min 8 cores @ 3.5 Ghz (Python)
175 SNCC 5.36 % 16.05 % 7.14 % 100.00 % 0.08 s 1 core @ 3.0 Ghz (C/C++)
N. Einecke and J. Eggert: A Two-Stage Correlation Method for Stereoscopic Depth Estimation. DICTA 2010.
176 OASM-DDS 5.12 % 17.96 % 7.25 % 100.00 % 0.90 s 1 core @ 2.5 Ghz (Python)
177 DSimNet 6.15 % 13.20 % 7.32 % 100.00 % 0.57 s GPU @ 2.5 Ghz (Python)
178 WCMA_ROB 5.68 % 16.36 % 7.45 % 100.00 % 40 s 1 core @ 2.5 Ghz (Matlab + C/C++)
179 SGM+CT 6.50 % 16.62 % 8.18 % 99.53 % 23 s 1 core @ 2.5 Ghz (C/C++)
180 CSCT+SGM+MF 6.91 % 14.87 % 8.24 % 100.00 % 0.0064 s Nvidia GTX Titan X @ 1.0 Ghz (CUDA)
D. Hernandez-Juarez, A. Chacon, A. Espinosa, D. Vazquez, J. Moure and A. Lopez: Embedded real-time stereo estimation via Semi-Global Matching on the GPU. Procedia Computer Science 2016.
181 MeshStereo code 5.82 % 21.21 % 8.38 % 100.00 % 87 s 1 core @ 2.5 Ghz (C/C++)
C. Zhang, Z. Li, Y. Cheng, R. Cai, H. Chao and Y. Rui: MeshStereo: A Global Stereo Model With Mesh Alignment Regularization for View Interpolation. The IEEE International Conference on Computer Vision (ICCV) 2015.
182 PCOF + ACTF
This method uses optical flow information.
6.31 % 19.24 % 8.46 % 100.00 % 0.08 s GPU @ 2.0 Ghz (C/C++)
M. Derome, A. Plyer, M. Sanfourche and G. Le Besnerais: A Prediction-Correction Approach for Real-Time Optical Flow Computation Using Stereo. German Conference on Pattern Recognition 2016.
183 PCOF-LDOF
This method uses optical flow information.
6.31 % 19.24 % 8.46 % 100.00 % 50 s 1 core @ 3.0 Ghz (C/C++)
M. Derome, A. Plyer, M. Sanfourche and G. Le Besnerais: A Prediction-Correction Approach for Real-Time Optical Flow Computation Using Stereo. German Conference on Pattern Recognition 2016.
184 OASM-Net 6.89 % 19.42 % 8.98 % 100.00 % 0.73 s GPU @ 2.5 Ghz (Python)
A. Li and Z. Yuan: Occlusion Aware Stereo Matching via Cooperative Unsupervised Learning. Proceedings of the Asian Conference on Computer Vision, ACCV 2018.
185 ELAS_ROB code 7.38 % 21.15 % 9.67 % 100.00 % 0.19 s 4 cores @ >3.5 Ghz (C/C++)
A. Geiger, M. Roser and R. Urtasun: Efficient Large-Scale Stereo Matching. ACCV 2010.
186 ELAS code 7.86 % 19.04 % 9.72 % 92.35 % 0.3 s 1 core @ 2.5 Ghz (C/C++)
A. Geiger, M. Roser and R. Urtasun: Efficient Large-Scale Stereo Matching. ACCV 2010.
187 REAF code 8.43 % 18.51 % 10.11 % 100.00 % 1.1 s 1 core @ 2.5 Ghz (C/C++)
C. Cigla: Recursive Edge-Aware Filters for Stereo Matching. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2015.
188 iGF
This method makes use of multiple (>2) views.
8.64 % 21.85 % 10.84 % 100.00 % 220 s 1 core @ 3.0 Ghz (C/C++)
R. Hamzah, H. Ibrahim and A. Hassan: Stereo matching algorithm based on per pixel difference adjustment, iterative guided filter and graph segmentation. Journal of Visual Communication and Image Representation 2016.
189 OCV-SGBM code 8.92 % 20.59 % 10.86 % 90.41 % 1.1 s 1 core @ 2.5 Ghz (C/C++)
H. Hirschmueller: Stereo processing by semiglobal matching and mutual information. PAMI 2008.
190 TW-SMNet 11.92 % 12.16 % 11.96 % 100.00 % 0.7 s GPU @ 2.5 Ghz (Python)
191 SDM 9.41 % 24.75 % 11.96 % 62.56 % 1 min 1 core @ 2.5 Ghz (C/C++)
J. Kostkova: Stratified dense matching for stereopsis in complex scenes. BMVC 2003.
192 SGM&FlowFie+
This method uses optical flow information.
11.93 % 20.57 % 13.37 % 81.24 % 29 s 1 core @ 3.5 Ghz (C/C++)
R. Schuster, C. Bailer, O. Wasenmüller and D. Stricker: Combining Stereo Disparity and Optical Flow for Basic Scene Flow. Commercial Vehicle Technology Symposium (CVTS) 2018.
193 GCSF
This method uses optical flow information.
code 11.64 % 27.11 % 14.21 % 100.00 % 2.4 s 1 core @ 2.5 Ghz (C/C++)
J. Cech, J. Sanchez-Riera and R. Horaud: Scene Flow Estimation by growing Correspondence Seeds. CVPR 2011.
194 MT-TW-SMNet 15.47 % 16.25 % 15.60 % 100.00 % 0.4s GPU @ 2.5 Ghz (Python)
195 Mono-SF
This method uses optical flow information.
14.21 % 26.94 % 16.32 % 100.00 % 41 s 1 core @ 3.5 Ghz (Matlab + C/C++)
196 CostFilter code 17.53 % 22.88 % 18.42 % 100.00 % 4 min 1 core @ 2.5 Ghz (Matlab)
C. Rhemann, A. Hosni, M. Bleyer, C. Rother and M. Gelautz: Fast Cost-Volume Filtering for Visual Correspondence and Beyond. CVPR 2011.
197 DWBSF
This method uses optical flow information.
19.61 % 22.69 % 20.12 % 100.00 % 7 min 4 cores @ 3.5 Ghz (C/C++)
C. Richardt, H. Kim, L. Valgaerts and C. Theobalt: Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. 3DV 2016.
198 monoResMatch code 22.10 % 19.81 % 21.72 % 100.00 % 0.16 s Titan X GPU
F. Tosi, F. Aleotti, M. Poggi and S. Mattoccia: Learning monocular depth estimation infusing traditional stereo knowledge. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
199 OCV-BM code 24.29 % 30.13 % 25.27 % 58.54 % 0.1 s 1 core @ 2.5 Ghz (C/C++)
G. Bradski: The OpenCV Library. Dr. Dobb's Journal of Software Tools 2000.
200 VSF
This method uses optical flow information.
code 27.31 % 21.72 % 26.38 % 100.00 % 125 min 1 core @ 2.5 Ghz (C/C++)
F. Huguet and F. Devernay: A Variational Method for Scene Flow Estimation from Stereo Sequences. ICCV 2007.
201 SED code 25.01 % 40.43 % 27.58 % 4.02 % 0.68 s 1 core @ 2.0 Ghz (C/C++)
D. Peña and A. Sutherland: Disparity Estimation by Simultaneous Edge Drawing. Computer Vision – ACCV 2016 Workshops: ACCV 2016 International Workshops, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part II 2017.
202 MST code 45.83 % 38.22 % 44.57 % 100.00 % 7 s 1 core @ 2.5 Ghz (Matlab + C/C++)
Q. Yang: A Non-Local Cost Aggregation Method for Stereo Matching. CVPR 2012.
203 DispCC 97.45 % 99.68 % 97.82 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)




Related Datasets

  • HCI/Bosch Robust Vision Challenge: Optical flow and stereo vision challenge on high resolution imagery recorded at a high frame rate under diverse weather conditions (e.g., sunny, cloudy, rainy). The Robert Bosch AG provides a prize for the best performing method.
  • Image Sequence Analysis Test Site (EISATS): Synthetic image sequences with ground truth information provided by UoA and Daimler AG. Some of the images come with 3D range sensor information.
  • Middlebury Stereo Evaluation: The classic stereo evaluation benchmark, featuring four test images in version 2 of the benchmark, with very accurate ground truth from a structured light system. 38 image pairs are provided in total.
  • Daimler Stereo Dataset: Stereo bad weather highway scenes with partial ground truth for freespace
  • Make3D Range Image Data: Images with small-resolution ground truth used to learn and evaluate depth from single monocular images.
  • Lubor Ladicky's Stereo Dataset: Stereo Images with manually labeled ground truth based on polygonal areas.
  • Middlebury Optical Flow Evaluation: The classic optical flow evaluation benchmark, featuring eight test images, with very accurate ground truth from a shape from UV light pattern system. 24 image pairs are provided in total.

Citation

When using this dataset in your research, we will be happy if you cite us:
@ARTICLE{Menze2018JPRS,
  author = {Moritz Menze and Christian Heipke and Andreas Geiger},
  title = {Object Scene Flow},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing (JPRS)},
  year = {2018}
}
@INPROCEEDINGS{Menze2015ISA,
  author = {Moritz Menze and Christian Heipke and Andreas Geiger},
  title = {Joint 3D Estimation of Vehicles and Scene Flow},
  booktitle = {ISPRS Workshop on Image Sequence Analysis (ISA)},
  year = {2015}
}


