3D Object Detection Evaluation 2017


The 3D object detection benchmark consists of 7,481 training images and 7,518 test images, together with the corresponding point clouds, comprising a total of 80,256 labeled objects. For evaluation, we compute precision-recall curves and rank methods by average precision. We require that all methods use the same parameter set across the entire test set. Our development kit provides details about the data format as well as MATLAB / C++ utility functions for reading and writing the label files.
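The label files described in the development kit can be read with a few lines of code. Below is a minimal Python sketch of a reader; the field names are our own, and the example line is illustrative rather than taken from the dataset.

```python
def parse_kitti_label_line(line):
    """Parse one object (one line) of a KITTI label file into a dict."""
    f = line.split()
    return {
        "type": f[0],                               # e.g. 'Car', 'Pedestrian', 'DontCare'
        "truncated": float(f[1]),                   # fraction truncated, 0.0 .. 1.0
        "occluded": int(f[2]),                      # 0 = fully visible .. 3 = unknown
        "alpha": float(f[3]),                       # observation angle [-pi, pi]
        "bbox": [float(x) for x in f[4:8]],         # 2D box: left, top, right, bottom (px)
        "dimensions": [float(x) for x in f[8:11]],  # 3D size: height, width, length (m)
        "location": [float(x) for x in f[11:14]],   # 3D position in camera coords (m)
        "rotation_y": float(f[14]),                 # yaw around camera Y axis [-pi, pi]
    }

# Illustrative label line (values made up for this example):
obj = parse_kitti_label_line(
    "Car 0.00 0 1.85 387.63 181.54 423.81 203.12 1.67 1.87 3.69 -16.53 2.39 58.49 1.57"
)
```

A full reader would simply apply this to every line of a `label_2/XXXXXX.txt` file; see the devkit's readme for the authoritative field definitions.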

We evaluate 3D object detection performance using the PASCAL criteria also used for 2D object detection; far objects are thus filtered based on their bounding box height in the image plane. Since only objects that also appear in the image plane are labeled, objects in "don't care" areas do not count as false positives. Note that the evaluation does not discard detections that are not visible in the image plane, so such detections may give rise to false positives. For cars we require a 3D bounding box overlap of 70%, while for pedestrians and cyclists we require a 3D bounding box overlap of 50%. Difficulties are defined as follows:
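As a concrete illustration of the overlap criterion, the sketch below computes 3D intersection-over-union for axis-aligned boxes. The benchmark itself evaluates oriented (rotated) 3D boxes, so this is a simplification, and the function name is our own.

```python
def iou_3d_axis_aligned(a, b):
    """3D IoU of two axis-aligned boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    dx = min(a[3], b[3]) - max(a[0], b[0])  # overlap extent along x
    dy = min(a[4], b[4]) - max(a[1], b[1])  # overlap extent along y
    dz = min(a[5], b[5]) - max(a[2], b[2])  # overlap extent along z
    if dx <= 0 or dy <= 0 or dz <= 0:
        return 0.0  # boxes do not intersect
    inter = dx * dy * dz
    vol = lambda c: (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])
    return inter / (vol(a) + vol(b) - inter)

# Two 2x2x2 boxes shifted by half their width share 1/3 of their union:
iou = iou_3d_axis_aligned((0, 0, 0, 2, 2, 2), (1, 0, 0, 3, 2, 2))  # ≈ 0.333
```

Under the criterion above, such a pair would not count as a matched car detection, since cars require an (oriented) overlap of at least 0.70.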

  • Easy: min. bounding box height 40 px, max. occlusion level: fully visible, max. truncation: 15 %
  • Moderate: min. bounding box height 25 px, max. occlusion level: partly occluded, max. truncation: 30 %
  • Hard: min. bounding box height 25 px, max. occlusion level: difficult to see, max. truncation: 50 %
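The three thresholds above can be sketched as a small classifier over the label fields. Occlusion is compared against the label's integer code (0 = fully visible, 1 = partly occluded, 2 = largely occluded / difficult to see); the function name and the "ignored" bucket are our own simplification, and in the actual evaluation easier objects also count toward the harder regimes.

```python
def difficulty(bbox_height_px, occlusion_code, truncation):
    """Return the easiest difficulty bucket an object still satisfies."""
    if bbox_height_px >= 40 and occlusion_code <= 0 and truncation <= 0.15:
        return "easy"
    if bbox_height_px >= 25 and occlusion_code <= 1 and truncation <= 0.30:
        return "moderate"
    if bbox_height_px >= 25 and occlusion_code <= 2 and truncation <= 0.50:
        return "hard"
    return "ignored"  # too small, occluded, or truncated to be evaluated
```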

All methods are ranked based on the moderately difficult results.
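The average precision used for ranking is computed from the precision-recall curve by sampling interpolated precision at fixed recall positions. A minimal 11-point PASCAL-style sketch is shown below; the devkit's exact recall sampling may differ.

```python
def average_precision(recalls, precisions):
    """11-point interpolated AP; recalls must be sorted in ascending order."""
    ap = 0.0
    for i in range(11):
        r = i / 10.0
        # Interpolated precision: best precision at any recall >= r.
        p = max((prec for rec, prec in zip(recalls, precisions) if rec >= r),
                default=0.0)
        ap += p / 11.0
    return ap

# A detector that holds precision 1.0 up to full recall scores AP = 1.0:
ap = average_precision([0.0, 0.5, 1.0], [1.0, 1.0, 1.0])
```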

Important Policy Update: As more and more non-published work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms or student research projects are not allowed; such work must be evaluated on a split of the training set instead. To ensure that our policy is adopted, new users must detail their status, describe their work, and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are six months old but still anonymous or without an associated paper. For conferences, six months are usually sufficient to determine whether a paper has been accepted and to add the bibliography information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Stereo: Method uses left and right (stereo) images
  • Flow: Method uses optical flow (2 temporally adjacent images)
  • Multiview: Method uses more than 2 temporally adjacent images
  • Laser Points: Method uses point clouds from a Velodyne laser scanner
  • Additional training data: Use of additional data sources for training (see details)

Car


Ranked by the Moderate score; each entry lists AP (Moderate / Easy / Hard), runtime, and environment. [L] = method uses Velodyne laser scans; [code] = source code available.

 1. AILabs3D [L]: 73.70 / 83.32 / 65.77 %, 0.6 s, GPU @ >3.5 GHz (Python)
 2. SECOND [L]: 73.66 / 83.13 / 66.20 %, 0.05 s, GPU @ 3.1 GHz (Python)
 3. KazuaNet [L] [code]: 73.04 / 83.71 / 59.16 %, 0.1 s, GPU @ >3.5 GHz (Python + C/C++)
 4. MDC [L]: 72.67 / 82.07 / 64.60 %, 0.2 s, Volta V100
 5. AVOD-FPN [L] [code]: 71.88 / 81.94 / 66.38 %, 0.1 s, Titan X (Pascal)
    J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.
 6. CONV-BOX [L]: 70.47 / 79.98 / 64.49 %, 0.2 s, Tesla V100
 7. F-PointNet [L] [code]: 70.39 / 81.20 / 62.19 %, 0.17 s, GPU @ 3.0 GHz (Python)
    C. Qi, W. Liu, C. Wu, H. Su and L. Guibas: Frustum PointNets for 3D Object Detection from RGB-D Data. arXiv preprint arXiv:1711.08488, 2017.
 8. D3D [L]: 67.90 / 83.51 / 59.59 %, 0.4 s, 1 core @ 3.5 GHz (Python)
 9. SCANet: 66.30 / 76.09 / 58.68 %, 0.09 s, GPU @ 2.5 GHz (Python)
10. UberATG-ContFuse [L]: 66.22 / 82.54 / 64.04 %, 0.06 s, GPU @ 2.5 GHz (Python)
    M. Liang, B. Yang, S. Wang and R. Urtasun: Deep Continuous Fusion for Multi-Sensor 3D Object Detection. ECCV 2018.
11. AVOD [L] [code]: 65.78 / 73.59 / 58.38 %, 0.08 s, Titan X (Pascal)
    J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.
12. VSE: 65.75 / 75.98 / 58.44 %, 0.15 s, GPU @ 2.5 GHz (Python)
13. LTT [L]: 65.42 / 74.07 / 59.68 %, 0.4 s, 1 core @ 3.5 GHz (Python)
14. FNV1_RPN: 65.18 / 74.61 / 57.75 %, 0.12 s, 1 core @ 2.5 GHz (Python + C/C++)
15. FNV1_Fusion: 65.07 / 74.78 / 57.74 %, 0.11 s, GPU @ 2.5 GHz (Python)
16. SECA: 64.59 / 73.70 / 57.21 %, 0.09 s, GPU @ 2.5 GHz (Python)
17. Kiwoo: 64.21 / 75.62 / 57.19 %, 0.1 s, 1 core @ 2.5 GHz (Python + C/C++)
18. AVOD-SSD [L] [code]: 63.87 / 73.64 / 56.90 %, 0.09 s, GPU @ 2.5 GHz (Python)
19. MV3D [L]: 62.35 / 71.09 / 55.12 %, 0.36 s, GPU @ 2.5 GHz (Python + C/C++)
    X. Chen, H. Ma, J. Wan, B. Li and T. Xia: Multi-View 3D Object Detection Network for Autonomous Driving. CVPR 2017.
20. T2Method: 62.08 / 74.36 / 55.14 %, 0.05 s, GPU @ 2.5 GHz (Python + C/C++)
21. FNV1: 61.69 / 71.93 / 55.41 %, 0.11 s, GPU @ 2.5 GHz (Python)
22. FNV2: 59.26 / 67.67 / 51.97 %, 0.18 s, GPU @ 2.5 GHz (Python)
23. CLF3D [L]: 58.48 / 65.54 / 46.54 %, 0.13 s, GPU @ 2.5 GHz (Python)
24. A3DODWTDA [L] [code]: 56.81 / 59.35 / 50.51 %, 0.08 s, GPU @ 3.0 GHz (Python)
    F. Gustafsson and E. Linder-Norén: Automotive 3D Object Detection Without Target Domain Annotations. 2018.
25. anm: 56.76 / 68.02 / 49.39 %, 3 s, 1 core @ 2.5 GHz (C/C++)
26. avodC: 55.47 / 65.71 / 48.74 %, 0.1 s, GPU @ 2.5 GHz (Python)
27. tester: 53.20 / 65.56 / 48.59 %, 0.1 s
28. MV3D (LIDAR) [L]: 52.73 / 66.77 / 51.31 %, 0.24 s, GPU @ 2.5 GHz (Python + C/C++)
    X. Chen, H. Ma, J. Wan, B. Li and T. Xia: Multi-View 3D Object Detection Network for Autonomous Driving. CVPR 2017.
29. NLK: 47.82 / 55.84 / 43.93 %, 0.05 s, 1 core @ 2.5 GHz (Python + C/C++)
30. Roadstar.ai: 44.00 / 48.60 / 40.05 %, 0.08 s, GPU @ 2.0 GHz (Python)
31. VoxelNet basic [L]: 24.35 / 29.70 / 23.52 %, 0.07 s, GPU (Python)
32. RT3D [L]: 21.27 / 23.49 / 19.81 %, 0.09 s, GPU @ 1.8 GHz
33. BirdNet [L]: 13.44 / 14.75 / 12.04 %, 0.11 s, Titan Xp GPU
    J. Beltran, C. Guindel, F. Moreno, D. Cruzado, F. Garcia and A. Escalera: BirdNet: a 3D Object Detection Framework from LiDAR information. arXiv preprint arXiv:1805.01195, 2018.
34. Licar [L]: 12.88 / 16.25 / 13.67 %, 0.09 s, GPU @ 2.0 GHz (Python)
35. TopNet-HighRes [L]: 12.58 / 15.29 / 12.25 %, 0.27 s, NVIDIA GeForce 1080 Ti (tensorflow-gpu)
    S. Wirges, T. Fischer, J. Frias and C. Stiller: Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks. 2018.
36. TopNet-DecayRate [L]: 12.36 / 16.59 / 11.71 %, 92 ms, NVIDIA GeForce 1080 Ti (tensorflow-gpu)
    S. Wirges, T. Fischer, J. Frias and C. Stiller: Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks. 2018.
37. SAITv1: 11.01 / 12.92 / 10.45 %, 0.18 s, GPU @ 2.5 GHz (Python, C/C++)
38. DT3D: 9.92 / 15.37 / 9.26 %, 0.21 s, GPU @ 2.5 GHz (Python)
39. M3D: 7.81 / 10.25 / 6.54 %, 0.4 s, GPU @ 2.5 GHz (Python + C/C++)
40. CSoR [L]: 6.79 / 6.76 / 6.14 %, 3.5 s, 4 cores @ >3.5 GHz (Python + C/C++)
    L. Plotkin: PyDriver: Entwicklung eines Frameworks für räumliche Detektion und Klassifikation von Objekten in Fahrzeugumgebung [Development of a framework for spatial detection and classification of objects in a vehicle's environment]. 2015.
41. A3DODWTDA (image) [code]: 6.45 / 6.76 / 4.87 %, 0.8 s, GPU @ 3.0 GHz (Python)
    F. Gustafsson and E. Linder-Norén: Automotive 3D Object Detection Without Target Domain Annotations. 2018.
42. VS3D: 6.29 / 7.69 / 6.16 %, 0.58 s, GPU @ 2.5 GHz (C/C++)
43. 3D-SSMFCNN [code]: 2.28 / 2.39 / 1.52 %, 0.1 s, GPU @ 1.5 GHz (C/C++)
    L. Novak: Vehicle Detection and Pose Estimation for Autonomous Driving. 2017.
44. 3DVSSD: 1.14 / 1.38 / 1.27 %, 0.06 s, 1 core @ 2.5 GHz (C/C++)
45. mBoW [L]: 0.00 / 0.00 / 0.00 %, 10 s, 1 core @ 2.5 GHz (C/C++)
    J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.

Pedestrian


Ranked by the Moderate score; each entry lists AP (Moderate / Easy / Hard), runtime, and environment. [L] = method uses Velodyne laser scans; [code] = source code available.

 1. F-PointNet [L] [code]: 44.89 / 51.21 / 40.23 %, 0.17 s, GPU @ 3.0 GHz (Python)
    C. Qi, W. Liu, C. Wu, H. Su and L. Guibas: Frustum PointNets for 3D Object Detection from RGB-D Data. arXiv preprint arXiv:1711.08488, 2017.
 2. AVOD-FPN [L] [code]: 42.81 / 50.80 / 40.88 %, 0.1 s, Titan X (Pascal)
    J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.
 3. SECOND [L]: 42.56 / 51.07 / 37.29 %, 0.05 s, GPU @ 3.1 GHz (Python)
 4. MDC [L]: 42.54 / 50.79 / 36.56 %, 0.2 s, Volta V100
 5. CONV-BOX [L]: 41.01 / 47.74 / 35.98 %, 0.2 s, Tesla V100
 6. anm: 34.71 / 45.89 / 32.43 %, 3 s, 1 core @ 2.5 GHz (C/C++)
 7. CLF3D [L]: 31.65 / 35.85 / 26.94 %, 0.13 s, GPU @ 2.5 GHz (Python)
 8. AVOD [L] [code]: 31.51 / 38.28 / 26.98 %, 0.08 s, Titan X (Pascal)
    J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.
 9. anonymous [L]: 31.30 / 38.00 / 28.77 %, 0.75 s, GPU @ 3.5 GHz (Python)
10. Roadstar.ai: 23.28 / 24.61 / 21.97 %, 0.08 s, GPU @ 2.0 GHz (Python)
11. NLK: 14.51 / 16.80 / 13.43 %, 0.05 s, 1 core @ 2.5 GHz (Python + C/C++)
12. BirdNet [L]: 11.80 / 14.31 / 10.55 %, 0.11 s, Titan Xp GPU
    J. Beltran, C. Guindel, F. Moreno, D. Cruzado, F. Garcia and A. Escalera: BirdNet: a 3D Object Detection Framework from LiDAR information. arXiv preprint arXiv:1805.01195, 2018.
13. TopNet-DecayRate [L]: 10.95 / 11.46 / 9.09 %, 92 ms, NVIDIA GeForce 1080 Ti (tensorflow-gpu)
    S. Wirges, T. Fischer, J. Frias and C. Stiller: Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks. 2018.
14. TopNet-HighRes [L]: 9.66 / 13.45 / 9.64 %, 0.27 s, NVIDIA GeForce 1080 Ti (tensorflow-gpu)
    S. Wirges, T. Fischer, J. Frias and C. Stiller: Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks. 2018.
15. DT3D: 1.14 / 1.14 / 1.14 %, 0.21 s, GPU @ 2.5 GHz (Python)
16. mBoW [L]: 0.00 / 0.00 / 0.00 %, 10 s, 1 core @ 2.5 GHz (C/C++)
    J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.

Cyclist


Ranked by the Moderate score; each entry lists AP (Moderate / Easy / Hard), runtime, and environment. [L] = method uses Velodyne laser scans; [code] = source code available.

 1. MDC [L]: 57.27 / 75.27 / 49.75 %, 0.2 s, Volta V100
 2. F-PointNet [L] [code]: 56.77 / 71.96 / 50.39 %, 0.17 s, GPU @ 3.0 GHz (Python)
    C. Qi, W. Liu, C. Wu, H. Su and L. Guibas: Frustum PointNets for 3D Object Detection from RGB-D Data. arXiv preprint arXiv:1711.08488, 2017.
 3. CONV-BOX [L]: 54.45 / 68.27 / 52.26 %, 0.2 s, Tesla V100
 4. SECOND [L]: 53.85 / 70.51 / 46.90 %, 0.05 s, GPU @ 3.1 GHz (Python)
 5. AVOD-FPN [L] [code]: 52.18 / 64.00 / 46.61 %, 0.1 s, Titan X (Pascal)
    J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.
 6. Roadstar.ai: 45.51 / 55.85 / 41.48 %, 0.08 s, GPU @ 2.0 GHz (Python)
 7. AVOD [L] [code]: 44.90 / 60.11 / 38.80 %, 0.08 s, Titan X (Pascal)
    J. Ku, M. Mozifian, J. Lee, A. Harakeh and S. Waslander: Joint 3D Proposal Generation and Object Detection from View Aggregation. IROS 2018.
 8. anm: 35.86 / 50.06 / 31.11 %, 3 s, 1 core @ 2.5 GHz (C/C++)
 9. CLF3D [L]: 35.39 / 50.58 / 33.55 %, 0.13 s, GPU @ 2.5 GHz (Python)
10. NLK: 34.74 / 44.10 / 32.30 %, 0.05 s, 1 core @ 2.5 GHz (Python + C/C++)
11. BirdNet [L]: 12.43 / 18.35 / 11.88 %, 0.11 s, Titan Xp GPU
    J. Beltran, C. Guindel, F. Moreno, D. Cruzado, F. Garcia and A. Escalera: BirdNet: a 3D Object Detection Framework from LiDAR information. arXiv preprint arXiv:1805.01195, 2018.
12. TopNet-DecayRate [L]: 9.09 / 10.54 / 9.09 %, 92 ms, NVIDIA GeForce 1080 Ti (tensorflow-gpu)
    S. Wirges, T. Fischer, J. Frias and C. Stiller: Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks. 2018.
13. TopNet-HighRes [L]: 5.98 / 4.48 / 6.18 %, 0.27 s, NVIDIA GeForce 1080 Ti (tensorflow-gpu)
    S. Wirges, T. Fischer, J. Frias and C. Stiller: Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks. 2018.
14. DT3D: 1.20 / 1.76 / 1.26 %, 0.21 s, GPU @ 2.5 GHz (Python)
15. mBoW [L]: 0.00 / 0.00 / 0.00 %, 10 s, 1 core @ 2.5 GHz (C/C++)
    J. Behley, V. Steinhage and A. Cremers: Laser-based Segment Classification Using a Mixture of Bag-of-Words. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.


Citation

When using this dataset in your research, we will be happy if you cite us:
@INPROCEEDINGS{Geiger2012CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}


