Method

IDA-3D: Instance-Depth-Aware 3D Object Detection from Stereo Vision for Autonomous Driving [st] [IDA-3D]


Submitted on 12 Feb. 2020 02:11 by
Wanli Peng (Dalian University of Technology)

Running time: 0.08 s
Environment: 1 core @ 2.5 GHz (Python + C/C++)

Method Description:
We propose a 3D object detection approach from
stereo vision that relies on no LiDAR data, either
as input or as supervision during training, but
solely takes RGB images with corresponding
annotated 3D bounding boxes as training data. Since
the depth estimation of objects is the key factor
affecting the performance of 3D object detection,
we introduce an Instance-Depth-Aware (IDA) module
which accurately predicts the depth of the 3D
bounding box's center via instance-depth awareness,
disparity adaptation and matching cost reweighting.
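
The description above is brief, so the following is a minimal, hypothetical sketch (not the authors' released code) of one way an instance-level cost volume could be reduced to a single center depth: candidate bins are sampled uniformly in depth rather than disparity (one reading of the "disparity adaptation" idea), and a soft-argmin over the matching costs yields the expected depth. The tensor shapes, the depth range and the function name `expected_instance_depth` are assumptions made purely for illustration.

import torch
import torch.nn.functional as F

def expected_instance_depth(cost_volume: torch.Tensor,
                            z_min: float = 2.0,
                            z_max: float = 60.0) -> torch.Tensor:
    """Estimate the depth of each instance's 3D box center.

    cost_volume: (N, D) matching costs per instance and candidate bin
                 (lower cost = better match).
    Bins are sampled uniformly in depth, not disparity; uniform disparity
    steps would oversample near ranges and undersample far ones.
    """
    _, n_bins = cost_volume.shape
    # Candidate depths, uniform in depth space (illustrative range).
    z_bins = torch.linspace(z_min, z_max, n_bins)          # (D,)
    # Softmax over negated costs -> probability of each depth bin.
    prob = F.softmax(-cost_volume, dim=1)                   # (N, D)
    # Expected depth per instance (soft-argmin over the cost volume).
    return (prob * z_bins.unsqueeze(0)).sum(dim=1)          # (N,)

if __name__ == "__main__":
    # Toy usage: 2 instances, 24 candidate depth bins.
    costs = torch.rand(2, 24)
    print(expected_instance_depth(costs))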
Parameters:
TBD
Latex Bibtex:

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).
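
As a hedged illustration of the orientation metric: AOS scores each matched detection with the cosine similarity (1 + cos Δθ) / 2 between ground-truth and estimated orientation and averages it over detections at sampled recall levels. The helper below shows only that per-detection term; the function name is our own and not part of the official evaluation code.

import numpy as np

def orientation_similarity(theta_gt: np.ndarray, theta_det: np.ndarray) -> np.ndarray:
    """Per-detection orientation similarity used by AOS: (1 + cos(dtheta)) / 2.
    Returns 1.0 for a perfect orientation match and 0.0 for a 180-degree error."""
    return (1.0 + np.cos(theta_gt - theta_det)) / 2.0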


Benchmark Easy Moderate Hard
Car (Detection) 92.79 % 84.92 % 74.75 %
Car (Orientation) 92.63 % 84.32 % 73.98 %
Car (3D Detection) 45.09 % 29.32 % 23.13 %
Car (Bird's Eye View) 61.87 % 42.47 % 34.59 %


2D object detection results.

Orientation estimation results.

3D object detection results.

Bird's eye view results.



