Method

LumiNet [LumiNet]
https://github.com/faziii0/LumiNet

Submitted on 23 Oct. 2024 08:29 by
Fazal Ghaffar (Deakin University)

Running time: 0.1 s
Environment: 1 core @ 2.5 GHz (Python)

Method Description:
This work combines LiDAR point clouds, RGB images, and depth images, which provide complementary information for 3D object detection. These modalities offer crucial cues for reliable 3D object detection in a range of applications, particularly Autonomous Vehicles (AVs). Our proposed framework, denoted LumiNet (LiDAR point clouds, RGB, and depth images), uses a sensor-fusion approach to predict oriented 3D bounding boxes from all three modalities. A fusion module integrates semantic information from the RGB image into the point features in a point-wise manner. In view of the importance of depth as a transitional representation for activity recognition in real environments, we employ depth features to enhance both the RGB and LiDAR features. Scene understanding in autonomous driving depends on accurate depth estimates from LiDAR and images; although useful, depth estimation remains an extremely difficult and unconstrained task.
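
Below is a minimal sketch of the point-wise fusion idea described above, assuming a PyTorch implementation. All names (project_points, PointImageFusion), feature dimensions, and the calibration convention are illustrative assumptions, not taken from the LumiNet repository: LiDAR points are projected into the image plane, RGB and depth feature maps are bilinearly sampled at those projections, and the sampled features are concatenated with the point features and mixed by a small MLP.

import torch
import torch.nn as nn
import torch.nn.functional as F

def project_points(points, calib):
    # points: (N, 3) LiDAR coordinates; calib: (3, 4) camera projection matrix.
    homo = torch.cat([points, torch.ones_like(points[:, :1])], dim=1)  # (N, 4)
    uvw = homo @ calib.T                                               # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                                    # (N, 2) pixel coords

class PointImageFusion(nn.Module):
    # Hypothetical fusion module: per-point LiDAR features are augmented with
    # RGB and depth features sampled at each point's image projection.
    def __init__(self, point_dim=64, img_dim=32, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + 2 * img_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, point_feats, points, rgb_feats, depth_feats, calib, img_hw):
        # point_feats: (N, point_dim); rgb_feats, depth_feats: (1, img_dim, H, W).
        h, w = img_hw
        uv = project_points(points, calib)
        # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
        gx = uv[:, 0] / (w - 1) * 2 - 1
        gy = uv[:, 1] / (h - 1) * 2 - 1
        grid = torch.stack([gx, gy], dim=1).view(1, -1, 1, 2)
        rgb = F.grid_sample(rgb_feats, grid, align_corners=True)    # (1, C, N, 1)
        dep = F.grid_sample(depth_feats, grid, align_corners=True)  # (1, C, N, 1)
        rgb = rgb.squeeze(0).squeeze(-1).T                          # (N, img_dim)
        dep = dep.squeeze(0).squeeze(-1).T                          # (N, img_dim)
        return self.mlp(torch.cat([point_feats, rgb, dep], dim=1))  # (N, out_dim)

Points projecting outside the image receive zero image features (grid_sample's default zero padding), which is one common way of handling out-of-view points; masking them out entirely is an equally valid design choice.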
Parameters:
None
LaTeX BibTeX:

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).
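
For reference, both metrics follow the KITTI evaluation protocol: values are averaged over a fixed set of recall points, taking at each point the best value achieved at that recall or higher. For AP the value is precision; for AOS each matched detection contributes an orientation similarity of (1 + cos Δθ) / 2 instead of 1, so AOS is upper-bounded by AP. The snippet below illustrates only the averaging step and is a simplified sketch, not the official evaluation code (which uses 11 or 40 recall points depending on the protocol version).

import numpy as np

def interpolated_average(values, recalls, num_points=40):
    # values: metric per detection threshold (precision for AP, orientation-
    # weighted precision for AOS); recalls: corresponding recall per threshold.
    values, recalls = np.asarray(values), np.asarray(recalls)
    recall_points = np.linspace(0.0, 1.0, num_points + 1)[1:]  # 1/N, ..., 1
    total = 0.0
    for r in recall_points:
        achieved = values[recalls >= r]
        total += achieved.max() if achieved.size else 0.0
    return total / num_points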


Benchmark              Easy     Moderate  Hard
Car (Detection)        99.23 %  96.27 %   88.94 %
Car (Orientation)      99.09 %  95.87 %   88.47 %
Car (3D Detection)     91.76 %  83.32 %   78.29 %
Car (Bird's Eye View)  95.79 %  90.13 %   85.06 %


[Figure: 2D object detection results]
[Figure: Orientation estimation results]
[Figure: 3D object detection results]
[Figure: Bird's eye view results]



