Method

Towards Unified 3D Object Detection [MM-UniMODE]


Submitted on 30 Jan. 2024 08:50 by
Zhuoling Li (Tsinghua University)

Running time: 0.04 s
Environment: 1 core @ 2.5 GHz (Python)

Method Description:
We train a multi-modal detector named MM-UniMODE
on MM-Omni3D, a composite dataset we developed that
covers diverse scenes, and achieve promising results.
Parameters:
100.8M
LaTeX BibTeX:

Detailed Results

Object detection and orientation estimation results. Results for object detection are given in terms of average precision (AP) and results for joint object detection and orientation estimation are provided in terms of average orientation similarity (AOS).
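For context, AOS weights each detection's contribution by an orientation-similarity term, (1 + cos Δθ)/2 where Δθ is the yaw error, and then interpolates over recall in the same way as AP. The sketch below illustrates that computation; the function names and the 40-point recall grid are assumptions for illustration, not the official KITTI evaluation code.

    import numpy as np

    def orientation_similarity(pred_yaw, gt_yaw):
        # Per-detection term used by AOS: 1.0 for a perfect yaw estimate,
        # 0.0 when the prediction is off by pi radians.
        delta = np.asarray(pred_yaw, dtype=float) - np.asarray(gt_yaw, dtype=float)
        return (1.0 + np.cos(delta)) / 2.0

    def interpolated_average(recall, value, num_points=40):
        # Average of the interpolated curve over evenly spaced recall
        # thresholds (assumed 40-point grid, skipping recall = 0).
        recall = np.asarray(recall, dtype=float)
        value = np.asarray(value, dtype=float)
        thresholds = np.linspace(0.0, 1.0, num_points + 1)[1:]
        total = 0.0
        for t in thresholds:
            mask = recall >= t
            total += value[mask].max() if mask.any() else 0.0
        return total / num_points

Passing the precision-recall curve as `value` yields an interpolated AP; passing the per-recall average orientation similarity of matched detections (with unmatched detections counting as zero) yields an AOS-style score.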


Benchmark Easy Moderate Hard
Car (Detection) 98.78 % 97.69 % 94.62 %
Car (Orientation) 98.63 % 97.44 % 94.30 %
Car (3D Detection) 91.23 % 84.81 % 81.44 %
Car (Bird's Eye View) 95.69 % 91.51 % 88.71 %


2D object detection results. (figure)

Orientation estimation results. (figure)

3D object detection results. (figure)

Bird's eye view results. (figure)



