Method

Virtual Sparse Convolution for Multimodal 3D Object Detection [VirConvTrack]


Submitted on 24 Aug. 2024 15:35 by
Mohamed Mostafa (Khalifa University)

Running time: 1 s
Environment: 1 core @ 2.5 GHz (C/C++)

Method Description:
This is a re-run of the method from the paper
"Virtual Sparse Convolution for Multimodal 3D
Object Detection", using the authors' public code.
Parameters:
N/A
LaTeX BibTeX:
@INPROCEEDINGS{10205191,
  author={Wu, Hai and Wen, Chenglu and Shi, Shaoshuai and Li, Xin and Wang, Cheng},
  booktitle={2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  title={Virtual Sparse Convolution for Multimodal 3D Object Detection},
  year={2023},
  pages={21653-21662},
  keywords={Computer vision;Three-dimensional displays;Laser radar;Image coding;Convolution;Fuses;Pipelines;Recognition: Categorization;detection;retrieval},
  doi={10.1109/CVPR52729.2023.02074}}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.
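For reference, the core CLEAR MOT metrics [1] are defined over all frames t as sketched below (standard definitions; the benchmark's own evaluation additionally filters don't-care regions, so its per-frame ground-truth totals cannot be recovered exactly from the aggregate counts in the tables):

\[
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDS}_t\right)}{\sum_t \mathrm{GT}_t},
\qquad
\mathrm{MOTP} = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t},
\]

where GT_t is the number of ground-truth objects in frame t, d_{i,t} the localization error of the i-th matched hypothesis, and c_t the number of matches in frame t. Following [2], a ground-truth trajectory counts as mostly tracked (MT) if it is covered for at least 80% of its length, mostly lost (ML) if for at most 20%, and partially tracked (PT) otherwise.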


Benchmark   MOTA      MOTP      MODA      MODP
CAR         90.60 %   86.92 %   90.94 %   89.63 %

Benchmark   recall    precision   F1        TP      FP    FN     FAR      #objects   #trajectories
CAR         93.67 %   98.34 %     95.95 %   36898   622   2495   5.59 %   41872      815

Benchmark   MT        PT       ML       IDS   FRAG
CAR         84.92 %   6.92 %   8.15 %   115   161
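As a sanity check, the detection-level scores in the second table follow directly from the reported TP/FP/FN counts via the standard definitions. A minimal sketch (the MOTA/MODA values additionally depend on per-frame ground-truth counts handled by the benchmark itself and are not reproduced here):

# Minimal sketch: recompute the CAR recall, precision, and F1 score
# from the TP/FP/FN counts reported in the table above.
TP, FP, FN = 36898, 622, 2495

recall    = TP / (TP + FN)                                 # matched ground-truth objects
precision = TP / (TP + FP)                                 # correct among all hypotheses
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"recall    = {recall:.2%}")     # -> 93.67%
print(f"precision = {precision:.2%}")  # -> 98.34%
print(f"F1        = {f1:.2%}")         # -> 95.95%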



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

