Method

mmMCL3DMOT: Multi-modal Momentum Contrastive Learning for 3D Multi-Object Tracking
[Anonymous Submission]

Submitted on 4 Dec. 2023 12:55 by
[Anonymous Submission]

Running time: 268 s
Environment: 1 core @ 2.5 GHz (C/C++)

Method Description:
We propose a novel approach, mmMCL3DMOT, which calculates object feature similarity by employing cross-modal momentum contrastive self-supervised learning.
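The submission itself includes no code, so the following is only a minimal sketch of the two ingredients the description names: a momentum-updated key encoder (MoCo-style) and a cross-modal affinity matrix built from feature similarity. All shapes, the momentum value, and the detection/track counts are illustrative assumptions, not details from the submission.

```python
import numpy as np

def momentum_update(theta_q, theta_k, m=0.999):
    """MoCo-style momentum update: the key encoder slowly tracks the query encoder."""
    return m * theta_k + (1.0 - m) * theta_q

def cosine_similarity(a, b):
    """Pairwise cosine similarity between two sets of feature vectors."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy example: embeddings for detections from one modality (e.g. image crops)
# and for existing tracks from the other (e.g. LiDAR); dimensions are made up.
rng = np.random.default_rng(0)
q = rng.normal(size=(3, 8))    # 3 detections, 8-dim embeddings (hypothetical)
k = rng.normal(size=(4, 8))    # 4 tracks, 8-dim embeddings (hypothetical)

affinity = cosine_similarity(q, k)   # similarity matrix used for association
assert affinity.shape == (3, 4)
```

In a full tracker, this affinity matrix would feed a data-association step (e.g. Hungarian matching with the thresholds listed under Parameters), but that wiring is not specified here.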
Parameters:
The association thresholds were set to -0.25, 0.75, 0.2, and 2.
Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark MOTA MOTP MODA MODP
CAR 87.31 % 87.68 % 87.37 % 90.22 %

Benchmark recall precision F1 TP FP FN FAR #objects #trajectories
CAR 89.59 % 98.96 % 94.04 % 34264 360 3983 3.24 % 37908 736
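The per-detection metrics in this table follow directly from the raw counts. A quick sanity check using the standard formulas (not KITTI's official evaluation script) reproduces the reported values:

```python
# Raw counts from the CAR row of the table above.
TP, FP, FN = 34264, 360, 3983

recall = TP / (TP + FN)                              # fraction of ground-truth objects found
precision = TP / (TP + FP)                           # fraction of reported objects that are correct
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two

print(f"recall    = {recall:.2%}")     # 89.59%
print(f"precision = {precision:.2%}")  # 98.96%
print(f"F1        = {f1:.2%}")         # 94.04%
```

Note that MOTA cannot be recomputed the same way from these aggregate counts alone, since KITTI evaluates per frame with ignore regions.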

Benchmark MT PT ML IDS FRAG
CAR 73.23 % 20.15 % 6.62 % 21 331



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
