Method

Multi-Modal Object Tracking with Pareto Neural Architecture Search [PNAS-MOT]
https://github.com/PholyPeng/PNAS-MOT

Submitted on 23 Mar. 2024 04:56 by
Peter Peng (Shanghai Jiao Tong University)

Running time: 0.01 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
Neural architecture search for multiple object tracking.
Parameters:
\alpha = 0.2
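
The role of \alpha is not documented on this page; below is a minimal sketch, assuming \alpha weights a latency penalty against the tracking loss in a scalarized Pareto objective, in line with the paper's accuracy/latency trade-off. Everything except \alpha = 0.2 (the Candidate fields, the latency budget, the sample values) is an illustrative assumption, not the released PNAS-MOT code.

# Hypothetical sketch (not the released PNAS-MOT code): a scalarized
# Pareto objective trading tracking loss against inference latency.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    tracking_loss: float  # task loss of the candidate architecture (lower is better)
    latency_ms: float     # measured inference latency (lower is better)

def scalarized(c: Candidate, alpha: float = 0.2, budget_ms: float = 10.0) -> float:
    # alpha weights the normalized latency penalty against the tracking loss.
    return (1.0 - alpha) * c.tracking_loss + alpha * (c.latency_ms / budget_ms)

def pareto_front(cands: list) -> list:
    # Keep candidates that no other candidate dominates in both objectives.
    return [c for c in cands
            if not any(o.tracking_loss <= c.tracking_loss and o.latency_ms <= c.latency_ms
                       and (o.tracking_loss < c.tracking_loss or o.latency_ms < c.latency_ms)
                       for o in cands)]

candidates = [Candidate("a", 0.30, 12.0), Candidate("b", 0.35, 6.0), Candidate("c", 0.40, 15.0)]
best = min(pareto_front(candidates), key=scalarized)
print(best.name)  # picks the Pareto-front member with the best weighted score
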
LaTeX BibTeX:
@ARTICLE{peng2024pnasmot,
  author={Peng, Chensheng and Zeng, Zhaoyu and Gao, Jinling and Zhou, Jundong and Tomizuka, Masayoshi and Wang, Xinbing and Zhou, Chenghu and Ye, Nanyang},
  journal={IEEE Robotics and Automation Letters},
  title={PNAS-MOT: Multi-Modal Object Tracking With Pareto Neural Architecture Search},
  year={2024},
  volume={},
  number={},
  pages={1-8},
  doi={10.1109/LRA.2024.3379865}
}

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics; a short sanity-check sketch follows them.


Benchmark   MOTA      MOTP      MODA      MODP
CAR         90.42 %   85.62 %   92.02 %   88.33 %

Benchmark   recall    precision   F1        TP      FP    FN     FAR      #objects   #trajectories
CAR         94.31 %   98.59 %     96.40 %   36729   525   2218   4.72 %   47450      1914

Benchmark   MT        PT        ML       IDS   FRAG
CAR         86.77 %   10.92 %   2.31 %   552   762
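
As a sanity check, the detection-level figures above follow from the reported TP/FP/FN counts under the standard definitions; the sketch below recomputes them. The CLEAR MOT scores [1] are accumulated per frame by the benchmark, so the sequence-level counts here only approximate MOTA/MODA.

# Sanity-check sketch: recompute detection-level metrics from the reported counts.
TP, FP, FN, IDS = 36729, 525, 2218, 552
GT = TP + FN  # number of ground-truth objects

recall = TP / (TP + FN)                              # 0.9431 -> 94.31 %
precision = TP / (TP + FP)                           # 0.9859 -> 98.59 %
f1 = 2 * precision * recall / (precision + recall)   # 0.9640 -> 96.40 %

# CLEAR MOT accuracy [1]; the benchmark evaluates these per frame, so the
# aggregate counts only approximate the reported 92.02 % / 90.42 %.
moda = 1 - (FN + FP) / GT        # ~0.930
mota = 1 - (FN + FP + IDS) / GT  # ~0.915

print(f"recall={recall:.2%}  precision={precision:.2%}  F1={f1:.2%}")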



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

