Method

Robust Multi-Modality Multi-Object Tracking [mmMOT]
https://github.com/ZwwWayne/mmMOT

Submitted on 21 Apr. 2019 11:58 by
Wenwei Zhang (Wuhan University)

Running time: 0.01 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
TBD
Parameters:
TBD
Latex Bibtex:
@InProceedings{mmMOT_2019_ICCV,
  author    = {Zhang, Wenwei and Zhou, Hui and Sun, Shuyang and Wang, Zhe and Shi, Jianping and Loy, Chen Change},
  title     = {Robust Multi-Modality Multi-Object Tracking},
  booktitle = {International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}

Detailed Results

Over all 29 test sequences, our benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark  MOTA     MOTP     MODA     MODP
CAR        84.77 %  85.21 %  85.60 %  88.28 %

Benchmark  recall   precision  F1       TP     FP   FN    FAR     #objects  #trajectories
CAR        88.81 %  97.93 %    93.15 %  33659  711  4243  6.39 %  38505     2152

Benchmark  MT       PT       ML      IDS  FRAG
CAR        73.23 %  24.00 %  2.77 %  284  753
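As a sanity check, the detection-level metrics above follow directly from the TP/FP/FN counts via the standard CLEAR MOT definitions [1]. The sketch below recomputes recall, precision, and F1 from the CAR row; note that the benchmark's reported MOTA additionally involves per-frame matching details (e.g. don't-care regions), so the naive MOTA aggregate shown here is only illustrative and will not reproduce the table value exactly.

```python
# Recompute detection metrics from the CAR row of the tables above,
# using the CLEAR MOT definitions [1].
TP, FP, FN, IDS = 33659, 711, 4243, 284
GT = TP + FN  # matched + missed ground-truth objects

recall = TP / (TP + FN)                              # 88.81 %
precision = TP / (TP + FP)                           # 97.93 %
f1 = 2 * precision * recall / (precision + recall)   # 93.15 %

# MOTA folds misses, false positives, and identity switches into one
# score; the benchmark's per-frame evaluation makes its reported value
# differ from this simple aggregate.
mota = 1.0 - (FN + FP + IDS) / GT

print(f"recall={recall:.2%} precision={precision:.2%} F1={f1:.2%}")
```

Running this confirms that recall, precision, and F1 in the second table are mutually consistent with the raw TP/FP/FN counts.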


[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
