Method

Robust Multi-Modality Multi-Object Tracking [mmMOT]
https://github.com/ZwwWayne/mmMOT

Submitted on 21 Apr. 2019 11:58 by
Wenwei Zhang (Wuhan University)

Running time: 0.02 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
TBD
Parameters:
TBD
Latex Bibtex:
@InProceedings{mmMOT_2019_ICCV,
  author = {Zhang, Wenwei and Zhou, Hui and Sun, Shuyang and Wang, Zhe and Shi, Jianping and Loy, Chen Change},
  title = {Robust Multi-Modality Multi-Object Tracking},
  booktitle = {International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2019}
}

Detailed Results

From all 29 test sequences, our benchmark computes the HOTA tracking metrics (HOTA, DetA, AssA, DetRe, DetPr, AssRe, AssPr, LocA) [1] as well as the CLEAR MOT, MT/PT/ML, identity switch, and fragmentation [2,3] metrics. The tables below show all of these metrics.


Benchmark   HOTA      DetA      AssA      DetRe     DetPr     AssRe     AssPr     LocA
CAR         62.05 %   72.29 %   54.02 %   76.17 %   84.89 %   58.98 %   82.40 %   86.58 %

Benchmark   TP        FP        FN
CAR         30108     4284      752

Benchmark   MOTA      MOTP      MODA      IDSW      sMOTA
CAR         83.23 %   85.03 %   85.36 %   733       70.12 %

Benchmark   MT rate   PT rate   ML rate   FRAG
CAR         72.92 %   24.15 %   2.92 %    570

Benchmark   # Dets    # Tracks
CAR         30860     1484
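For orientation, the headline scores above follow the standard metric definitions: at a single localization threshold, HOTA is the geometric mean of DetA and AssA [1] (the reported HOTA averages over thresholds, so the overall figure is only approximately sqrt(DetA * AssA)), while MOTA deducts false positives, false negatives, and identity switches from the ground-truth object count [2]. A minimal sketch of these relations, assuming textbook formulas rather than the official KITTI evaluation code (which additionally handles ignored regions and per-frame matching); the MOTA example counts are hypothetical:

```python
import math

# Textbook CLEAR MOT / HOTA relations [1,2]; NOT the official KITTI
# evaluation code, which also filters ignored regions, etc.

def mota(num_gt, fp, fn, idsw):
    # MOTA = 1 - (FP + FN + IDSW) / GT
    return 1.0 - (fp + fn + idsw) / num_gt

def hota_at_alpha(det_a, ass_a):
    # At one localization threshold alpha: HOTA_alpha = sqrt(DetA_alpha * AssA_alpha).
    return math.sqrt(det_a * ass_a)

# Hypothetical aggregate counts (not the table values above):
print(round(mota(num_gt=10000, fp=400, fn=300, idsw=50), 4))  # 0.925

# Plugging in the reported DetA and AssA gives a value close to,
# but not exactly, the reported threshold-averaged HOTA of 62.05 %:
print(round(hota_at_alpha(0.7229, 0.5402), 4))
```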
