Across all 29 test sequences, our benchmark computes the commonly used tracking metrics, adapted to the segmentation case: the CLEAR MOT metrics, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below report all of these metrics for the CAR and PEDESTRIAN classes.
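As a reminder, the mask-based CLEAR MOT scores in the first table follow the usual MOTS conventions. The formulas below are a sketch under the assumption that TP, FP, and IDS are counted via mask-IoU matching and that \(\widetilde{TP}\) denotes the summed IoU of all true positives over the \(|M|\) ground-truth masks:

\[
\mathrm{MOTSA} = \frac{|TP| - |FP| - |IDS|}{|M|}, \qquad
\mathrm{MOTSP} = \frac{\widetilde{TP}}{|TP|}, \qquad
\mathrm{sMOTSA} = \frac{\widetilde{TP} - |FP| - |IDS|}{|M|}
\]

MODSA and MODSP are the corresponding detection-only scores, which do not penalize identity switches.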
| Benchmark   | sMOTSA  | MOTSA   | MOTSP   | MODSA   | MODSP   |
|-------------|---------|---------|---------|---------|---------|
| CAR         | 74.50 % | 83.50 % | 89.60 % | 84.80 % | 92.10 % |
| PEDESTRIAN  | 58.10 % | 72.00 % | 81.50 % | 73.30 % | 94.10 % |
| Benchmark   | recall  | precision | F1      | TP    | FP  | FN   | FAR    | #objects | #trajectories |
|-------------|---------|-----------|---------|-------|-----|------|--------|----------|---------------|
| CAR         | 86.50 % | 98.10 %   | 91.90 % | 31792 | 629 | 4968 | 5.70 % | 37505    | 1229          |
| PEDESTRIAN  | 75.60 % | 97.10 %   | 85.00 % | 15640 | 459 | 5057 | 4.10 % | 18753    | 704           |
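The detection-level columns in this table are tied together by the standard precision/recall identities. A minimal sketch in plain Python (an illustrative helper, not the official evaluation code) reproduces the CAR row from its raw counts:

```python
def detection_scores(tp: int, fp: int, fn: int) -> dict:
    """Standard detection metrics from raw true/false positive and false negative counts."""
    recall = tp / (tp + fn)        # share of ground-truth masks that were found
    precision = tp / (tp + fp)     # share of predicted masks that were correct
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision, "F1": f1}

# CAR row from the table above: TP = 31792, FP = 629, FN = 4968
print(detection_scores(31792, 629, 4968))
# -> recall ≈ 0.865, precision ≈ 0.981, F1 ≈ 0.919
```

FAR normalizes the false positives by the number of test frames, which is not listed in the table, so it is left out of the sketch.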
| Benchmark   | MT      | PT      | ML      | IDS | FRAG |
|-------------|---------|---------|---------|-----|------|
| CAR         | 67.10 % | 29.40 % | 3.50 %  | 457 | 811  |
| PEDESTRIAN  | 43.30 % | 43.00 % | 13.70 % | 270 | 633  |
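MT, PT, and ML classify each ground-truth trajectory by how much of its lifespan is covered by some hypothesis; following the common convention of [2], trajectories covered for at least 80 % of their frames count as mostly tracked and those covered for at most 20 % as mostly lost. A minimal sketch of that classification (illustrative helper, not the official evaluation code):

```python
def trajectory_category(covered: int, length: int,
                        mt_thresh: float = 0.8, ml_thresh: float = 0.2) -> str:
    """Label a ground-truth trajectory as mostly tracked (MT), partially
    tracked (PT), or mostly lost (ML) by its temporal coverage."""
    coverage = covered / length
    if coverage >= mt_thresh:
        return "MT"
    if coverage <= ml_thresh:
        return "ML"
    return "PT"

# A pedestrian visible for 100 frames and matched in 85 of them is mostly tracked:
print(trajectory_category(85, 100))  # -> "MT"
```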
[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP JIVP, 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to Associate: HybridBoosted Multi-Target Tracker for Crowded Scene. CVPR 2009.