Method

Learning to Track with Object Permanence (PermaTrack)


Submitted on 18 Mar. 2021 05:14 by
Pavel Tokmakov (TRI)

Running time: 0.1 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
We introduce an end-to-end trainable approach for joint object
detection and tracking that can localize and associate objects
even through full occlusions. Our method is online, vision-based,
and uses no heuristic post-processing steps.
Parameters:
See manuscript.
Latex Bibtex:
@inproceedings{tokmakov2021learning,
  title={Learning to Track with Object Permanence},
  author={Tokmakov, Pavel and Li, Jie and Burgard, Wolfram and Gaidon, Adrien},
  booktitle={ICCV},
  year={2021}
}

Detailed Results

Over all 29 test sequences, our benchmark computes the HOTA tracking metrics (HOTA, DetA, AssA, DetRe, DetPr, AssRe, AssPr, LocA) [1] as well as the CLEAR MOT, MT/PT/ML, identity-switch, and fragmentation [2,3] metrics. The tables below show all of these metrics.


Benchmark    HOTA     DetA     AssA     DetRe    DetPr    AssRe    AssPr    LocA
CAR          78.03 %  78.29 %  78.41 %  81.71 %  86.54 %  81.14 %  89.49 %  87.10 %
PEDESTRIAN   48.63 %  52.28 %  45.61 %  57.40 %  71.03 %  49.63 %  73.28 %  78.57 %
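As a rough guide to how these columns relate, HOTA at a single localization threshold is the geometric mean of detection accuracy (DetA) and association accuracy (AssA); the benchmark score additionally averages over a range of thresholds, so plugging the averaged CAR-row values into the per-threshold formula lands close to, but not exactly at, the reported 78.03 %. A minimal sketch:

```python
import math

def hota_at_alpha(det_a: float, ass_a: float) -> float:
    """HOTA at one localization threshold: the geometric mean of
    detection accuracy (DetA) and association accuracy (AssA)."""
    return math.sqrt(det_a * ass_a)

# CAR row above: DetA = 78.29 %, AssA = 78.41 %
print(f"{100 * hota_at_alpha(0.7829, 0.7841):.2f} %")  # close to the reported 78.03 % HOTA
```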

Benchmark    TP     FP    FN
CAR          32072  2320  402
PEDESTRIAN   17192  5958  1514

Benchmark    MOTA     MOTP     MODA     IDSW  sMOTA
CAR          91.33 %  85.65 %  92.08 %  258   77.95 %
PEDESTRIAN   65.98 %  74.53 %  67.72 %  403   47.07 %
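MOTA folds misses, false positives, and identity switches into a single accuracy score. A minimal sketch of the standard CLEAR MOT formula, using illustrative counts (not the benchmark's exact bookkeeping, which can differ in normalization details):

```python
def mota(fn: int, fp: int, idsw: int, num_gt: int) -> float:
    """CLEAR MOT accuracy: 1 - (FN + FP + IDSW) / total ground-truth objects."""
    return 1.0 - (fn + fp + idsw) / num_gt

# Hypothetical counts for illustration only:
print(f"{100 * mota(fn=10, fp=5, idsw=2, num_gt=100):.2f} %")  # 83.00 %
```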

Benchmark    MT rate  PT rate  ML rate  FRAG
CAR          85.69 %  11.69 %  2.62 %   250
PEDESTRIAN   48.80 %  35.40 %  15.81 %  646
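The MT/PT/ML rates bucket ground-truth trajectories by the fraction of their frames the tracker covers; the conventional thresholds are 80 % and 20 %. A minimal sketch of that bucketing:

```python
def track_category(covered: float) -> str:
    """Bucket a ground-truth trajectory by the fraction of its
    frames the tracker covers (>= 80 %: MT, <= 20 %: ML, else PT)."""
    if covered >= 0.8:
        return "MT"  # mostly tracked
    if covered <= 0.2:
        return "ML"  # mostly lost
    return "PT"      # partially tracked

print(track_category(0.95), track_category(0.5), track_category(0.1))  # MT PT ML
```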

Benchmark    # Dets  # Tracks
CAR          32474   901
PEDESTRIAN   18706   672
