Method

You Don't Only Look Once [UDOLO]
[Anonymous Submission]

Submitted on 16 Jul. 2020 17:09 by
[Anonymous Submission]

Running time: 0.15 s
Environment: GPU @ 2.5 GHz (Python + C/C++)

Method Description:
You Don't Only Look Once: Constructing Spatial-Temporal Memory
for Integrated 3D Object Detection and Tracking
Parameters:
As described in the paper.
LaTeX BibTeX:

Detailed Results

Over all 29 test sequences, our benchmark computes the commonly used tracking metrics: the CLEAR MOT metrics, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below report all of these metrics for the CAR class.


Benchmark   MOTA      MOTP      MODA      MODP
CAR         81.59 %   86.17 %   82.23 %   89.05 %

Benchmark   recall    precision   F1        TP      FP    FN     FAR      #objects   #trajectories
CAR         86.56 %   97.19 %     91.57 %   33173   959   5151   8.62 %   37437      2472

Benchmark   MT        PT        ML       IDS   FRAG
CAR         63.08 %   30.46 %   6.46 %   222   875
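
For reference, the detection-level entries above follow the standard definitions: recall = TP / (TP + FN), precision = TP / (TP + FP), F1 is their harmonic mean, and MOTA penalizes misses, false positives, and identity switches relative to the number of ground-truth objects [1]. The short Python sketch below illustrates these formulas; it is not the official evaluation code, and the benchmark's own accounting differs in details, so MOTA/MODA computed from the aggregate counts are only approximate (recall, precision, and F1 do reproduce the listed values).

def clear_mot_summary(tp, fp, fn, ids):
    """Illustrative CLEAR MOT style summary [1]; not the official KITTI evaluation script."""
    num_gt = tp + fn                        # ground-truth objects = matched + missed
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    moda = 1.0 - (fp + fn) / num_gt         # detection accuracy (ignores identity switches)
    mota = 1.0 - (fp + fn + ids) / num_gt   # tracking accuracy (penalizes identity switches)
    return {"recall": recall, "precision": precision, "F1": f1,
            "MODA": moda, "MOTA": mota}

# CAR counts from the table above; MOTA/MODA from these aggregates are approximate
# because the official evaluation's per-frame accounting differs in detail.
print(clear_mot_summary(tp=33173, fp=959, fn=5151, ids=222))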



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

