Method

SRK_ODESA (a.k.a. SiRtaKi)
[Anonymous Submission]

Submitted on 31 Jul. 2019 12:54 by
[Anonymous Submission]

Running time: 0.2 s
Environment: GPU @ >3.5 GHz (Python)

Method Description:
The solution employs a tracking-by-detection approach. Its core is an effective object embedding with several attractive properties: it is rather lightweight, extends naturally to objects composed of multiple parts, and generalizes well. The last property is illustrated by the fact that no KITTI data was used to train the embedding.
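The association code itself is not published on this page. As a rough illustration of how a per-object appearance embedding can drive tracking-by-detection, the sketch below matches detection embeddings to track embeddings by cosine similarity and resolves the assignment with the Hungarian algorithm. All names, the threshold value, and the overall structure are assumptions for illustration only and are not taken from the ODESA submission; a real tracker would additionally handle motion cues, track creation, and track termination.

# Minimal sketch of embedding-based association for tracking-by-detection.
# Hypothetical names and threshold; not the actual ODESA implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

SIM_THRESHOLD = 0.5  # assumed minimum cosine similarity for accepting a match


def cosine_similarity(dets, tracks):
    """Pairwise cosine similarity between detection and track embeddings."""
    dets = dets / np.linalg.norm(dets, axis=1, keepdims=True)
    tracks = tracks / np.linalg.norm(tracks, axis=1, keepdims=True)
    return dets @ tracks.T


def match_detections_to_tracks(det_emb, track_emb):
    """Assign detections to existing tracks on the similarity matrix.

    Returns (matches, unmatched_detections), where matches is a list of
    (detection_index, track_index) pairs; unmatched detections would
    typically spawn new tracks.
    """
    sim = cosine_similarity(det_emb, track_emb)
    det_idx, trk_idx = linear_sum_assignment(-sim)  # negate to maximize similarity
    matches, matched = [], set()
    for d, t in zip(det_idx, trk_idx):
        if sim[d, t] >= SIM_THRESHOLD:
            matches.append((d, t))
            matched.add(d)
    unmatched = [d for d in range(det_emb.shape[0]) if d not in matched]
    return matches, unmatched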
Parameters:
private
Latex Bibtex:
@inproceedings{odesa2020,
  title={ODESA: Object Descriptor that is Smooth Appearance-wise for object tracking task},
  author={Borysenko, Dmytro and Mykheievskyi, Dmytro and Porokhonskyy, Viktor},
  booktitle={to be submitted to ECCV'20}
}

Detailed Results

For all 29 test sequences, the benchmark computes the commonly used tracking metrics: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below report all of these metrics.
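For reference, the two headline CLEAR MOT scores are defined in [1] as follows, where GT_t is the number of ground-truth objects in frame t, FN_t, FP_t, IDSW_t are the misses, false positives, and identity switches in frame t, c_t is the number of matched object-hypothesis pairs, and d_{i,t} is the match quality of pair i in frame t (KITTI uses bounding-box overlap here, so higher MOTP is better):

\[
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t},
\qquad
\mathrm{MOTP} = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t}
\]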


Benchmark    MOTA      MOTP      MODA      MODP
PEDESTRIAN   65.52 %   74.34 %   66.01 %   91.84 %

Benchmark    recall    precision   F1        TP      FP     FN     FAR      #objects   #trajectories
PEDESTRIAN   70.54 %   94.19 %     80.67 %   16418   1012   6857   9.10 %   20777      1210

Benchmark    MT        PT        ML        IDS   FRAG
PEDESTRIAN   38.14 %   48.80 %   13.06 %   112   1057

[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

