Method

DeepLidarMultiBranch - LidarOnly [la][on] [DLMB]
[Anonymous Submission]

Submitted on 23 Jan. 2019 14:25 by
[Anonymous Submission]

Running time: 0.1 s
Environment: 8 cores @ 2.5 GHz (C/C++)

Method Description:
A LiDAR-only detection and tracking method that fuses two different point-wise deep-learning binary classification branches. Bounding boxes are extracted with geometric methods, and tracking is performed with Multi-Hypothesis Extended Kalman Filters (an illustrative sketch of a single tracking step follows the metadata below).
Parameters:
To be released on publication.
LaTeX BibTeX:

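The submission's models and parameters are not published. As an illustration only, the following is a minimal sketch of the kind of predict/update step such a tracker builds on, assuming a single-hypothesis, constant-velocity linear Kalman filter over a detected box centroid; the actual method uses Multi-Hypothesis Extended Kalman Filters, and all states, noise values, and dimensions below are assumptions, not the authors' design.

// Minimal sketch (assumed, not the submitted implementation): one constant-velocity
// Kalman filter track over a LiDAR bounding-box centroid.
#include <Eigen/Dense>
#include <cstdio>

struct Track {
    // State x = [px, py, vx, vy], covariance P.
    Eigen::Vector4d x = Eigen::Vector4d::Zero();
    Eigen::Matrix4d P = Eigen::Matrix4d::Identity();

    void predict(double dt) {
        Eigen::Matrix4d F = Eigen::Matrix4d::Identity();
        F(0, 2) = dt;  // px += vx * dt
        F(1, 3) = dt;  // py += vy * dt
        Eigen::Matrix4d Q = 0.1 * Eigen::Matrix4d::Identity();  // assumed process noise
        x = F * x;
        P = F * P * F.transpose() + Q;
    }

    void update(const Eigen::Vector2d& z) {
        // Measurement is the detected box centroid [px, py].
        Eigen::Matrix<double, 2, 4> H = Eigen::Matrix<double, 2, 4>::Zero();
        H(0, 0) = 1.0;
        H(1, 1) = 1.0;
        Eigen::Matrix2d R = 0.5 * Eigen::Matrix2d::Identity();  // assumed measurement noise
        Eigen::Vector2d y = z - H * x;                           // innovation
        Eigen::Matrix2d S = H * P * H.transpose() + R;
        Eigen::Matrix<double, 4, 2> K = P * H.transpose() * S.inverse();  // Kalman gain
        x = x + K * y;
        P = (Eigen::Matrix4d::Identity() - K * H) * P;
    }
};

int main() {
    Track t;
    t.x << 10.0, 5.0, 0.0, 0.0;            // initialize at a first detection (assumed values)
    t.predict(0.1);                         // KITTI LiDAR frames arrive at 10 Hz
    t.update(Eigen::Vector2d(10.2, 5.1));   // associate a new detection's centroid
    std::printf("position estimate: %.2f %.2f\n", t.x(0), t.x(1));
    return 0;
}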
Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.


Benchmark   MOTA      MOTP      MODA      MODP
CAR         39.65 %   72.54 %   45.76 %   79.43 %

Benchmark   recall    precision   F1        TP      FP     FN      FAR       #objects   #trajectories
CAR         63.54 %   83.05 %     72.00 %   23982   4894   13759   43.99 %   31667      5731

Benchmark   MT        PT        ML        IDS    FRAG
CAR         29.54 %   54.62 %   15.85 %   2102   3063
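For reference, the recall, precision, and F1 values above follow directly from the TP/FP/FN counts, and MOTA/MODA are defined in [1] as shown in the sketch below. This is only an illustration of the formulas: the official KITTI evaluation additionally handles don't-care regions and per-frame matching, so MOTA/MODA computed from these aggregated counts need not match the table exactly.

// Minimal sketch of the CLEAR MOT summary formulas [1] from raw counts.
#include <cstdio>

int main() {
    const double tp = 23982, fp = 4894, fn = 13759, ids = 2102;
    const double gt = tp + fn;  // ground-truth objects that should have been tracked

    const double mota = 1.0 - (fn + fp + ids) / gt;  // penalizes misses, false positives, ID switches
    const double moda = 1.0 - (fn + fp) / gt;        // detection accuracy: ignores ID switches
    const double recall = tp / (tp + fn);
    const double precision = tp / (tp + fp);
    const double f1 = 2.0 * precision * recall / (precision + recall);

    std::printf("MOTA %.2f%%  MODA %.2f%%  recall %.2f%%  precision %.2f%%  F1 %.2f%%\n",
                100 * mota, 100 * moda, 100 * recall, 100 * precision, 100 * f1);
    return 0;
}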



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

