Method

Multi-Object Tracking Based on LiDAR and Camera Information Fusion


Submitted on 20 Dec. 2019 13:04 by
Ruth Fitch (Central Texas College)

Running time: 0.03 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
We fuse multi-sensor information by running object detectors on the camera images and on the LiDAR point cloud, respectively. In addition, we learn a deep appearance descriptor offline on a large-scale vehicle re-identification dataset for appearance association. Motion is estimated with a Kalman filter, and the Hungarian algorithm is then used for data association; a minimal sketch of this association step follows the parameter list.

Parameters:
nms = 0.7
max_cosine_distance = 0.2
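
As a sketch of the association step above, assuming hypothetical per-track and per-detection appearance embeddings (the function names, the GATED_COST sentinel, and the prediction helper are illustrative, not the submitted implementation):

# Sketch only: structures and names are hypothetical; numpy/scipy assumed available.
import numpy as np
from scipy.optimize import linear_sum_assignment

MAX_COSINE_DISTANCE = 0.2  # appearance gate from the parameter list above
GATED_COST = 1e5           # sentinel cost for gated-out pairs


def cosine_cost(track_feats, det_feats):
    """Cosine distance between L2-normalized track and detection
    appearance embeddings; result has shape (num_tracks, num_dets)."""
    a = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    b = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - a @ b.T


def kalman_predict(x, P, F, Q):
    """One constant-velocity Kalman prediction step (motion estimation)."""
    return F @ x, F @ P @ F.T + Q


def associate(track_feats, det_feats):
    """Hungarian assignment on the gated appearance cost matrix;
    pairs whose cost exceeds the gate remain unmatched."""
    cost = cosine_cost(track_feats, det_feats)
    cost[cost > MAX_COSINE_DISTANCE] = GATED_COST
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if cost[r, c] <= MAX_COSINE_DISTANCE]

In a complete tracker, the Kalman prediction would typically also gate the cost matrix (e.g. by Mahalanobis distance to predicted states) before the appearance term is applied; that refinement is omitted here for brevity.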

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.
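
As a reading aid, the accuracy metrics in the first table follow the CLEAR MOT definitions of [1]. The sketch below restates those definitions in code; it is illustrative only, since the benchmark's exact per-frame counting conventions are not reproduced here and the function names are our own:

def mota(fp, fn, ids, num_gt):
    """Multi-Object Tracking Accuracy (CLEAR MOT, [1]): penalizes
    false positives, misses, and identity switches, normalized by
    the total number of ground-truth objects."""
    return 1.0 - (fp + fn + ids) / num_gt


def moda(fp, fn, num_gt):
    """Multi-Object Detection Accuracy: MOTA without the identity
    switch term, i.e. a pure detection-level accuracy."""
    return 1.0 - (fp + fn) / num_gt

MOTP and MODP, by contrast, average the localization precision of matched pairs and are independent of the counts above.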


Benchmark  MOTA     MOTP     MODA     MODP
CAR        76.40 %  83.05 %  76.83 %  87.11 %

Benchmark  Recall   Precision  F1       TP     FP    FN    FAR      #Objects  #Trajectories
CAR        82.45 %  95.75 %    88.60 %  30978  1376  6592  12.37 %  35332     1111

Benchmark  MT       PT       ML       IDS  FRAG
CAR        47.38 %  38.62 %  14.00 %  147  608



[1] K. Bernardin and R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP Journal on Image and Video Processing (JIVP), 2008.
[2] Y. Li, C. Huang and R. Nevatia: Learning to Associate: HybridBoosted Multi-target Tracker for Crowded Scene. CVPR, 2009.

