Method

Dual-way Object Detection and Tracking Network (DODT)


Submitted on 13 Aug. 2019 13:08 by
Xusen Guo (Sun Yat-sen University)

Running time: 0.013 s
Environment: 1 core @ 3.2 GHz (Python)

Method Description:
We set up a ConvNet architecture that associates keyframe images and
keyframe point clouds to generate accurate 3D detections and
trajectories in an end-to-end fashion. Specifically, a tracking module
is introduced to capture object co-occurrences across time, and a
motion-based interpolation algorithm is proposed to generate
streaming-level results from the keyframe detections.
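
The submission does not include code; purely as an illustration of the keyframe-to-stream interpolation idea, here is a minimal sketch that assumes simple linear motion between two keyframe detections of the same track (the function name and box layout are hypothetical, not the authors'):

import numpy as np

def interpolate_track(box_a, box_b, num_intermediate):
    # box_a, box_b: 3D boxes (x, y, z, l, w, h, yaw) detected at two
    # consecutive keyframes for the same object.
    # num_intermediate: number of non-keyframe frames between them.
    # Returns linearly interpolated boxes for the intermediate frames.
    # (Yaw is interpolated naively; a real implementation would handle
    # angle wrap-around.)
    box_a = np.asarray(box_a, dtype=float)
    box_b = np.asarray(box_b, dtype=float)
    alphas = np.arange(1, num_intermediate + 1) / (num_intermediate + 1)
    return [(1.0 - a) * box_a + a * box_b for a in alphas]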
Parameters:
sigma_l=0.1, iou_threshold=0.1, max_age = 3,
min_hits = 3
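
These parameters match the usual knobs of IoU-association trackers: a detection score threshold (sigma_l), an IoU gate, and track age / hit-count gating. The sketch below shows how such parameters are commonly applied; it is an assumption about their roles, not the authors' implementation, and all names other than the four parameters are hypothetical:

def track_step(tracks, detections, iou,
               sigma_l=0.1, iou_threshold=0.1, max_age=3, min_hits=3):
    # One frame of greedy IoU association.
    # tracks: list of dicts {'box', 'hits', 'age'};
    # detections: list of dicts {'box', 'score'};
    # iou: callable returning the overlap of two boxes.
    detections = [d for d in detections if d['score'] >= sigma_l]      # score gate
    for trk in tracks:
        best = max(detections, key=lambda d: iou(trk['box'], d['box']),
                   default=None)
        if best is not None and iou(trk['box'], best['box']) >= iou_threshold:
            trk.update(box=best['box'], hits=trk['hits'] + 1, age=0)   # matched
            detections.remove(best)
        else:
            trk['age'] += 1                                            # missed this frame
    tracks = [t for t in tracks if t['age'] <= max_age]                # drop stale tracks
    tracks += [{'box': d['box'], 'hits': 1, 'age': 0}
               for d in detections]                                    # start new tracks
    confirmed = [t for t in tracks if t['hits'] >= min_hits]           # report only confirmed
    return tracks, confirmed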
LaTeX BibTeX:

Detailed Results

From all 29 test sequences, our benchmark computes the commonly used tracking metrics CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1,2]. The tables below show all of these metrics.
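
For reference, the CLEAR MOT metrics in the first table are defined in [1] as

\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDS}_t\right)}{\sum_t \mathrm{GT}_t},
\qquad
\mathrm{MOTP} = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t},

where GT_t counts the ground-truth objects in frame t, c_t counts the matched object-hypothesis pairs in frame t, and d_{i,t} measures the alignment of matched pair i in frame t (bounding-box overlap on this benchmark).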


Benchmark   MOTA      MOTP      MODA      MODP
CAR         76.68 %   81.65 %   76.87 %   86.23 %

Benchmark   Recall    Precision   F1        TP      FP     FN     FAR       #Objects   #Trajectories
CAR         83.41 %   95.03 %     88.84 %   31671   1657   6299   14.90 %   37780      998

Benchmark   MT        PT        ML        IDS   FRAG
CAR         60.77 %   27.54 %   11.69 %   63    384



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

