Method

MOTSFusion (Pedestrians) [MOTSFusion]
https://github.com/tobiasfshr/MOTSFusion

Submitted on 5 Dec. 2019 22:02 by
Jonathon Luiten (RWTH Aachen University)

Running time: 0.44 s
Environment: 1 core @ 2.5 GHz (C/C++)

Method Description:
First, we build tracklets by computing a
segmentation mask for each detection and linking
these masks over time using optical flow. We then
fuse these tracklets into 3D object
reconstructions using depth and ego-motion
estimates. These 3D reconstructions are used to
estimate the 3D motion of each object, which in
turn is used to merge tracklets into long-term
tracks, bridging occlusion gaps of up to 20
frames. This also allows us to fill in missing
detections.
Parameters:
Detections = TrackRCNN
Segmentations = BB2SegNet
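
As an illustration of the tracklet-building step described above, here is a minimal sketch of mask linking via optical flow. This is not the MOTSFusion implementation: it assumes masks arrive as boolean NumPy arrays and that a dense per-frame flow field is given, and it uses simple greedy IoU matching with an illustrative 0.5 threshold (warp_mask, link_frame, and all names here are hypothetical).

import numpy as np

def warp_mask(mask, flow):
    """Warp a binary mask into the next frame using a dense flow field.

    flow[y, x] = (dx, dy) pixel displacement from this frame to the next.
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    dx = flow[ys, xs, 0].round().astype(int)
    dy = flow[ys, xs, 1].round().astype(int)
    nx = np.clip(xs + dx, 0, w - 1)
    ny = np.clip(ys + dy, 0, h - 1)
    warped = np.zeros_like(mask)
    warped[ny, nx] = True
    return warped

def mask_iou(a, b):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def link_frame(prev_tracklets, curr_masks, flow, iou_thresh=0.5):
    """Greedily extend tracklets (id -> last mask) with the current
    frame's detection masks; returns {tracklet id: mask index}."""
    assignments, used = {}, set()
    for tid, prev_mask in prev_tracklets.items():
        warped = warp_mask(prev_mask, flow)
        best, best_iou = None, iou_thresh  # matches must exceed threshold
        for i, m in enumerate(curr_masks):
            if i in used:
                continue
            iou = mask_iou(warped, m)
            if iou > best_iou:
                best, best_iou = i, iou
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments

The later stages described above (3D reconstruction from depth and ego-motion, and motion-based merging of tracklets across occlusion gaps) operate on top of tracklets produced by this kind of frame-to-frame linking.
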
LaTeX BibTeX:
@article{luiten2019MOTSFusion,
  title={Track to Reconstruct and Reconstruct to Track},
  author={Luiten, Jonathon and Fischer, Tobias and Leibe, Bastian},
  journal={IEEE Robotics and Automation Letters},
  year={2020},
  publisher={IEEE}
}

Detailed Results

For all 29 test sequences, our benchmark computes the commonly used tracking metrics, adapted for the segmentation case: CLEAR MOT, MT/PT/ML, identity switches, and fragmentations [1, 2]. The tables below show all of these metrics.


Benchmark     sMOTSA    MOTSA    MOTSP    MODSA    MODSP
CAR            0.00 %   0.00 %   0.00 %   0.00 %   0.00 %
PEDESTRIAN    58.70 %  72.90 %  81.50 %  74.20 %  94.10 %

Benchmark     recall   precision   F1        TP     FP     FN    FAR     #objects  #trajectories
CAR            0.00 %   0.00 %      0.00 %      0     0      0   0.00 %         0              0
PEDESTRIAN    76.50 %  97.10 %     85.60 %  15827   465   4870   4.20 %     19199            395

Benchmark     MT       PT       ML       IDS   FRAG
CAR            0.00 %   0.00 %   0.00 %    0      0
PEDESTRIAN    47.40 %  37.00 %  15.60 %  279    534
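
The detection and association numbers above are internally consistent: under the standard CLEAR MOT definitions [1], recall, precision, F1, and MOTSA follow directly from the raw counts (TP, FP, FN, IDS) in the middle table, with MOTSA = 1 - (FN + FP + IDS) / GT and GT = TP + FN. (sMOTSA additionally weights true positives by mask IoU, so it cannot be reproduced from these counts alone.) A minimal check in Python:

tp, fp, fn, ids = 15827, 465, 4870, 279  # pedestrian counts from the table
gt = tp + fn                             # 20697 ground-truth masks
recall = tp / gt                                      # 0.765
precision = tp / (tp + fp)                            # 0.971
f1 = 2 * precision * recall / (precision + recall)    # 0.856
motsa = 1 - (fn + fp + ids) / gt                      # 0.729
print(f"recall={recall:.3f}  precision={precision:.3f}  "
      f"f1={f1:.3f}  MOTSA={motsa:.3f}")

Running this reproduces the pedestrian figures of 76.5 % recall, 97.1 % precision, 85.6 % F1, and 72.9 % MOTSA reported above.
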



[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.

