Method

DEFT: Detection Embeddings for Tracking [on] [DEFT]
https://github.com/MedChaabane/DEFT

Submitted on 28 Oct. 2020 04:29 by
Mohamed Chaabane (Colorado State University)

Running time: 0.04 s
Environment: GPU @ 2.5 GHz (Python)

Method Description:
DEFT is a joint model of detection and tracking. Our approach relies on
an appearance-based object matching network jointly learned with
an underlying object detection network. An LSTM is also added to
capture motion constraints.
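The association step described above can be sketched as follows. This is an illustrative simplification, not the released DEFT code: the greedy assignment, the raw cosine similarity, and the `motion_gate` input are all assumptions made for the example (in DEFT the matching affinities come from a learned head, and the LSTM motion model supplies the plausibility gating).

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match(track_embs, det_embs, motion_gate, threshold=0.5):
    """Greedily associate tracks to detections by appearance similarity.

    motion_gate[t][d] is True if detection d is physically plausible for
    track t (in DEFT, this gating role is played by the LSTM motion model).
    Returns a list of (track_index, detection_index) pairs.
    """
    scored = []
    for ti, t in enumerate(track_embs):
        for di, d in enumerate(det_embs):
            if motion_gate[ti][di]:
                scored.append((cosine(t, d), ti, di))
    pairs, used_t, used_d = [], set(), set()
    for score, ti, di in sorted(scored, reverse=True):
        if score < threshold:
            break  # remaining candidates are too dissimilar to match
        if ti not in used_t and used_d.isdisjoint({di}):
            pairs.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return pairs
```

For example, with two tracks whose embeddings best match the opposite detection, the greedy pass recovers the crossed assignment, and a fully closed motion gate yields no matches at all.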
Parameters:
TBD
Latex Bibtex:
@InProceedings{Chaabane2021deft_2021_CVPR_Workshops,
author = {Chaabane, Mohamed and Zhang, Peter and Beveridge,
Ross and O'Hara, Stephen},
title = {DEFT: Detection Embeddings for Tracking},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2021}
}

Detailed Results

From all 29 test sequences, our benchmark computes the HOTA tracking metrics (HOTA, DetA, AssA, DetRe, DetPr, AssRe, AssPr, LocA) [1] as well as the CLEARMOT, MT/PT/ML, identity switches, and fragmentation [2,3] metrics. The tables below show all of these metrics.
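As a rough guide to how the aggregate metrics relate to the raw counts, the standard definitions can be sketched as below. Note this is an approximation for orientation only: the benchmark evaluates per frame and, for HOTA, averages over localization thresholds, so plugging the totals from the tables into these formulas will not reproduce the reported percentages exactly.

```python
import math

def mota(fp, fn, idsw, num_gt):
    # CLEAR MOT accuracy: penalizes false positives, misses, and ID switches,
    # normalized by the total number of ground-truth objects.
    return 1.0 - (fp + fn + idsw) / num_gt

def moda(fp, fn, num_gt):
    # Detection-only variant: like MOTA but without the ID-switch term.
    return 1.0 - (fp + fn) / num_gt

def hota_at_alpha(det_a, ass_a):
    # At a fixed localization threshold alpha, HOTA is the geometric mean of
    # detection accuracy (DetA) and association accuracy (AssA); the published
    # HOTA score averages this quantity over a range of thresholds.
    return math.sqrt(det_a * ass_a)
```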


Benchmark HOTA DetA AssA DetRe DetPr AssRe AssPr LocA
CAR 74.23 % 75.33 % 73.79 % 79.96 % 83.97 % 78.30 % 85.19 % 86.14 %

Benchmark TP FP FN
CAR 31745 2647 1006

Benchmark MOTA MOTP MODA IDSW sMOTA
CAR 88.38 % 84.46 % 89.38 % 344 74.04 %

Benchmark MT rate PT rate ML rate FRAG
CAR 84.31 % 13.54 % 2.15 % 241

Benchmark # Dets # Tracks
CAR 32751 862
