Method

DiTMOT [DiTMOT]
https://github.com/StevenWang30/DiTNet

Submitted on 13 Apr. 2021 07:34 by
Wang Sukai (The Hong Kong University of Science and Technology)

Running time: 0.08 s
Environment: 1 core @ >3.5 GHz (Python)

Method Description:
End-to-end 3D object detection and tracking based
on point clouds is receiving increasing attention
in many robotics applications, such as autonomous
driving. Compared with 2D images, 3D point clouds
lack sufficient texture information for data
association. We therefore propose an end-to-end
point-cloud-based network, DiTNet, which directly
assigns a track ID to each object across the whole
sequence, without a separate data-association
step. DiTNet is made location-invariant by using
relative locations and embeddings to learn each
object's spatial and temporal features in the
spatio-temporal world. Features from the detection
module help to improve tracking performance, and
the tracking module, with its final trajectories,
also helps to refine the detection results.
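The location-invariance idea above can be illustrated with a minimal sketch (not the paper's actual network, just the relative-location trick it builds on): pairwise offsets between object centers are unchanged when the whole scene is translated, so features built from them do not depend on absolute position.

```python
import numpy as np

def relative_offsets(centers):
    """Turn absolute object centers (N, 3) into pairwise relative
    offsets (N, N, 3). Translating the entire scene by a constant
    vector leaves these offsets unchanged, which is one simple way
    to obtain location-invariant features."""
    centers = np.asarray(centers, dtype=float)
    return centers[None, :, :] - centers[:, None, :]

# Shifting every object by the same vector does not change the offsets.
scene = np.array([[1.0, 2.0, 0.0], [4.0, 6.0, 0.0], [7.0, 1.0, 0.5]])
shifted = scene + np.array([10.0, -3.0, 2.0])
assert np.allclose(relative_offsets(scene), relative_offsets(shifted))
```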
Parameters:
Detailed in the paper.
Latex Bibtex:
@article{wang2021ditnet,
  title={DiTNet: End-to-End 3D Object Detection and Track ID Assignment in Spatio-Temporal World},
  author={Wang, Sukai and Cai, Peide and Wang, Lujia and Liu, Ming},
  journal={IEEE Robotics and Automation Letters},
  volume={6},
  number={2},
  pages={3397--3404},
  year={2021},
  publisher={IEEE}
}

Detailed Results

From all 29 test sequences, our benchmark computes the HOTA tracking metrics (HOTA, DetA, AssA, DetRe, DetPr, AssRe, AssPr, LocA) [1] as well as the CLEARMOT, MT/PT/ML, identity switches, and fragmentation [2,3] metrics. The tables below show all of these metrics.


Benchmark  HOTA     DetA     AssA     DetRe    DetPr    AssRe    AssPr    LocA
CAR        72.21 %  71.09 %  74.04 %  75.98 %  83.28 %  76.57 %  89.97 %  86.15 %

Benchmark  TP     FP    FN
CAR        30274  4118  1101

Benchmark  MOTA     MOTP     MODA     IDSW  sMOTA
CAR        84.53 %  84.36 %  84.83 %  101   70.76 %

Benchmark  MT rate  PT rate  ML rate  FRAG
CAR        74.77 %  12.46 %  12.77 %  210

Benchmark  # Dets  # Tracks
CAR        31375   731
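For reference, the CLEARMOT accuracy scores in the tables above are simple ratios of aggregate counts. The sketch below uses toy counts rather than the table's numbers, since the official evaluation involves additional details (e.g. confidence-threshold selection) that these bare formulas do not capture.

```python
def clearmot(tp, fp, fn, idsw):
    """CLEAR MOT accuracy scores from aggregate counts.
    gt = tp + fn is the number of ground-truth objects.
    MODA ignores identity switches; MOTA penalises them as well."""
    gt = tp + fn
    moda = 1.0 - (fp + fn) / gt
    mota = 1.0 - (fp + fn + idsw) / gt
    return moda, mota

# Toy counts (illustrative only, not the benchmark numbers above):
moda, mota = clearmot(tp=900, fp=50, fn=100, idsw=10)
print(f"MODA = {moda:.3f}, MOTA = {mota:.3f}")  # MODA = 0.850, MOTA = 0.840
```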
