Method

aUToTrack [la] [gp] [on] (uses laser points and GPS data; online method)


Submitted on 12 Feb. 2019 03:34 by
Keenan Burnett (University of Toronto)

Running time: 0.01 s
Environment: 1 core @ >3.5 GHz (C/C++)

Method Description:
The 2D detections from a vision-based CNN are used to cluster LIDAR points, yielding 3D position measurements for each object. These objects are then tracked in 3D with an EKF, using GPS/IMU data to compensate for ego-motion.
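The tracking step can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): a filter with a constant-velocity motion model that tracks one object's 3D position from clustered-LIDAR measurements. With this linear model the EKF reduces to a standard Kalman filter; the paper's system additionally handles ego-motion compensation from GPS/IMU, which is omitted here. All noise values are illustrative tuning guesses.

```python
import numpy as np

class ConstantVelocityEKF:
    """State x = [px, py, pz, vx, vy, vz]; measurement z = [px, py, pz]."""

    def __init__(self, dt=0.1, q=1.0, r=0.25):
        self.x = np.zeros(6)                  # initial state estimate
        self.P = np.eye(6) * 10.0             # initial covariance (uncertain)
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt       # position += velocity * dt
        self.Q = np.eye(6) * q                # process noise (tuning guess)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.R = np.eye(3) * r                # measurement noise (tuning guess)

    def predict(self):
        # Propagate state and covariance through the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        # Correct the prediction with a 3D position measurement
        # (e.g. the centroid of a detection-gated LIDAR cluster).
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```

In a full tracker, one such filter would be maintained per track, with measurements assigned to tracks by a data-association step before each update.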
Parameters:
N/A
Latex Bibtex:
@article{Burnett2019,
  author        = {Keenan Burnett and Sepehr Samavi and Steven L. Waslander and
                   Timothy D. Barfoot and Angela P. Schoellig},
  title         = {aUToTrack: {A} Lightweight Object Detection and Tracking System
                   for the {SAE} AutoDrive Challenge},
  journal       = {arXiv:1905.08758},
  year          = {2019},
  url           = {http://arxiv.org/abs/1905.08758},
  archivePrefix = {arXiv},
  eprint        = {1905.08758},
}

Detailed Results

From all 29 test sequences, our benchmark computes the HOTA tracking metrics (HOTA, DetA, AssA, DetRe, DetPr, AssRe, AssPr, LocA) [1] as well as the CLEAR MOT metrics, MT/PT/ML ratios, identity switches, and fragmentations [2,3]. The tables below show all of these metrics.


Benchmark   HOTA     DetA     AssA     DetRe    DetPr    AssRe    AssPr    LocA
CAR         59.83 %  67.82 %  53.68 %  72.66 %  79.60 %  55.94 %  86.52 %  83.10 %

Benchmark   TP     FP    FN
CAR         30333  4059  1061

Benchmark   MOTA     MOTP     MODA     IDSW  sMOTA
CAR         80.97 %  80.56 %  85.11 %  1424  63.83 %

Benchmark   MT rate  PT rate  ML rate  FRAG
CAR         72.77 %  23.39 %  3.85 %   484

Benchmark   # Dets  # Tracks
CAR         31394   2234
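As a rough guide to how the CLEAR MOT accuracy scores relate to the raw counts above, the standard definitions can be sketched as follows (a hypothetical helper, not the benchmark's evaluation code; the leaderboard values can differ slightly from these formulas because the benchmark applies its own filtering and matching rules):

```python
def clear_mot(tp, fp, fn, idsw):
    """Standard CLEAR MOT accuracy scores from raw counts.

    tp, fp, fn : true positives, false positives, false negatives
    idsw       : identity switches
    """
    gt = tp + fn                          # total ground-truth objects
    moda = 1.0 - (fp + fn) / gt           # detection accuracy (ignores IDSW)
    mota = 1.0 - (fp + fn + idsw) / gt    # tracking accuracy (penalizes IDSW)
    return moda, mota
```

For example, `clear_mot(90, 5, 10, 2)` gives a MODA of 0.85 and a MOTA of 0.83.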
