Method

Learning Appearance and Motion Cues for Panoptic Tracking [MAPT]
https://rl.uni-freiburg.de/

Submitted on 18 Sep. 2024 15:32 by
Juana-Valeria Hurtado (University of Freiburg)

Running time: 1 s
Environment: 8 cores @ 2.5 GHz (Python)

Method Description:
MAPT simultaneously captures general semantic
information and instance-specific appearance and
motion features.
Unlike existing methods that overlook dynamic
scene attributes, our approach leverages both
appearance and motion cues through dedicated
network heads. These interconnected heads employ
multi-scale deformable convolutions whose sampling
offsets are informed by scene motion, combining
semantic context with motion-enhanced appearance
features to learn tracking embeddings.
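
A minimal PyTorch sketch of this idea, assuming torchvision's DeformConv2d. The channel sizes, the three dilation rates, and the concatenation-based fusion are illustrative assumptions, not the authors' exact architecture:

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class MotionAwareTrackingHead(nn.Module):
    """Hypothetical tracking head: offsets predicted from motion
    features steer deformable sampling over appearance features,
    yielding per-pixel tracking embeddings."""

    def __init__(self, app_ch=256, motion_ch=256, embed_dim=32,
                 dilations=(1, 2, 4)):
        super().__init__()
        # One offset predictor and one deformable conv per scale.
        # A 3x3 kernel needs 2 * 3 * 3 = 18 offset channels.
        self.offset_convs = nn.ModuleList(
            nn.Conv2d(motion_ch, 18, kernel_size=3, padding=d, dilation=d)
            for d in dilations)
        self.deform_convs = nn.ModuleList(
            DeformConv2d(app_ch, app_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations)
        self.project = nn.Conv2d(app_ch * len(dilations), embed_dim, 1)

    def forward(self, appearance, motion):
        scales = []
        for offset_conv, deform_conv in zip(self.offset_convs,
                                            self.deform_convs):
            offsets = offset_conv(motion)       # (N, 18, H, W) offsets
            scales.append(deform_conv(appearance, offsets))
        fused = torch.cat(scales, dim=1)        # multi-scale fusion
        return self.project(fused)              # (N, embed_dim, H, W)

# Example with random features standing in for backbone outputs:
head = MotionAwareTrackingHead()
appearance = torch.randn(1, 256, 48, 96)
motion = torch.randn(1, 256, 48, 96)
embeddings = head(appearance, motion)           # (1, 32, 48, 96)

Using dilation together with matching padding keeps the spatial resolution fixed while letting each scale reason over a different receptive field.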
Parameters:
\alpha=0.5
Latex Bibtex:
@inproceedings{hurtado2024,
  title={Learning Appearance and Motion Cues for Panoptic Tracking},
  author={Hurtado, Juana Valeria and Marvi, Sajad and Mohan, Rohit and Valada, Abhinav},
  year={2024}
}

Detailed Results

From all 29 test sequences, our benchmark computes the Segmentation and Tracking Quality (STQ) metric together with its components, Association Quality (AQ) and Segmentation Quality (SQ, IoU-based). The table below shows these metrics.


Benchmark    STQ      AQ       SQ (IoU)
KITTI-STEP   67.38 %  66.54 %  68.23 %
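
For reference, STQ follows the standard STEP definition as the geometric mean of AQ and SQ, which is consistent with the reported numbers:

STQ = \sqrt{AQ \times SQ} = \sqrt{0.6654 \times 0.6823} \approx 0.6738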
