Method

Convolutional Neural Network with Context Fusion [CN24]


Submitted on 23 Oct. 2014 23:27 by
Clemens-Alexander Brust (FSU Jena)

Running time: 30 s
Environment: >8 cores @ 2.5 GHz (C/C++)

Method Description:
A neural network that combines a 3-layer CNN with
context information (in this case, the UV
coordinates), fused into the last two layers; a minimal
sketch of this fusion is given after the citation below.
Parameters:
Configuration:
CNet-O
LaTeX BibTeX:
@InProceedings{Brust2015:CPN,
Title = {Convolutional Patch Networks with Spatial Prior for Road Detection and Urban Scene Understanding},
Author = {Clemens-Alexander Brust and Sven Sickert and Marcel Simon and Erik Rodner and Joachim Denzler},
Booktitle = {{VISAPP} 2015 - Proceedings of the 10th International Conference on
Computer Vision Theory and Applications, Berlin, Germany,
11-14 March, 2015},
Year = {2015}
}
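The context fusion described above can be sketched briefly. The following is a minimal re-sketch in PyTorch, not the original CN24 C++ implementation; the patch size, channel counts, layer widths, and the class name are illustrative assumptions and do not reproduce the submitted CNet-O configuration.

# Hedged sketch (PyTorch) of a patch network with spatial prior.
# NOT the original CN24 code; all sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class PatchNetWithSpatialPrior(nn.Module):
    """Classifies a small image patch as road / non-road and fuses the
    normalized (u, v) patch-center coordinates into the last two layers."""

    def __init__(self, patch_size: int = 28):
        super().__init__()
        # 3-layer CNN operating on the RGB patch
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
        )
        with torch.no_grad():  # infer the flattened feature size once
            n_feat = self.features(torch.zeros(1, 3, patch_size, patch_size)).numel()
        # The UV context is appended to the input of the last two layers.
        self.fc1 = nn.Linear(n_feat + 2, 128)
        self.fc2 = nn.Linear(128 + 2, 2)  # road vs. non-road

    def forward(self, patch: torch.Tensor, uv: torch.Tensor) -> torch.Tensor:
        # patch: (B, 3, H, W) image patch; uv: (B, 2) coordinates normalized to [0, 1]
        x = self.features(patch).flatten(1)
        x = torch.relu(self.fc1(torch.cat([x, uv], dim=1)))
        return self.fc2(torch.cat([x, uv], dim=1))

Feeding the coordinates into the late, fully connected layers acts as a learned spatial prior: road pixels are far more likely in the lower image region, and the classifier can exploit this without the convolutional filters having to encode position.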

Evaluation in Bird's Eye View


Benchmark   MaxF      AP        PRE       REC       FPR       FNR
UM_ROAD     86.32 %   89.19 %   87.80 %   84.89 %    5.37 %   15.11 %
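The metrics above are pixel-wise measures computed in the metric bird's eye view: MaxF is the F-measure maximized over confidence thresholds, and PRE, REC, FPR, and FNR are reported at that operating point (consistently, 2·PRE·REC/(PRE+REC) ≈ 86.32 % and FNR = 100 % − REC in the row above), while AP denotes average precision. The sketch below shows one minimal way to compute them; the helper name bev_metrics and the uniform threshold sampling are assumptions, and the official KITTI road devkit should be used to reproduce the numbers.

# Hedged sketch of pixel-wise BEV metrics; the official devkit differs in
# details (threshold sampling, AP computation), so treat this as illustrative.
import numpy as np

def bev_metrics(conf, gt, thresholds=None):
    """conf: per-pixel road confidences in [0, 1]; gt: boolean road mask.
    Returns MaxF plus PRE/REC/FPR/FNR at the MaxF threshold."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    best = None
    for t in thresholds:
        pred = conf >= t
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        tn = np.logical_and(~pred, ~gt).sum()
        pre = tp / max(tp + fp, 1)
        rec = tp / max(tp + fn, 1)
        f1 = 2 * pre * rec / max(pre + rec, 1e-12)
        if best is None or f1 > best["MaxF"]:
            best = {"MaxF": f1, "PRE": pre, "REC": rec,
                    "FPR": fp / max(fp + tn, 1), "FNR": fn / max(fn + tp, 1)}
    return best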

Behavior Evaluation


Benchmark PRE-20 F1-20 HR-20 PRE-30 F1-30 HR-30 PRE-40 F1-40 HR-40

Road/Lane Detection

The following plots show precision/recall curves for the bird's eye view evaluation.



[Precision/recall curve plot, bird's eye view evaluation.]

Distance-dependent Behavior Evaluation

The following plots show the F1 score, precision, and hit rate with respect to the longitudinal distance used for evaluation.
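A rough way to obtain such curves is to restrict the BEV evaluation to pixels within an increasing longitudinal distance limit, as sketched below. The truncation scheme, the reading of the 20/30/40 suffixes in the behavior-evaluation table as distance limits in metres, and the reuse of the hypothetical bev_metrics helper from the sketch above are all assumptions, not the official devkit procedure.

# Hedged sketch: recompute BEV metrics using only pixels up to a given
# longitudinal distance from the ego vehicle. Reuses the hypothetical
# bev_metrics() helper from the sketch above.
import numpy as np

def metrics_up_to_distance(conf, gt, longitudinal_dist, limits=(20.0, 30.0, 40.0)):
    """conf/gt: BEV confidence and ground-truth maps;
    longitudinal_dist: per-pixel distance (m) along the driving direction."""
    results = {}
    for d in limits:
        mask = longitudinal_dist <= d  # keep only the near-range pixels
        results[d] = bev_metrics(conf[mask], gt[mask])
    return results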


Visualization of Results

The following images illustrate the performance of the method qualitatively on a few test images. We first show results in the perspective image, followed by the evaluation in bird's eye view. Here, red denotes false negatives, blue areas correspond to false positives, and green represents true positives.
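This color coding can be reproduced with a small overlay routine on binary prediction and ground-truth masks; the sketch below is a minimal illustration, and overlay_errors is a hypothetical helper, not part of the benchmark toolkit.

# Hedged sketch of the error color coding used in the result images:
# red = false negative, blue = false positive, green = true positive.
import numpy as np

def overlay_errors(pred, gt):
    """pred/gt: boolean road masks of shape (H, W). Returns an RGB uint8 image."""
    out = np.zeros(pred.shape + (3,), dtype=np.uint8)
    out[np.logical_and(gt, ~pred)] = (255, 0, 0)   # false negatives -> red
    out[np.logical_and(pred, ~gt)] = (0, 0, 255)   # false positives -> blue
    out[np.logical_and(pred, gt)] = (0, 255, 0)    # true positives  -> green
    return out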



[Six qualitative result images: perspective-view results followed by the bird's eye view evaluation.]

