Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation [DeepViewAggregation] https://github.com/drprojects/DeepViewAgg
Submitted on 7 Feb. 2022 17:19 by Damien Robert (Institut Géographique National)
Running time: | - | Environment: | NVIDIA V100 |
Method Description: | Recent work on 3D semantic segmentation proposes to exploit the synergy between images and point clouds by processing each modality with a dedicated network and projecting learned 2D features onto 3D points. Merging large-scale point clouds and images raises several challenges, such as constructing a mapping between points and pixels and aggregating features between multiple views. Current methods rely on mesh reconstruction or specialized sensors to recover occlusions, and use heuristics to select and aggregate images. In contrast, we propose an end-to-end trainable multi-view aggregation model leveraging the viewing conditions of 3D points to merge features from images taken at arbitrary positions. Our method can combine standard 2D and 3D networks and outperforms both 3D models operating on colorized point clouds and hybrid 2D/3D networks, without requiring colorization, meshing, or true depth maps. |
Parameters: | epochs=60, sample_per_epoch=12000, r_max=20, camera=1 |
Latex Bibtex: | @inproceedings{robert2022dva,
  title={Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation},
  author={Robert, Damien and Vallet, Bruno and Landrieu, Loic},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
} |
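
To make the aggregation step in the description above concrete, below is a minimal PyTorch sketch of attention-based multi-view pooling driven by viewing conditions. This is an illustration under stated assumptions, not the repository's actual API: the module name, tensor shapes, and the choice of viewing-condition descriptors (e.g. camera distance, viewing angle) are all hypothetical.

    # Hypothetical sketch of viewing-condition-based multi-view attention.
    # Names, shapes, and descriptors are assumptions, not DeepViewAgg's API.
    import torch
    import torch.nn as nn

    class MultiViewAggregator(nn.Module):
        """Merge per-view 2D features into one descriptor per 3D point,
        weighting each view by a score predicted from its viewing
        conditions (e.g. camera distance, viewing angle)."""

        def __init__(self, cond_dim: int, hidden: int = 32):
            super().__init__()
            # Small MLP scoring how informative each view is for a point.
            self.scorer = nn.Sequential(
                nn.Linear(cond_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, feats: torch.Tensor, conds: torch.Tensor,
                    mask: torch.Tensor) -> torch.Tensor:
            # feats: (P, V, F) 2D features projected onto P points from V views
            # conds: (P, V, C) viewing-condition descriptors per (point, view)
            # mask:  (P, V) bool, True where the point is visible in the view
            scores = self.scorer(conds).squeeze(-1)            # (P, V)
            scores = scores.masked_fill(~mask, float('-inf'))  # hide unseen views
            weights = torch.softmax(scores, dim=1)             # attention over views
            # Points visible in no view yield NaN weights; zero them out.
            weights = torch.nan_to_num(weights)
            return (weights.unsqueeze(-1) * feats).sum(dim=1)  # (P, F)

    # Usage with random placeholder data:
    P, V, F, C = 1000, 4, 64, 8  # points, views, feature dim, condition dim
    agg = MultiViewAggregator(cond_dim=C)
    out = agg(torch.randn(P, V, F), torch.randn(P, V, C), torch.rand(P, V) > 0.3)
    # out: (P, F) image-derived features, one vector per 3D point

In the full method, per-point image features produced this way would be fused with the 3D network's point features before the segmentation head; see the repository linked above for the actual implementation.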