Abstract: While surface-based view synthesis algorithms are appealing due to their low computational requirements, they often struggle to reproduce thin structures. In contrast, more expensive methods that model the scene's geometry as a volumetric density field (e.g. NeRF) excel at reconstructing fine geometric detail. However, density fields often represent geometry in a "fuzzy" manner, which hinders exact localization of the surface. In this work, we modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures. First, we employ a discrete opacity grid representation instead of a continuous density field, which allows opacity values to discontinuously transition from zero to one at the surface. Second, we anti-alias by casting multiple rays per pixel, which allows occlusion boundaries and subpixel structures to be modelled without using semi-transparent voxels. Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training. Lastly, we develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting. The compact meshes produced by our model can be rendered in real-time on mobile devices and achieve significantly higher view synthesis quality compared to existing mesh-based approaches.
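To make the third step concrete, below is a minimal sketch of a binary entropy regularizer on opacity values, written in JAX. The function names (binary_entropy, entropy_loss), the epsilon clamp, and the loss weight are illustrative assumptions for this sketch, not the paper's actual implementation:

    import jax.numpy as jnp

    def binary_entropy(alpha, eps=1e-6):
        # H(a) = -a*log(a) - (1-a)*log(1-a); minimized when a is 0 or 1.
        a = jnp.clip(alpha, eps, 1.0 - eps)  # clamp to avoid log(0)
        return -(a * jnp.log(a) + (1.0 - a) * jnp.log(1.0 - a))

    def entropy_loss(opacities, weight):
        # Penalizing the mean entropy pushes opacity values towards {0, 1},
        # so the grid binarizes and a surface can be extracted cleanly.
        return weight * jnp.mean(binary_entropy(opacities))

Consistent with the abstract's note that opacities binarize towards the end of training, one would presumably ramp the weight up over the course of optimization rather than applying the full penalty from the start.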
BibTeX Citation:
@inproceedings{Reiser2024SIGGRAPH,
  author    = {Christian Reiser and Stephan Garbin and Pratul P. Srinivasan and Dor Verbin and Richard Szeliski and Ben Mildenhall and Jonathan T. Barron and Peter Hedman and Andreas Geiger},
  title     = {Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis},
  booktitle = {International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH)},
  year      = {2024}
}