We propose neural trace photography, a novel framework to automatically learn high-quality scanning of non-planar, complex anisotropic appearance. Our key insight is that free-form appearance scanning can be cast as a geometry learning problem on unstructured point clouds, each of which represents an image measurement and the corresponding acquisition condition. Based on this connection, we carefully design a neural network to jointly optimize the lighting conditions to be used in acquisition, as well as the spatially independent reconstruction of reflectance from the corresponding measurements. Our framework is not tied to a specific setup, and can adapt to various factors in a data-driven manner. We demonstrate the effectiveness of our framework on a number of physical objects with a wide variation in appearance. The objects are captured with a lightweight mobile device, consisting of a single camera and an RGB LED array. We also generalize the framework to other common types of light sources, including a point light, a linear light and an area light.
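To make the point-cloud framing concrete, below is a minimal PyTorch-style sketch of the idea, not the paper's actual architecture: each measurement and its acquisition condition form one point in an unstructured set, and the lighting patterns used during acquisition are learnable parameters trained jointly with the reconstruction network. All module names, dimensions, the PointNet-style max-pooling, and the per-LED response inputs are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NeuralTraceSketch(nn.Module):
    """Illustrative sketch only: jointly optimizes lighting patterns and a
    point-cloud reconstruction network (all names/sizes are assumptions)."""

    def __init__(self, num_leds=512, num_shots=32, cond_dim=6, brdf_dim=10):
        super().__init__()
        # One learnable lighting pattern per shot (an intensity per LED),
        # optimized jointly with the reconstruction network below.
        self.patterns = nn.Parameter(torch.rand(num_shots, num_leds))
        # Shared per-point encoder: (measurement, acquisition condition) -> feature.
        self.point_mlp = nn.Sequential(
            nn.Linear(1 + cond_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU())
        # Decoder maps the pooled feature to reflectance parameters.
        self.decoder = nn.Sequential(
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, brdf_dim))

    def forward(self, led_responses, conditions):
        # led_responses: (B, num_shots, num_leds), a surface point's response
        #   to each individual LED in each shot (e.g., from a renderer during
        #   training). conditions: (B, num_shots, cond_dim), a hypothetical
        #   encoding of the local view/light geometry of each measurement.
        patterns = torch.sigmoid(self.patterns)  # keep intensities in [0, 1]
        # Differentiable image formation: a pixel measures the sum of per-LED
        # responses weighted by the learned pattern intensities.
        meas = (led_responses * patterns).sum(dim=-1, keepdim=True)
        # Each shot becomes one point in an unstructured point cloud.
        points = torch.cat([meas, conditions], dim=-1)  # (B, num_shots, 1+cond_dim)
        features = self.point_mlp(points)
        pooled = features.max(dim=1).values             # order-invariant pooling
        return self.decoder(pooled)                     # estimated reflectance
```

In this reading, the max-pooling makes the reconstruction invariant to the order and number of measurements, which is what allows the same network to handle free-form, unstructured acquisition.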
Paper: [.PDF, 37.7MB] [ACM DL (Open Access)]
Supplemental Material: [.PDF]
Bibtex: [.BIB]
Video: [.MP4, 59.6MB] [Youtube] [Bilibili]
Slides: [.PDF, 27.1MB]
Our source code and data are released under the GPLv3 license for academic purposes. The only requirement for using them in your research is to cite our paper [.BIB]. For commercial licensing options, please email hwu at acm.org. For technical issues, please email xiaohema1998 at gmail.com.
The link to our repository is here. Please refer to the documents therein for more details.