Multiview Shape and Reflectance from Natural Illumination
Geoffrey Oxholm and Ko Nishino
Drexel University
The world is full of objects with complex reflectances, situated in complex illumination environments. Past work on full 3D geometry recovery, however, has tried to handle this complexity by framing it in simplistic models of reflectance (Lambertian, mirrored, or diffuse plus specular) or illumination (one or more point light sources). Though there has been some recent progress in directly exploiting such complexities to recover single-view geometry, it is not clear how such single-view methods can be extended to reconstruct the full geometry. To this end, we derive a probabilistic geometry estimation method that fully exploits the rich signal embedded in complex appearance. Though each observation provides partial and unreliable information, we show how to estimate the reflectance responsible for the diverse appearance, and to unite the orientation cues embedded in each observation to reconstruct the underlying geometry. We demonstrate the effectiveness of our method on synthetic and real-world objects. The results show that our method performs accurately across a wide range of real-world environments and reflectances that lie between the extremes that have been the focus of past work.
Shape and Reflectance Estimation in the Wild
G. Oxholm and K. Nishino,
in IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 376-389, Feb., 2016.
[ paper ][ database ][ project 1 ][ project 2 ]
Multiview Shape and Reflectance from Natural Illumination
G. Oxholm and K. Nishino,
in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR’14), Jun., 2014.
[ paper ] [ database ][ project ]
Overview
The main contribution of this work is a probabilistic 3D geometry and reflectance estimation method that fully exploits the complexity of non-trivial reflectance and non-trivial illumination. The appearance of a pixel in an image provides a multimodal distribution of possible orientations, the shape of which depends on the illumination environment and the reflectance properties of the object. Pixels reflecting unique scene components (like the sun) provide strong constraints, while those reflecting less descriptive components (like the sky or a tree) provide weaker constraints.
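The per-pixel orientation distribution can be sketched as follows. This is a minimal illustration, not the paper's formulation: it stands in a simple Lambertian-plus-Phong reflectance for the DSBRDF, represents the illumination as a few directional lights, and scores each candidate normal by how well its predicted color matches the observation (all names and parameters here are hypothetical).

```python
import numpy as np

def render_normal(n, light_dirs, light_rgb, view, kd, ks, alpha):
    # Predicted pixel color for normal n under a set of directional lights,
    # using a diffuse + Phong specular stand-in (assumption, not the DSBRDF).
    n = n / np.linalg.norm(n)
    cos = np.clip(light_dirs @ n, 0.0, None)            # (L,) diffuse cosines
    r = 2.0 * cos[:, None] * n - light_dirs             # reflect each light dir about n
    spec = np.clip(r @ view, 0.0, None) ** alpha        # (L,) specular lobes
    return (light_rgb * (kd * cos + ks * spec * (cos > 0))[:, None]).sum(axis=0)

def orientation_likelihood(obs_rgb, candidates, light_dirs, light_rgb, view,
                           kd=0.6, ks=0.4, alpha=50.0, sigma=0.05):
    # Likelihood over candidate normals given one observed pixel color.
    # Distinctive illumination (a bright "sun" direction) yields a sharp peak;
    # uniform illumination yields a broad, multimodal distribution.
    preds = np.array([render_normal(n, light_dirs, light_rgb, view, kd, ks, alpha)
                      for n in candidates])
    err = np.sum((preds - obs_rgb) ** 2, axis=1)
    lik = np.exp(-err / (2.0 * sigma ** 2))
    return lik / lik.sum()
```

Sampling the candidates over the visible hemisphere and evaluating this likelihood reproduces the qualitative behavior described above: the distribution is tight for pixels mirroring unique scene content and diffuse for pixels reflecting broad, uniform regions.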
Such weak constraints, however, become strong when the orientation distributions of multiple observations corroborate a tighter range of orientations. When a point on the mesh (dashed) is accurate, the observations will agree (a), resulting in a dense orientation distribution with a clear peak (bright region). When the point is not yet well aligned, the observations will disagree (c), resulting in a flat, near-zero distribution. Our overall method jointly optimizes reflectance and shape by alternating between the two: one is held fixed while the other is optimized. To model the reflectance we use the Directional Statistics BRDF (DSBRDF) model, and we use silhouette intersection as a starting point for the geometry.
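The corroboration of per-view cues can be sketched as a pointwise product of the per-view orientation distributions. This is an illustrative simplification (a hypothetical helper, not the paper's exact estimator): agreeing views reinforce a common peak, while disagreeing views drive the joint distribution toward zero everywhere, which signals a poorly aligned surface point.

```python
import numpy as np

def fuse_orientation_distributions(per_view):
    # per_view: (V, K) array, one orientation distribution over K candidate
    # normals per view. Multiplying pointwise keeps only orientations that
    # every view considers plausible.
    joint = np.prod(per_view, axis=0)
    total = joint.sum()
    if total < 1e-12:
        return None            # flat, near-zero: views disagree at this point
    return joint / total       # dense distribution with a clear peak
```

For example, two broad distributions that both favor the same candidate fuse into a sharper peak at that candidate, whereas distributions concentrated on disjoint candidates fuse to (numerically) nothing.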
Results
To test the roles of reflectance and illumination in shape estimation, we have performed hundreds of experiments spanning a wide range of real-world environments and BRDFs. Each of the 10 blobs from the Blobby Shapes database is rendered in 5 publicly available illumination environments with 7 different measured BRDFs from the MERL database. Table (a) shows the geometry results. As a baseline, these numbers should be compared with the mean initial RMS error of 1.19%, so even in the worst case the error is reduced significantly. The worst geometry estimation result, with an RMS error of 0.98%, comes from the Green-Acrylic (G) reflectance in the Ennis (E) illumination environment. This is likely due to the lack of green in the scene, which makes the appearance depend primarily on the light coming from the doorway at the center. Thanks to its diverse, smoothly varying color, intensity, and texture, the Pisa (P) illumination environment gives the best performance overall, with a mean RMS error of 0.50%. Only one reflectance is challenging in this environment: Nickel (N), which has only a weak diffuse component. The best reflectance, Gold-Metallic-Paint (M), has the best of both worlds: a strong diffuse component and a moderate specular component, which lets the appearance capture both the low-frequency and high-frequency detail of the illumination.

The consistency of results within each column of Table (b) clearly shows that certain reflectances are harder to estimate accurately than others. Most notably, the two metals, Alum-Bronze (A) and Nickel (N), show the highest errors. These materials exhibit uncommon grazing-angle reflectance properties that are difficult to recover. Other reflectances, such as Orange-Paint (O) and Green-Acrylic (G), are consistently estimated more accurately.
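The RMS errors above are reported as percentages, which implies a normalization by object scale. As a rough sketch of such a metric (the paper's exact convention may differ; the bounding-box-diagonal normalization here is an assumption):

```python
import numpy as np

def rms_error_percent(est_pts, gt_pts):
    # RMS of per-vertex distances between corresponding points on the
    # estimated and ground-truth surfaces, expressed as a percentage of
    # the ground-truth bounding-box diagonal (assumed normalization).
    d = np.linalg.norm(est_pts - gt_pts, axis=1)
    rms = np.sqrt(np.mean(d ** 2))
    diag = np.linalg.norm(gt_pts.max(axis=0) - gt_pts.min(axis=0))
    return 100.0 * rms / diag
```

Under this convention, reducing the error from the 1.19% initial estimate to 0.50% halves the average surface deviation relative to object size.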
To quantitatively evaluate our method on real-world objects, we introduce a new dataset. The dataset contains four objects imaged in three different indoor and outdoor environments from approximately 18 viewpoints, using a tripod at two different heights. Along with the high-dynamic-range (HDR) images, the dataset contains HDR illumination maps acquired using multiple images of a steel ball, and ground-truth 3D models of the objects acquired with a laser light-stripe range finder and manually finished. The first two columns in each section compare full appearance, while the last three are rendered with a diffuse model to highlight geometric differences with the initial estimate and the ground truth. The center images show the captured illumination environment and the recovered reflectances rendered on spheres under a moving point light.
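The steel-ball capture works because each pixel on a mirror sphere reflects a known world direction. A minimal sketch of that mapping, assuming an orthographic camera looking down the -z axis (a common light-probe simplification; the dataset's actual calibration may differ):

```python
import numpy as np

def mirror_ball_direction(u, v, view=np.array([0.0, 0.0, -1.0])):
    # Map normalized image coordinates (u, v) in [-1, 1] on the ball's
    # silhouette to the reflected world direction sampled by that pixel.
    r2 = u * u + v * v
    if r2 > 1.0:
        return None                            # pixel lies outside the sphere
    n = np.array([u, v, np.sqrt(1.0 - r2)])    # sphere normal at that pixel
    d = view - 2.0 * np.dot(view, n) * n       # reflect the viewing ray about n
    return d / np.linalg.norm(d)
```

Looping this over the ball pixels of several exposures and merging them yields the HDR illumination map; the center of the ball reflects the direction back toward the camera, while the rim reflects directions behind the ball, so one or two views of the sphere cover nearly the full environment.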