Shape and Reflectance from Natural Illumination

Geoffrey Oxholm and Ko Nishino
Drexel University

singleshape header image
We introduce a method to jointly estimate the BRDF and geometry of an object from a single image under known, but uncontrolled, natural illumination. We show that this previously unexplored problem becomes tractable when one exploits the orientation clues embedded in the lighting environment. Intuitively, unique regions in the lighting environment act analogously to the point light sources of traditional photometric stereo; they strongly constrain the orientation of the surface patches that reflect them. The reflectance, which acts as a bandpass filter on the lighting environment, determines the necessary scale of such regions. Accurate reflectance estimation, however, relies on accurate surface orientation information. Thus, these two factors must be estimated jointly. To do so, we derive a probabilistic formulation and introduce priors to address situations where the reflectance and lighting environment do not sufficiently constrain the geometry of the object. Through extensive experimentation we show what this space looks like, and offer insights into what problems become solvable in various categories of real-world natural illumination environments.
  • Shape and Reflectance Estimation in the Wild
    G. Oxholm and K. Nishino,
    in IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 376-389, Feb. 2016.
    [ paper ][ database ][ project 1 ][ project 2 ]

  • Shape and Reflectance from Natural Illumination
    G. Oxholm and K. Nishino,
    in Proc. of European Conference on Computer Vision ECCV’12, Part I, pp. 528-541, Oct. 2012.
    [ paper ][ database ][ project ]

singleshape likelihood image
In our formulation, the reflectance and the object geometry are represented as latent variables that are linked through their joint contribution to the observed appearance. By adopting a probabilistic framework, we enable the use of priors on both factors to incorporate basic observations and help reduce the search space. To model the reflectance function, we adopt the Directional Statistics Bidirectional Reflectance Distribution Function (DSBRDF) model, introduced by Nishino, which models a wide range of isotropic BRDFs while remaining amenable to a strong prior. Assuming a linear camera, the image irradiance is computed as the reflected radiance by integrating the incident illumination, modulated by the reflectance, over the illumination map. A single pixel value may constrain its surface orientation with varying degrees of certainty. The hemispherical distribution in the figure is the likelihood distribution for the pixel indicated with the green X. In these images, brighter values correspond to more likely orientations. Because the reflectance map is also a function of the unstructured illumination environment, this distribution is multimodal and nonparametric: (a) has a large bright area corresponding to the likely orientations, while (c) shows a nearly unimodal likelihood distribution corresponding to the purple pixel just above it.
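The forward rendering step described above can be sketched as a discrete sum over illumination-map samples. The function names below are illustrative, and the Lambertian BRDF is a stand-in for the DSBRDF model used in the paper:

```python
import numpy as np

def render_radiance(normal, light_dirs, light_radiance, brdf, view_dir):
    """Reflected radiance toward the viewer for one surface patch:
    a discrete approximation of the integral over the illumination map
    of incident radiance, modulated by the BRDF and foreshortening."""
    cosines = np.clip(light_dirs @ normal, 0.0, None)   # max(0, n . omega)
    f = brdf(light_dirs, view_dir, normal)              # BRDF value per sample
    solid_angle = 2.0 * np.pi / len(light_dirs)         # uniform hemisphere weight
    return np.sum(light_radiance * f * cosines) * solid_angle

def lambertian(light_dirs, view_dir, normal, albedo=0.5):
    """Placeholder constant BRDF (albedo / pi) for illustration only."""
    return np.full(len(light_dirs), albedo / np.pi)
```

A per-pixel orientation likelihood then follows by comparing this rendered radiance against the observed intensity for each candidate normal.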
singleshape prior image
We would like to leverage surface points with well-defined orientation (e.g., the purple point in the above figure) and propagate that certainty to other points. We introduce two spatial priors to help propagate this information. We also utilize the occluding boundary as a strong unary prior on pixels at the edge of the object. In these hemispherical distributions, brighter pixels correspond to more likely orientations. The occluding boundary prior (a) strongly encourages pixels on the boundary to be oriented orthogonally to the viewing direction. The gradient prior (b) encourages the estimated appearance to have the same gradient as the observed image. The smoothness prior (c) penalizes sharp changes in orientation. The Bayesian MAP estimation finds the mode of the product distribution of the above likelihood and these priors over all surface points.
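As a minimal sketch of the per-pixel MAP step, the mode of the product distribution is the argmax of the summed log terms. The candidate set and probabilities below are made up for illustration, and only a single prior term is shown:

```python
import numpy as np

def map_orientation(log_likelihood, log_priors):
    """Pick the most probable candidate orientation for one pixel:
    the mode of the product of likelihood and priors, computed as the
    argmax of the sum of their logs."""
    log_posterior = log_likelihood + sum(log_priors)
    return int(np.argmax(log_posterior))

# Hypothetical example with 4 candidate normals: the likelihood is
# ambiguous between candidates 1 and 2, and a boundary-style prior
# breaks the tie in favor of candidate 1.
log_lik = np.log(np.array([0.1, 0.4, 0.4, 0.1]))
log_boundary = np.log(np.array([0.4, 0.3, 0.2, 0.1]))
best = map_orientation(log_lik, [log_boundary])  # -> 1
```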
singleshape coarsetofine image
Since neither the reflectance nor the geometry can be optimized without knowledge of the other, we employ an iterative expectation-maximization (EM) framework: in the maximization step, one factor is held fixed as a “pseudo-observable” while the other is estimated, and in the expectation step, the variance of the Gaussian likelihood is estimated. We initialize the geometry naively, using only the smoothness and boundary priors, which results in an inflated contour shape (b). We then run a coarse-to-fine discrete global optimization using graph cuts (c-e), partitioning the domain of possible surface orientations into a geodesic hemisphere. By incrementally increasing the resolution of this hemisphere we avoid local minima. Our final step is a gradient-descent-based minimization with the integrability constraint to fine-tune the result (g).
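A geodesic discretization of candidate orientations can be sketched by recursively subdividing a coarse triangulation of the hemisphere. This is an assumed construction for illustration (starting from an octahedron rather than whatever base solid the paper uses); each level roughly quadruples the number of triangles, giving the coarse-to-fine orientation grids:

```python
import numpy as np

def geodesic_hemisphere(levels):
    """Candidate surface orientations: start from the four upper faces
    of an octahedron and repeatedly split each triangle into four,
    projecting edge midpoints back onto the unit sphere. Higher
    `levels` yields a finer orientation grid."""
    top = np.array([0.0, 0.0, 1.0])
    eq = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
          np.array([-1.0, 0.0, 0.0]), np.array([0.0, -1.0, 0.0])]
    faces = [(top, eq[i], eq[(i + 1) % 4]) for i in range(4)]
    for _ in range(levels):
        new_faces = []
        for a, b, c in faces:
            ab = (a + b) / np.linalg.norm(a + b)   # midpoints re-projected
            bc = (b + c) / np.linalg.norm(b + c)   # onto the unit sphere
            ca = (c + a) / np.linalg.norm(c + a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        faces = new_faces
    # Deduplicate shared vertices across triangles.
    verts = {tuple(np.round(v, 9)) for f in faces for v in f}
    return np.array(sorted(verts))
```

Each EM round would score the current candidates with the likelihood and priors, then refine the hemisphere before the next round.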


singleshape geomerror image
We evaluated our results on 600 synthetic images: 10 different shapes from the Blobby Shapes database, each rendered under 6 different publicly available real-world illumination environments with 10 different real-world measured BRDFs, chosen from the MERL database to span a wide variety of reflectances. This figure illustrates the accuracy of our geometry recovery for each of the 60 illumination environment and reflectance combinations, averaged over the 10 shapes. The heat map (a) shows the median angular error, with brighter colors indicating increased accuracy. Some reflectances, such as GRAY-PLASTIC (3) and SPECIAL-WALNUT (2), consistently yield more accurate results. These reflectances, which are the most matte, are more successful when large regions of the lighting environment differ strongly from each other. The notable exceptions occur in the eucalyptus forest lighting environment (F) and in the St. Peter’s environment (E). These two lighting environments have high-frequency texture whose details become nearly uniform under these diffuse reflectances. The brightest value corresponds to the reflectance and lighting environment combination that yielded the most accurate geometry estimation, with a median error of 14°. The darkest value corresponds to a median error of 49°.
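The median angular error reported in the heat map can be computed per pixel from the estimated and ground-truth unit normals, for example:

```python
import numpy as np

def median_angular_error(n_est, n_true):
    """Median per-pixel angle (in degrees) between estimated and
    ground-truth unit normal fields, each of shape (..., 3)."""
    cos = np.clip(np.sum(n_est * n_true, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.median(np.arccos(cos))))
```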
singleshape reflectanceerror image
This figure illustrates the accuracy of the reflectance estimation. In (b) we show an estimate for GRAY-PLASTIC (3). This reflectance was among the most reliably estimated, due to the smoothly varying appearance resulting from its matte finish. In (d) we show an estimate for BLUE-ACRYLIC (10). This material exhibits subsurface scattering, which our model cannot explain well, causing this global illumination effect to be absorbed into erroneous surface normal and reflectance estimates. The brightest value corresponds to the most accurate reflectance estimation, with a relative RMS error of 0.54. The darkest regions correspond to an error greater than 3.0.
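A relative RMS error of this kind can be sketched as follows; the per-sample normalization by the true value is a common choice, though the paper's exact normalization may differ:

```python
import numpy as np

def relative_rms_error(brdf_est, brdf_true, eps=1e-8):
    """RMS of per-sample residuals normalized by the true reflectance
    value; `eps` guards against division by near-zero measurements."""
    rel = (brdf_est - brdf_true) / (np.abs(brdf_true) + eps)
    return float(np.sqrt(np.mean(rel ** 2)))
```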
singleshape allresults image
We also acquired images and aligned ground-truth geometry for several objects in both outdoor and indoor real-world scenes. The ground-truth geometry was acquired using a Konica Minolta VIVID 910 light-stripe range finder. Illumination environments were acquired using a reflective sphere. Although real-world data comes with added sources of noise and inaccuracy, our method is able to recover the shape of the objects quite well in each of the environments. In each of the four sections of the figure we show the illumination environment along with the image, recovered normal field, and ground-truth normals for each of the objects in the scene. The differences in the lighting environments have a clear impact on the accuracy of the recovered geometry.