Reflectance and Natural Illumination from a Single Image

Stephen Lombardi and Ko Nishino
Drexel University

[Figure: header]
Estimating reflectance and natural illumination from a single image of an object of known shape is a challenging task due to the ambiguities between reflectance and illumination. Although there is an inherent limitation in what can be recovered as the reflectance band-limits the illumination, explicitly estimating both is desirable for many computer vision applications. Achieving this estimation requires that we derive and impose strong constraints on both variables. We introduce a probabilistic formulation that seamlessly incorporates such constraints as priors to arrive at the maximum a posteriori estimates of reflectance and natural illumination. We begin by showing that reflectance modulates the natural illumination in a way that increases its entropy. Based on this observation, we impose a prior on the illumination that favors lower entropy while conforming to natural image statistics. We also impose a prior on the reflectance based on the directional statistics BRDF model that constrains the estimate to lie within the bounds and variability of real-world materials. Experimental results on a number of synthetic and real images show that the method is able to achieve accurate joint estimation for different combinations of materials and lighting.
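The joint estimation described above amounts to minimizing a sum of negative log terms, one per likelihood or prior. A minimal sketch of how such an objective combines (the term names and weighting scheme here are illustrative, not the paper's exact formulation):

```python
def neg_log_posterior(data_term, refl_prior, illum_entropy, illum_stats,
                      w_refl=1.0, w_ent=1.0, w_stats=1.0):
    """Objective for joint MAP estimation of reflectance and illumination.

    data_term:     -log likelihood of the observed image given both unknowns
    refl_prior:    -log prior on reflectance (e.g., fit to real materials)
    illum_entropy: penalty favoring low-entropy illumination
    illum_stats:   -log prior from natural image statistics
    The w_* weights are hypothetical balancing coefficients.
    """
    return (data_term
            + w_refl * refl_prior
            + w_ent * illum_entropy
            + w_stats * illum_stats)
```

Minimizing this over both unknowns simultaneously yields the maximum a posteriori estimate; in practice the two sets of variables would be updated with an iterative optimizer.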
  • Reflectance and Illumination Recovery in the Wild
    S. Lombardi and K. Nishino,
    in IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 129–141, Jan. 2016.
    [ paper ][ project ]

  • Reflectance and Natural Illumination from a Single Image
    S. Lombardi and K. Nishino,
    in Proc. of European Conference on Computer Vision ECCV’12, Part VI, pp. 582–595, Oct. 2012.
    [ paper ][ database ][ code ][ project ]

Overview

[Figure: dsbrdfvis]
We develop a framework for reflectance and illumination inference from a single image. Our approach incorporates an expressive yet low-dimensional reflectance model based on the Directional Statistics BRDF (DSBRDF) model, originally introduced by Nishino [6] and later extended by Nishino and Lombardi. The DSBRDF represents reflectance as a sum of lobes, each written as a directional statistics distribution in the half-vector BRDF parameterization. This figure illustrates how we arrive at a compact analytical reflectance model while retaining expressiveness. Column (a) is a ground-truth rendering of a measured MERL BRDF in three different illumination environments. Column (b) shows renderings of the DSBRDF model with (κ,γ)-curves represented as B-splines and color integrated into each lobe: three colors, three lobes, and six parameters per B-spline, for a total of 108 free variables. Column (c) shows renderings of the DSBRDF model with (κ,γ)-curves represented with the learned bases bᵢ truncated at 16 parameters, again with color integrated into each lobe, for 16 free variables. Column (d) shows renderings of the DSBRDF model with color represented explicitly for each lobe (see the journal paper): 10 basis coefficients plus 2 chromaticity variables per lobe, again 16 free variables in total. Qualitatively, the DSBRDF model with color represented separately (d) matches the expressiveness of the full DSBRDF model (b) with only 16 parameters.
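Each DSBRDF lobe is an exponential distribution of the half angle θ_h, contributing exp(κ cos^γ θ_h) − 1 to the reflectance. A minimal sketch of evaluating such a sum of lobes; the (κ,γ) values below are made up for illustration, not fitted parameters:

```python
import math

def dsbrdf(theta_h, lobes):
    """Evaluate a DSBRDF-style reflectance value at half angle theta_h.

    Each lobe is a (kappa, gamma) pair contributing
    exp(kappa * cos(theta_h) ** gamma) - 1: large gamma gives a sharp
    specular peak around theta_h = 0, small gamma a broad lobe.
    """
    return sum(math.exp(k * math.cos(theta_h) ** g) - 1.0 for k, g in lobes)

# Illustrative parameters: one broad lobe plus one sharp specular lobe.
lobes = [(0.8, 1.0), (2.5, 60.0)]
print(dsbrdf(0.0, lobes))                # peak at theta_h = 0
print(dsbrdf(math.radians(40), lobes))   # falls off away from the peak
```

In the full model the (κ,γ) values vary with another angle and are encoded as curves (B-splines or learned bases), which is what the column comparisons in the figure above vary.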
[Figure: dsbrdffit]
Comparison of the DSBRDF model to the non-parametric bivariate model and the Cook-Torrance model. This figure shows the log-space RMSE of fitting Lambertian plus one lobe of Cook-Torrance (10 free parameters), Lambertian plus three lobes of Cook-Torrance (24 free parameters), the DSBRDF with color separation modeled with a small number of learned bases (13 free parameters), the full DSBRDF model with color separation (42 free parameters), and the non-parametric bivariate BRDF (24,300 parameters). The vertical grey bars highlight the BRDFs used in the previous figure. The figure demonstrates that the DSBRDF model accurately captures real-world reflectance functions with a low-order parameterization.
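Log-space RMSE compares fits in a domain where the huge dynamic range of measured BRDFs (bright specular peaks vs. dim diffuse values) does not let the specular peak dominate the error. A minimal sketch; the log(1 + x) mapping is one common choice and may differ from the exact transform used in the paper:

```python
import math

def log_rmse(measured, fitted):
    """Root-mean-square error between two BRDF samplings in log space.

    The log(1 + x) compression keeps near-zero diffuse values finite
    while shrinking the influence of very bright specular samples.
    """
    diffs = [(math.log(1.0 + m) - math.log(1.0 + f)) ** 2
             for m, f in zip(measured, fitted)]
    return math.sqrt(sum(diffs) / len(diffs))
```

A perfect fit gives zero error, and a factor-of-ten miss on a bright sample costs far less here than it would in linear RMSE, which is why log-space error is the standard comparison for BRDF fitting.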
[Figure: dsbrdfprior]
This figure shows a visualization of the prior overlaid onto the DSBRDF space. The DSBRDF space is visualized by plotting the projections of each BRDF onto the first two basis functions. We overlay the ellipses of the Gaussian mixture to observe how it models the space of BRDFs. We can see that the mixture of Gaussians naturally identifies different types of reflectance functions. For example, it captures primarily diffuse reflectance functions in one cluster, many shiny metals and plastics in another, and those in-between in the third. This observation supports the use of a mixture model as a prior distribution over likely reflectance parameters.
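As a prior, the mixture assigns a low penalty to coefficient vectors near a cluster of real materials and a high penalty to implausible ones. A minimal sketch with isotropic 2-D components standing in for the fitted mixture; the cluster weights, means, and widths below are made up for illustration:

```python
import math

def gmm_neg_log_prior(x, components):
    """Negative log-density of a mixture of isotropic 2-D Gaussians.

    components: list of (weight, (mu_x, mu_y), sigma) triples.
    Used as a prior term: parameter vectors near the clusters of
    real-world materials are cheap, outliers are expensive.
    """
    density = 0.0
    for w, (mux, muy), s in components:
        d2 = (x[0] - mux) ** 2 + (x[1] - muy) ** 2
        density += w * math.exp(-d2 / (2.0 * s * s)) / (2.0 * math.pi * s * s)
    return -math.log(density)

# Hypothetical clusters: diffuse, shiny metals/plastics, and in-between.
clusters = [(0.4, (-1.0, 0.0), 0.5),
            (0.3, (1.0, 0.5), 0.4),
            (0.3, (0.0, -0.5), 0.6)]
```

In the actual method the mixture is fit (e.g., by EM) to the basis coefficients of measured BRDFs, and this negative log-density becomes the reflectance prior term in the MAP objective.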
[Figure: entropy]
We empirically show that, because the reflectance transmits only a band-limited portion of the incident irradiance, the entropy of the distribution of reflected radiance becomes higher than it would be without bandpass filtering (i.e., a perfect mirror reflection that preserves all frequencies). This figure demonstrates the effect of the reflectance on the entropy of the reflected radiance for a variety of materials. As intuition suggests, the action of the BRDF as a bandpass filter blurs the illumination and thus spreads its histogram, which in turn increases the entropy of the reflected radiance. We would like to recover the true illumination environment; to do so, we assume that the entropy increase in the observed image is due entirely to the BRDF. We therefore constrain the illumination estimate to have minimum entropy, so that the BRDF accounts for the increase in entropy of the outgoing radiance.
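The observation can be reproduced on a toy 1-D signal: blurring a mostly dark signal with a few bright sources spreads its intensity histogram across more bins and raises its Shannon entropy. A minimal sketch, with a box filter standing in for the band-limiting action of the BRDF:

```python
import math

def entropy(values, bins=8):
    """Shannon entropy (bits) of a histogram of values in [0, 1]."""
    counts = [0] * bins
    for v in values:
        counts[min(int(v * bins), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def box_blur(values, radius=2):
    """Simple box filter, a stand-in for the BRDF's bandpass filtering."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# Toy "illumination": mostly dark, with a small bright source.
signal = [0.05] * 28 + [0.95] * 4
print(entropy(signal), entropy(box_blur(signal)))  # blurring raises entropy
```

The sharp signal occupies only two histogram bins; after blurring, intermediate values appear at the edges of the bright region, spreading the histogram. This is the effect the minimum-entropy prior on illumination is designed to undo.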

Results

[Figure: sphereresults1]
Results for the alum-bronze material under three lighting environments. The top right shows the ground-truth cascaded rendering (a sphere rendered with a series of point source directions) of the alum-bronze material. Column (a) shows the ground-truth alum-bronze material rendered in one of the three lighting environments, column (b) shows a rendering of the estimated BRDF under the next ground-truth lighting environment, column (c) shows the estimated illumination map, column (d) shows the ground-truth illumination map, and column (e) shows a cascaded rendering of the recovered reflectance. The lighting environments used were Kitchen (1), Eucalyptus Grove (2), and the Uffizi Gallery (3). We achieve good estimates of both reflectance and illumination, although the recovered illumination is missing high-frequency details lost during image formation.
[Figure: sphereresults2]
Quality of reflectance estimates. Each subfigure demonstrates the reflectance estimates for the blue-acrylic (1), nickel (2), and gold-metallic-paint (3) materials. The top row of each subfigure shows the ground truth and the bottom row shows our estimates. Columns (a), (b), and (c) show renderings using the Uffizi Gallery, St. Peter’s Basilica, and Grace Cathedral lighting environments, respectively. Column (d) is a cascaded rendering of the material with a series of point lights. The top left image of each subfigure was used as the input image. These results demonstrate the accuracy of our reflectance estimation.
[Figure: sphereresults3]
Predicting the appearance of materials with recovered illumination. We use the recovered illumination map to predict the appearance of materials with lower frequency reflectance. The top row shows the ground truth and the bottom row shows the predicted appearances. The input image for each subfigure is the top-left image. These results demonstrate the ability to accurately predict object appearance with the recovered illumination map.
[Figure: realresults]
Results on our new data set, which includes four different objects under four different illumination environments (1–4). Columns (a), (b), and (c) show results for the apple and horse objects in the four lighting environments: column (a) shows the input image, column (b) the recovered illumination map, and column (c) a cascaded rendering of the recovered BRDF. Columns (d), (e), and (f) show results for the milk bottle and bear objects in the four lighting environments: column (d) shows the input image, column (e) the recovered BRDF relit with the next ground-truth illumination map, and column (f) the recovered illumination map. Our method recovers a good estimate of the band-limited lighting environment and the BRDF despite some errors in the camera's white balance, which cause small inaccuracies in the color of the recovered BRDF.
[Figure: realresults2]
Results on the lobby and spiralStairs illumination environments. Each row shows the recovered reflectance and illumination of a different object, with the ground-truth illumination for comparison. Note how features of the reflectance function are accurately captured: the apple, bear, horse, and milk objects are shiny, and the recovered reflectance functions have sharp specular highlights; the tree object has a softer glossy reflection that is captured in the reflectance estimate.
[Figure: realresults3]
Results on the garden, main, and picnic illumination environments. Each row shows the recovered illumination for a different object, with the ground-truth illumination for comparison. This illustrates that our algorithm captures important features of the illumination environment; for example, in the picnic scene, the method recovers the sun behind a building from the appearance of the apple object.