Multiview Objects Under Natural Illumination Database
Kyoto University Computer Vision Lab
The database contains images of four objects captured under natural illumination environments, with calibrated ground-truth geometry and illumination. In each environment, each object is imaged from approximately 18 viewing directions. Each picture is taken at three exposures, which are combined into a single high dynamic range (HDR) image. The illumination environments are captured from multiple directions and combined into a single HDR panorama. Calibration targets are used to align the range scans to the HDR images, and each dataset contains the intrinsic and extrinsic parameters needed to align the image data with the coordinate frame of the ground-truth meshes.
Please cite the following work when using this database in your research.
- Multiview Shape and Reflectance from Natural Illumination
G. Oxholm and K. Nishino,
in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), Jun. 2014.
The content is organized into three subdirectories, one per illumination environment. Each subdirectory contains a high dynamic range image (EXR file format) holding a spherical panorama of that illumination environment (lat-long format), along with four directories, one for each object. Each object subdirectory contains a text file describing the intrinsic and extrinsic parameters for every viewing direction of the object, as well as the HDR image and its mask for each viewing direction. The images have been corrected for radial distortion and cropped; the cropping information is also given in the text file, along with more detailed documentation.
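As a minimal sketch of how the calibration data might be used, the following projects a ground-truth mesh vertex into a cropped, undistortion-corrected image. The function and parameter names are hypothetical; the exact layout of K, [R|t], and the crop offset must be taken from the text file's own documentation.

```python
import numpy as np

def project_points(X, K, R, t, crop_offset=(0.0, 0.0)):
    """Project 3D points X (N, 3), given in the ground-truth mesh's
    coordinate frame, into pixel coordinates of a cropped image.

    K: 3x3 intrinsic matrix; R (3x3), t (3,): extrinsics mapping
    world to camera coordinates; crop_offset: (x0, y0) of the crop
    within the full image (as assumed to be listed per view).
    """
    Xc = X @ R.T + t             # world -> camera coordinates
    x = Xc @ K.T                 # camera -> homogeneous pixel coords
    x = x[:, :2] / x[:, 2:3]     # perspective divide
    return x - np.asarray(crop_offset, dtype=float)

# Toy example: camera at the origin looking down +z.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0]])
print(project_points(pts, K, R, t))  # point on the optical axis -> principal point
```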
All images were captured with either a Canon EOS 5D Mark II or a Canon TS1. For each object, three RAW images were taken at different exposures and combined into a single HDR image. Each object was also scanned multiple times with a laser-stripe range scanner to capture its ground-truth geometry. A fixed global coordinate frame was established using a modified tripod.
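The exposure-fusion step can be sketched as a weighted average of per-exposure radiance estimates. This is a generic illustration of the standard technique, not the dataset's actual pipeline; the hat weighting and the assumption of linear (RAW-derived) pixel values are choices made here for simplicity.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linear images taken at different exposure times into one
    HDR radiance image. Pixels near 0 or 1 (under- or over-exposed)
    are down-weighted by a simple hat function.

    images: list of float arrays with values in [0, 1]
    exposure_times: matching list of exposure times in seconds
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, time in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weighting in [0, 1]
        num += w * img / time              # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)     # guard against all-zero weights
```

For a pixel with true radiance 0.25, observations of 0.25 at 1 s and 0.5 at 2 s fuse back to 0.25.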
The illumination environments were captured by photographing a high-performance ball bearing (again at three exposures) from different angles. The images were then unwrapped into spherical panoramas, aligned with one another, and fused into a single seamless panorama.
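To sample incident illumination from the lat-long panoramas, each pixel must be mapped to a direction on the sphere. The following is a minimal sketch under one common convention (azimuth spanning the image width, polar angle the height, +y up); the dataset's own documentation should be consulted for its actual convention.

```python
import numpy as np

def latlong_to_direction(u, v, width, height):
    """Map pixel (u, v) of a lat-long spherical panorama to a unit
    direction vector. Assumed convention: u spans azimuth [-pi, pi]
    left to right, v spans polar angle [0, pi] top to bottom, +y up.
    """
    phi = (u + 0.5) / width * 2.0 * np.pi - np.pi  # azimuth
    theta = (v + 0.5) / height * np.pi             # polar angle from +y
    return np.array([np.sin(theta) * np.sin(phi),
                     np.cos(theta),
                     -np.sin(theta) * np.cos(phi)])
```

Under this convention the center pixel of the panorama maps to the horizontal direction straight ahead (-z).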
Note: As can be seen in the images above, in the first illumination environment, “Hall”, the Pig object had a purely specular (mirror-like) surface. We then sprayed it with a diffusing coating so that it would have a more interesting reflectance; in the other two environments it therefore appears glossy rather than mirrored.
Our current results, obtained with an improved version of the method in the paper cited above, are as follows. All numbers are reported as root-mean-squared (RMS) error, where the error is the distance from the resulting mesh to the ground-truth mesh, expressed as a percentage of the diagonal of the ground-truth mesh's bounding box. (See the paper for additional information.)
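The error metric described above can be sketched as follows. This is an illustrative brute-force version using vertex-to-nearest-vertex distances; an actual evaluation would measure point-to-surface distances and use a spatial index such as a KD-tree.

```python
import numpy as np

def rms_error_percent(result_pts, gt_pts):
    """RMS of nearest-neighbor distances from the result mesh's
    vertices (N, 3) to the ground-truth vertices (M, 3), as a
    percentage of the ground-truth bounding-box diagonal.
    """
    # Pairwise squared distances, result -> ground truth (brute force).
    d2 = ((result_pts[:, None, :] - gt_pts[None, :, :]) ** 2).sum(-1)
    rms = np.sqrt(d2.min(axis=1).mean())
    diag = np.linalg.norm(gt_pts.max(axis=0) - gt_pts.min(axis=0))
    return 100.0 * rms / diag
```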