Multiview Objects Under Natural Illumination Database
Kyoto University Computer Vision Lab
The database contains images of four objects taken under natural illumination environments with calibrated ground-truth geometry and illumination. In each environment, each object is imaged from approximately 18 different directions. Each picture is taken at three exposures that are combined into a single high dynamic range (HDR) image. The illumination environments are captured from multiple directions and combined into a single HDR panorama. Calibration targets are used to align the range scans with the HDR images, and each dataset contains the intrinsic and extrinsic parameters needed to align the image data with the coordinate frame of the ground-truth meshes.
Please cite the following work when using this database in your research.
- Multiview Shape and Reflectance from Natural Illumination
G. Oxholm and K. Nishino,
in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), Jun. 2014.
[ paper ][ database ][ project ]
File Layout
- 2D/
    - ${environment}/
        - ${environment}.exr – illumination environment
        - ${object}/
            - views.txt – file describing viewing conditions for the images
            - view-${n}.exr – linear HDR image
            - view-${n}.jpg – tone-mapped preview (not to be used as data)
            - view-${n}_m.png – object segmentation mask
- 3D/
    - ${object}.ply – ground-truth geometry
The content is laid out into three subdirectories – one per illumination environment. Each subdirectory contains an HDR image (EXR file format) holding a spherical panorama of that illumination environment (latlong format), as well as four directories – one for each object. Each object subdirectory contains a text file describing the intrinsic and extrinsic parameters for each viewing direction of the object, along with the HDR image and its mask for each viewing direction. The images have been corrected for radial distortion and cropped. The cropping information is also contained in the text file, along with more detailed documentation.
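To make the alignment concrete, here is a minimal sketch of projecting ground-truth vertices into one of the views with a standard pinhole model. The calibration values and file paths below are placeholders only; the real values must be parsed from views.txt, whose exact format is documented in the file itself.

```python
import numpy as np
import imageio.v3 as iio  # EXR reading may require an extra plugin (e.g. FreeImage)

# Hypothetical calibration for one view -- the real values must be
# parsed from views.txt.
K = np.array([[3200.0,    0.0, 1400.0],
              [   0.0, 3200.0,  930.0],
              [   0.0,    0.0,    1.0]])   # intrinsic matrix (placeholder)
R = np.eye(3)                              # world-to-camera rotation (placeholder)
t = np.array([0.0, 0.0, 2.0])              # world-to-camera translation (placeholder)
crop = np.array([400.0, 250.0])            # top-left corner of the crop (placeholder)

def project(pts_world):
    """Project Nx3 points in the ground-truth mesh frame to pixel coordinates."""
    cam = pts_world @ R.T + t          # world frame -> camera frame
    uvw = cam @ K.T                    # apply intrinsics (homogeneous pixels)
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    return uv - crop                   # shift into the cropped image

# Hypothetical paths -- substitute the actual environment/object names.
img  = iio.imread("2D/hall/horse/view-01.exr")    # linear HDR radiance
mask = iio.imread("2D/hall/horse/view-01_m.png")  # object segmentation mask
```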
Image Capture
All images were captured with either a Canon EOS 5D Mark II or a Canon TS1. For each object, three raw images were taken at different exposures and combined into a single HDR image. Each object was also scanned multiple times with a laser-stripe range scanner to capture its geometry. A fixed global coordinate frame was established using a modified tripod.
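Merging bracketed exposures into an HDR radiance map follows the standard weighted-average recipe. The sketch below is a minimal version of that idea, not the authors' pipeline; it assumes the inputs are already linearized (e.g. demosaiced raw) with known shutter times.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge linear LDR exposures into one HDR radiance map.

    images: list of HxWx3 float arrays in [0, 1], already linearized.
    exposure_times: matching shutter times in seconds.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weighting: distrust under-/over-exposed pixels
        num += w * img / t                  # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)      # weighted average across exposures
```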
The illumination environments were captured by photographing a high-performance ball bearing (using three exposures, as before) from different angles. The images are then unwrapped into spherical panoramas, aligned with one another, and fused into a single seamless panorama.
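Unwrapping a mirror-ball photograph reduces to reflecting the viewing ray off the sphere and binning the reflected direction into latitude-longitude coordinates. The sketch below shows the core mapping under a simplifying orthographic-camera assumption; it illustrates the general technique and is not the database's calibration code.

```python
import numpy as np

def unwrap_mirror_ball(ball_img):
    """Map a centered, tightly cropped mirror-ball image to a latlong panorama.

    Assumes an orthographic view along -z, so the surface normal at
    pixel (x, y) on the unit sphere is (x, y, sqrt(1 - x^2 - y^2)).
    """
    h, w = ball_img.shape[:2]
    pano = np.zeros((512, 1024, 3), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    x = 2.0 * (xs + 0.5) / w - 1.0
    y = 1.0 - 2.0 * (ys + 0.5) / h
    r2 = x**2 + y**2
    valid = r2 <= 1.0
    z = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))
    # Reflect the view direction v = (0, 0, -1) about the normal n = (x, y, z):
    # d = v - 2 (v . n) n, with v . n = -z, gives d = (2xz, 2yz, 2z^2 - 1).
    dx, dy, dz = 2 * x * z, 2 * y * z, 2 * z**2 - 1
    theta = np.arccos(np.clip(dy, -1.0, 1.0))  # polar angle from the up axis
    phi = np.arctan2(dx, -dz)                  # azimuth
    u = ((phi / (2 * np.pi) + 0.5) * pano.shape[1]).astype(int) % pano.shape[1]
    v = (theta / np.pi * (pano.shape[0] - 1)).astype(int)
    pano[v[valid], u[valid]] = ball_img[valid]
    return pano
```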
Note: As can be seen in the images above, in the first illumination environment, “Hall”, the Pig object was purely specular. We then coated it with a diffusing spray so that it would have a more interesting reflectance. In the other two environments it therefore appears glossy rather than mirror-like.
Results
Our current results, obtained with an improved version of the method in the paper cited above, are as follows. All numbers are root-mean-squared (RMS) errors, where the error is measured as the distance from the reconstructed mesh to the ground-truth mesh, expressed as a percentage of the diagonal of the ground-truth mesh’s bounding box. (See the paper for additional details; a minimal sketch of this computation follows the table.)
|         | Horse | Pig  | Shell | Milk |
|---------|-------|------|-------|------|
| Hall    | 1.1%  | 0.7% | 0.7%  | 0.7% |
| Indoor  | 1.0%  | 0.9% | 0.5%  | 0.7% |
| Outdoor | 1.1%  | 0.5% | 1.0%  | 0.7% |
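To make the metric concrete, the following sketch computes the same RMS figure under common assumptions: both meshes are sampled as point clouds, and mesh-to-mesh distance is approximated by the nearest-neighbor distance from each result point to the ground-truth samples. The use of scipy and the file path are illustrative, not part of the database.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_error_percent(result_pts, gt_pts):
    """RMS of result-to-ground-truth distances, as a percentage of the
    ground-truth bounding-box diagonal (the metric reported above)."""
    nn_dist, _ = cKDTree(gt_pts).query(result_pts)  # nearest GT sample per result point
    diag = np.linalg.norm(gt_pts.max(axis=0) - gt_pts.min(axis=0))
    return 100.0 * np.sqrt(np.mean(nn_dist**2)) / diag

# Example usage with trimesh (one way to read the .ply ground truth):
#   import trimesh
#   gt_pts = trimesh.load("3D/horse.ply").vertices
```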