Surface Normals and Shape from Water
Satoshi Murai *, Meng-Yu Jennifer Kuo *, Ryo Kawahara, Shohei Nobuhara, and Ko Nishino
We introduce a novel method for reconstructing the surface normals and depth of dynamic objects in water. Past shape recovery methods have leveraged various visual cues for estimating shape (e.g., depth) or surface normals; methods that estimate both compute one from the other. We show that these two geometric surface properties can be recovered simultaneously for each pixel when the object is observed underwater. Our key idea is to leverage multi-wavelength near-infrared light absorption along different underwater light paths in conjunction with surface shading. We derive a principled theory for this surface normals and shape from water method and a practical calibration method for determining its imaging parameter values. By construction, the method can be implemented as a one-shot imaging system. We prototype both an off-line and a video-rate imaging system and demonstrate the effectiveness of the method on a number of real-world static and dynamic objects. The results show that the method can recover intricate surface features that are otherwise inaccessible.
Surface Normals and Shape from Water
S. Murai *, M-Y. J. Kuo *, R. Kawahara, S. Nobuhara, and K. Nishino,
in Proc. of International Conference on Computer Vision ICCV’19, Oct., 2019. (Oral) ( *Equal contribution)
[ paper ][ supp. material ][ project ][ talk ] (skip to 32:36)
We show that per-pixel surface normals and shape can be simultaneously but separately recovered for an object immersed in water. In other words, we introduce a novel 3D sensing method that directly recovers 3D geometry as oriented points. Underwater 3D reconstruction may sound peculiar and limiting, but it finds significant applications in a wide range of fields including medicine (e.g., endoscopy), biology, oceanography, and archaeology, as well as general surveillance and navigation. Moreover, immersing objects in water for measurement is non-invasive as long as the object is nonabsorbent, and is as practical as other 3D reconstruction methods. Our key idea is to leverage multi-wavelength near-infrared light absorption along different underwater light paths in conjunction with surface shading. We show that surface normals and shape from water requires at least four near-infrared directional light sources, each illuminating the object surface whose radiance is captured with an orthographic camera. When using four light sources, the theory reveals that one of them, which we refer to as the base light source, should lie within the convex cone spanned by the other light sources, and that the remaining light sources can share the same polar angle with respect to the viewing direction as long as they realize different effective absorption coefficients and span a 3D space. Most importantly, we show when and how the depth and surface normals can be separately and uniquely estimated, leading to the identification of preferred combinations of directions and wavelengths of light sources.
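To give an intuition for why four light sources suffice, the recovery can be posed as a small per-pixel inverse problem. The following is a minimal sketch, not the paper's actual solver, assuming a simplified Lambertian image-formation model with Beer–Lambert water absorption along the incident and viewing paths; the light directions, absorption coefficients, and scene values below are illustrative assumptions chosen to match the configuration described above (a base light inside the convex cone of three lights sharing one polar angle but differing in absorption).

```python
import numpy as np
from scipy.optimize import least_squares

# Four directional light sources (unit vectors toward the light). The base
# light lies within the convex cone of the other three, which share the same
# polar angle (30 deg) with respect to the orthographic viewing axis +z.
L = np.array([
    [0.0,    0.0,    1.0],    # base light source
    [0.5,    0.0,    0.866],
    [-0.25,  0.433,  0.866],
    [-0.25, -0.433,  0.866],
])
# Illustrative effective absorption coefficients (1/m), one per NIR wavelength.
alpha = np.array([10.0, 20.0, 40.0, 60.0])

def render(n, z, rho):
    """Lambertian shading attenuated along the underwater light paths.

    With an orthographic camera along z, the viewing path length is the
    depth z, and the incident path length is z / l_z for light direction l.
    """
    shading = L @ n                    # per-light cosine shading
    path = z * (1.0 + 1.0 / L[:, 2])   # incident + viewing path lengths
    return rho * shading * np.exp(-alpha * path)

# Simulated ground truth for one pixel: normal (nx, ny), depth (m), albedo.
nx, ny = 0.2, -0.1
n_true = np.array([nx, ny, np.sqrt(1.0 - nx**2 - ny**2)])
z_true, rho_true = 0.01, 0.8
I = render(n_true, z_true, rho_true)   # the four observed intensities

def residuals(p):
    nx, ny, z, rho = p
    nz = np.sqrt(max(1.0 - nx**2 - ny**2, 0.0))
    return render(np.array([nx, ny, nz]), z, rho) - I

# Four measurements, four unknowns: solve the per-pixel inverse problem.
sol = least_squares(residuals, x0=[0.0, 0.0, 0.005, 1.0],
                    bounds=([-1, -1, 0, 0], [1, 1, 0.1, 5]))
n_rec, z_rec, rho_rec = sol.x[:2], sol.x[2], sol.x[3]
```

Because each pixel yields four equations in four unknowns (two normal components, depth, and albedo), the fit is exactly determined rather than overdetermined; the paper's theory establishes the light-source conditions under which such a system admits a unique solution.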
We demonstrate the effectiveness of our method on a number of static and dynamic real-world objects with complex shapes. We implement the method with two imaging systems: one for off-line capture using an off-the-shelf monochromatic camera with interchangeable near-infrared bandpass filters, and another for video-rate capture using a custom-built 10-bit multi-wavelength camera built by EBA Japan together with four light sources, each fitted with a Fresnel lens and a near-infrared bandpass filter. A visible-spectrum light source is also used to capture texture. Experimental results demonstrate the method's ability to recover intricate details of dynamically changing shapes, which would be challenging for conventional methods. Please see the supplementary video for results.