Depth cameras project a pattern of dots or lines over a scene and calculate the 3D contours within their view by analyzing how the patterns are deformed and how long the light takes to reflect back to the camera. However, these devices operate at low power, so they struggle in brightly lit settings, where ambient light washes out the projected signals the cameras rely on to detect the scene’s contours.
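To make the triangulation half of that computation concrete, here is a minimal sketch in Python of recovering depth from the displacement of a single projected dot. The focal length, baseline, and function name are illustrative assumptions, not values from the researchers’ system.

```python
# Illustrative structured-light triangulation sketch (not the researchers'
# code). A projector casts a dot on the scene; the camera observes where
# the dot lands on the sensor, and depth follows from the shift
# (disparity) between the dot's reference position and its observed
# position. All parameter values below are hypothetical.

FOCAL_LENGTH_PX = 600.0   # assumed camera focal length, in pixels
BASELINE_M = 0.075        # assumed projector-to-camera separation, meters

def depth_from_disparity(disparity_px: float) -> float:
    """Depth in meters for a dot shifted `disparity_px` pixels from its
    reference position; nearer surfaces shift the dot farther."""
    if disparity_px <= 0:
        raise ValueError("dot not found or effectively at infinity")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# With these made-up parameters, a 30-pixel shift implies a surface
# about 1.5 meters away.
print(f"{depth_from_disparity(30.0):.2f} m")
```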
Now the researchers have developed technology that lets depth-sensing cameras record only the light from the spot being illuminated. They built a system from a CMOS camera with an ordinary lens, a timing circuit, and an off-the-shelf laser projector, synchronizing the projector with the camera so that as the laser scans a plane, the camera accepts light only from that plane. “We have a way of choosing the light rays we want to capture and only those rays,” said researcher Srinivasa Narasimhan. “We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor.”
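A toy simulation of that synchronization principle, written in Python under assumed numbers (the sensor size, light levels, and one-row-per-plane exposure model are all hypothetical, not the researchers’ implementation), shows why rejecting light at the sensor works: each row integrates ambient light for only one slice of the frame time but still collects the full laser return for its plane.

```python
import numpy as np

H, W = 480, 640                  # hypothetical sensor resolution
rng = np.random.default_rng(42)

# Strong ambient light arriving at every pixel, per time slice.
ambient = rng.uniform(5.0, 10.0, size=(H, W))

# Weak laser return: while the projector illuminates plane r, one spot
# on sensor row r lights up.
laser = np.zeros((H, W))
laser[np.arange(H), rng.integers(0, W, size=H)] = 2.0

# Conventional exposure: every pixel integrates ambient light across all
# H time slices, burying the faint laser return.
conventional = ambient * H + laser

# Synchronized exposure: row r is exposed only during slice r, so it
# collects one slice of ambient light plus the full laser return.
synchronized = ambient + laser

spot = laser > 0
print(f"spot/background, conventional: "
      f"{conventional[spot].mean() / conventional[~spot].mean():.4f}")
print(f"spot/background, synchronized: "
      f"{synchronized[spot].mean() / synchronized[~spot].mean():.4f}")
```

The printout is the point: in the conventional exposure the laser spot is indistinguishable from the accumulated ambient light, while in the synchronized exposure it stands clearly above the background, consistent with the quote above that the noise is simply never collected.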
The resulting cameras could pick up the contours of objects even in glaring direct light and through smoke. The researchers suggest many potential applications for the improved depth-sensing cameras: medical imaging, improved video gaming, automotive sensors, and industrial quality control, where manufacturers could check shiny components for anomalies. The technology is also particularly well suited to robots exploring space, where extreme darkness and the sun’s glare pose big challenges for cameras. The researchers are presenting their findings this week at SIGGRAPH 2015, the International Conference on Computer Graphics and Interactive Techniques, in Los Angeles.