Researchers, including those of Indian origin, have developed a new 3D imaging technology that allows depth-sensing cameras to work in bright sunlight.
The researchers, from Carnegie Mellon University and the University of Toronto, created a mathematical model for programming these devices so that the camera and its light source work together efficiently. The model eliminates extraneous light, or “noise,” that would otherwise wash out the signals needed to detect a scene’s contours.
“We have a way of choosing the light rays we want to capture and only those rays,” said Srinivasa Narasimhan, associate professor of robotics at Carnegie Mellon University.
One prototype based on this model synchronises a laser projector with a common rolling-shutter camera – the type of camera used in most smartphones – so that the camera detects light only from points being illuminated by the laser as it scans across the scene.
This not only makes it possible for the camera to work in extremely bright light or amid highly reflective or diffuse surroundings, but also makes it extremely energy efficient.
This combination of features could make this imaging technology suitable for many applications, including medical imaging, inspection of shiny parts and sensing for robots used to explore the Moon and planets. It also could be readily incorporated into smartphones.
Depth cameras work by projecting a pattern of dots or lines over a scene. Depending on how these patterns are deformed or how much time it takes light to reflect back, it is possible to calculate the 3D contours of the scene.
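The pattern-deformation approach described above reduces to simple triangulation: the further a projected dot appears shifted in the camera image, the closer the surface. A minimal sketch, with an assumed baseline and focal length (illustrative values, not from the researchers' system):

```python
# Hypothetical structured-light depth recovery via triangulation.
# Assumed setup: projector and camera separated by a known baseline;
# the shift ("disparity") of a projected dot in the image encodes depth.

def depth_from_disparity(disparity_px, baseline_m=0.1, focal_px=800.0):
    """Depth (metres) from the observed shift of a projected dot.

    Larger shifts mean the surface is closer; a vanishing shift
    means the point is effectively at infinity.
    """
    if disparity_px <= 0:
        return float("inf")
    return baseline_m * focal_px / disparity_px

# A dot shifted 40 pixels lies at 0.1 * 800 / 40 = 2.0 m.
print(depth_from_disparity(40))  # 2.0
```

Time-of-flight devices instead multiply the round-trip time of the light by its speed, but the pattern-based geometry above is what the projected dots and lines provide.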
However, these devices use compact projectors that operate at low power, so their faint patterns are washed out when the camera also captures strong ambient light from the scene.
But as a projector scans a laser across the scene, the spots illuminated by the laser beam are brighter, if only briefly, noted Kyros Kutulakos, professor at the University of Toronto in Canada.
“Even though we’re not sending a huge amount of photons, at short time scales, we’re sending a lot more energy to that spot than the energy sent by the sun,” he said.
The trick is to be able to record only the light from that spot as it is illuminated, rather than try to pick out the spot from the entire bright scene.
In the prototype using a rolling-shutter camera, the projector is synchronised so that as the laser scans a plane, the camera accepts light only from that plane.
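A toy model (hypothetical, not the authors' code) shows why this synchronisation helps: if a row's exposure is narrowed to just the moment the laser sweeps past it, the row still collects the full laser pulse but only a tiny slice of the ambient light. All numbers below are illustrative assumptions:

```python
# Toy model of rolling-shutter / laser-sweep synchronisation.
# Assumed figures: 30 fps frame, 480 sensor rows, arbitrary light units.

def row_measurement(laser_energy, ambient_rate, exposure_time):
    """Light collected by one row: the laser pulse plus ambient * exposure."""
    return laser_energy, ambient_rate * exposure_time

frame_time = 1 / 30          # s, one full frame (assumed)
n_rows = 480                 # sensor rows (assumed)
laser_energy = 5.0           # light units delivered while the row is lit
ambient_rate = 1000.0        # ambient light units per second

# Unsynchronised: the row stays open for the whole frame, drowning the laser.
sig, noise = row_measurement(laser_energy, ambient_rate, frame_time)
print(sig / noise)           # 0.15 -> signal buried in ambient light

# Synchronised: the row is open only while the laser sweeps past it.
sig, noise = row_measurement(laser_energy, ambient_rate, frame_time / n_rows)
print(sig / noise)           # 72.0 -> laser dominates
```

In this sketch, synchronisation improves the signal-to-ambient ratio by exactly the number of rows, since each row's exposure shrinks by that factor while the laser signal is unchanged.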
Alternatively, if other camera hardware is used, the mathematical framework can compute energy-efficient codes that optimise the amount of energy that reaches the camera.
In addition to Narasimhan and Kutulakos, the research team included Supreeth Achar, a CMU PhD student in robotics, and Matthew O’Toole, a U of T PhD computer science student.