Scientists develop cheap, accurate 3D camera
Researchers at Northwestern University in the United States have developed a cheap and accurate 3D camera that also works with brightly lit or shiny objects. The camera was shown at the IEEE International Conference on Computational Photography.
On the project's website, the researchers explain that 3D scanning systems like Microsoft's Kinect are very fast, but struggle with objects that are brightly lit or shiny, or that carry a confusing pattern of their own. This is because cheap 3D scanners rely on an infrared light source that projects a dot or stripe pattern onto an object. An adjacent camera captures this image and computes a depth map from the distortions in the pattern. When the environment already emits a lot of light, such as outdoors, the stripe pattern is washed out and the 3D image breaks down. Moreover, the Kinect's 3D images have a relatively low resolution.
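The triangulation behind such pattern-based scanners can be sketched in a few lines of Python. This is only an illustration of the principle: the baseline and focal-length values below are assumptions, not the Kinect's actual calibration.

```python
import numpy as np

def depth_from_pattern_shift(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Estimate depth from the observed shift of a projected pattern.

    Structured-light scanners triangulate depth from how far each
    projected dot appears displaced in the camera image:
        depth = baseline * focal_length / disparity
    The baseline and focal length here are illustrative values only.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.nan)
    valid = disparity_px > 0          # no measurable shift -> no depth estimate
    depth[valid] = baseline_m * focal_px / disparity_px[valid]
    return depth

# Example: a tiny disparity map (in pixels) converted to depth (in metres).
disparities = np.array([[30.0, 25.0],
                        [0.0, 12.5]])
print(depth_from_pattern_shift(disparities))
```

When strong ambient light or a shiny surface corrupts the detected pattern, the disparity estimate itself is wrong, which is exactly why these scanners fail outdoors.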
Another technique for creating 3D images uses laser scanners. These can capture high-resolution images and are not easily thrown off by ambient light. However, the scanners are expensive and consume a lot of power. They also work slowly, which makes them unsuitable for moving scenes.
The camera developed by the researchers is meant to combine the advantages of laser scanners and systems such as the Kinect. The new technology has been named Motion Contrast 3D scanning, or MC3D. The system consists of a camera and a cheap laser projector. The laser illuminates the scene point by point, and the camera registers the changing incidence of light on the object. A 3D image is constructed from the transitions between light and dark areas. A special algorithm excludes pixels that do not change from the calculation, which saves computing power. Because the camera also picks up changes caused by motion in the image, moving scenes pose no problem for the system. Using a laser to illuminate the scene ensures that light sources or glare in the scene do not cause measurement errors.
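The idea can be sketched roughly as follows, assuming a rectified projector-camera pair and a laser that sweeps the scene at a constant rate. Every name and number in this sketch is an illustrative assumption, not the actual MC3D implementation.

```python
# Sketch of the motion-contrast idea: a pixel produces an "event" only when
# its brightness changes, i.e. when the scanning laser sweeps across it.
# The event's timestamp indicates which projector column the laser occupied
# at that moment, and depth then follows from ordinary triangulation.
# All constants below are assumptions for illustration.

BASELINE_M = 0.10       # assumed projector-camera baseline
FOCAL_PX = 600.0        # assumed focal length in pixels
SWEEP_TIME_S = 1 / 60   # assumed duration of one full laser sweep
PROJ_COLUMNS = 1024     # assumed number of projector columns per sweep

def laser_column_at(timestamp_s):
    """Map an event timestamp to the projector column the laser was at."""
    phase = (timestamp_s % SWEEP_TIME_S) / SWEEP_TIME_S
    return phase * PROJ_COLUMNS

def depth_from_event(camera_col, timestamp_s):
    """Triangulate depth for a single pixel event.

    Pixels that never change produce no events at all, so static or
    unlit regions cost no computation, which is where the savings in
    processing power come from.
    """
    disparity = camera_col - laser_column_at(timestamp_s)
    if disparity <= 0:
        return None                     # no valid correspondence
    return BASELINE_M * FOCAL_PX / disparity

# Example: a few events given as (camera column, timestamp in seconds).
events = [(512.0, 0.0052), (300.0, 0.0031), (700.0, 0.0110)]
for col, t in events:
    print(col, depth_from_event(col, t))
```

Because only the brief, laser-induced brightness change is measured at each pixel, steady background light or glare contributes no events and therefore no measurement error, matching the behaviour described above.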
According to the researchers, a cheap, low-power system that works outdoors could find many applications in everyday life. Robots could use it to 'see' depth, it could be built into self-driving cars, and there are applications in augmented reality. The research group received a Faculty Research Award from Google for integrating the technology with Google's self-driving car system.
The research paper can be found here.