Google adds depth calculation to ARCore without separate sensors


Google is equipping its ARCore toolkit with a new API that lets Android smartphones estimate depth in AR applications without a time-of-flight depth sensor or multiple cameras. Occlusion also becomes available.
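For developers, turning the feature on amounts to a session configuration flag. Here is a minimal Kotlin sketch using the depth-mode names from ARCore's public Android SDK (`isDepthModeSupported`, `DepthMode.AUTOMATIC`); treat the exact calls as an assumption based on the SDK Google later shipped, not on this announcement:

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Session

// Sketch: enable ARCore's depth mode where the device supports it.
fun enableDepth(session: Session) {
    val config = session.config
    // Depth is not available on every ARCore-compatible phone.
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
        session.configure(config)
    }
}
```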

According to Google, the Depth API added to ARCore allows developers to introduce occlusion, a technique in which real-world objects partially block the view of a virtual object, for example a virtual cat running behind physical planters. This makes virtual objects feel much more like part of the environment. Occlusion will first be available in Scene Viewer, a developer tool that enables augmented reality in Search, on Android devices compatible with ARCore, of which there are more than 200 million according to Google.
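The occlusion test itself is conceptually simple: for each screen pixel, compare the distance of the real scene to the distance of the rendered virtual object, and hide the virtual pixel wherever the real world is closer. A toy Kotlin sketch of that idea follows; the DEPTH16-style millimeter encoding and the `virtualDepthMm` callback are assumptions for illustration, and a production renderer would do this per fragment on the GPU:

```kotlin
// Sketch of an occlusion test against a DEPTH16-style depth map
// (assumption: depth stored as millimeters in the low 13 bits,
// as in Android's DEPTH16 image format; 0 means "no data").
fun occlusionMask(
    depthMm: ShortArray,                      // real-world depth per pixel
    width: Int,
    virtualDepthMm: (x: Int, y: Int) -> Int   // depth of the rendered AR object
): BooleanArray {
    val mask = BooleanArray(depthMm.size)
    for (i in depthMm.indices) {
        val real = depthMm[i].toInt() and 0x1FFF  // strip confidence bits
        val x = i % width
        val y = i / width
        // The virtual pixel is occluded when something real is closer.
        mask[i] = real in 1 until virtualDepthMm(x, y)
    }
    return mask
}
```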

The Depth API automatically captures multiple images from different angles as the user moves the smartphone, then compares those images to estimate the distance from the camera to every pixel. In a demonstration, a Google employee virtually threw food at an AR robot; the physically present sofa appeared on the smartphone screen and blocked the food when the throw was too low. The depth feature requires only a single smartphone camera and should let developers experiment with real-world physics and interactions with surfaces.
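The underlying geometry is classic motion stereo: as the camera moves, the same point shifts by a disparity in the image, and its distance follows from the focal length and the camera's displacement. A toy Kotlin illustration of that relationship, deliberately simplified to an idealized, rectified image pair (ARCore's depth-from-motion fuses many frames with feature matching and filtering):

```kotlin
import kotlin.math.abs

// Depth from disparity: two views of the same point, taken from
// slightly different camera positions, yield a pixel shift from
// which distance follows as depth = focal * baseline / disparity.
fun depthFromDisparity(
    focalLengthPx: Double,   // camera focal length in pixels
    baselineMeters: Double,  // distance the camera moved between views
    disparityPx: Double      // horizontal pixel shift of the same point
): Double {
    require(abs(disparityPx) > 1e-6) { "point at infinity" }
    return focalLengthPx * baselineMeters / disparityPx
}

fun main() {
    // e.g. a 500 px focal length, 5 cm of hand motion, and 10 px of
    // disparity put the point at 2.5 m from the camera.
    println(depthFromDisparity(500.0, 0.05, 10.0))  // 2.5
}
```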

For now, the Depth function is only available via Google Search and the American interior design app Houzz. ARCore can be regarded as a successor to Google's Tango project and is more or less the counterpart of Apple's ARKit; both are toolkits for building all kinds of augmented reality applications on smartphones.
