ARCore’s New Depth API Helps Create Depth Maps Using Single Cameras


ARCore is Google’s platform for augmented reality, available to Android and iOS apps. To make experiences more immersive, the new ARCore Depth API lets any compatible phone create depth maps from a single camera.




The ARCore Depth API lets developers use Google’s depth-from-motion algorithms to create a depth map with a single RGB camera. As you move your phone, the API captures multiple images from different angles and compares them to estimate the distance to every pixel.
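For developers, that depth data is exposed through the ARCore session. The snippet below is a minimal sketch assuming the ARCore SDK for Android's Java API, where depth is enabled through Config.DepthMode and per-frame depth is read with Frame.acquireDepthImage(); the exact surface in the early-access build may differ. It enables depth on a session and grabs the latest depth image, where each pixel holds a distance estimate in millimeters.

```java
import android.media.Image;

import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.NotYetAvailableException;

public final class DepthHelper {

  /** Turns on depth-from-motion for the session when the device supports it. */
  public static void enableDepth(Session session) {
    Config config = session.getConfig();
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
      config.setDepthMode(Config.DepthMode.AUTOMATIC);
    }
    session.configure(config);
  }

  /**
   * Returns the latest depth image for the frame, or null while the API is
   * still gathering enough camera motion to estimate depth.
   */
  public static Image acquireDepth(Frame frame) {
    try {
      // DEPTH16 image: one 16-bit distance estimate, in millimeters, per pixel.
      return frame.acquireDepthImage();
    } catch (NotYetAvailableException e) {
      return null;
    }
  }
}
```

The caller is responsible for closing the returned Image once it has been read, since camera image buffers are a limited resource.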
One principle that makes AR more realistic is occlusion, or the “ability for digital objects to accurately appear in front of or behind real-world objects.” It lets applications ensure virtual objects are not left floating in space or placed in physically impossible positions. This is particularly useful for apps that let you preview furniture in your living room.
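Conceptually, occlusion is a per-pixel depth comparison: if the real world is closer to the camera than the virtual object at a given pixel, the virtual object should be hidden there. The hypothetical helper below illustrates the idea in Java by sampling the DEPTH16 image; in a real app this comparison happens on the GPU in a fragment shader rather than pixel by pixel on the CPU.

```java
import android.media.Image;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public final class OcclusionSketch {

  /** Reads the estimated real-world distance, in millimeters, at pixel (x, y). */
  public static int depthMillimetersAt(Image depthImage, int x, int y) {
    Image.Plane plane = depthImage.getPlanes()[0];
    int byteIndex = x * plane.getPixelStride() + y * plane.getRowStride();
    ByteBuffer buffer = plane.getBuffer().order(ByteOrder.LITTLE_ENDIAN);
    // Each DEPTH16 pixel is an unsigned 16-bit distance in millimeters.
    return Short.toUnsignedInt(buffer.getShort(byteIndex));
  }

  /**
   * A virtual object is occluded at this pixel when the real scene is closer
   * to the camera than the object is.
   */
  public static boolean isOccluded(Image depthImage, int x, int y, float virtualDepthMeters) {
    float realDepthMeters = depthMillimetersAt(depthImage, x, y) / 1000f;
    return realDepthMeters < virtualDepthMeters;
  }
}
```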




One example is an updated version of the AR animals in Google Search. The cat, with its hind legs hidden, appears behind the furniture instead of just standing in front of wherever you point the camera. The experience treats the background as a 3D scene with depth rather than a flat surface. This more realistic experience will begin rolling out to some of the 200 million ARCore-enabled Android devices with the Google app today.
This single-camera approach lowers the barrier to entry because it doesn't require specialized cameras or sensors. That said, the Depth API will only improve as phone hardware does. For example, adding depth sensors, such as time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps, improving existing capabilities like occlusion and unlocking new ones such as dynamic occlusion: the ability to occlude virtual objects behind moving real-world objects.
Google is inviting developers to collaborate on the new ARCore Depth API starting today.
