There are a few enabling technologies at work here, including the AR Cloud, which could share data from your social graph. © Re’flect
Yesterday, Ori Inbar’s guest post The Search Engine Of AR introduced the idea of the AR Cloud. Today, Matt Miesnieks follows up on the implications and challenges of building it.
If you were asked what the single most valuable asset in the tech industry is today, you’d probably answer that it’s Google’s search index, Facebook’s social graph, or maybe Amazon’s supply chain system. I believe that in 15 years’ time there’ll be another asset at least as valuable as these: the AR Cloud, which doesn’t exist today.
Will one company eventually own the AR Cloud? History says probably. Will it be a new company? Also probably. Just as it was hard to imagine Microsoft losing its position in 1997, it’s hard to imagine in 2017 Google or Facebook losing theirs. But nothing is guaranteed. I’ll try to lay out the arguments supporting each of the three sides in play here (incumbents, startups, and the open web).
ARKit was released in September 2017, and ARCore (the successor to Tango, which dates back to 2014) followed soon after. After much hype and excitement, we have found that surface detection and the ease with which we can place a digital object in the world are basically the limits of what ARKit can do on iPhones right now. Not much. The key piece of infrastructure needed to let the phone understand places, people, and context is missing.
What is the AR Cloud?
To get beyond ARKit and ARCore we need to start thinking bigger than ourselves. How do other people on other types of AR devices join us and communicate with us in AR? How do our apps work in areas bigger than our living room? How do our apps understand and interact with the world? How can we leave content for other people to find and use? To deliver these capabilities we need cloud-based software infrastructure for AR. In the previous part of this chapter, my Super Ventures partner Ori Inbar (co-founder of AWE) refers to all this stuff as the AR Cloud.
The AR Cloud can be thought of as a machine-readable 1:1 scale model of the real world. Our AR devices are the real-time interface to this parallel virtual world, which is perfectly overlaid onto the physical world.
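To make that idea concrete, here is a toy sketch of what the lookup contract of such a shared model could look like. All names here are hypothetical, and a real AR Cloud would localize devices against point-cloud feature maps rather than raw GPS coordinates; this only illustrates the "place content, find content nearby" contract.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """A piece of persistent AR content pinned to a real-world position."""
    content_id: str
    lat: float
    lon: float

@dataclass
class ARCloudIndex:
    """Toy spatial index. A real system matches camera images against a
    shared 3D feature map; the query interface is conceptually similar."""
    anchors: list = field(default_factory=list)

    def place(self, anchor: Anchor) -> None:
        self.anchors.append(anchor)

    def nearby(self, lat: float, lon: float, radius_m: float = 50.0):
        """Return anchors within radius_m metres of the query position."""
        def dist_m(a: Anchor) -> float:
            # Equirectangular approximation; fine over short distances.
            dx = math.radians(a.lon - lon) * math.cos(math.radians(lat))
            dy = math.radians(a.lat - lat)
            return 6371000.0 * math.hypot(dx, dy)
        return [a for a in self.anchors if dist_m(a) <= radius_m]

cloud = ARCloudIndex()
cloud.place(Anchor("coffee-shop-menu", 37.7750, -122.4194))
cloud.place(Anchor("museum-exhibit", 37.8000, -122.4000))
# A device standing near the coffee shop sees only the nearby anchor:
visible = cloud.nearby(37.7750, -122.4194)
```

The essential point is that the index outlives any one device session, which is exactly what today's on-device SDKs lack.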
Why all the “meh” from the press for ARKit & ARCore?
When ARKit was announced at WWDC this year, Apple Chief Executive Tim Cook touted augmented reality, telling analysts: “This is one of those huge things that we’ll look back at and marvel on the start of it.”
A few months went by. Developers worked hard on the next big thing, but the reaction to ARKit at the iPhone launch keynote was “meh”. Why was that?
It’s because ARKit & ARCore are currently at version 1.0. They only give developers three very simple AR tools: (1) The phone’s 6DoF pose, with new coordinates each session; (2) a partial & small ground plane; and (3) a simple average of the scene lighting.
In our excitement over seeing one of the hardest technical problems solved (robust 6DoF pose from a solid VIO system), and hearing Tim Cook say the words “augmented” and “reality” together on stage, we overlooked the fact that you really can’t build anything too impressive with just those three tools. The biggest problem is that people expected amazing apps before the full set of tools to build them existed. Note that it’s not the if but the when that we’ve gotten wrong.
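The first of those three tools illustrates the limitation best. A tracker reports the phone's pose in coordinates whose origin is wherever the session happened to start, so the same physical spot gets different coordinates in every session. The sketch below (hypothetical names, and a simplification: real sessions also differ in orientation, not just origin) shows why content can't persist or be shared without something like the AR Cloud.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def to_session_coords(world_point: Vec3, session_origin: Vec3) -> Vec3:
    """ARKit/ARCore-style trackers report positions relative to wherever
    the device started tracking, not to any shared world frame."""
    return tuple(w - o for w, o in zip(world_point, session_origin))

# The same physical landmark, seen from two sessions started in
# different places:
landmark: Vec3 = (2.0, 0.0, 5.0)  # hypothetical "true" world position
session_a = to_session_coords(landmark, (0.0, 0.0, 0.0))
session_b = to_session_coords(landmark, (1.0, 0.0, 3.0))

# session_a and session_b disagree, yet both describe the same spot.
# Without a shared, persistent map, neither device can tell that.
```

This is why a virtual object placed today is gone tomorrow, and why two phones in the same room can't see the same content.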
Source: Forbes