How Steve Jobs Set Apple On Course To Rule AR

Apple last month launched what could soon end up being the largest augmented reality (AR) platform in the world, and its rivals may have a tough time answering in kind. Apple, it turns out, is probably the only company that could launch an AR platform with the kind of performance and broad consumer reach of its offering, ARKit. That’s because it controls so many parts of the software and hardware involved–just like Jobs envisioned.
 
While Google has spent years working on its own AR platform, Tango, having opened it up to developers back in 2014, Apple now seems poised to swoop in and carry AR into the mainstream, leaving the kids from Mountain View in the dust.
 
The AR apps developers create for ARKit will immediately work on many millions of iPhones out in the wild. No special 3D sensors are required, just the iPhone's camera and an A9 or A10 chip inside.
 
Tango AR, meanwhile, works only on a couple of phones–Lenovo's Phab 2 Pro and Asus's ZenFone AR. Tango may be more accurate at mapping objects and rooms because its host devices carry active 3D sensors that measure the precise locations of surfaces and objects within the frame. Those devices also need a large battery to power the 3D lasers.
 
“Google must realize they dramatically overshot with Tango,” says IDC analyst Tom Mainelli, referring to the platform’s rigorous hardware requirements.
 
ARKit, on the other hand, requires no special 3D sensors on the back of the iPhone. ARKit apps can place digital objects in mid-air and on horizontal surfaces. What ARKit can't yet do is attach content to real-world objects in the frame. Nor can it make digital content interact with more complex surfaces (it can't make a virtual car race across the floor and bounce off a piece of furniture, for example).
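To give a sense of how little a developer needs to do, here is a minimal sketch of enabling ARKit's horizontal-plane detection in an iOS app. It assumes a view controller that owns an `ARSCNView`; the class name and the translucent-quad visualization are illustrative choices, not anything Apple prescribes, and the code only runs on an ARKit-capable device (A9 chip or later).

```swift
import UIKit
import ARKit
import SceneKit

// Illustrative sketch: run a world-tracking session that detects
// horizontal planes, and mark each plane ARKit finds.
class PlaneViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self

        // World tracking needs only the camera and motion sensors.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal   // floors, tables, desks
        sceneView.session.run(config)
    }

    // Called when ARKit anchors a newly detected horizontal plane.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        // Visualize the plane's current extent as a translucent quad.
        let quad = SCNPlane(width: CGFloat(plane.extent.x),
                            height: CGFloat(plane.extent.z))
        quad.firstMaterial?.diffuse.contents =
            UIColor.cyan.withAlphaComponent(0.3)
        let quadNode = SCNNode(geometry: quad)
        quadNode.eulerAngles.x = -.pi / 2     // lay flat on the surface
        node.addChildNode(quadNode)
    }
}
```

Note what's absent: no depth-sensor setup, no calibration step. That contrast with Tango's hardware requirements is the whole point.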
 
Despite those limitations, the quality of the AR we’ve seen developed on ARKit is surprisingly good.


[Photo: courtesy of Apple]
 
WHY ARKIT WORKS
 
The secret sauce isn’t much of a secret. ARKit experiences look compelling because they are supported by several key components of the phone–the camera, the processors, and the phone’s various inertial sensors (accelerometer, gyroscope, etc.)–all working in tight concert to precisely determine the position of the iPhone relative to the environment as the user moves it around. Apple can do this because all the components are made by the same company, Apple.
 
Apple can bring a lot of processing power to bear, too. ARKit processing happens on the A9 or A10 system-on-a-chip (SOC) within the iPhone. The dual-core application processor on the SOC works with both the M9 motion processing chip (which processes data coming in from the motion sensors) and with an image signal processing chip (which processes image data coming in from the camera). This all happens extremely quickly and in a way that doesn’t rapidly drain the battery or overheat the phone, explains Jay Wright, president of PTC’s Vuforia AR platform.
 
Apple says millions of computer vision computations are already happening during ARKit experiences on the iPhone. Right now, however, “computer vision” software can only understand the rough outlines of objects in AR settings. “This is because it is not yet capable of determining an accurate 3D position or the accurate contours of an object,” Wright explains. “Both are required for the more interesting AR use cases.”
 
Right now the image signal processor (ISP) does much of the computer vision processing needed to recognize imagery seen through the camera lens. Apple is said to be working on a dedicated AI processor, which might take over the bulk of the image recognition duties for AR. Other reports have said Apple is working on its own graphics processing unit (GPU), the kind of chip most commonly used for heavy-duty computation and AI algorithms.


 

Source: FAST COMPANY
