James Kane discusses tomorrow’s metaversal inputs today
Widespread consumer adoption of XR devices will redefine how humans interact with both technology and each other. In the coming decades, the standard mouse and QWERTY keyboard may fade as the dominant computing UX, giving way to holographic UI, precise hand/eye/body-tracking and, eventually, powerful brain-to-computer interfaces. One key UX question designers and developers must answer is: how to input?
That is, by what means does a user communicate and interact with your software and to what end? Aging 2D input paradigms are of limited use, while new ones are little understood or undiscovered altogether. Further, XRI best practices will vary widely per application, use case and individual mechanic.
The mind reels. Though these interaction patterns will become commonplace in time, right now we’re very much living through the “Cinema of Attractions” era of XR tech. As such, we’re privileged to witness the advent of a broad range of wildly creative immersive design solutions, some as fantastic as they are impractical. How have industry best practices evolved?
Controllers
These may seem pedestrian, but it’s easy to forget that the first controllers offering room-scale, six-degrees-of-freedom (6-DoF) tracking only hit the market in 2016 (first Vive’s Wands, then Oculus’ more ergonomic Touch, followed by Windows Mixed Reality’s muddled bastardization of the two in 2017). With 6-DoF XR likely coming to mobile and standalone systems in 2018, where are controller interfaces headed?
Well, Vive has been developing its “Grip” controllers (aka the “Knuckles” controllers) — which are worn as much as held, allowing users freer gestural tracking and expression — for over a year, but they were conspicuously excluded from the CES launch announcement of the Vive Pro.
One controller trend we did see at CES: haptics. Until now, handheld inputs have largely relied on generic vibration for haptic feedback. The strength of the rumble can be throttled up or down, but with only a single vibratory output to work with, developers’ power to express information through physical feedback has been limited. It’s a challenging problem: how do you simulate physical resistance where there is none?
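To make that limitation concrete, here’s a minimal sketch assuming the WebXR Gamepads Module and its type definitions (an API none of the vendors below are tied to, and pulse() is Chromium’s non-standard extension): intensity and duration are effectively the only expressive knobs available.

```typescript
// Minimal sketch: single-channel controller rumble via the WebXR Gamepads
// Module. Intensity is all we can vary; there is no way to express texture,
// direction or resistance with this channel alone.

interface RumbleActuator {
  // Chromium's non-standard pulse(intensity 0..1, durationMs).
  pulse(value: number, duration: number): Promise<boolean>;
}

function pulseOnContact(session: XRSession, contactForce: number): void {
  // Map a 0..1 "contact force" from the app's own physics onto rumble strength.
  const intensity = Math.min(Math.max(contactForce, 0), 1);

  for (const source of session.inputSources) {
    const gamepad = source.gamepad as {
      hapticActuators?: ReadonlyArray<RumbleActuator>;
    } | null | undefined;
    // One actuator, one dimension of feedback: the limitation described above.
    gamepad?.hapticActuators?.[0]?.pulse(intensity, 50);
  }
}
```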
Left: the HaptX Glove, Right: the Tactical Haptics Reactive Grip Motion Controller
HaptX Inc. is one firm leading advances in this field with its HaptX Gloves, a pair of Nintendo Power Glove-style devices featuring tiny air pockets that dynamically expand and contract to provide simulated touch and pressure in VR in real time. All reports indicate some truly impressive tech demos, though perhaps at the cost of form factor — the hardware involved looks heavy-duty, and removing the glove would appear to be several degrees more difficult than setting down a Vive Wand.
Theirs strikes me as a specialty solution, perhaps better suited to location-based VR or commercial/industrial applications. (Hypothetical: would a Wand/Touch-like controller with this type of actuator built into the grips provide any UX benefit at the consumer level?) Meanwhile, Tactical Haptics is exploring this tech through a different lens, using a series of sliding plates and ballasts in its Reactive Grip Motion Controller, which tries to simulate some of the physical forces and resistance one feels wielding objects with mass in meatspace. This is perhaps a more practical haptics approach for consumer adoption — they’re still simple controllers, but the added illusion of physical force could be a truly compelling XRI mechanic (for more, check out their white paper on the tech).
Hand-Tracking
Who needs a controller? For some XR applications, the optimal UX will take advantage of the same built-in implements with which humans have explored the material world for thousands of years: their hands.
Tracking a user’s hands in real time across 27 degrees of freedom (four per finger, five in the thumb, six in the wrist), absent any handheld implement, lets them interact with physical objects in their environment as they normally would (useful in MR contexts) — or interact with virtual assets and UI in a more natural, frictionless and immersive way than, say, pulling a trigger on a controller.
And of course, I defy you to test such software without immediately making rude gestures with it.
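For a sense of what a controller-free “select” looks like in code, here’s a rough sketch of the hand-tracking equivalent of a trigger pull, a thumb-to-index pinch, written against the WebXR Hand Input API (which exposes 25 tracked joint poses per hand); the threshold is arbitrary and this is nowhere near a production gesture recogniser.

```typescript
// Sketch: a controller-free "select". A pinch of thumb tip to index fingertip
// stands in for the trigger pull, using joint poses from the WebXR Hand Input
// API (25 tracked joints per hand).

function isPinching(
  frame: XRFrame,
  hand: XRHand,
  refSpace: XRReferenceSpace,
  thresholdMetres = 0.015, // arbitrary ~1.5 cm pinch threshold
): boolean {
  const thumbTip = hand.get('thumb-tip');
  const indexTip = hand.get('index-finger-tip');
  if (!thumbTip || !indexTip) return false;

  const a = frame.getJointPose(thumbTip, refSpace);
  const b = frame.getJointPose(indexTip, refSpace);
  if (!a || !b) return false; // joints may be momentarily untracked

  // Straight-line distance between the two fingertips, in metres.
  const dx = a.transform.position.x - b.transform.position.x;
  const dy = a.transform.position.y - b.transform.position.y;
  const dz = a.transform.position.z - b.transform.position.z;
  return Math.hypot(dx, dy, dz) < thresholdMetres;
}
```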
Eye-Tracking & Brain-Computer Interfaces
It will likely be decades before you’re able to dictate an email via inner monologue or directly drive a cursor with your thoughts — and who knows whether such sensitive operations will even be possible without invasive surgery to hack directly into the wetware. (Yes, that was a fun and terrifying sentence to write). I would wager, however, that an eye-tracking-based cursor combined with “click” or “select” actions driven by an external BCI will become possible within a few hardware generations, and may well end up being the fastest, most natural input in the world.
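To illustrate just the division of labour that pairing implies (and nothing more; every type and signal below is hypothetical, since no such consumer API exists today):

```typescript
// Speculative sketch only: the eye supplies a continuous cursor, while a
// non-invasive BCI contributes a single low-bandwidth, discrete "select"
// signal. All interfaces here are invented for illustration.

interface GazeSample { x: number; y: number; confidence: number }
interface BciEvent { kind: 'select' | 'none'; confidence: number }

function fuseGazeAndBci(
  gaze: GazeSample,
  bci: BciEvent,
  onSelect: (x: number, y: number) => void,
): void {
  // Asking the BCI only a yes/no question is far more tractable than decoding
  // free-form thought; the eye tracker does the heavy lifting of pointing.
  if (gaze.confidence > 0.8 && bci.kind === 'select' && bci.confidence > 0.9) {
    onSelect(gaze.x, gaze.y);
  }
}
```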
Machine Learning
Imagine an AI-powered XR OS a decade from now: one that can utilise and analyse all of the above inputs, divining user intent and taking action on the user’s behalf. One that, if unsure of itself, can seek clarification in natural language or in a hundred other ways. It could learn your likes and dislikes through experience and observation as easily as you might a new friend’s, constructing a model of your overall XR interaction preferences — with the AI itself, with other humans, and with the virtual realities you visit and the physical ones you augment. This system will, at the very least, be able to model and emulate human social graces and friendship.
Any such system will also have unparalleled access to your most sensitive personal and biometric data. The security, privacy and ethical concerns involved will be enormous and should be given all due consideration. In his talk on XR UX at Unity HQ last fall, Unity Labs designer and developer Dylan Urquidi said he sees blockchain technology as a possible medium for context-aware, OS-level storage of these kinds of permissions and preferences. This would allow ultimate ownership and decision-making power over that data to remain with the user, who could allow or deny access to individual applications and subsystems as desired.
I’m currently working on a VR mechanic using a neural net trained from Google QuickDraw data to recognize basic shapes drawn with Leap Motion hand-tracking — check out my next piece for more.
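Purely as an illustration of that general pipeline (not the actual implementation), a TensorFlow.js-flavoured sketch might flatten the fingertip stroke to 2D, rasterise it into the 28×28 bitmaps QuickDraw-style classifiers expect, and ask a pre-trained model which shape it resembles; the model path and label list below are placeholders.

```typescript
import * as tf from '@tensorflow/tfjs';

// Sketch: classify a shape drawn in mid-air with a tracked fingertip.
// The model URL and label set are placeholders for illustration only.

type Point3 = { x: number; y: number; z: number };

// Drop depth, normalise the stroke, and rasterise it into a 28x28 bitmap,
// the input format QuickDraw-trained classifiers typically expect.
function rasterise(stroke: Point3[], size = 28): Float32Array {
  const grid = new Float32Array(size * size);
  const xs = stroke.map(p => p.x);
  const ys = stroke.map(p => p.y);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  const scale = Math.max(Math.max(...xs) - minX, Math.max(...ys) - minY) || 1;

  for (const p of stroke) {
    const col = Math.min(size - 1, Math.floor(((p.x - minX) / scale) * (size - 1)));
    const row = Math.min(size - 1, Math.floor(((p.y - minY) / scale) * (size - 1)));
    grid[row * size + col] = 1; // mark each cell the stroke passes through
  }
  return grid;
}

const LABELS = ['circle', 'square', 'triangle']; // placeholder label set

async function classifyStroke(stroke: Point3[]): Promise<string> {
  const model = await tf.loadLayersModel('/models/quickdraw/model.json'); // placeholder path
  const input = tf.tensor4d(rasterise(stroke), [1, 28, 28, 1]);
  const scores = model.predict(input) as tf.Tensor;
  const best = (await scores.argMax(-1).data())[0];
  return LABELS[best];
}
```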