Leap Motion Tests Controller-free Input For VR

There’s an intuitive appeal to using controller-free hand-tracking input like Leap Motion’s; there’s nothing quite like seeing your virtual hands and fingers move just like your own hands and fingers without the need to pick up and learn how to use a controller. But reaching out to touch and interact in this way can be jarring because there’s no physical feedback from the virtual world. When your expectation of feedback isn’t met, it can be unclear how best to interact with this new non-physical world. In a series of experiments, Leap Motion has been exploring how visual design can make controller-free hand input more intuitive and immersive. Leap Motion’s Barrett Fox and Martin Schubert explain:


Exploring the Hand-Object Boundary in VR
 
When you reach out and grab a virtual object or surface, there’s nothing stopping your physical hand in the real world. To make physical interactions in VR feel compelling and natural, we have to play with some fundamental assumptions about how digital objects should behave. This is usually handled by having the virtual hand penetrate the geometry of that object/surface, resulting in visual clipping. But how can we take these interactions to the next level?
 
With interaction sprints at Leap Motion, our team sets out to identify areas of interaction that developers and users often encounter, and set specific design challenges. After prototyping possible solutions, we share our results to help developers tackle similar challenges in their own projects.


This execution felt really good across the board. When the glow strength and depth were turned down to a minimum level, it seemed like an effect that could be applied universally across an application without being overpowering.
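The glow effect described above scales with how deeply a fingertip sits inside a surface. A minimal sketch of that mapping (the depth threshold and glow ceiling here are illustrative tuning values, not Leap Motion’s actual parameters):

```python
def glow_intensity(penetration_depth, max_depth=0.02, max_glow=0.4):
    """Map fingertip penetration depth (in meters) to a clamped glow strength.

    max_depth and max_glow are hypothetical tuning values: depth saturates
    at max_depth, and the glow never exceeds max_glow, keeping the effect
    subtle enough to apply app-wide without being overpowering.
    """
    if penetration_depth <= 0.0:
        return 0.0  # fingertip is outside the surface: no glow
    return max_glow * min(penetration_depth / max_depth, 1.0)
```

Clamping at both ends is the key design choice: the effect fades in smoothly on contact and plateaus rather than growing unbounded as the hand pushes through geometry.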
 
Experiment #2: Fingertip Gradients for Proximity to Interactive Objects and UI Elements


This experiment definitely helped us judge the distance between our fingertips and interactive surfaces more accurately. In addition, it made it easier to know which object we were closest to touching. Combining this with the effects from Experiment #1 made the interactive stages (approach, contact, and grasp vs. intersect) even clearer.
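The fingertip gradient can be thought of as a simple color blend driven by distance to the nearest interactive surface. A minimal sketch, assuming a linear falloff over a hypothetical 10 cm range (the colors and range are illustrative, not Leap Motion’s values):

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB tuples by factor t in [0, 1]."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def fingertip_color(distance, max_range=0.1,
                    far_color=(1.0, 1.0, 1.0), near_color=(0.2, 0.6, 1.0)):
    """Blend a fingertip toward a highlight color as it nears a surface.

    distance is the fingertip-to-surface distance in meters; beyond
    max_range the fingertip keeps its default far_color.
    """
    t = 1.0 - min(max(distance / max_range, 0.0), 1.0)
    return lerp(far_color, near_color, t)
```

Because the blend is continuous rather than a binary “touching / not touching” state, the user gets a preview of which object they are closest to before contact actually happens.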
 
Experiment #3: Reactive Affordances for Unpredictable Grabs

Image courtesy Leap Motion
 
How do you grab a virtual object? You might create a fist, or pinch it, or clasp the object. Previously we’ve experimented with affordances – like handles or hand grips – hoping these would help guide users in how to grasp them.


Three raycasts per finger (and two for the thumb) that check for hits on the sphere.
 
Pushing this concept of reactive affordances even further, we thought: what if, instead of making the object deform in response to hand/finger penetration, the object could anticipate your hand and carve out finger holds before you even touched the surface?


These effects made grabbing an object feel much more coherent, as though our fingers were being invited to intersect the mesh. Clearly this approach would need a more complex system to handle objects other than a sphere – for parts of the hand that are not fingers, and for merging holes when fingers come very close to each other. Nonetheless, the concept of reactive affordances holds promise for resolving unpredictable grabs.
 
Hand-centric design for VR is a vast possibility space—from truly 3D user interfaces to virtual object manipulation to locomotion and beyond. As creators, we all have the opportunity to combine the best parts of familiar physical metaphors with the unbounded potential offered by the digital world. Next time, we’ll really bend the laws of physics with the power to magically summon objects at a distance!

 

Source: Road to VR
