Making VR Controllers From The Ground Up

Foreword
 
This is a project I worked on as my senior project during my final year of high school. And yes, it does look very similar to the PS Move; however, the goal of the project was not to have usable controllers for VR, but rather to build a solution from the ground up. Admittedly, the quality of the final result could have been much better, but I did accomplish my goal and I ended up learning quite a bit. It should also be noted that this project was completed in March, before the Vive was released, using a DK2 on Linux.
 
Drafting
 
My initial goal was a pair of controllers for use in VR, with both hands tracked in position and rotation. On top of that, I decided the ideal input layout would involve at least one analog joystick for movement (since an analog joystick can report partial deflection in any direction), one button per finger for binary finger input, and a few general-purpose buttons.
 
For position and rotation tracking, one possibility I investigated was pure IMU-based tracking: integrating acceleration twice for position and rotational velocity once for rotation. After actually investigating this, however, it was quickly apparent that it would not work, due to the limited accuracy and relatively slow sampling rate of even modern IMUs; an accurate position can only be obtained by integrating accurate samples extremely quickly, and any noise or bias accumulates with each integration. Without fast and accurate enough sampling, the only way to use IMUs is with some form of correction. For rotation, the accelerometer can tell the direction of gravity while the device is still, and a magnetometer can tell magnetic north, so rotational tracking is in a relatively accurate state, with several known and tested fusion algorithms available. Position tracking, however, has no comparable reference to correct against with IMUs alone.
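To illustrate why the double-integration approach falls apart, here is a toy simulation (the function name and noise figure are my own, purely illustrative): the device sits perfectly still, yet small zero-mean noise on each accelerometer sample still accumulates into meters of drift once integrated twice.

```python
import random

def integrate_position(accel_samples, dt):
    """Double-integrate acceleration samples into a position estimate."""
    velocity = 0.0
    position = 0.0
    for a in accel_samples:
        velocity += a * dt
        position += velocity * dt
    return position

random.seed(0)
dt = 1.0 / 100.0  # assume a 100 Hz IMU
seconds = 10
# The device is actually stationary: true acceleration is zero, but each
# sample carries a small zero-mean noise term (0.05 m/s^2 std dev here).
samples = [random.gauss(0.0, 0.05) for _ in range(int(seconds / dt))]
drift = integrate_position(samples, dt)
print(f"Position drift after {seconds}s while stationary: {drift:.3f} m")
```

Because the velocity error itself random-walks, the position error grows roughly with time squared, which is why correction against an external reference is essential.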
 
My second idea ended up being similar to what FreeTrack does: using LEDs and a camera to determine position. Given a known distance between two LEDs, it would be possible to calculate depth, and from that a position. While looking into the process for this, however, I found that tracking two hands at the same time would be relatively complicated, and that it would be extremely easy to block the LEDs whenever a controller faced left or right relative to the camera, or was rotated in a direction not facing the camera. I did, however, accidentally discover a solution which would work.
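FreeTrack's actual math is more involved, but the core of the known-LED-separation idea is just similar triangles under a pinhole camera model. A minimal sketch (function name and numbers are mine, assuming a focal length expressed in pixels):

```python
def depth_from_led_pair(pixel_separation, real_separation_mm, focal_length_px):
    """Pinhole model: apparent size shrinks linearly with distance,
    so Z = f * real_size / apparent_size."""
    return focal_length_px * real_separation_mm / pixel_separation

# Two LEDs 50 mm apart appearing 100 px apart, with an assumed
# focal length of 600 px, put the controller 300 mm from the camera.
z_mm = depth_from_led_pair(100.0, 50.0, 600.0)
print(z_mm)  # → 300.0
```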
 
Infrared Diffusion
 
The New 3DS’s face tracking works using a camera without an IR filter, paired with an infrared LED next to the camera which lights the user’s face in dark or nighttime environments. Experimenting with my DK2 camera, I was able to examine the light patterns this LED produced, and to experiment with diffusing the infrared light through different objects, such as a green bouncy ball. To test this, I drafted a basic OpenCV script in Python which was able to detect my infrared-lit ball:

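The original script isn't reproduced here, but the core idea — threshold the brightest region of the IR frame and take its centroid — can be sketched in plain Python without OpenCV (the function name and the synthetic frame are mine; in the real script the equivalent work would be done with OpenCV's threshold and moments operations):

```python
def find_bright_blob(image, threshold=200):
    """Return the centroid (x, y) of all pixels at or above `threshold`,
    or None if nothing is bright enough.

    `image` is a list of rows of 8-bit grayscale values, standing in
    for a frame from the IR camera.
    """
    xs = ys = count = 0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value >= threshold:
                xs += x
                ys += y
                count += 1
    if count == 0:
        return None
    return xs / count, ys / count

# A 5x5 frame with a bright 2x2 patch standing in for the IR-lit ball.
frame = [[0] * 5 for _ in range(5)]
for y in (1, 2):
    for x in (2, 3):
        frame[y][x] = 255
print(find_bright_blob(frame))  # → (2.5, 1.5)
```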

Once my Styrofoam spheres had arrived, I began extending my Python script into something which could actually track position. For the most part this involved a lot of tuning in OpenCV to best isolate my infrared sphere, especially in environments with a good amount of existing infrared light (incandescent lightbulbs, sunlight, etc.). Once I had my sphere isolated, I spent some time figuring out the mapping from camera-pixel X and Y to real-life millimeters, the relationship between the sphere’s apparent radius and its Z position, and how that Z position in turn scales X and Y. I ended up getting some funky ratios which worked surprisingly well, however they probably aren’t *really* that accurate. In my own tests, though, they seemed accurate enough, so I went with them.
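The exact ratios the script used aren't given, but the relationships described — radius to Z, then Z scaling X and Y — match the standard pinhole-camera geometry. A sketch under that assumption (function names and the intrinsics values are mine; real values would come from calibrating the DK2 camera):

```python
def z_from_radius(radius_px, sphere_radius_mm, focal_length_px):
    """Similar triangles: the sphere's apparent radius shrinks
    linearly with depth, so Z = f * r_real / r_apparent."""
    return focal_length_px * sphere_radius_mm / radius_px

def pixel_to_mm(u, v, z_mm, cx, cy, focal_length_px):
    """Back-project a pixel (u, v) to camera-space X/Y in mm.
    The further away the sphere is, the more mm one pixel covers."""
    x_mm = (u - cx) * z_mm / focal_length_px
    y_mm = (v - cy) * z_mm / focal_length_px
    return x_mm, y_mm

# Placeholder intrinsics: 600 px focal length, image center (320, 240).
f_px, cx, cy = 600.0, 320.0, 240.0
z = z_from_radius(30.0, 20.0, f_px)   # 20 mm sphere seen at 30 px radius
x, y = pixel_to_mm(400.0, 240.0, z, cx, cy, f_px)
print(z, x, y)
```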
 
Demonstration and Printing
 
With position tracking done, I also realized that as great as showing off a controller is, I needed a demo. As such I opted to modify my existing DK2 port of OpenJK to communicate with my Python daemon. To get data from my Python script to OpenJK, I ended up just using a memory-mapped file for some simple one-way IPC:

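The Python side of that kind of memory-mapped IPC can be sketched like this (the file path, field layout, and names are my own assumptions; OpenJK's C side would `mmap` the same file and read the same struct layout):

```python
import mmap
import os
import struct
import tempfile

# Assumed layout: six little-endian floats -- x, y, z, yaw, pitch, roll.
FMT = "<6f"
SIZE = struct.calcsize(FMT)

path = os.path.join(tempfile.gettempdir(), "controller_state")
with open(path, "wb") as f:
    f.write(b"\x00" * SIZE)  # pre-size the backing file

# Writer side (the Python tracking daemon), run once per frame.
with open(path, "r+b") as f:
    shared = mmap.mmap(f.fileno(), SIZE)
    shared[:SIZE] = struct.pack(FMT, 0.1, 0.2, 0.3, 10.0, 20.0, 30.0)
    shared.close()

# Reader side (what the game does each frame).
with open(path, "r+b") as f:
    shared = mmap.mmap(f.fileno(), SIZE)
    state = struct.unpack(FMT, shared[:SIZE])
    shared.close()
print(state)
```

With a single writer and a single reader polling small fixed-size records, this stays simple; anything fancier would want a sequence number or lock to avoid torn reads.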

For the second half of the project, I ended up using an HDMI cable to carry all the signals I needed straight to the microcontroller, and with that my controllers’ hardware was 100% done:


The main issue from here was duplicating all my inputs, and dealing with two IMUs and two spheres to track instead of one. There was also the issue that my OpenJK tracking was a bit… subpar. The weapon model moved with the controllers on all axes, but actual aiming was just a hack which happened to map the controller X and Y to the mouse. Luckily it was enough to get a pass from my program teacher during our check-in a few weeks before our actual presentation:
 
After a fair bit of work, I managed to work out a tracking method which prevented the two controllers from being confused when they crossed over each other, or when one moved outside the view of the DK2 camera. I also managed to get OpenJK to drive the in-game gun purely from my controller’s rotation and position. After tuning it enough to not fail on presentation day, my time effectively ran out, and unfortunately I had no time to work on other demos. Still, I would say the product came out rather well; better than I had anticipated, even:

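The post doesn't spell out the disambiguation method, but one common approach that fits the description (this sketch and its names are mine, not necessarily what the project did) is to match new detections against each controller's last known position, keeping the stale position for any controller that has left the frame:

```python
def assign_detections(prev_left, prev_right, detections):
    """Match up to two new sphere detections against the last known
    left/right positions. Returns (left, right); an unmatched side
    keeps its previous position, covering a controller that has
    moved out of the camera's view.
    """
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    if not detections:
        return prev_left, prev_right
    if len(detections) == 1:
        d = detections[0]
        if dist2(d, prev_left) <= dist2(d, prev_right):
            return d, prev_right
        return prev_left, d
    # Two detections: pick the pairing with the smaller total squared
    # distance, so crossing controllers don't swap identities.
    a, b = detections[0], detections[1]
    straight = dist2(a, prev_left) + dist2(b, prev_right)
    swapped = dist2(b, prev_left) + dist2(a, prev_right)
    return (a, b) if straight <= swapped else (b, a)
```

Because assignment only depends on the previous frame, this stays correct as long as neither controller jumps further between frames than half the distance separating them.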

 

Source: Douevenknow
