Low-cost methods to increase the perceived field of view in virtual reality headsets without additional electronics.

Human vision covers roughly 210°: we can see pretty much 180° side to side, and by moving our eyes we can see slightly behind our head. Useful for noticing predators sneaking up on us… The DK1 from Oculus was about 110°, the DK2 is about 100°. This was due to a few reasons, but it’s generally expected the FOV will increase in the next iteration. A low FOV tends to pull you out of the experience; no one wants that blinkered feeling you get in a diving mask. When I was a boy these were made from black rubber, so the tunnel effect was really pronounced; modern masks are made from clear silicone. Your view is still blocked, but since light passes through you can still sense movement in your outer peripheral vision. Much nicer.

Some creative people took this idea and turned it around to create LED strips that take an average light reading from the screen and project it onto the wall behind your TV. Ambilight and Lightpack are of limited use, but a cool idea. It didn’t take long for someone to suggest putting one inside an Oculus Rift to increase the perception of a wide FOV, but HMDs are quite a bit too small for this to work well. It also seems Apple has a patent for this idea, but my following suggestions are not the same.

Still, the idea is quite good: most of our sharp vision is concentrated in a roughly 6° arc, and the amount of detail we can perceive drops off away from the center of the retina. A method of putting extra light into our peripheral vision would be really nice, especially if we can do it for ‘free’.

But how? I thought of a couple of possibilities, but please remember this is just idle speculation, a thought experiment, although if Oculus want to hire me to try them out they better hurry up before I apply to the HAXLR8R program 🙂

Well, first we would need white (or probably neutral grey) borders around the screen to allow for some reflectivity.

A fairly simple idea is to create a clear plastic rectangle that fits around the edge of the lens; this refracts a small portion of the image out and onto the sides of the borders. We might sacrifice 2° of ‘real’ FOV to create the impression of an extra 10° or so, on all four borders, per eye. This ‘lens’ might have to be precisely aligned, however, and we might need to increase the luminance at the edge of the screen to compensate for the light lost as it’s refracted.

 

Look at the simple example I drew in Inventor. The right side shows the normal eye -> lens -> screen arrangement, BUT the version on the left has a strip of clear plastic around the edges. Now part of this light is refracted out and onto the HMD’s plastic side panels. We lose a little detail but make the Rift feel less constricted and enclosed.
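
To get a rough feel for how far a clear strip could bend the edge rays outwards, here is a back-of-the-envelope Snell’s law calculation. The numbers are my own assumptions for illustration (an acrylic strip with a refractive index of about 1.49, a 30° exit angle), not measurements from any real HMD:

$$n_{\text{acrylic}} \sin\theta_1 = n_{\text{air}} \sin\theta_2 \;\Rightarrow\; \theta_2 = \arcsin\!\left(1.49 \times \sin 30^\circ\right) \approx 48^\circ$$

So a ray leaving the strip at 30° to the surface normal is bent outwards by roughly 18°, which is the sort of deflection needed to spill edge light onto the side panels. Beyond the critical angle, $\arcsin(1/1.49) \approx 42^\circ$, the ray is totally internally reflected and piped along the strip instead, which is why the shape and alignment of the strip would matter.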

 


 

Another idea would require some minor changes to the HMD casing but could produce a much more impressive effect. We would lose NO viewable area but could gain a really bright peripheral effect with some clever design engineering.

Consider that the current DK2 wastes a huge amount of screen real estate. This is the nature of the design and not a massive flaw, but we’re throwing away pixel light that could be used in our peripheral vision.

[Image: UE4 Rollercoaster demo screenshot showing the DK2’s rendered view]

Each of the eight corners displays only black. Instead we could cover these areas with translucent plastic that bounces this pixel light out to the edges of the screen, in a similar way to how fiber optics can relocate light. Instead of wasting this potential light we can add it to the experience, and a 110° FOV could be perceived as perhaps 130°, bringing us one step closer to more immersion.

Here is a very crude example. The top of the right screen is sampled, and the corner is illuminated and reflected into the top border. This doesn’t have to be quite so general; we can split the areas up into smaller bands to improve the effect.
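
As a rough sketch of the sampling step (hypothetical code, nothing from the Oculus SDK or any real renderer), splitting an edge into bands just means averaging the pixels in each segment of a strip along that edge, then using those averages to drive whatever light gets bounced into the matching border region:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// One RGB pixel of a rendered eye buffer.
struct Pixel { uint8_t r, g, b; };

// Average the colour of a strip along the top edge of one eye's image,
// split into `bands` segments. Each band's average would drive the light
// bounced into the matching section of the top border.
std::vector<Pixel> sampleTopBorder(const std::vector<Pixel>& frame,
                                   int width, int height,
                                   int bands, int stripHeight)
{
    stripHeight = std::min(stripHeight, height);
    const int bandWidth = width / bands;
    std::vector<Pixel> averages(bands);

    for (int b = 0; b < bands; ++b) {
        uint64_t r = 0, g = 0, bl = 0, count = 0;
        for (int y = 0; y < stripHeight; ++y)
            for (int x = b * bandWidth; x < (b + 1) * bandWidth; ++x) {
                const Pixel& p = frame[y * width + x];
                r += p.r; g += p.g; bl += p.b; ++count;
            }
        averages[b] = { uint8_t(r / count), uint8_t(g / count), uint8_t(bl / count) };
    }
    return averages;
}

int main() {
    // A dummy 960x1080 eye buffer (one half of the DK2 panel), filled mid-grey.
    const int w = 960, h = 1080;
    std::vector<Pixel> frame(w * h, Pixel{128, 128, 128});

    // Four bands averaged from the top 32 rows of the image.
    for (const Pixel& p : sampleTopBorder(frame, w, h, 4, 32))
        std::printf("band average: %d %d %d\n", (int)p.r, (int)p.g, (int)p.b);
    return 0;
}
```

A real implementation would presumably do this on the GPU as part of the existing distortion pass, but the arithmetic is the same.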

 

[Image: rollercoaster screenshot with the sampled colour reflected into the top border]

 

Wide-angle, low-distortion camera tracking for the Oculus Rift

I thought I would write a quick demo about one problem of the Oculus Rift DK2 that has not been addressed: the coverage of the positional tracking camera. Generally it works really quite well, but if you move outside its field of view then tracking stops, immersion is lost and the experience is degraded. The camera is pretty standard; they’ve not designed anything new, just adapted a fast, reliable (and cheap) off-the-shelf sensor. It doesn’t have an amazing field of view, so it’s quite easy to move outside its range.


52º is actually pretty narrow…

If Oculus hopes to have a system whereby you can navigate a whole room then 52º just can’t cut it. A 90º view would mean you could place the camera in the corner of a room and it would be able to look along the walls.

Not everyone will want to sit near a corner…

Ideally we want a system with 180º coverage, so it can be placed on any suitable wall and the user is in little danger of moving outside its field of view.

 

So just get a wide-angle camera, right?

[Image: clip-on fisheye lens for a phone camera]
Actually this isn’t an optimal solution: wide-angle lenses not only create a huge amount of distortion, but the center of the image, where the user is likely to spend most of their time, is compressed, so the number of ‘pixels on target’ is actually quite low. Great for expressive photography but not so much for tracking LEDs at sub-millimeter accuracy…
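
To put rough numbers on the ‘pixels on target’ point, assume the sensor stays at 640 pixels across (an assumption for illustration; the real sensor resolution may differ). Spreading those pixels over a wider view means:

$$\frac{640\ \text{px}}{52^\circ} \approx 12\ \text{px per degree} \qquad \text{versus} \qquad \frac{640\ \text{px}}{180^\circ} \approx 3.6\ \text{px per degree},$$

and with a fisheye projection the distortion makes accurate sub-pixel marker tracking harder still.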

Instead, a better solution is simply to use multiple cameras at a slight angle to each other, providing 180º+ coverage. By this I mean three camera sensors on a single circuit board, not three separate cameras.

 

[Image: three-lens camera concept]

I created a quick playable demo in Unity to show the idea. A camera on either side of the central camera gives us significant overlap. The three bottom ‘screens’ show what each of the cameras can see; when you move to the side, the LED markers are transferred to the other camera.

We don’t have to worry about strictly lining up the images since they won’t be displayed. The unit would be marginally more expensive, but imaging sensors are really very cheap, so the camera would only cost a couple of dollars more.** There is a slight processing overhead when moving from one sensor to the next, as more markers would need to be tracked.
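
Here is a minimal sketch of the handoff logic, under the assumption of three 72º sensors angled 60º apart (roughly the layout the demo uses); nothing here is from any real tracking code, it just shows that deciding which sensor owns a marker is a one-line angular test:

```cpp
#include <cmath>
#include <cstdio>

// Three sensors on one board, each with a 72-degree horizontal FOV,
// angled so their views overlap and together cover roughly 180+ degrees.
struct Sensor {
    double centreDeg;  // direction the sensor points, relative to the board
    double fovDeg;     // horizontal field of view
    bool sees(double azimuthDeg) const {
        return std::fabs(azimuthDeg - centreDeg) <= fovDeg / 2.0;
    }
};

int main() {
    // Centres chosen so adjacent views overlap by ~12 degrees (an assumed layout).
    const Sensor sensors[3] = { {-60.0, 72.0}, {0.0, 72.0}, {60.0, 72.0} };

    // A marker sweeping across the room from far left to far right.
    for (double az = -90.0; az <= 90.0; az += 30.0) {
        std::printf("marker at %+6.1f deg seen by:", az);
        for (int i = 0; i < 3; ++i)
            if (sensors[i].sees(az)) std::printf(" sensor %d", i);
        std::printf("\n");
    }
    return 0;
}
```

In the overlap regions a marker is visible to two sensors at once, which is exactly when the tracked LEDs would be handed from one to the other.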

It is inevitable that Oculus will eventually move to using cameras on the HMD itself to track position (and pass a picture through to the user), but this may not come for another couple of consumer versions, so in the meantime combining multiple narrow field-of-view cameras to give wide field-of-view tracking is quite possible.

You can ‘play’ the demo here. Press 1 and 2 to switch between a 90º camera watching three 72º overlapping cameras and a 160º camera in the same position, and move your mouse. You can see the sphere behind the ‘displays’. In a real application the cameras would not move, the user would, but the demo lets you move the camera to show how the views overlap and still give you a wide, undistorted field of view.

 

** A quick Google reveals the sensors in the Oculus camera are actually about $9 each! More than I hoped but still not crazy money…

 

The experts over on Reddit had this to say:

3rd_Shift: “It seems utterly preposterous to pursue a multiple camera solution with the added cost and complexity that entails when you could achieve the same result with a wide-angle lens and a higher resolution camera.”

Randomoneh: “Have you ever used a fisheye lens? It seems like you’re confusing fisheye for rectilinear lens.”

My reply:

Yes, of course.

So let’s see, how about just using, say, a lens like the rectilinear Nikon 13mm f/5.6? Well, we don’t actually need that lens, just clone it in plastic… it only has a 118-degree field of view, but we can’t go beyond that without going into fisheye territory…

http://www.kenrockwell.com/nikon/13mm.htm

But not to worry… we’ll clone it in plastic, which will make it cheaper, right? Let’s make it only 1% of its original price… despite the fact that it has 12 groups / 16 elements (i.e. 16 lenses) and weighs over 1kg.

So 1% of the original price is about $80; yes, in 1979 the lens cost $8k… now they’re, god knows… $25k+??

So $80 1kg plastic lens for 118 degrees vs. three $10 10g cameras for 180 degrees.

“utterly preposterous” you say….?

Please feel free to link to a nice 180 degree rectilinear lens for $30….

 

 

Open-source, gloveless, 6DOF hand/arm/finger tracking + gun for Virtual Reality for less than $50…

Leap Motion had the right idea when they recently released a mount for the Oculus Rift. The idea is that, with the sensor on the front of your face, you can see your hands, giving you a cool way to put your hands inside VR. It was a good idea but, due to the limited range of the device, fundamentally flawed: tracking beyond 30cm is just too far for the Leap Motion to handle reliably. Still, it’s a step in the right direction. Where else can you find a solution that tracks fingers and hands with excellent accuracy for less than $25 (second hand)?

Hand tracking really is key to a more immersive VR experience, a problem that the people at Perception Neuron used to garner a sweet $570k on Kickstarter recently. But gloves? In 2014? It’s a cool set of kit, but there are so many parts to break, wires to cut, parts to snag, and you look somewhat foolish wearing the full setup. And a $200 minimum investment? Ouch!

So what do we really need, at a bare minimum? Being able to see our dominant hand (preferably both) in VR space reliably. If we can track our lower arm accurately we can pretty much track the rest of the arm too: the elbow is a pretty simple joint, and approximating the upper arm isn’t hard if you know where your wrist is and where it’s pointing. The VR guns that are appearing more and more don’t give us that information, but guns are easy; fingers are the hard part. Showing your hand and fingers, which 99% of the world automatically look for when they first don a Rift, is really what we’re always going to want to do.

The main issue is cost. It’s mostly a solved problem if you have the money, but unless you can do it cheaply no one will adopt it and you’ll end up with early-adopter blues. Nintendo Power Glove, anyone?

So, who wouldn’t want to be able to see their hand and fingers move accurately in front of their face while in the Oculus Rift? Everyone, right? Now who wants that for less than $50?

It’s easy if you know what to hack together 🙂

Once again, it’s a PROTOTYPE! It’s not perfect, nothing ever is, but show me how to track your hand in VR, move around and shoot for less than $50 and I (and everyone else) will be very happy! I threw this together in a couple of days, if you like it, tell me! If you don’t like it… well, go make your own bloody controller!! :p

And if Oculus is watching, please give us access to the ‘skeleton’ of the camera in the SDK. We know you’re working on your own controller, but you never know, someone else might come up with a better solution and it will die on the vine because it’s impossible to support easily.

Here is a quick video, some pics, a breakdown of the hardware and how you can hack the software together to make it work.

[Image: the assembled prototype]

 

[Image: the assembled prototype, another view]

Note: The MPU isn’t attached, and the nunchuck isn’t plugged in.

Hardware

The hand tracker is actually quite simple. I’m going to break down each part, what it does and why it’s needed.

Cuff

[Image: the printed wrist cuff]

This is the base of the prototype; everything hangs off the wrist cuff. In its current incarnation it’s a little rough and could be more comfortable. A later version will have foam padding for comfort and a quick-release buckle, or Velcro, for a snug fit. It will also need a box for the Arduino and gyro. Ideally it would also contain a small USB 3.0 hub so the Leap Motion and the Arduino can communicate with the PC over a single lead. This raises the price a little for convenience. We also have the option of adding a few more buttons to the cuff, which can be activated by the off hand.

(The tubes on the side are for support: printed plastic can be quite weak when printed like this, and the tubes allow us to insert a 3mm length of filament for strength.)

 

Wii Nunchuck

Why reinvent the wheel? The nunchuck can be found for as little as $3 online; it has a joystick and two trigger buttons as well as a 3-axis accelerometer for simple motion detection. It’s cheap, reliable and easy to replace. It also has a convenient Arduino library just waiting to be used.

Ideally we would be able to connect and disconnect two nunchucks, for use in both hands, although using only one is perfectly fine. The cable is long enough to allow use in the ‘off-hand’, so the user could move with one hand and aim/‘finger shoot’ with the other. It has a slightly inconvenient proprietary plug, but this is easily adapted with a $1 gizmo from eBay.
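
For anyone curious what that library is actually doing under the hood, here is a minimal sketch of the exchange, based on the widely documented nunchuck I2C protocol (address 0x52, ‘unencrypted’ init). A real build would just use one of the existing libraries rather than this:

```cpp
#include <Wire.h>

const int NUNCHUCK_ADDR = 0x52;   // fixed I2C address of the nunchuck

void setup() {
  Serial.begin(115200);
  Wire.begin();
  // "Unencrypted" init sequence, works with official and clone nunchucks.
  Wire.beginTransmission(NUNCHUCK_ADDR);
  Wire.write(0xF0); Wire.write(0x55);
  Wire.endTransmission();
  Wire.beginTransmission(NUNCHUCK_ADDR);
  Wire.write(0xFB); Wire.write(0x00);
  Wire.endTransmission();
}

void loop() {
  // Ask for the next 6-byte report, then read it back.
  Wire.beginTransmission(NUNCHUCK_ADDR);
  Wire.write(0x00);
  Wire.endTransmission();
  delay(5);

  uint8_t data[6] = {0};
  Wire.requestFrom(NUNCHUCK_ADDR, 6);
  for (int i = 0; i < 6 && Wire.available(); i++) data[i] = Wire.read();

  uint8_t joyX = data[0];
  uint8_t joyY = data[1];
  // Accelerometer values are 10-bit: upper 8 bits in bytes 2-4,
  // lower 2 bits packed into byte 5 alongside the (active-low) buttons.
  int accX = (data[2] << 2) | ((data[5] >> 2) & 0x03);
  bool buttonZ = !(data[5] & 0x01);
  bool buttonC = !(data[5] & 0x02);

  Serial.print("joy ");  Serial.print(joyX); Serial.print(","); Serial.print(joyY);
  Serial.print("  accX "); Serial.print(accX);
  Serial.print("  C ");  Serial.print(buttonC);
  Serial.print("  Z ");  Serial.println(buttonZ);
  delay(20);
}
```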

Arduino

A fully formed computer for $3? How can we refuse? This forms the brains of the device: it reads the gyro sensors and interprets the nunchuck signals. Using a Pro Micro we can also emulate a joystick with no drivers. Handy!

Gyroscope/Accelerometer/Magnetometer

This tiny miracle on a chip provides a mass of information a thousand times a second. With this we can accurately measure where your wrist is pointing, since it’s attached to the cuff. They’re also only $8 each.
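
As a rough illustration of how little code it takes to get raw data out of it, here is a bare-bones Arduino sketch that wakes the MPU and burst-reads the accelerometer and gyro registers (the MPU-9150 shares the MPU-6050 register map; a real build would feed this into a sensor-fusion library rather than printing raw values):

```cpp
#include <Wire.h>

const int MPU_ADDR = 0x68;   // default I2C address with AD0 tied low

void setup() {
  Serial.begin(115200);
  Wire.begin();
  // Wake the chip: clear the sleep bit in PWR_MGMT_1 (register 0x6B).
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);
  Wire.write(0x00);
  Wire.endTransmission();
}

void loop() {
  // Burst-read 14 bytes starting at ACCEL_XOUT_H (0x3B):
  // accel X/Y/Z, temperature, gyro X/Y/Z, each a big-endian 16-bit value.
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);      // repeated start, keep the bus
  Wire.requestFrom(MPU_ADDR, 14);
  if (Wire.available() < 14) { delay(10); return; }

  int16_t raw[7];
  for (int i = 0; i < 7; i++) {
    uint8_t hi = Wire.read();
    uint8_t lo = Wire.read();
    raw[i] = (int16_t)((hi << 8) | lo);
  }

  Serial.print("accel ");
  Serial.print(raw[0]); Serial.print(" ");
  Serial.print(raw[1]); Serial.print(" ");
  Serial.print(raw[2]);
  Serial.print("   gyro ");
  Serial.print(raw[4]); Serial.print(" ");
  Serial.print(raw[5]); Serial.print(" ");
  Serial.println(raw[6]);
  delay(10);
}
```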

Female Arm

[Image: the printed female arm]

A simple printed arm is attached to the cuff. This provides some simple cable management too. Requires a couple of bolts to attach to the cuff.

Male Arm

[Image: the printed male arm]

Another printed part that can be used to adjust the overall length of the arm. A bolt locks it to the female arm and allows adjustment.

Leap Holder

[Image: the printed Leap Motion holder]

The fourth and final printed part. This holds the Leap Motion sensor, which points at the hand and provides constant hand and finger tracking. It also has some holes for wires and needs two bolts to attach to the arm.

Leap Motion

 

Another miracle in a small package. You can find them second hand online now for $25. It has fairly mature drivers and an SDK for use with games. By mounting it on the wrist we can overcome its problems with range and free it from the desk. The Leap gives a better experience than putting on sweaty gloves, for a fraction of the price. It also has zero moving parts, so there is nothing to break.

Ping Pong Ball

This is attached to the end of the Leap holder and has a hole cut in it so it can be illuminated from the inside, giving us a cheap source of positional information. 25c.

LED and Wires

This gives the eleventh and last component (the camera) something to look at in the darkness. <$1.

PS3 Camera (or equivalent)

The camera tells the PC where your arm is in space, just like the Sony Move does. We can use open-source software to track it quite easily, and if we use an IR filter on the camera and an infrared LED inside the ball we can cut out a lot of the tracking processing. Available on eBay for $5.
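
The tracking itself can be sketched in a few lines of OpenCV: threshold the (IR-filtered) image, find the brightest blob and fit a circle to it. The camera index and threshold below are placeholder values you would tune for your own setup:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::VideoCapture cap(0);            // PS3 Eye (or any webcam) as device 0
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, mask;
    while (cap.read(frame)) {
        // With an IR filter over the lens, the LED-lit ball is by far the
        // brightest thing in view, so a plain threshold isolates it.
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, mask, 220, 255, cv::THRESH_BINARY);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // Keep the largest bright blob and fit a circle to it.
        int best = -1;
        double bestArea = 0.0;
        for (size_t i = 0; i < contours.size(); ++i) {
            double area = cv::contourArea(contours[i]);
            if (area > bestArea) { bestArea = area; best = (int)i; }
        }
        if (best >= 0) {
            cv::Point2f centre;
            float radius = 0.0f;
            cv::minEnclosingCircle(contours[best], centre, radius);
            std::printf("ball at (%.1f, %.1f), radius %.1f px\n",
                        centre.x, centre.y, radius);
        }

        cv::imshow("mask", mask);
        if (cv::waitKey(1) == 27) break;   // Esc to quit
    }
    return 0;
}
```

The circle’s centre gives X/Y in the camera image and its apparent radius gives a rough depth cue, much like the Move’s glowing ball.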

 

3D Printing Alternatives:

If you don’t have access to a 3D printer you could always make a Leap holder from some wood, or, even better, use Friendly Plastic. You melt the pellets in boiling water and mould them into shape; this stuff is perfect for a project like this and it’s very cheap (and reusable).

Bill of Materials

  1. Printed Cuff
  2. Printed female arm
  3. Printed male arm
  4. Printed Leap Holder
  5. Arduino $3 (new)
  6. MPU-9150 $8 (new)
  7. Wii Nunchuck $3 (new)
  8. Leap Motion $25 (second hand)
  9. PS3 Eye camera $5 (second hand)
  10. Ping Pong ball 25c
  11. LED + 40cm wire + 10 ohm resistor $1
  12. 5 bolts and washers ~$1
  13. Extra cuff buttons 25c each (optional)
  14. Wii nunchuck adapter $1 (optional)
  15. Cuff Velcro $1 (optional)
  16. Cuff comfort foam $1 (optional)
  17. Short pieces of wire to connect the MPU and arduino
  18. USB lead for arduino (on hand)
  19. 5-meter USB 3.0 extender for the Leap Motion $7
  20. 5v motor for cuff vibration $1 (optional)

Total : $46.25

or $58 with optional extras.

Notes: The cuff is ideally printed in nylon, since this is more flexible than ABS or PLA and should last longer. The Leap can be positioned to face either the palm or the back of the hand, although the latter, while more convenient, might be less accurate. It’s also possible that, if an optimal position can be found, we could reduce the required printed parts to a single combined arm and cuff, removing the need for the bolts and improving the appearance. A USB 2.0 lead can be used for the Leap Motion, but this lowers the data speed and may affect accuracy.

Software

This is a work in progress… but we can break it down into four distinct areas, all of which have open-source software available.

Finger Tracking: Sign up with Leap here for their SDK

Position Tracking: Choose from two open-source computer vision projects, SimpleCV or OpenCV

Rotation Tracking: It’s a work in progress over at Arduino

Nunchuck libraries: Tim Teatro’s, or check the Arduino site