2560 x 1440 (WQHD) @ 80fps + hand input for Oculus Rift CV1?


Hot on the heels of its announcement of new gesture recognition chips on the 8th of November, Spectra7 has released a press announcement for the VR7200, which looks likely to be the replacement for the VR7100, the chip that significantly reduces the thickness of the DK2’s HDMI cable.

From the press release:

“With Spectra7’s new VR7200 chip which features the Company’s patented high-speed, active signal processing and power delivery technology, dual screen VR HMDs with a single super-thin cable and ultra-compact connector are now possible. Next generation VR interconnects built with Spectra7’s VR7200 are capable of dual 2560 x 1440 Wide Quad High Definition (WQHD) display resolution with 4:4:4 Chroma at up to 80 FPS per screen without any image degradation as a result of Luma and/or Chroma subsampling and do not require a separate external HMD power connection”

Samples are available next month, which pretty much rules out the rumour of a February release for the CV1 that I read today on Twitter. This falls a little short of the 90fps we were expecting, but it’s still an improvement on the 75fps of the DK2.

We also saw an announcement of an actual order: over 500,000 devices, according to TMXMoney, covering the new VR7050 gesture chip along with the VR7100’s replacement, which is undoubtedly the VR7200. “The order calls for delivery of over 500,000 devices including the Company’s recently announced VR7050 Gesture and Motion Backhaul Processor and the second generation of the previously announced VR7100 ultra-miniature Digital Video Link Processor chip.” This is half of the usual ‘million unit’ figure Oculus talk about for the CV1, but it’s still early days for further orders.

So are these two new chips for the Oculus consumer release and when could they arrive?

Well, the VR7100 was announced in October last year and began production in May 2014 for the DK2, so if the VR7200 follows the same schedule we should see assembly begin in the middle of next year….

So, CV1 confirmed for Xmas 2015, and WITH a built-in hand/gesture recognition system? Who else would order half a million of these new chips?

Spectra7 also recently took out a 2-year loan of $4.75m, which might be Oculus ensuring that the supply of chips isn’t in any danger of drying up while the CV1 is being built.

So what screen? A Samsung S5 panel fits the resolution and these should be widely available now…

Maybe we’ll know for sure in about 10 months 🙂

PDFs:

Gesture: http://spectra7.com/S7-VR7050-Press-Release-20141008-F4.pdf

New HDMI: http://spectra7.com/pdfs/Spectra7-VR7200-Press-Release-20141113-F.pdf

Old HDMI: http://www.spectra7.com/pdfs/products/VR7100-S-Product-Brief-Rev2.pdf

The VR7100 inside the DK2 cable (ifixit)

Combining two motion sickness solutions…

Isn’t it a shame I don’t get motion sickness… then I could try my own solutions on myself… :p

Anyway, I’m about to code another idea for eliminating motion sickness. US Army research found that an LCD shutter closing 8 times a second was effective (see previous post), and humans don’t notice their ‘eyes’ closing if it’s done by the computer, so can’t we combine the two? I propose opening some mostly transparent ‘eyelids’ 8 times a second; it might work even better to then close them too, but I think a quick fade to the eyelid colour could also work.

Something like this could also be baked into the firmware of an HMD and run independently of the display, very predictably…
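Here’s a minimal Unity sketch of what I mean, assuming the ‘eyelids’ are just a semi-transparent quad parented to the player camera; the component name, colour and fade time are placeholder choices of mine, not anything from the Oculus SDK:

```csharp
using System.Collections;
using UnityEngine;

// Sketch only: fade a mostly transparent 'eyelid' overlay in and out 8 times a
// second. 'eyelid' is assumed to be an unlit quad with a transparent material,
// parented to the player camera so it always covers the view.
public class EyelidFlicker : MonoBehaviour
{
    public Renderer eyelid;                                   // placeholder overlay quad
    public Color eyelidColour = new Color(0f, 0f, 0f, 0.3f);  // mostly transparent
    public float blinksPerSecond = 8f;                        // rate from the shutter research
    public float fadeTime = 0.03f;                            // quick fade, not a hard cut

    void Start()
    {
        // Start fully open (alpha 0) and begin the blink loop.
        eyelid.material.color = new Color(eyelidColour.r, eyelidColour.g, eyelidColour.b, 0f);
        StartCoroutine(Flicker());
    }

    IEnumerator Flicker()
    {
        float period = 1f / blinksPerSecond;
        while (true)
        {
            yield return StartCoroutine(FadeTo(eyelidColour.a)); // eyelids 'close'
            yield return StartCoroutine(FadeTo(0f));             // eyelids 'open'
            yield return new WaitForSeconds(Mathf.Max(0f, period - 2f * fadeTime));
        }
    }

    IEnumerator FadeTo(float targetAlpha)
    {
        Color c = eyelid.material.color;
        float startAlpha = c.a;
        for (float t = 0f; t < fadeTime; t += Time.deltaTime)
        {
            c.a = Mathf.Lerp(startAlpha, targetAlpha, t / fadeTime);
            eyelid.material.color = c;
            yield return null;
        }
        c.a = targetAlpha;
        eyelid.material.color = c;
    }
}
```

Attach it to anything in the scene, point it at the overlay quad, and tweak the opacity and fade until it stops being consciously noticeable.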

Solving motion sickness

I don’t suffer from motion sickness, but I recognise it’s a big enough problem that, if I want VR to succeed, it needs to be looked at a little more closely.

I watched Tom Forsyth’s Oculus Connect talk about developing for VR, and he said something interesting at around the 40-minute mark about ‘blink transitions’. So you get into a car, your virtual ‘eyes’ in the Rift blink and you switch positions nearly instantly, but your brain ignores this transition because of the ‘blink’ effect in the HMD….

To repeat, you’re not PHYSICALLY blinking, the screen is doing it for you.

I asked Tom, and he doesn’t know if anyone has tried it, so I threw my idea into the Tuscany demo.

You can rotate as usual with Q and E in 45-degree turns, but now ‘shutters’ close for 150ms, you turn, and your ‘eyes’ open at the new orientation 150ms later. You can look around as normal with your head. It’s reasonably crude but demonstrates the idea; a simplified sketch of the mechanism is below.
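The core of it is roughly this (a simplified sketch, not the actual code in the repo; the ‘shutter’ here is assumed to be a full-view transparent quad parented to the camera):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the blink-transition turn described above: Q/E close the 'shutters'
// for 150ms, rotate the player 45 degrees while the view is dark, then open
// them again. This is an illustration of the idea, not the repo's code.
public class BlinkTurn : MonoBehaviour
{
    public Transform player;        // the rig that Q/E should rotate
    public Renderer shutter;        // full-view quad with a transparent material
    public float turnAngle = 45f;
    public float blinkTime = 0.15f; // 150ms to close, 150ms to reopen

    bool turning;

    void Update()
    {
        if (turning) return;
        if (Input.GetKeyDown(KeyCode.Q)) StartCoroutine(Turn(-turnAngle));
        if (Input.GetKeyDown(KeyCode.E)) StartCoroutine(Turn(turnAngle));
    }

    IEnumerator Turn(float angle)
    {
        turning = true;
        yield return StartCoroutine(Fade(0f, 1f)); // shutters close
        player.Rotate(0f, angle, 0f);              // snap turn while the view is dark
        yield return StartCoroutine(Fade(1f, 0f)); // shutters open at the new orientation
        turning = false;
    }

    IEnumerator Fade(float from, float to)
    {
        Color c = shutter.material.color;
        for (float t = 0f; t < blinkTime; t += Time.deltaTime)
        {
            c.a = Mathf.Lerp(from, to, t / blinkTime);
            shutter.material.color = c;
            yield return null;
        }
        c.a = to;
        shutter.material.color = c;
    }
}
```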

So if Tuscany makes you sick please give this a go and give me some feedback. If you want to steal the idea or code for your own implementation please do, or if you want to add some ideas to my repo go right ahead. If you don’t suffer from nausea then this thread is probably not for you… congratulations! :p

https://github.com/traveltrousers/Blink_Comfort

Constructive criticism is always welcome, and if you don’t like it… well, let’s see your solution to sim sickness! :p

 

Relevant videos:

http://youtu.be/addUnJpjjv4?t=40m5s

Quantifying and identifying VR motion sickness causes, in order to solve them.

The comments from Brendan Iribe, the Oculus CEO, at the Web Summit in Dublin today were interesting not only for the warning he gives to other companies, but for the problems Oculus still face.

“We’re a little worried about some of the bigger companies putting out product that isn’t quite ready. That elephant in the room is disorientation and motion sickness,” he said.

By bigger companies, of course, he means Sony: “Don’t poison the well here.” But I think perhaps the bigger danger could be from Oculus developers rather than Sony. The PS4 is a known variable, as the Sony headset will be when it’s released, and developers will be more comfortable with the Sony development ecosystem when their HMD is made available. If I decide to develop a VR game for Sony, well, that decision isn’t entirely mine. I have to apply to the development program, buy a dev kit for $2,500 (assuming I’m accepted) and then persuade them to give (or sell) me a Morpheus HMD. Even if I manage to create (what I regard as) a great game, it’s likely that Sony will veto any VR title that doesn’t create a good experience for users.

Oculus, on the other hand, is totally open, so I can cheaply buy a DK1/2 (and even this is optional) and create whatever I like. I’m not arguing against this openness, since the low barrier to entry is a great way of encouraging new demos and games, but poorly executed code is far more likely to create the ‘disorientation and motion sickness’ that Iribe is so concerned about.

It’s well known that perhaps 10% of the population are susceptible to simulator sickness, and while Oculus are attempting to address this by recommending developers follow their ‘best practices’ guide, it’s still a black art knowing what will cause nausea in some people and not others. Generally the trigger is some kind of movement, either unexpected or something that throws the vestibular system off. If you have sub-millimetre positional tracking and are rendering at a locked framerate of 75-90Hz, 99.9% of people will be fine when they’re not moving and are just looking around. Figuring out exactly what the problem is once the user begins to move needs a more methodical approach.

From 6:30 to about 9 minutes in that talk, Tom Forsyth explains how even Oculus are still not sure why people get sick and what specifically causes it in different people. Now if the eggheads at Oculus are still figuring this out, what chance do regular developers have? We can follow the best practices and our users might still get sick when faced with the wrong type of stairs :p

So here are my three simple suggestions for tackling the whole ‘disorientation and motion sickness’ issue that Oculus, and all the VR companies, still face.

Demos

For most first-time Rift owners the standard demo that loads first is the ‘Demo Scene’ with a simple desk, a plant and a lamp… it’s simple, effective, non-nauseating… and quite boring. Now a ‘boring demo’ is fine for showing ‘VR virgins’ what the Rift can do, but you’ll have a hard time convincing them that they also need to go out and buy an HMD and a good PC just so they can look at a desk. Most people jump into the Tuscany demo from here, but sadly even that can cause nausea in some people. A better option would be for Oculus to release the demos they recently showed on the Crescent Bay prototype, or something similar. Give Rift owners a well-constructed suite of demos known to run well and guarantee the first-time user a great experience. Sadly, too many Rift owners seem to enjoy throwing first-time users into far too intense or scary experiences. Dreadhalls or rollercoaster demos are great fun, but you might be showing the Rift to someone who hasn’t really played a video game since Pac-Man, and you run the risk of giving them immediate motion sickness, a delayed nightmare or even a face-plant into concrete.

A user’s first experience should be fun, safe and non-nauseating; make it unpleasant and you might put them off for a long time. We need some awesome introductory experiences to amaze VR virgins, not make them ill.

If Oculus want a cheap way of finding some great new introductory VR experiences, they could run a competition and give a few CV1 headsets away.

Training and testing

After giving someone a ‘nice’ experience, give them a disguised ‘motion sickness’ test. We build a carefully designed level, perhaps laid out as a museum or art gallery, that the user can explore while we test their comfort. They move around the floors looking at the exhibits and the like, but at each intersection or ‘level’ we ask them to rate their comfort on a scale from 1 to 10. Assuming they stay comfortable we can then begin to alter the parameters of the test, such as walking speed, comfort-mode turning, stairs, blink transitions and the like. Our aim is not really to make users sick but to identify when it happens.

This gives Oculus a chance not only to test users but to train them in the (yet to be determined) new standards of movement, UI and the like. It also gives them a chance to teach users to recognise nausea, explain what causes it, and reassure them that it’s temporary and gets better with exposure.

This gives us a standardised, repeatable test that we can use to strip away, and hopefully identify, the causes of simulator sickness. If we also anonymously gather the user’s age, sex, IPD, height and perhaps glasses prescription, plus the computer specification and frame rate, we have an easy way to quickly and scientifically test hundreds of thousands of people, and their hardware, and look for patterns. This also lets content creators identify which experiences are most likely to affect their users, so they can either alter them for wider comfort or warn users that a certain game might make them sick. A sketch of the kind of record we might log is below.
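Something like the following, logged at every rating prompt, would be enough data to start mining for patterns (just a sketch; the field names are mine and not part of any existing SDK):

```csharp
using System;

// Sketch of an anonymised comfort sample, one per rating prompt in the test level.
// All field names are placeholders, not part of any existing Oculus SDK.
[Serializable]
public class ComfortSample
{
    // About the user (gathered once, anonymously)
    public int ageYears;
    public string sex;
    public float ipdMillimetres;
    public float heightCentimetres;
    public bool wearsGlasses;

    // About the machine
    public string gpuName;
    public float averageFps;

    // About the moment the rating was taken
    public string testSection;    // e.g. "stairs", "comfort-mode turning", "blink transition"
    public float walkSpeed;
    public int comfortRating;     // 1 (fine) to 10 (very sick)
    public float minutesInHeadset;
}
```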

It’s also understood that simulator sickness can be mitigated through repeated exposure, so perhaps after a few weeks playing one game the user’s simulator sickness level could be reduced; the test can then be run again and the comfort remeasured. At the end we can tell the user that their perceived comfort has improved by, say, 20% overall or in certain areas, and that they would now probably handle “GTA VR” with no problems, whereas three weeks before they wouldn’t have lasted 5 minutes in that game. Perhaps some games will push certain aspects of users’ nausea, and these could be used to acclimatise users to the effect: if a user can’t do ‘stairs’ they can train on a demo that uses gently sloping ramps instead, improving their tolerance for virtual stairs.

So Oculus, give us a nice training and test mode and we’ll give you the data to pinpoint exactly what makes some of us sick, letting you nail down a solution to motion sickness.

Reporting

I think it would also be incredibly useful to bake a ‘report nausea’ feature into the SDK, which sends some screenshots and fps graphs back to Oculus. This would allow Oculus and devs to identify elements in games, demos and practices that are causing problems and find fixes. Perhaps devs fail to spot areas where there is a problem and this would help pin point problems.  This could be dynamic, so you could ‘power through’ something that affects you temporarily, while noting it’s affecting you, or as simple as adding a ‘Nausea Quit’ button in the menu.
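As a rough Unity sketch of what that hook might look like (the keybinding, file names and report contents are my own placeholders; Application.CaptureScreenshot is the only real API call here):

```csharp
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Sketch of a 'report nausea' hook: keeps a rolling fps history and, when the
// user presses N, saves a screenshot plus the recent frame times to disk so a
// developer (or Oculus) could inspect what was on screen when they felt ill.
public class NauseaReporter : MonoBehaviour
{
    readonly Queue<float> frameTimes = new Queue<float>();
    const int HistorySize = 300; // roughly the last few seconds at 75-90fps

    void Update()
    {
        frameTimes.Enqueue(Time.deltaTime);
        if (frameTimes.Count > HistorySize) frameTimes.Dequeue();

        if (Input.GetKeyDown(KeyCode.N)) SaveReport();
    }

    void SaveReport()
    {
        string stamp = System.DateTime.Now.ToString("yyyyMMdd-HHmmss");
        Application.CaptureScreenshot("nausea-" + stamp + ".png");

        // Dump the recent frame times; spikes here are prime nausea suspects.
        using (var writer = new StreamWriter("nausea-" + stamp + ".csv"))
        {
            writer.WriteLine("frame_time_seconds");
            foreach (float dt in frameTimes) writer.WriteLine(dt);
        }
        Debug.Log("Nausea report saved: " + stamp);
    }
}
```

In a real SDK feature the report would be uploaded (with consent) rather than written locally, but even the local version would help a dev reproduce the moment someone felt ill.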

 

So, nice demos, a training and test game and better ways of reporting what makes us ill sometimes. Three easy things.

 

Comments are always welcome, or join in the argument on reddit 🙂

Low-cost methods to increase perceived field of view in virtual reality headsets without additional electronics.

Human vision covers about 210°: we can see pretty much 180° side to side, and by moving our eyes we can see slightly behind our head. Useful for noticing predators sneaking up on us… The DK1 from Oculus was 110°, the DK2 is about 100°. This was due to a few reasons, but it’s generally expected the FOV will increase in the next iteration. A low FOV tends to pull you out of the experience; no one wants that blinkered feeling you get in a diving mask. When I was a boy these were made from black rubber, so the tunnel effect was really pronounced; modern masks are made from clear silicone. Your view is still blocked, but since light passes through you can still sense movement in your outer peripheral vision. Much nicer.

Some creative people took this idea and turned it around, creating LED strips that project an average light reading onto the wall behind your TV. Ambilight and Lightpack are of limited use, but a cool idea. It didn’t take long for someone to suggest putting one inside an Oculus Rift to increase the perception of a wide FOV, but HMDs are quite a bit too small for this to work well. It also seems Apple has a patent for this idea, but my following suggestions are not the same.

Still, the idea is quite good: most of our detailed vision is concentrated in a 6° arc, and the amount of detail we can perceive drops off away from the centre of the retina. A method of putting extra light into our peripheral vision would be really nice, especially if we can do it for ‘free’.

But how? I thought of a couple of possibilities, but please remember this is just idle speculation, a thought experiment, although if Oculus want to hire me to try them out they better hurry up before I apply to the HAXLR8R program 🙂

Well, first we would need white (or probably neutral grey) borders around the screen to allow for some reflectivity.

A fairly simple idea is to create a clear plastic rectangle that fits around the edge of the lens; this refracts a small portion of the image out and onto the borders. We might sacrifice 2° of ‘real’ FOV to create an impression of an extra 10° or so on all four borders, per eye. This ‘lens’ might have to be precisely aligned, however, and we might need to increase the luminance at the edge of the screen to compensate for the light lost as it’s refracted.

 

Look at the simple example I drew in Inventor. The right side shows the normal eye -> lens -> screen path, BUT the version on the left has a strip of clear plastic around the edges. Part of the light is now refracted out and onto the HMD’s plastic side panels. We lose a little detail but make the Rift feel less constricted and enclosed.

 


 

Another idea would require some minor changes to the HMD casing but could produce a much more impressive effect. We would lose NO viewable area but could gain a really bright peripheral effect with some clever design engineering.

Consider that the current DK2 wastes a huge amount of screen estate. This is the nature of the design and not a massive flaw, but we’re throwing away pixel light that could be used in our peripheral vision.

Each of the eight corners only displays black. Instead we could cover these areas with translucent plastic that bounces the pixels’ light out to the edges of the screen, in a similar way to how fibre optics relocate light. Instead of wasting this potential light we can add it to the experience, and a 110° FOV could be perceived as perhaps 130°, bringing us one step closer to full immersion.

Here is a very crude example. The top of the right screen is sampled, and the corner is illuminated and reflected into the top border. This doesn’t have to be quite so general; we can split the areas into smaller bands to improve the effect. A rough Unity sketch of the sampling is below the image.

 

coaster with border
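Here’s that rough Unity sketch of the sampling side, under the assumption that we simply average a band of the right-eye image each frame and paint that colour into the otherwise-black corner for the plastic to pipe outwards (my own approach, not anything from Oculus; ReadPixels is slow, but fine for a proof of concept):

```csharp
using System.Collections;
using UnityEngine;

// Sketch only: average the top band of the right-eye image each frame and fill
// the otherwise-black top-right corner with that colour, so a translucent
// plastic cover over the corner has some light to pipe out into the periphery.
public class CornerGlow : MonoBehaviour
{
    public int bandHeight = 32;   // pixel height of the screen band we average
    public int cornerSize = 100;  // pixel size of the black corner we fill

    Texture2D band;
    Color averaged = Color.black;

    void Start()
    {
        band = new Texture2D(Screen.width / 2, bandHeight, TextureFormat.RGB24, false);
        StartCoroutine(SampleEveryFrame());
    }

    IEnumerator SampleEveryFrame()
    {
        while (true)
        {
            yield return new WaitForEndOfFrame(); // wait until the eye buffers are drawn

            // Read the top band of the right half of the screen (the right eye).
            band.ReadPixels(new Rect(Screen.width / 2, Screen.height - bandHeight,
                                     Screen.width / 2, bandHeight), 0, 0);
            band.Apply();

            // Average the band's pixels.
            Color sum = Color.black;
            Color[] pixels = band.GetPixels();
            for (int i = 0; i < pixels.Length; i++) sum += pixels[i];
            averaged = sum / pixels.Length;
        }
    }

    void OnGUI()
    {
        // Fill the top-right corner with the sampled colour.
        GUI.color = averaged;
        GUI.DrawTexture(new Rect(Screen.width - cornerSize, 0, cornerSize, cornerSize),
                        Texture2D.whiteTexture);
    }
}
```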

 

Wide-angle, low-distortion camera tracking for the Oculus Rift

I thought I would write a quick demo about one problem with the Oculus Rift DK2 that has not been addressed: the coverage of the positional tracking camera. Generally it works really quite well, but if you move outside its field of view then tracking will stop, immersion will be lost and the experience degraded. The camera is pretty standard; Oculus haven’t designed anything new, just adapted a fast, reliable (and cheap) off-the-shelf sensor. It doesn’t have an amazing field of view, so it’s quite easy to move outside its range.


52º is actually pretty narrow…

If Oculus hopes to have a system whereby you can navigate a whole room then 52º just can’t cut it. A 90º view would mean you could place the camera in the corner of a room and it would be able to look along the walls.

Not everyone will want to sit near a corner….

Ideally we have a system with 180º coverage so it can be placed on a suitable wall and the user is in little danger of moving outside its field of view. A quick back-of-the-envelope calculation of how much side-to-side room each field of view buys you is below.
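The tracked width at a given distance from the camera is just 2 × distance × tan(FOV/2); here’s that calculation in plain C# (the distances are my own example figures):

```csharp
using System;

// Horizontal tracked width at a given distance from the camera:
//   width = 2 * distance * tan(fov / 2)
// so a 52-degree camera 1.5m away only covers about 1.46m side to side.
// (At 180 degrees the tangent blows up: a wall-mounted camera with true 180-degree
// coverage would see everything in front of the wall, which is the whole point.)
class TrackingCoverage
{
    static double WidthAt(double fovDegrees, double distanceMetres)
    {
        double halfFovRadians = fovDegrees * Math.PI / 360.0;
        return 2.0 * distanceMetres * Math.Tan(halfFovRadians);
    }

    static void Main()
    {
        double[] fovs = { 52, 90, 120 };
        double[] distances = { 1.0, 1.5, 2.5 };

        foreach (double fov in fovs)
            foreach (double d in distances)
                Console.WriteLine("{0} deg at {1}m -> {2:F2}m wide", fov, d, WidthAt(fov, d));
    }
}
```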

 

So just get a wide-angle camera, right?

Actually this isn’t an optimal solution, since wide-angle lenses not only create a huge amount of distortion, but also compress the centre of the image, where the user is likely to spend most of their time, so the number of ‘pixels on target’ is actually quite low. Great for expressive photography, but not so much for tracking LEDs to sub-millimetre accuracy…

Instead, a better solution is simply to use multiple cameras at a slight angle to each other, providing 180º+ coverage. By this I mean three camera sensors on a single circuit board, not three separate cameras.

 

three lens camera


I created a quick playable demo in Unity to show the idea. A camera on either side of the central camera gives us significant overlap. The three bottom ‘screens’ show what each of the cameras can see; when you move to the side the LED markers are handed over to the next camera.

We don’t have to worry about strictly lining up the images, since they won’t be displayed. The cost is marginally higher, but imaging sensors are really very cheap, so the camera would only be a couple of dollars more expensive.** There is a slight processing overhead when a marker moves from one sensor to the next, as more markers would need to be tracked. A toy sketch of the handoff logic is below.
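As a toy illustration of the handoff, here’s a sketch that works out which sensors can see a marker from its horizontal bearing alone; the ±54° yaw angles and 72° per-sensor FOV are my assumptions based on the demo, not measured values:

```csharp
using System;
using System.Collections.Generic;

// Toy sketch: which sensor(s) on a three-sensor board can see an LED marker,
// given only the marker's horizontal bearing from the board. Yaw angles and
// the 72-degree per-sensor FOV are assumptions based on the demo description.
class MultiSensorBoard
{
    const double SensorFov = 72.0;
    static readonly double[] SensorYaws = { -54.0, 0.0, 54.0 }; // ~180 degrees total

    // A marker is visible to a sensor if its bearing falls inside that sensor's cone.
    static List<int> SensorsSeeing(double markerBearingDegrees)
    {
        var visible = new List<int>();
        for (int i = 0; i < SensorYaws.Length; i++)
        {
            double offset = Math.Abs(markerBearingDegrees - SensorYaws[i]);
            if (offset <= SensorFov / 2.0) visible.Add(i);
        }
        return visible;
    }

    static void Main()
    {
        // Sweep a marker across the front of the board: in the overlap regions two
        // sensors see it at once, which is where tracking would be handed off.
        for (double bearing = -90; bearing <= 90; bearing += 15)
            Console.WriteLine("bearing {0,4} deg -> sensors [{1}]",
                              bearing, string.Join(", ", SensorsSeeing(bearing)));
    }
}
```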

It is inevitable that Oculus will move to using cameras on the HMD to track position (and pass through a picture to the user), but this may not come for another couple of consumer versions, so in the meantime using multiple narrow-field-of-view cameras together to provide wide-field-of-view tracking is quite possible.

You can ‘play’ the demo here. Press 1 and 2 to switch between a 90º camera watching three overlapping 72º cameras and a 160º camera in the same position, and move your mouse. You can see the sphere behind the ‘displays’. In a real application the cameras would not move, the user would, but the demo lets you move the camera to show how the views would overlap and still give you a wide, undistorted field of view.

 

** A quick Google reveals the sensors in the Oculus camera are actually about $9 each! More than I hoped, but still not crazy money…

 

The experts over on Reddit had this to say:

3rd_Shift: “It seems utterly preposterous to pursue a multiple camera solution with the added cost and complexity that entails when you could achieve the same result with a wide-angle lens and a higher resolution camera.”

Randomoneh: “Have you ever used a fisheye lens? It seems like you’re confusing fisheye for rectilinear lens.”

My reply:

Yes, of course.

So let’s see: how about just using, say, a lens like the rectilinear Nikon 13mm f/5.6? Well, we don’t actually need that lens, just clone it in plastic… it only has a 118-degree field of view, but we can’t go beyond that without going into fisheye territory…

http://www.kenrockwell.com/nikon/13mm.htm

But not to worry… we’ll clone it in plastic, which will make it cheaper, right? Let’s make it only 1% of its original price… despite the fact that it has 12 groups / 16 elements (i.e. 16 lenses) and weighs over 1kg.

So 1% of the original price is about $80. Yes, in 1979 the lens cost $8k… now they’re, god knows… $25k+??

So: an $80, 1kg plastic lens for 118 degrees vs. three $10, 10g cameras for 180 degrees.

“utterly preposterous” you say….?

Please feel free to link to a nice 180 degree rectilinear lens for $30….