Wednesday, June 26, 2013

Coherent UI Mobile Beta is now available on Unity Asset Store

Basic and Standard Packages of Coherent UI for Unity3D get mobile support as a free update.



Our mission at Coherent Labs is to help you - game developers and game production studios - save money and time in the UI implementation process. We are working hard on meeting your needs, and in the past few months we have received many requests for iOS and Android support in Coherent UI for Unity3D. Today we are glad to announce that we now officially support mobile platforms as well. Mobile support is included as a free update in the Basic and Standard packages of Coherent UI for Unity3D.


For those of you who create games for mobile platforms only, we are also introducing our new standalone product - Coherent UI Mobile, which is now available on the Unity Asset Store. Order it now, during the beta, and receive a 30% discount - a price of only $70.



Coherent UI Mobile features:
- In-game browser
- Easy exposure of game objects to the UI
- Hardware-accelerated CSS3, canvas and WebGL
- Fully functional preview on Windows and Mac OS X
- iOS support at the moment (Android will be added soon)

All versions of Coherent UI for Unity3D are published on the Asset Store.

Tuesday, June 18, 2013

Modern Game UI with the Oculus Rift - Part 2

In this second part of the series I'd like to share some thoughts about how to create, integrate and organize UI in Virtual Reality (VR), and more specifically for the Oculus Rift.
The conclusions I reached are very similar to the ones the Valve team reached in their work porting Team Fortress 2 to the device. I'll also mention some ideas I plan to try in the future but didn't have time to complete.

UI in Virtual reality

In traditional applications the UI can be divided conceptually into two types - UI elements residing in the 3D world, and elements that get composited directly on the screen and hence 'live' in the 2D plane of the display. Recently the distinction between these types of interfaces has been diminishing - almost all modern game UIs have 3D or pseudo-3D elements in their HUDs and menus. In VR the difference vanishes, as we'll see later in the post.

Some UI elements rendered with Coherent UI in a traditional non-VR application
3D world UI elements usually need no change when transitioning a game to VR. They are already in the game world, so no special care is needed. The overlay UI, however, needs significant modifications in its rendering, and probably also in the elements themselves, to cope with the specifics of the Rift.

If you leave the same 2D overlay for the UI in VR you'll get something like this:

This result is obviously wrong: most of the elements won't be visible at all because they fall outside the player's field of view. The central region of each eye in the Rift is where the player sees most clearly; everything else is periphery - the same applies to your eyes in the 'real' world.
If we composite the HUD before the distortion, we'll get this:
At least now everything is within the player's FOV, but most of the HUD still ends up in the player's peripheral vision. More importantly, the UI is not adjusted for stereo rendering - the left and right eyes see different things, and the result will surely cause at least a headache.

Stereo UI

What I did is the same thing the Valve team did in TF 2 - draw the UI on a plane that always stays in front of the player.
The correct result
TF 2 is an FPS game where you have a body that holds a weapon while you can freely rotate your head. Valve made a very clever decision when they noticed that players were forgetting which way their bodies were facing after looking around with their virtual heads: they always put the UI in front of the body, not the head. In this way the player always has a point of reference and can return to facing forward with respect to her body.
In the Coherent UI demo we have a free-flying camera, so I locked the UI plane to always face the camera. This produced the very cool effect of feeling like the pilot of a fighter jet. The 3D effect visible on some of the HUD elements becomes even more convincing in VR and adds a 'digital helmet' feeling.
Notice in the screenshot how small and concentrated the UI becomes - this is the position I personally felt most comfortable with. It is unpleasant to have to move your eyes inside the Oculus to look at a gauge or indicator that is too far from your focus point. The UI is semi-transparent, so it doesn't get in the way - with the exception of the big element with the CU logo in the upper right corner. It is too big.
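
For reference, here is a minimal sketch of how the UI plane can be kept locked in front of the camera (assumptions: DirectXMath, and a hypothetical Camera struct with position, forward and up vectors - this is illustrative, not the demo's actual code):

#include <DirectXMath.h>
using namespace DirectX;

struct Camera {
    XMVECTOR Position;
    XMVECTOR Forward; // normalized
    XMVECTOR Up;      // normalized
};

// Recomputed every frame: places the HUD quad a fixed distance in front of the
// camera and orients it back towards the viewer (billboarding). Assumes the
// quad is modeled in the XY plane facing +Z.
XMMATRIX ComputeHUDPlaneWorld(const Camera& cam, float distance /* e.g. 1.5f */)
{
    // Position the plane 'distance' units along the camera's forward vector.
    XMVECTOR planePos = XMVectorAdd(cam.Position, XMVectorScale(cam.Forward, distance));
    // A look-at matrix built from the plane towards the camera...
    XMMATRIX lookAt = XMMatrixLookAtLH(planePos, cam.Position, cam.Up);
    // ...inverted, is exactly the world transform that makes the quad face the camera.
    return XMMatrixInverse(nullptr, lookAt);
}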

UI design considerations for VR

This brings me to the point that having a UI that is merely correct in VR is not enough - it must be tailored for it. What looks good and usable outside VR will most probably feel very different inside it.
First, notice that the aspect ratio of the HUD is different - outside VR it is very wide, matching the aspect of the screen. In the Rift, however, the HUD needs to be replicated for each eye, and each eye's view has a much narrower aspect. This means that elements that were very far apart will get closer together and might even overlap.
UI elements that sit close and directly in front of the player also shouldn't get in the way of gameplay. I think transparency mostly solves this, because the HUD is still in the semi-peripheral region of the player's sight.
The resolution of the current generation of the Rift dev kit is very low and makes reading text a real pain. The UI should be created with that in mind: numerical and textual information should be kept to a minimum and replaced with more pictographic and color-coded elements.

In his TF 2 presentation, Joe Ludwig argues that UI should be avoided in VR, but I think it actually becomes even more interesting and compelling. The jet-pilot-helmet feeling I got after adding the HUD felt a lot more immersive to me than the 3D world alone.

I decided to also modify the sample menu scene with which the demo starts. The normal scene is just the Coherent UI logo with some 3D-animated buttons and a cool background. It is nice in 2D but looks somewhat dull in VR.
The old menu scene

I made a very simple modification - I just removed the background and placed the menu in an empty 3D world with nothing but a skybox. This allows the player to look around even in the menu scene, and having the game respond to head movement right away immediately immerses them in VR.
The new VR menu scene

Future

There are some ideas that I plan to try out but didn't have the time while doing this integration.
The most interesting plan I have is to try to simulate true 3D stereo elements in the HUD. Currently the gadgets are transformed in 3D in the space of the UI itself, and the resulting image is then splatted onto a plane in the game world. As Coherent UI supports all CSS3 transformations, it should be possible to pass all relevant matrices to the UI itself, draw the elements as if they were correctly positioned in the 3D world, and then just composite the whole image on the screen.
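
To illustrate the idea, here is a small sketch of my own (the function name is hypothetical, and the Coherent UI binding call that would deliver the string to a view is deliberately not shown): a DirectXMath matrix can be serialized into a CSS matrix3d() string that the page applies as a transform. Note that matrix3d() expects its sixteen values in column-major order, so depending on your math conventions a transpose may be needed first.

#include <DirectXMath.h>
#include <cstdio>
#include <string>

// Formats a 4x4 matrix as a CSS matrix3d() string. How the string reaches the
// UI (for example through a value exposed to the page's JavaScript) depends on
// the binding layer and is not shown here.
std::string ToCSSMatrix3D(const DirectX::XMFLOAT4X4& m)
{
    char buf[512];
    std::snprintf(buf, sizeof(buf),
        "matrix3d(%f,%f,%f,%f, %f,%f,%f,%f, %f,%f,%f,%f, %f,%f,%f,%f)",
        m._11, m._12, m._13, m._14,
        m._21, m._22, m._23, m._24,
        m._31, m._32, m._33, m._34,
        m._41, m._42, m._43, m._44);
    return buf;
}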
As far as the Rift goes, the biggest challenge in the context of UI is still the resolution. It is very difficult and tiring to read text. This, however, makes creating VR-aware UI even more interesting, as new ways of presenting content must be found and employed.

UI in VR is a very new topic with many challenges still ahead, and I will continue to share our experiments and ideas in the field.

Wednesday, June 12, 2013

Coherent UI Mobile comes for Unity3D

We started Coherent Labs with the vision of bringing developers the best tools for building game user interfaces. For the past months we have been fully supporting desktop platforms, and that has allowed us to help some great projects like Planetary Annihilation. Coherent UI has also been integrated with the Unity3D engine and added to the Asset Store.


A lot of Unity3D developers have asked us for mobile support.


To help game creators put their games on as many platforms as possible, we are introducing our new product - Coherent UI Mobile. And to add more value for developers already using Coherent UI for Unity3D, we will add mobile support to the existing versions for free! So if you already own any version of Coherent UI, you'll just need to update to get mobile support.


We are aware that many developers create games for mobile platforms only, so Coherent UI Mobile will also be available as a standalone product for just $100. We have even included a special version of the desktop library, so developers will be able to preview their work in the Unity3D editor.


So check the Asset Store on June 17, when we will open the beta with iOS support. Sign up now to get a notification and join the beta. For a limited time we will also be offering a 30% discount if you pre-order the final version of Coherent UI Mobile. A free evaluation version is available from our site.


We will be working hard on the Android version so expect it to be added to the beta very soon.


  
What can you do with Coherent UI Mobile?


Coherent UI Mobile comes with functionality similar to Coherent UI for desktop, and it can be used to develop HUDs, menus and in-game browsers. Check our blog next week, when we are planning to release our first tutorial explaining how to use Coherent UI Mobile to easily create amazing game UI.


We encourage you to sign up for the beta today and get a download link on the first day of the release.

Wednesday, June 5, 2013

Modern Game UI with the Oculus Rift - Part 1

In this series I would like to share with you the impressions I had while porting the Coherent UI demo graphics framework and demo interfaces to the Oculus Rift dev kit. In the first part I'll cover the rendering integration of the Rift, while in the following posts I'll talk about the strictly UI-related issues in virtual reality.
Many of the problems I'll cover have been tackled by Valve in their port of Team Fortress 2 to the Rift, as well as by their R&D team. Extremely valuable resources on the current state of VR that helped me a lot while doing the port are given below in the references section.

The Rift

The Oculus Rift is a device that encompasses a head-mounted display and an array of sensors that track the orientation of the head. A good explanation of how to perform the integration, and of the details of the device, can be found in the Rift SDK. It can be freely downloaded after registration, and I encourage anybody interested in VR to take a look at it even if you don't have a Rift device yet. The dev kit is still a 'beta' version of the final product. The major issues I find currently are the somewhat low resolution of the display and some yaw drift that really makes the developer's life tough. Oculus are working on these, and I'm positive that the final consumer product will be amazing. Other than that, the experience of playing and developing with the Rift is a great one, and I'd encourage anyone who hasn't ordered a kit yet to hurry up and do it.

Porting the demo client (application)

Here at Coherent Labs we have a small 3D application we use for some of our demos. It is based on DirectX 11 and an in-house framework designed for rapid prototyping of rendering tasks. It is not a complete engine, but it has a fairly good rendering backbone and a complete integration with Coherent UI - a lot of the functionality is exposed to JavaScript, and you can create Coherent UI views as well as animate and record the camera movement for demos through the script in the views themselves.
The task at hand was to add support for VR, implemented via the Oculus Rift.
I'll give a brief summary of what the process looked like for our framework. The Oculus SDK is very good at pointing out what should be done, and the process is not very complicated either. If the graphics pipeline of the engine is written with VR in mind, it is actually trivial. Ours was not, so modifications were necessary.

From this..

.. to this


The pipeline of the framework we use is based on a list of render phases that get executed in order every frame. We use the light pre-pass (LPP) technique and have several post-processing effects.
In order to support stereo rendering, some phases must be done twice - once for the left eye and once for the right. When drawing the eyes we simply draw into the left and right halves of the render target respectively, with different view and projection matrices, as sketched below.
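
Here is a minimal D3D11 sketch of that per-eye viewport switch (the function name and the drawScene callback are illustrative, not our framework's actual API):

#include <d3d11.h>
#include <functional>

// Draws each eye into its half of the shared render target simply by changing
// the viewport before issuing the per-eye passes.
void RenderStereo(ID3D11DeviceContext* ctx,
                  unsigned rtWidth, unsigned rtHeight,
                  const std::function<void(int /*eye: 0 = left, 1 = right*/)>& drawScene)
{
    for (int eye = 0; eye < 2; ++eye)
    {
        D3D11_VIEWPORT vp = {};
        vp.TopLeftX = (eye == 0) ? 0.0f : rtWidth * 0.5f; // left or right half
        vp.TopLeftY = 0.0f;
        vp.Width    = rtWidth * 0.5f;
        vp.Height   = static_cast<float>(rtHeight);
        vp.MinDepth = 0.0f;
        vp.MaxDepth = 1.0f;
        ctx->RSSetViewports(1, &vp);

        drawScene(eye); // GBuffer, lights, resolve, UI views, HUD for this eye
    }
}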

The non-VR events look like this:

1) Set View & Projection matrices for the frame
2) Shadow maps building
3) Clear render targets
4) Fill GBuffer
5) Fill lights buffer (draw lights)
6) Resolve lighting (re-draw geometry)
7) Draw UI Views in the world
8) Motion blur
9) HDR to LDR (just some glow)
10) FXAA
11) Draw HUD
12) Present

Of those, steps 4-7 must be done for each eye. LPP can be quite costly in terms of draw calls and vertex processing, and even more so in the VR case. Our scenes are simple and we didn't have any problems, but that's something to be aware of.
I removed the motion blur outright because it really makes me sick in VR, and the Oculus documentation also points out that motion blur should be avoided. I also removed the HUD drawing, as it is handled differently than as a full-screen quad, as I'll explain in the next posts.


The VR pipeline looks like:

1) Set central View & Projection matrices for the frame
2) Shadow maps building
3) Clear render targets
4) Set left eye View & Projection
 4.1) Fill GBuffer
 4.2) Fill lights buffer (draw lights)
 4.3) Resolve lighting (re-draw geometry)
 4.4) Draw UI Views in the world
 4.5) Draw HUD
5) Set right eye View & Projection
 5.1) Fill GBuffer
 5.2) Fill lights buffer (draw lights)
 5.3) Resolve lighting (re-draw geometry)
 5.4) Draw UI Views in the world
 5.5) Draw HUD
6) HDR to LDR
7) FXAA
8) VR Distortion
9) Present

Conceptually it is not that much different or more complicated, but the post-effects in particular have to be modified to work correctly.

As I said, I draw the left and right eyes into the same render target.
The render target before the distortion

Render routines modifications

The shared render target has several implications for any post-processing routines. The HDR-to-LDR routine in our application does a glow effect by smearing bright spots of the frame into a low-res texture that gets re-composited onto the main render target. This means that some smearing might cross the edge between the eyes and 'bleed' onto the other one. Imagine a bright light on the right side of the left eye (near the middle of the image) - if no precautions are taken, the halo of the light will cross into the right eye and appear on its left side. This is noticeable and unpleasant, looking like some kind of dust on the eye.
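
One possible way to avoid the bleed, sketched below under assumptions (the struct and the clamping idea are illustrative, not necessarily what our framework does): tell the glow/blur shader where the seam between the two eye halves lies, and have it clamp its sample coordinates to the half that the destination pixel belongs to.

// The C++ side only supplies the seam position; the clamping itself happens in
// the blur shader, along the lines of:
//   float lo = (uv.x < seamU) ? 0.0  : seamU; // left edge of this pixel's half
//   float hi = (uv.x < seamU) ? seamU : 1.0;  // right edge of this pixel's half
//   sampleUV.x = clamp(sampleUV.x, lo, hi);
struct GlowSeamConstants
{
    float seamU;      // 0.5 when both eyes share a single render target
    float padding[3]; // keep the constant buffer 16-byte aligned
};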
Post-process anti-aliasing algorithms might also suffer, as they usually look for edges as discontinuities in the image and will identify one where the left and right images meet. That edge is perfectly vertical, however, so no changes should be needed.

The VR distortion routine is the one that creates the interesting 'goggle' effect seen in screenshots and videos of the Rift. The lenses of the HMD introduce a large pincushion distortion that has to be compensated in software with a barrel distortion. The shader performing this is provided in the Oculus SDK and can be used verbatim. It also modifies the colors of the image slightly, because when viewing images through lenses the colors get distorted by a phenomenon called 'chromatic aberration', and the shader compensates for that too.

An important point mentioned in the Oculus documentation is that you should use a bigger render target to draw the image and have the shader distort it down to the final size of the back buffer (1280x800 on the current model of the Rift). If you use the same size, the image is correct but the FOV is limited. This is extremely important - at least for me, having the image synthesized from a same-size texture was very sickness-inducing, as I was seeing the 'end' of the image. The coefficient to scale the render target is provided by the StereoConfig::GetDistortionScale() method in the Rift library. In my implementation, steps 4-8 are actually performed on the bigger render target.
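
In code this amounts to something like the following (sketched against the 0.2.x-era LibOVR helpers the post refers to; header and namespace details may differ between SDK versions):

#include "OVR.h" // LibOVR; OVR::Util::Render::StereoConfig

// Enlarge the off-screen render target by the distortion scale so that the
// barrel-distortion pass still fills the 1280x800 back buffer without
// limiting the field of view.
void ComputeRenderTargetSize(OVR::Util::Render::StereoConfig& stereo,
                             unsigned backBufferW, unsigned backBufferH,
                             unsigned& rtW, unsigned& rtH)
{
    const float scale = stereo.GetDistortionScale(); // > 1.0 on the dev kit
    rtW = static_cast<unsigned>(backBufferW * scale + 0.5f);
    rtH = static_cast<unsigned>(backBufferH * scale + 0.5f);
}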

The StereoConfig helper class is provided in the SDK and is very convenient. The SDK works with a right-handed coordinate system while we use a left-handed one - this requires attention when reading the orientation of the sensor (the head) from the device, and if you directly use the projection and view adjustment matrices provided by the helper classes. I decided to just calculate them myself from the provided parameters - the required projection matrix is documented in the SDK, and the view adjustment is trivial because it only involves moving each eye half the distance between the eyes to the left or right.
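
A sketch of both adjustments with DirectXMath (left-handed and row-vector convention, matching our framework; the projection offset follows the formula documented in the SDK for the dev kit, and the screen/lens measurements come from the SDK's HMD info - treat the exact names as assumptions):

#include <DirectXMath.h>
using namespace DirectX;

// View adjustment: the left eye sits half the interpupillary distance (IPD) to
// the left of the head center, so in view space the world shifts by +IPD/2
// (and by -IPD/2 for the right eye).
XMMATRIX EyeViewAdjust(const XMMATRIX& centerView, float ipd, bool leftEye)
{
    const float halfIPD = ipd * 0.5f;
    return XMMatrixMultiply(centerView,
        XMMatrixTranslation(leftEye ? halfIPD : -halfIPD, 0.0f, 0.0f));
}

// Projection offset: compensates for the lens centers not being in the middle
// of each half of the screen. hScreenSize and lensSeparation are the physical
// sizes reported by the SDK's HMD info.
XMMATRIX EyeProjection(const XMMATRIX& centerProj,
                       float hScreenSize, float lensSeparation, bool leftEye)
{
    const float viewCenter = hScreenSize * 0.25f;
    const float eyeShift   = viewCenter - lensSeparation * 0.5f;
    const float offset     = 4.0f * eyeShift / hScreenSize; // projection-space shift
    return XMMatrixMultiply(centerProj,
        XMMatrixTranslation(leftEye ? offset : -offset, 0.0f, 0.0f));
}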

One small detail that kept me wondering for an hour: if you plug the distortion parameters (given by StereoConfig::GetDistortionConfig()) directly into the shader for both eyes, the image will not be symmetric - the outline of the right eye will look like the left one. For the right eye you have to negate DistortionConfig::XCenterOffset. This is done in the Oculus demo, but not very prominently, and while there usually are parameter getters for both eyes, there is just one for the DistortionConfig, which leads you to think it might be the same for both eyes. If you analyze the shader code carefully you will notice the discrepancy, but the API successfully puzzled me for some time.
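
In code the workaround is a one-liner (again sketched against the 0.2.x LibOVR types the post refers to; check the exact names in your SDK version):

#include "OVR.h"

// Returns the horizontal lens-center offset to feed the distortion shader for
// the given eye. The SDK hands out a single DistortionConfig, so the offset
// must be mirrored for the right eye.
float LensXCenterOffset(OVR::Util::Render::StereoConfig& stereo, bool leftEye)
{
    const OVR::Util::Render::DistortionConfig& dist = stereo.GetDistortionConfig();
    return leftEye ? dist.XCenterOffset : -dist.XCenterOffset;
}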

In the next posts I'll specifically talk about UI in the Rift.

References

Michael Abrash's blog
Lessons learned porting Team Fortress 2 to Virtual Reality
John Carmack - Latency Mitigation Strategies
Oculus Dev. Center