Rendering transparency and black on HoloLens

HoloLens is an additive display: it can only add light to the real world and cannot block light to make the world look darker. This means that if you show an image with a black-and-white gradient, the black part will be transparent and the white part will be opaque. Alpha (transparency) does not matter for blending with the real world. However, alpha does play a role when blending with other 3D content that is rendered behind the image.

When recording a photo or video, the on-board colour camera captures the outside world. This capture is blended with the rendered scene, which typically looks different from what the user actually sees on the device.

The all-encompassing alpha blending experiment

First I created 4 textures to show on a quad. All textures have an alpha of 1 (opaque) in the center and 0 (fully transparent) on the outside. The differences are in the RGB colors of these textures:

  • white center to white outside
  • white center to black outside
  • black center to white outside
  • black center to black outside

The images I used, shown on a blue background for contrast:

[Image strip: whitefade.png, whiteblackfade.png, blackwhitefade.png, blackfade.png]
Images as used in the columns

Unity scene

I set up a simple test scene with a number of quads using these different textures and blend modes. The rows in this scene show the common shader blend modes. The stretched box behind each row is there to show the difference between blending with other rendered content and blending with the real world (represented by the dark gray background in Unity).

Here’s what it looks like in Unity:

The common shader blend modes that I keep needing to look up:

Blend SrcAlpha OneMinusSrcAlpha // Traditional transparency
Blend One OneMinusSrcAlpha // Premultiplied transparency
Blend One One // Additive
Blend OneMinusDstColor One // Soft additive
Blend DstColor Zero // Multiplicative
Blend DstColor SrcColor // 2x multiplicative
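
As a side note, these blend factors can also be configured from script when a shader exposes them as properties, as the Unity Standard shader does with _SrcBlend and _DstBlend. A minimal sketch for traditional transparency (property names may differ for other shaders):

using UnityEngine;
using UnityEngine.Rendering;

public class TraditionalTransparencySetup : MonoBehaviour
{
    void Start()
    {
        // Assumes the material's shader exposes its blend factors as _SrcBlend/_DstBlend
        // properties (the Unity Standard shader does); adjust the names for other shaders.
        var material = GetComponent<Renderer>().material;
        material.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
        material.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
        material.renderQueue = (int)RenderQueue.Transparent;
    }
}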

In HoloLens simulation

Rendering this scene on HoloLens will make all black parts invisible because light can only be added. It’s almost impossible to capture this with a camera through the actual device, so I created a composition that simulates what it looks like in the HoloLens: basically an additive blend with the background image.

HoloLens PhotoVideo camera capture

The HoloLens has a colour camera in the middle above the eyes to capture photos and videos. The resulting photos and videos are a composition of what the colour camera captured and an image of the rendered scene. It’s important to know that this composition uses the alpha value that is rendered in the scene instead of doing an additive blend like I did in the composition above. This makes the outcome a bit different in some cases.
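
To make the difference concrete, here is a small C# sketch (not from the original post) of the two per-pixel blend formulas: the additive blend used for the simulation and the straight alpha blend that the capture composition appears to use.

using UnityEngine;

public static class HoloLensBlendSketch
{
    // What the user perceives on the additive display: rendered light is added
    // to the real world and the rendered alpha is ignored.
    public static Color AdditiveBlend(Color world, Color rendered)
    {
        return new Color(
            Mathf.Clamp01(world.r + rendered.r),
            Mathf.Clamp01(world.g + rendered.g),
            Mathf.Clamp01(world.b + rendered.b),
            1f);
    }

    // What ends up in the photo/video capture: the rendered pixel is blended over
    // the colour camera pixel using the rendered alpha, so opaque black stays black.
    public static Color CaptureAlphaBlend(Color camera, Color rendered)
    {
        return Color.Lerp(camera, rendered, rendered.a);
    }
}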

Here’s what it looks like when this scene is captured on HoloLens:

However this is not what the user sees. When you look at the fourth column you can clearly see a few parts where there is black shown on top of the real world. That is impossible on the actual device (as seen in the simulation).

Another noticeable difference is between the white image with traditional transparency and the white-to-black image with premultiplied transparency blending. In Unity and on HoloLens these images look similar, but in the captured image they look different: there is a black halo around the image. This tells me that for HoloLens capture purposes it is better to use traditional transparency.

Black halo in capture

Rendering black: a matter of perception

Although an additive light display technically cannot render black, it is possible to perceive darkening. When a dark part is rendered in front of a lighter area the user will perceive this as a darker colour. In practice this is highly dependent on the brightness of the HoloLens display, the brightness of the environment and the amount of lighter surroundings in the 3D scene.

When you look at the top right quads in the photos above you see that the simulation (left) shows the background and the captured image (right) shows a black colour on top of the background bar. In practice the perceived effect will be a mix of these two images.

HoloLens Simulation vs HoloLens camera capture

This effect of visual perception is demonstrated in the well-known checker shadow illusion by Edward Adelson. Tiles A and B in this image have the exact same colour, but the perceived colours differ because of their surroundings.

On HoloLens this can be used to create parts that are perceived as almost black by surrounding them with a lighter area. For instance, black text on a light background plane will be visible, but black text alone will not. This works best in a dimmed environment with the brightness of the HoloLens set to its highest.

More

Building a holographic card with the MRTK Standard Shader

Another blog post based on one of the TechArt challenges by Harry Alisavakis. This time the theme was Trading card foil. The challenge was a perfect opportunity to work with the MRTK Standard Shader that is part of the Mixed Reality Toolkit. The MRTK Standard Shader is optimized for use in Mixed Reality applications. It is a so-called übershader that contains many options that can be enabled when needed. Besides regular lighting it also contains options to add a stencil portal and iridescence.

In this blog post I will go into the three pieces that I used to construct the card shown above:

  • The portal effect that masks out the character and his background
  • The rainbow colors that run across the card and depend on viewing angle (not in the video recording due to a bug)
  • The character pose controlled by viewing angle

The hand interaction is based on the commonly used ObjectManipulator that is part of the MRTK.

Portal Stencil mask

A portal card can easily be achieved with the MRTK Standard Shader when you know what to look for. Here’s the scene setup for the portal card. The top level Card object contains ObjectManipulator, BoxCollider, and NearInteractionGrabbable components to make it manipulable on HoloLens.
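
For reference, here is a minimal sketch of wiring these components up from code instead of through the inspector, assuming MRTK 2.x namespaces:

using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class CardInteractionSetup : MonoBehaviour
{
    void Awake()
    {
        // The collider defines the volume that can be grabbed and poked.
        gameObject.AddComponent<BoxCollider>();
        // ObjectManipulator handles the far and near manipulation gestures.
        gameObject.AddComponent<ObjectManipulator>();
        // NearInteractionGrabbable enables direct grabbing with articulated hands.
        gameObject.AddComponent<NearInteractionGrabbable>();
    }
}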

Without stencil masking the scene looks like the screenshot below. Visible are the PortalBackground, the Timmy character model and FrontSide of the card.

To create a stencil portal the following parts are needed:

Stencil mask producer: The StencilPortal object is a quad that generates a stencil mask. This stencil mask is then used to determine which pixels end up on screen. It is important to note that the stencil mask should be rendered before any object that needs it. Therefore the Render Queue of the StencilPortal material is set to 1999 so it renders before the regular geometry. Furthermore you can see that each rendered pixel of the StencilPortal fills the mask with the value 1 (read: Always Replace with 1). It is possible to generate different masks with different values. Note that the StencilPortal object does not render visible pixels to the screen; it is only used to fill the stencil mask.

Stencil Mask Producer Material

Stencil mask consumer: The PortalBackground and Timmy materials also use the MRTK Standard Shader, but they have their Stencil settings set to the values below. Basically the shader is told to Keep a pixel when the stencil mask contains a value Equal to 1 and to ignore all other pixels of the object.

Stencil Mask Consumer Material
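
The same producer and consumer settings can also be applied from script. The sketch below is an assumption of how the MRTK Standard Shader exposes its stencil settings (the property names are taken from the inspector labels), so verify them against the shader source:

using UnityEngine;
using UnityEngine.Rendering;

public static class PortalStencilSetup
{
    // Property names are assumptions based on the MRTK Standard Shader inspector labels.

    // Mask producer: always writes 1 into the stencil buffer and renders
    // just before the regular geometry (render queue 2000).
    public static void ConfigureProducer(Material portalMaterial)
    {
        portalMaterial.renderQueue = 1999;
        portalMaterial.SetInt("_StencilReference", 1);
        portalMaterial.SetInt("_StencilComparison", (int)CompareFunction.Always);
        portalMaterial.SetInt("_StencilOperation", (int)StencilOp.Replace);
    }

    // Mask consumer: only keeps pixels where the stencil buffer equals 1.
    public static void ConfigureConsumer(Material maskedMaterial)
    {
        maskedMaterial.SetInt("_StencilReference", 1);
        maskedMaterial.SetInt("_StencilComparison", (int)CompareFunction.Equal);
        maskedMaterial.SetInt("_StencilOperation", (int)StencilOp.Keep);
    }
}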

After adjusting the stencil settings this results in the image below. Note that the FrontSide of the card is disabled in this screenshot to clearly show the result of the stencil testing.

Stencil Masking Result

With the FrontSide enabled it looks like the image below. The FrontSide object is a bit bigger than the rendered portal so the rounded corners don’t show the stencil-masked objects behind it. The FrontSide is a backface-culled quad.

The BackSide object is also a backface-culled quad with a different texture, but facing the other way.

Iridescence

Iridescence is used to generate colors that vary across the surface and with view direction. It has a few variables that you can control. First of all there is a SpectrumMap texture: a one-dimensional lookup texture that is used as a lookup map for the iridescence color. I used a rainbow color texture, but it could just as well be any other fancy gradient. (See also: Improving the Rainbow) Note that this texture is sampled twice, to create a color variation based not only on viewing angle but also on UV coordinate. The iridescence color is added to the albedo (base color) in the fragment shader, which means that iridescence will be most noticeable on the darkest parts of the material. Also note that the iridescence color is calculated per vertex in the MRTK Standard Shader.

Intensity is a simple scale factor for the amount of iridescence that is added.
Threshold controls the amount of gradient falloff across the surface: a value of 0 makes it fully viewing angle dependent, a value of 1 makes it fully depend on UV coordinates.
Angle controls the direction of the gradient: a value of 0 makes the gradient perfectly horizontal, a value of -0.78 rotates it 45 degrees to the left and a value of 0.78 rotates it 45 degrees to the right.
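
These values can also be tweaked from script. The property names in the sketch below are my assumption of how the MRTK Standard Shader exposes them, so verify them against the shader source:

using UnityEngine;

public class IridescenceTweaker : MonoBehaviour
{
    [SerializeField] private Texture2D spectrumMap; // 1D rainbow gradient texture

    void Start()
    {
        var material = GetComponent<Renderer>().material;
        // Property names are assumed from the MRTK Standard Shader inspector labels.
        material.SetTexture("_IridescentSpectrumMap", spectrumMap);
        material.SetFloat("_IridescenceIntensity", 0.5f);   // overall amount
        material.SetFloat("_IridescenceThreshold", 0.05f);  // view angle vs UV falloff
        material.SetFloat("_IridescenceAngle", -0.78f);     // gradient rotated 45 degrees left
    }
}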

To better show how the Iridescence Threshold and Angle influence the final result I created a small test.

MRTK Standard Shader Iridescence Threshold and Angle test setup

Viewing angle dependent character pose

I made the Timmy character inside the card change pose based on viewing angle. This was done by placing a character animation on a timeline that is manually controlled from the PlayableDirector in the character.

Finally, here is the PoseByViewingAngle script that is used to calculate the animation time based on the viewing angle.

using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;
using UnityEngine.Playables;

public class PoseByViewingAngle : MonoBehaviour
{
    [SerializeField]
    [Tooltip("The PlayableDirector that controls pose")]
    private PlayableDirector PosePlayableDirector;

    [SerializeField]
    private Transform targetTransform;

    private void Start()
    {
        if (!targetTransform)
        {
            targetTransform = CameraCache.Main.transform;
        }
    }

    protected void Update()
    {
        if (!targetTransform)
            return;

        // Get a vector that points from this object to the target (the main camera).
        Vector3 directionToTarget = targetTransform.position - transform.position;

        var angle = Vector3.Angle(directionToTarget, -transform.forward);

        PosePlayableDirector.time = Map(angle, 0, 90, 1.666f, .333f); 
        PosePlayableDirector.Evaluate();
    }

    public static float Map(float value, float from1, float to1, float from2, float to2)
    {
        return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
    }
}

More

Blowing bubbles with Shader Graph

I ran into a bi-weekly TechArt challenge by Harry Alisavakis with the theme Watercolors. I decided to participate and use Shader Graph in Unity to do some soap bubble rendering.

Reflections & Iridescence

Real life soap bubble

The most visible features of a soap bubble are its reflections and its rainbow like colours. You can see reflections from both the front and back side of the bubble surface. This results in a mirrored reflection of the environment of the bubble.

The colours are caused by interference of the light that is reflected from the outside and the inside of the thin bubble surface. This phenomenon is called iridescence and it can also be found in sea shells, butterflies and other insects, and you may also know it from the surface of a CD. When you inspect the bubble surface closely you can see a complex pattern of dancing colours. This is caused by variations in the bubble surface thickness due to complex fluid interactions known as the Marangoni effect.

Reflections in Shader Graph

For the reflections I created a simple room with two fake windows. A reflection probe was used to bake it into a cubemap. Alternatively you could use an HDRI image captured in real life.

Bubbles need windows to reflect

A basic PBR graph with the Workflow set to Specular, Surface to Transparent and a Smoothness of 1 will already give you a nice reflective material. Add in a bit of Fresnel effect so the reflections are mostly noticeable on the outside and here’s the result.

Shaders are made easy with Shader Graph
Cubemap reflection with Fresnel falloff

Iridescence in Shader Graph

I took a more artistic approach for the simulation of the bubble surface variations by blending two layers of scrolling noise. This misses the swirls that you typically see in a real bubble, but that won’t be noticeable at a distance. I added a vertical falloff to simulate that the thickness of a bubble is a bit larger at the bottom. The surface thickness variations result in an animated grayscale value. A gradient lookup is then used to determine the iridescence colour. The gradient is based on a paper by Andrew Glassner.

The iridescence part hooks into the specular colour of the PBR node
Front face reflection and iridescence

Double sided materials

To create a double-sided material that handles front-facing and back-facing surfaces differently, you can use the IsFrontFace node and branch into different parts of the shader based on its value. This works well with opaque materials and can be done in a single pass. Below is a simple example that displays the result on an open cylinder.

Opaque material marked as a Two Sided
Two-sided opaque material ShaderGraph
Open cylinder with two-sided material applied

Double sided transparency

With transparent materials we typically want to do a separate backface pass and frontface pass to at least have a coarse way of sorting the model surfaces. That is easy to do in a ShaderLab (hand-coded) shader, but a bit harder when using Shader Graph.
Since the Universal Render Pipeline (URP) only supports single-pass materials by default, we need to come up with a different trick. We could create a copy of the mesh and render that with a separate material for the backside, but that would mess up the scene tree, and it would get even worse if the geometry were dynamic. Instead I chose to add a second material that uses the same Shader Graph shader, but with a RenderFront flag to enable front or backface rendering.
Note that Unity shows a warning and advises the use of multiple shader passes, which are not supported when using URP. 🤷‍♂️
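
A minimal sketch of that two-material setup; "_RenderFront" is my assumed reference name for the RenderFront property exposed by the graph, so check the actual name in the Shader Graph blackboard:

using UnityEngine;

public class DoubleSidedBubbleSetup : MonoBehaviour
{
    [SerializeField] private Material bubbleMaterial; // the Shader Graph bubble material

    void Start()
    {
        // "_RenderFront" is the assumed reference name of the RenderFront property.
        var backMaterial = new Material(bubbleMaterial);
        backMaterial.SetFloat("_RenderFront", 0f);

        var frontMaterial = new Material(bubbleMaterial);
        frontMaterial.SetFloat("_RenderFront", 1f);

        // With more materials than submeshes, Unity renders the last submesh once per
        // extra material, so the bubble is drawn twice: back faces first, then front faces.
        GetComponent<MeshRenderer>().materials = new[] { backMaterial, frontMaterial };
    }
}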

Two-pass mesh rendering with different materials

The Shader Graph uses the RenderFront flag combined with the IsFrontFace node to determine whether the front or the back face needs to be rendered. I used Alpha Clipping to prevent rendering of the backface when frontface rendering is active. Note that for backface rendering the surface normal needs to be flipped.

Two-sided reflections part
Bubble with two-sided reflection and iridescence

More

Augmented Reality avatar on a web page

Augmented reality on the web is happening and it’s becoming easier to create! Below you will find a small test of an avatar model I created online and included in this post. The avatar can be viewed in your environment on ARCore and ARKit enabled devices.

ReadyPlayer.me avatar

I used ReadyPlayer.me to create a 3D personal avatar of me. On that website you can use a selfie to create a 3D model that resembles your image. Afterwards you can adjust the choices it made. And finally you can download the generated model as a .glb file.
A .glb file is the binary version of a glTF file. glTF is a specification for the efficient transmission and loading of 3D scenes and models by applications.

Google modelviewer

I used Google’s modelviewer to embed the glb model file in the webpage below and enabled AR viewing.

Here’s the source of the snippet I used:

<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.js"></script>
<script nomodule src="https://unpkg.com/@google/model-viewer/dist/model-viewer-legacy.js"></script>
<model-viewer src="avatar.glb" ios-src="avatar.usdz" ar ar-modes="webxr scene-viewer quick-look fallback" ar-scale="auto" alt="Readyplayer.me avatar" auto-rotate camera-controls></model-viewer>

If AR mode is available, an AR button will be visible in the bottom right corner. Depending on which mode is available on your device you will go into AR or open a model viewer.

Here’s the result of running it on an Android phone with ARCore support.

Universal Scene Description

Running the demo on an iPad was a bit more work than I anticipated. Apple only supports models in the .usdz format (Pixar’s Universal Scene Description). As you can see in the modelviewer declaration above, there is a separate ios-src for use on iOS devices.

I could not find a simple tool (running on Windows) to convert the .glb to .usdz. There seem to be better solutions on iOS.
I finally found a solution by importing the .glb in Blender, saving a .blend file, importing the .blend file into Unity and finally exporting the model to .usdz.

More

Demos to install on your HoloLens 2

So you finally laid hands on a brand new HoloLens 2, but you discovered that there are only a few apps preinstalled on the device. Most of them are holographic versions of common Windows apps like Mail, Calendar, Photos or Microsoft Edge and do not do justice to the capabilities of the device.

You run into the same problem in the Microsoft Store. Most apps that are available for desktops are Universal Windows apps and can technically be installed on HoloLens. This does not mean that you should.

There are apps that were specifically developed for HoloLens, but most of the apps that are currently available were developed for HoloLens 1. Furthermore a lot of HoloLens apps in the Store are developer prototypes that do not adhere to the Microsoft design guidelines all that well.

Below you will find a list of apps that were developed with HoloLens 2 in mind (mostly by Microsoft).

HoloLens Tips

A small app that introduces you to basic hand interactions by picking up a holographic flower, and scaling and rotating it. There is also a section about voice commands.

Download HoloLens Tips from the Microsoft Store

HoloLens Playground

An application that is similar to the HoloLens Tips application, but a bit more playful. It is built by Microsoft Design Labs. The most striking demo is the one where you interact with a holographic hummingbird. Reach out your hand and the hummingbird will hover above it. There’s also a piano and eye tracking demo that you can select from a hand menu.

Download HoloLens Playground from the Microsoft Store

Surfaces

Another application by Microsoft Design Labs. This one lets you play with nine different interactive surfaces, each creating different visual and sound effects. There’s a hand menu that allows you to switch between the different scenes. Some interactions reminded me a bit of Magic Leap’s Tonandi, although the experiments in Surfaces are of a much smaller scale.

Surfaces

Download Surfaces from the Microsoft Store
Download source code from GitHub

Mixed Reality Toolkit examples

The Mixed Reality Toolkit is the go-to library for HoloLens developers. The latest iteration applies the design guidelines that Microsoft compiled for Mixed Reality applications. The examples let you familiarize yourself with the available interactions and their visual and audio design. A prebuilt app that contains most of the examples is available from GitHub. Note that you will need to use the HoloLens Device Portal to install it on your device. For HoloLens 2 you will need to download the ARM version.

Hand Interaction Examples

Download the prebuilt app from GitHub
Download source code from GitHub

Galaxy Explorer

Galaxy Explorer is an open source project that Microsoft developed as an example for developers. The version you can download from the Microsoft Store (with female voice over) is the old version with HoloLens 1 airtap interaction.

Galaxy Explorer

The project was updated to work with hand interaction of the HoloLens 2 (male voice over), but currently you will have to build it yourself if you want to run it on your HoloLens.

Download Galaxy Explorer from Microsoft Store (HoloLens 1 version)
Download the source code from GitHub (HoloLens 2 version)

Periodic Table

Another open source application developed by Microsoft Design Labs. This app was initially developed for HoloLens 1; the design and development process was described here. Later it was ported to HoloLens 2 using the new MRTK, and that migration process is described here. A prebuilt version of the app is available from GitHub, but you will need to use the HoloLens Device Portal to install it.

Periodic Table of the Elements

Download prebuilt app from GitHub
Download source code from GitHub

Ford GT40

A new HoloLens 2 app developed by Microsoft Design Labs. It showcases the Ford GT40 and direct hand manipulations for selecting different car features. Furthermore it demonstrates technical instructions for replacing a part of the car.

Download Ford GT40 from Microsoft Store

Kippy’s Escape

Most of the HoloLens 2 apps shown above were built with Unity. Kippy’s Escape is a HoloLens 2 app built with Unreal Engine. Microsoft Design Labs built it as an example for developers so it also has the source code available. It’s a fun little game where you help Kippy the robot to reach his rocket. It has a few puzzles that you need to solve with direct hand interaction.

Download Kippy’s Escape from Microsoft Store
Download the source code from GitHub

More

ARCore supported devices; a detailed list of phones & tablets

When you want to use an Augmented Reality app that depends on Google’s ARCore (AKA “Google Play Services for AR”) you will need to know which devices support it. There’s the official list of ARCore supported devices, but that only shows brief names of the supported devices. If you need more details you will have to search for them. Not very efficient with an ever-growing list of supported devices, especially if you want to find which tablets currently support ARCore apps.

The official list with extra details

There’s a more detailed list available from the Google Play Console, but to be able to download that you will need to have a developer account and upload an app that uses ARCore. Quite a bit of friction if all you are looking for is which hardware you need for ARCore apps to work.

So I decided to bite the bullet and upload a Unity app with ARCore support to my Google Play developer account. I cloned Unity’s arfoundation samples on GitHub, built the app and made an internal release in the Play Store console. After that I was able to access the Device Catalog under Release Management. As you can see below, the app was supported by 216 Android devices out of a total of 13,579 Android devices back in January 2020.

The Download Device List button lets you download a text file (.csv) that also describes details like Form Factor (PC/Phone/Tablet), the System On Chip, Screen Sizes, Screen Densities and more.

The downloaded devicelist.csv can be found on GitHub here.

ARCore supported tablets

A quick filter of the ARCore supported device list brings up the tablets that currently support ARCore (September 2020); a sketch of the filter follows below the list.

  • Acer Chromebook Tab 10
  • LG G Pad 5 10.1 FHD
  • Samsung Galaxy Tab S3
  • Samsung Galaxy Tab S4
  • Samsung Galaxy Tab S5e
  • Samsung Galaxy Tab S6
  • Samsung Galaxy Tab S7
  • Samsung Galaxy Tab Active Pro

The tablets-only devicelist.csv can be found on GitHub here.
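
The filter itself is trivial. Here is a rough C# sketch of the kind of query used above, assuming the CSV has the Form Factor column described earlier (the exact column layout may differ in your download):

using System;
using System.IO;
using System.Linq;

public static class DeviceListFilter
{
    public static void Main()
    {
        var lines = File.ReadAllLines("devicelist.csv");
        var header = lines[0].Split(',');
        // Column name taken from the Play Console export described above;
        // verify it against your own download.
        int formFactorIndex = Array.IndexOf(header, "Form Factor");

        // Note: a naive comma split; fields that contain commas need a real CSV parser.
        var tablets = lines.Skip(1)
            .Select(line => line.Split(','))
            .Where(columns => columns[formFactorIndex] == "Tablet");

        foreach (var device in tablets)
        {
            Console.WriteLine(string.Join(",", device));
        }
    }
}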

Depth API support

In June 2020 Google officially introduced the Depth API. This API allows developers to retrieve a depth map from their phone. Depth maps can be useful for generating occlusions of virtual objects or for scanning your environment. Not all ARCore supported devices support the Depth API. To see which ones do you can add a filter for the devices that are shown in the device catalogue in the Google Play Console.
Add a filter, select System Feature, then search for and select com.google.ar.core.depth.

The list of ARCore devices that also support the Depth API can be found on GitHub here.

More

Softening the HoloLens FOV border

To hide the limited Field of View of a Mixed Reality headset like the HoloLens or the Magic Leap you can fade out the holograms at the border of the view. I will discuss three possible techniques with different advantages and disadvantages.

Note that all techniques have the same visual result in the HoloLens. However when recorded with Mixed Reality Capture the border effect seems to largely fall outside of the MRC camera field of view.

Postprocessing effect

The most modern solution is applying a post-processing effect. Post-processing effects in Unity can be a heavy hit on fillrate, which is why Microsoft advises against using them on HoloLens. The Magic Leap has a bit more graphics power to spend, so it may be a viable solution on that device. Typically a post-processing effect works by rendering the scene to a texture and then rerendering that texture on a screen-aligned quad with a filter effect (e.g. grayscale, bloom, vignette) applied during that rerender. This can be done multiple times in succession, but since each rerender means modifying every screen pixel, this will cost you fillrate.

Post-processing a scene in Unity needs two assets to work together:

  • a script that is attached to the camera and implements OnRenderImage
  • a shader that is applied when we rerender the scene image in OnRenderImage

The biggest advantage of this technique is that it doesn’t require changes to the content of your scene. But it may be a bit overkill for only adding a fading border.

Postprocessing shader with red/green output

Postprocessing final result

BorderFadePostProcess.cs

using UnityEngine;
public class BorderFadePostProcess : MonoBehaviour
{
    [Range(0, 500)]
    public float borderWidth = 100;
    private Material material;
    void Awake()
    {
        // Create a Material using the FadeBorder shader
        material = new Material(Shader.Find("Hidden/FadeBorder"));
    }
    // Postprocess the image
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        material.SetFloat("_borderWidth", borderWidth);
        
        // Rerender the scene on a screen aligned quad with the given material/shader
        Graphics.Blit(source, destination, material);
    }
}

Fadeborder.shader

Shader "Hidden/FadeBorder"
{
 Properties
 {
  _MainTex ("Texture", 2D) = "white" {}
 }
 SubShader
 {
  // No culling or depth
  Cull Off ZWrite Off ZTest Always
  Pass
  {
   CGPROGRAM
   #pragma vertex vert
   #pragma fragment frag
   
   #include "UnityCG.cginc"
   struct appdata
   {
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
   };
   struct v2f
   {
    float2 uv : TEXCOORD0;
    float4 vertex : SV_POSITION;
   };
   v2f vert (appdata v)
   {
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
   }
   
   sampler2D _MainTex;
   
   // Width of the border in pixels
   float _borderWidth;
   float linearStep(float a, float b, float t)
   {
    return saturate((t - a) / (b - a));
   }
   fixed4 frag (v2f i) : SV_Target
   {
    fixed4 col = tex2D(_MainTex, i.uv);
    // Distance to border along x and y
    float2 distanceInPixels = (0.5 - abs(i.uv.xy - 0.5)) * _ScreenParams.xy;
    // Linear border fade
    float mask = linearStep(0, _borderWidth, min(distanceInPixels.x, distanceInPixels.y));
    //return col + lerp(fixed4(1, 0, 0, 1), fixed4(0, 1, 0, 1), mask);
    // Return masked color
    return col * mask;
   }
   ENDCG
  }
 }
}

Fading materials

Another solution is to let the material/shader do the fading out. The shader needs to calculate the screen position and then fade out the material at the edge of the screen. Each hologram in the scene needs to have a material applied with this edge fading in its shader.
Modifying the used shaders may result in improved performance, but this highly depends on what is visible in your scene. No separate postprocessing render is needed, but each shader uses a few more instructions and it requires all shaders in your scene to be modified.
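
A rough sketch of switching every renderer in the scene over to such a fading shader at startup; the shader lookup name is an assumption, so use the name declared in the Standard_BorderFade shader from the GitHub project:

using UnityEngine;

public class ApplyBorderFadeMaterials : MonoBehaviour
{
    void Start()
    {
        // The shader name is an assumption; use the name declared at the top
        // of the Standard_BorderFade shader in the GitHub project.
        var borderFadeShader = Shader.Find("Custom/Standard_BorderFade");
        if (borderFadeShader == null) return;

        // Switch every renderer in the scene over to the border fading shader.
        foreach (var renderer in FindObjectsOfType<Renderer>())
        {
            foreach (var material in renderer.materials)
            {
                material.shader = borderFadeShader;
            }
        }
    }
}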

Material fade with red/green output

Material fade final result

See GitHub project link for the source code of the Standard_BorderFade shader. It’s a modification of the Standard shader from the Holotoolkit.

Screenspace border

The third solution is rendering a screen-aligned border that fades from transparent black to opaque black. It may sound a bit old skool, but it does the job with minimal performance impact and no need to modify the content of your scene.


Screenspace border with red/green output

Screenspace border final result

Borderdrawer.cs

using UnityEngine;
public class BorderDrawer : MonoBehaviour
{
    [Range(0, 500)]
    public float borderWidth = 100;
    private Material material;
    private Color OutsideColor = new Color(0, 0, 0, 1);
    private Color InsideColor = new Color(0, 0, 0, 0);
    private void Start()
    {
        // Create a Material using the Unlit/VertexColor shader
        material = new Material(Shader.Find("Unlit/VertexColor"));
    }

    void OnPostRender()
    {
        // HoloLens resolution 1280 x 720 
        // HoloLens screen/safe area 1268 x 720
        float left = (float)borderWidth / (float)Screen.width;
        float bottom = (float)borderWidth / (float)Screen.height;
        float right = 1 - left;
        float top = 1 - bottom;
        GL.PushMatrix();
        material.SetPass(0);
        GL.LoadOrtho();
        GL.Begin(GL.TRIANGLE_STRIP);
        GL.Color(OutsideColor);
        GL.Vertex3(0.0F, 0.0F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(left, bottom, 0);
        GL.Color(OutsideColor);
        GL.Vertex3(1F, 0F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(right, bottom, 0);
        GL.Color(OutsideColor);
        GL.Vertex3(1F, 1F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(right, top, 0);
        GL.Color(OutsideColor);
        GL.Vertex3(0F, 1F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(left, top, 0);
        GL.Color(OutsideColor);
        GL.Vertex3(0F, 0F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(left, bottom, 0);
        GL.End();
        GL.PopMatrix();
    }
}

Github

The source code for these three techniques is available from this Github repository

More

HoloLens scanning effect in Unity

In a previous blog post I talked about my attempt to rebuild the HoloLens scanning effect as shown in this video. After following the HoloLens Academy tutorials I decided to see how easily my existing shader could be integrated in Unity. It turned out that only a minimal amount of plumbing was needed.

HoloLens room scan

I took the project files from the HoloLens course on spatial mapping (Holograms 230). That course explains how you can apply your own material and a custom shader to the mesh that is generated by the spatial mapper. For quick iterations you can even load a previously saved room mesh. I added a new unlit shader and a material using it. This material is used by the Spatial Mapping Manager script to apply to the mesh coming from the spatial mapper.

Unity shader variables

Most of the plumbing came down to defining variables and using them in the shader. The main animation is driven by the global time. Unity provides this as the built-in vector variable _Time, where the y-component contains the elapsed time in seconds. I added a few variables to control the look and behavior of the effect, like Main Color, Speed, Triangles Scale and Range Scale.

The center of the effect is also a configurable variable. It could be updated by doing a Raycast intersection as explained in the HoloLens Academy tutorials. Currently the effect keeps pulsating every 5 seconds. To only trigger the effect on an event, the global time could be replaced by a separately controlled progress variable.
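
A sketch of that last idea; the ScanEffectTrigger script and the _Progress and _Center property names are hypothetical, not part of the project:

using UnityEngine;

public class ScanEffectTrigger : MonoBehaviour
{
    [SerializeField] private Material scanMaterial;
    [SerializeField] private float duration = 5f;

    private float progress = -1f; // negative means idle

    // Call this from an event (e.g. a tap on the spatial mesh) to start one pulse.
    public void Trigger(Vector3 center)
    {
        scanMaterial.SetVector("_Center", center);   // hypothetical property name
        progress = 0f;
    }

    void Update()
    {
        if (progress < 0f) return;

        progress += Time.deltaTime / duration;
        // Drive the shader with our own progress value instead of _Time.y.
        scanMaterial.SetFloat("_Progress", Mathf.Clamp01(progress)); // hypothetical property name

        if (progress >= 1f) progress = -1f; // stop after one pulse
    }
}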

Differences with original effect

To create the effect of triangles walking across the floor and up the walls the shader needs to calculate uv coordinates based on a world location, preferably with as few seams as possible. I used the horizontal distance to the configured center point and added the vertical distance instead of using the direct distance to the center point. This works reasonably well on connected surfaces, but note that it is not a real walk across the topology of the mesh.

The effect in the original video has a slightly different border effect that has some more distortions and a different color. I experimented with mimicking that effect, but decided to leave that out. I used the effect in an interactive installation where I preferred a stronger border that looked like a wave was expanding outwards.

The source code of the project is available on Github.

HoloLens Shader Pack

A new version of this shader was optimized for running on the actual HoloLens device. This shader and many others are available in the HoloLens Shader Pack that is available on the Unity Asset Store.
More

Interactive Luminous Carpets with Kinect


The boundaries between the digital and physical world around us are fading. Traditionally passive objects like mirrors, windows, coffee tables, clothes and even plants are now becoming interactive. One of the innovations in this area is Luminous Carpets. It is a carpet with embedded LED lights. The product is a collaboration between Philips and Dutch carpet maker Desso.

Making it interactive

Although the carpet itself does not contain sensors to detect who is standing where, it is possible to add external sensors to detect the presence of people. I was invited by Luminous Carpets to set up an experiment to see if their demo floor could become interactive by adding a Kinect sensor. Here’s a video that shows some of the results of the single-day trial.

The Kinect was positioned at the front of the floor at a relatively large distance (> 3 meters) to be able to see all of the floor area. After setting up the equipment, a calibration had to be applied to line up the Kinect coordinate frame with the coordinate frame of the Luminous Carpets demo floor. The demo application was a WPF based application that generated a small grayscale image of 112 x 56 pixels in the top left corner of the screen. The floor controller could then use this as an HDMI input for controlling the pixels of the luminous carpet.

With the Kinect in a frontal view, body tracking can be used to build interactivity that understands joint positions. Individual joints can be projected onto the floor, taking into account their height above the floor. This can, for instance, give the user the feeling of holding a lantern that lights up the floor. This positioning of the Kinect also implies that users should face the sensor for the best tracking results. For interaction with a floor, however, there is no directional preference other than the one suggested by the surrounding walls.
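
As an illustration of that projection step, here is a sketch only: the calibration matrix, the floor dimensions and the mapping to the 112 x 56 grid are assumptions based on the setup described above, not the original demo code.

using System.Numerics;

public static class FloorProjection
{
    // Projects a Kinect camera-space joint position onto the floor grid.
    // floorFromKinect is the calibration transform from Kinect space to floor space
    // (floor plane at y = 0, x/z in meters).
    public static (int x, int y) ProjectToFloorPixel(
        Vector3 jointInKinectSpace,
        Matrix4x4 floorFromKinect,
        float floorWidthMeters,
        float floorDepthMeters,
        int gridWidth = 112,
        int gridHeight = 56)
    {
        Vector3 onFloor = Vector3.Transform(jointInKinectSpace, floorFromKinect);

        // Drop the height component and map meters to carpet grid pixels.
        int x = (int)(onFloor.X / floorWidthMeters * gridWidth);
        int y = (int)(onFloor.Z / floorDepthMeters * gridHeight);
        return (x, y);
    }
}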

Using the Kinect in a frontal setup also means that multiple users can occlude each other. Typically this is prevented by mounting a sensor that looks down from a large height and using blob detection and tracking to build the interactivity.

Natural surface interactions

Blending the digital and the real world can go beyond using hand-held devices like iPad or head-mounted devices like HoloLens. Any surface that surrounds us in our everyday lives can be made interactive by combining sensors with visual, tactile or auditory feedback. The surfaces that have the highest potential for natural interaction are those that we are already used to interacting with on a daily basis. And if we don’t have to equip ourselves with wearables it can feel even more natural.

More

Build your own holographic studio with RoomAlive Toolkit


With the current wave of new Augmented Reality devices like Microsoft HoloLens and Meta Glasses the demand for holographic content is rising. Capturing a holographic scene can be done with 3D depth cameras like Microsoft Kinect or the Intel RealSense. When combining multiple 3D cameras a subject can be captured from different sides. One challenge in achieving this is the calibration of multiple 3D cameras so their recorded geometry can be aligned. In this article I will show you how I used the RoomAlive Toolkit to calibrate two Kinects. The resulting calibration is used for rendering a live 3D scene. The source code for this application is available on GitHub.

Calibration with RoomAlive Toolkit

Calibration of multiple Kinects typically requires a manual process of recording a collection of calibration points. With the RoomAlive Toolkit it is possible to automatically calibrate multiple projectors and Kinects. Since projectors are used to connect multiple Kinects during calibration, a minimum of one projector is required. The only other requirement is that the Kinects can see a fair amount of the projected calibration images. When the calibration is done the projector is no longer needed.

Holographic studio setup

In the picture above you see the geometry and color image that was recorded by two Kinect V2’s. There’s an overlap in the middle. Two magenta frustums indicate where the Kinects were positioned and a purple frustum shows the position of the projector. Note that both Kinects were placed in portrait mode for better coverage of the room. I placed a few extra items to help the solver algorithm in the RoomAlive Toolkit solve the calibration.

Collecting data from multiple Kinects

Since each Kinect V2 needs to be attached to a separate PC, we need a way to collect the depth and color data on the main system. The RoomAlive Toolkit also contains a networking solution for doing that. Each system runs a KinectServer WCF service that provides the raw Kinect color and depth data. All data is updated as it arrives, without any further attempt at synchronization. This results in some artifacts at the seams of the captured scenes.

Rendering live Kinect data

The simplest form of rendering is rendering the raw pieces of geometry of each Kinect. RoomAlive Toolkit contains a SharpDX based example that shows how to do this. It also shows how to filter the depth and how to apply lens distortion correction. Using this information I built a new application based on SharpDX Toolkit (XNA-like layer) to do the rendering. The application contains a few extras so you can tweak a few of the rendering parameters at runtime. I added a clipping cylinder so you can easily reduce the rendered geometry to a single person or object as seen in the video. And just for fun I added a shader to give the live 3D scene a more cliché holographic look. More info about the parameters and controls can be found on the Holographic Studio GitHub page.

Other demos of real time video capture

Holoportation by Microsoft Research
3D Video Capture with Three Kinects by Oliver Kreylos

More