Blowing bubbles with Shader Graph

I ran into a biweekly tech art challenge by Harry Alisavakis with the theme Watercolors. I decided to participate and use Shader Graph in Unity to render some soap bubbles.

Reflections & Iridescence

Real life soap bubble

The most visible features of a soap bubble are its reflections and its rainbow-like colours. You can see reflections from both the front and the back side of the bubble surface, which results in a mirrored reflection of the bubble's environment.

The colours are caused by interference between the light reflected from the outside and the inside of the thin bubble film. This phenomenon is called iridescence; it can also be found in seashells, butterflies and insects, and you may know it from the surface of a CD. When you inspect the bubble surface closely you can see a complex pattern of dancing colours, caused by variations in the film thickness due to complex fluid interactions known as the Marangoni effect.

Reflections in Shader Graph

For the reflections I created a simple room with two fake windows and used a reflection probe to bake it into a cubemap. Alternatively you could use an HDRI image captured in real life.

Bubbles need windows to reflect

A basic PBR graph with the Workflow set to Specular, Surface to Transparent and a Smoothness of 1 will already give you a nice reflective material. Add in a bit of Fresnel effect so the reflections are mostly noticeable towards the silhouette of the bubble, and here's the result.

Shaders are made easy with Shader Graph
Cubemap reflection with Fresnel falloff
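For reference, the Fresnel Effect node boils down to roughly this bit of HLSL (a sketch; worldNormal, viewDir and _FresnelPower are assumed names for the node inputs):

// Rough HLSL equivalent of Shader Graph's Fresnel Effect node.
// worldNormal, viewDir and _FresnelPower are assumed names for the node inputs.
float fresnel = pow(1.0 - saturate(dot(normalize(worldNormal), normalize(viewDir))), _FresnelPower);
// Blending the reflection strength with this value keeps the mirror look near the silhouette
// and tones it down at the center of the bubble.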

Iridescence in Shader Graph

I took a more artistic approach to simulating the bubble surface variations by blending two layers of scrolling noise. This misses the swirls that you typically see in a real bubble, but that won't be noticeable at a distance. I added a vertical falloff to simulate that the film is a bit thicker at the bottom of the bubble. The surface thickness variations result in an animated grayscale value, and a gradient lookup is then used to turn that value into the iridescence colour. The gradient is based on a paper by Andrew Glassner.

The iridescence part hooks into the specular colour of the PBR node
Front face reflection and iridescence
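As a rough HLSL sketch of what this part of the graph computes (the actual effect is built from Shader Graph nodes; _NoiseTex, _GradientTex, the scroll speeds and _VerticalFalloff are assumed properties):

// Hedged HLSL sketch of the iridescence lookup; the real version is a set of Shader Graph nodes.
sampler2D _NoiseTex;                  // tiling grayscale noise
sampler2D _GradientTex;               // 1D gradient based on Glassner's iridescence paper
float2 _ScrollSpeedA, _ScrollSpeedB;  // scroll directions/speeds of the two noise layers
float _VerticalFalloff;               // extra thickness towards the bottom of the bubble

float3 IridescenceColor(float2 uv, float height01)
{
    // Two layers of scrolling noise approximate the moving thickness variations
    float noiseA = tex2D(_NoiseTex, uv + _Time.y * _ScrollSpeedA).r;
    float noiseB = tex2D(_NoiseTex, uv * 2.0 + _Time.y * _ScrollSpeedB).r;
    float thickness = 0.5 * (noiseA + noiseB);

    // The film is a bit thicker near the bottom of the bubble (height01: 0 = bottom, 1 = top)
    thickness = saturate(thickness + (1.0 - height01) * _VerticalFalloff);

    // The animated grayscale thickness indexes a 1D gradient to get the iridescence colour,
    // which is fed into the specular colour of the PBR master node
    return tex2D(_GradientTex, float2(thickness, 0.5)).rgb;
}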

Double sided materials

To create a double-sided material that handles front-facing and back-facing surfaces differently you can use the IsFrontFace node and branch into different parts of the shader based on its value. This works well with opaque materials and can be done in a single pass. Below you see a simple example that displays the result on an open cylinder.

Opaque material marked as a Two Sided
Two-sided opaque material ShaderGraph
Open cylinder with two-sided material applied
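In a hand-written shader the equivalent of that IsFrontFace branch would look roughly like this (a sketch; _FrontColor and _BackColor are assumed properties, and the material needs Cull Off to be two-sided):

// Sketch of the IsFrontFace branch in plain HLSL (requires Cull Off so back faces are rasterized).
fixed4 _FrontColor;
fixed4 _BackColor;

fixed4 frag(v2f i, bool isFrontFace : SV_IsFrontFace) : SV_Target
{
    // The rasterizer tells us which side of the triangle we are shading,
    // so the inside and the outside of the open cylinder can get different colours
    return isFrontFace ? _FrontColor : _BackColor;
}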

Double sided transparency

With transparent materials we typically want to do a separate backface pass and frontface pass to at least have a coarse way of sorting the model surfaces. That is easy to do in a ShaderLab (hand-coded) shader, but a bit harder when using Shader Graph.
Since the Universal Render Pipeline (URP) only supports single-pass materials by default, we need a different trick. We could create a copy of the mesh and render it with a separate material for the back side, but that would clutter the scene tree, and it would get even worse if the geometry were dynamic. Instead I chose to add a second material that uses the same Shader Graph shader, but with a RenderFront flag that toggles between front- and backface rendering.
Note that Unity shows a warning and advises using multiple shader passes, which are not supported in URP. 🤷‍♂️

Two-pass mesh rendering with different materials

The Shader Graph uses the RenderFront flag combined with the IsFrontFace node to determine whether the front or the back face needs to be rendered. I used Alpha Clipping to prevent rendering of the back face when frontface rendering is active. Note that for backface rendering the surface normal needs to be flipped.

Two-sided reflections part
Bubble with two-sided reflection and iridescence
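Written out by hand, the per-material logic is roughly this (a sketch; _RenderFront is the flag set per material, 1 on the front-pass material and 0 on the back-pass material, and ShadeBubble stands in for the reflection and iridescence shading):

// Sketch of the front/back selection used by both materials.
float _RenderFront;   // 1 = this material renders front faces, 0 = back faces

fixed4 frag(v2f i, bool isFrontFace : SV_IsFrontFace) : SV_Target
{
    // Alpha clip away the faces this pass is not responsible for
    bool renderFront = _RenderFront > 0.5;
    clip((isFrontFace == renderFront) ? 1.0 : -1.0);

    // For the backface pass the surface normal needs to be flipped before shading
    float3 normal = isFrontFace ? i.worldNormal : -i.worldNormal;
    return ShadeBubble(normal, i);   // placeholder for the reflection + iridescence part
}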


Augmented Reality avatar on a web page

Augmented reality on the web is happening and it’s becoming easier to create! Below you will find a small test of an avatar model I created online and included in this post. The avatar can be viewed in your environment on ARCore and ARKit enabled devices.

ReadyPlayer.me avatar

I used ReadyPlayer.me to create a personal 3D avatar. On that website you can use a selfie to create a 3D model that resembles you. Afterwards you can adjust the generated choices, and finally you can download the model as a .glb file.
A .glb file is the binary version of a glTF file. glTF is a specification for the efficient transmission and loading of 3D scenes and models by applications.

Google modelviewer

I used Google’s modelviewer to embed the glb model file in the webpage below and enabled AR viewing.

Here’s the source of the snippet I used:

<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.js"></script>
<script nomodule src="https://unpkg.com/@google/model-viewer/dist/model-viewer-legacy.js"></script>
<model-viewer src="avatar.glb" ios-src="avatar.usdz" ar ar-modes="webxr scene-viewer quick-look fallback" ar-scale="auto" alt="Readyplayer.me avatar" auto-rotate camera-controls></model-viewer>

If AR mode is available, an AR button will be visible in the bottom right corner of the viewer. Depending on which mode is available on your device you will go into AR or open a model viewer.

Here’s the result of running it on an Android phone with ARCore support.

Universal Scene Description

Running the demo on an iPad was a bit more work than I anticipated. Apple only supports models in the .usdz format (based on Pixar’s Universal Scene Description). As you can see in the modelviewer declaration above, there is a separate ios-src attribute for use on iOS devices.

I could not find a simple tool (running on Windows) to convert the .glb to .usdz; there seem to be better solutions on iOS. In the end I found a workaround: import the .glb into Blender, save a .blend file, import that .blend file into Unity, and finally export the model to .usdz.


Demos to install on your HoloLens 2

So you finally laid hands on a brand new HoloLens 2, but you discovered that there are only a few apps preinstalled on the device. Most of them are holographic versions of common Windows apps like Mail, Calendar, Photos or Microsoft Edge and do no justice to the capabilities of the device.

You run into the same problem in the Microsoft Store. Most apps that are available for desktops are Universal Windows apps and can technically be installed on HoloLens. This does not mean that you should.

There are apps that were specifically developed for HoloLens, but most of the apps that are currently available were developed for HoloLens 1. Furthermore a lot of HoloLens apps in the Store are developer prototypes that do not adhere to the Microsoft design guidelines all that well.

Below you will find a list of apps that were developed with HoloLens 2 in mind (mostly by Microsoft).

HoloLens Tips

A small app that introduces you to basic hand interactions by picking up a holographic flower, and scaling and rotating it. There is also a section about voice commands.

Download HoloLens Tips from the Microsoft Store

HoloLens Playground

An application that is similar to the HoloLens Tips application, but a bit more playful. It is built by Microsoft Design Labs. The most striking demo is the one where you interact with a holographic hummingbird. Reach out your hand and the hummingbird will hover above it. There’s also a piano and eye tracking demo that you can select from a hand menu.

Download HoloLens Playground from the Microsoft Store

Surfaces

Another application by Microsoft Design Labs. This one lets you play with nine different interactive surfaces, each creating different visual and sound effects. There’s a hand menu that allows you to switch between the different scenes. Some interactions reminded me a bit of Magic Leap’s Tonandi although the experiments in Surfaces are of a much smaller scale.

Surfaces

Download Surfaces from the Microsoft Store
Download source code from GitHub

Mixed Reality Toolkit examples

The Mixed Reality Toolkit is the go-to library for HoloLens developers. The latest iteration applies the design guidelines that Microsoft compiled for Mixed Reality applications. The examples let you familiarize yourself with the available interactions and their visual and audio design. A prebuilt app that contains most of the examples is available from GitHub. Note that you will need to use the HoloLens Device Portal to install it on your device, and for HoloLens 2 you will need to download the ARM version.

Hand Interaction Examples

Download the prebuilt app from GitHub
Download source code from GitHub

Galaxy Explorer

Galaxy Explorer is an open-source project that Microsoft developed as an example for developers. The version you can download from the Microsoft Store (with a female voice-over) is the old version with HoloLens 1 air tap interaction.

Galaxy Explorer - Mixed Reality | Microsoft Docs

The project was updated to work with the hand interaction of HoloLens 2 (with a male voice-over), but currently you will have to build it yourself if you want to run it on your HoloLens.

Download Galaxy Explorer from Microsoft Store (HoloLens 1 version)
Download the source code from GitHub (HoloLens 2 version)

Periodic Table

Another open-source application developed by Microsoft Design Labs. This app was initially developed for HoloLens 1; the design and development process was described here. Later it was ported to HoloLens 2 using the new MRTK, and that migration process is described here. A prebuilt version of the app is available from GitHub, but you will need to use the HoloLens Device Portal to install it.

Periodic Table of the Elements

Download prebuilt app from GitHub
Download source code from GitHub


ARCore supported devices; a detailed list of phones & tablets

When you want to use an Augmented Reality app that depends on Google’s ARCore (AKA “Google Play Services for AR”) you will need to know which devices support it. There’s the official list of ARCore supported devices, but that only shows brief names of the supported devices; if you need more details you will have to search for them. That is not very efficient with an ever-growing list of supported devices, especially if you want to find out which tablets currently support ARCore apps.

The official list with extra details

There’s a more detailed list available from the Google Play Console, but to be able to download that you will need to have a developer account and upload an app that uses ARCore. That is quite a bit of friction if all you are looking for is which hardware you need for ARCore apps to work.

So I decided to bite the bullet and upload a Unity app with ARCore support to my Google Play developer account. I cloned Unity’s arfoundation-samples from GitHub, built the app and made an internal release in the Play Console. After that I was able to access the Device Catalog under Release Management. As you can see below, the app was supported by 216 of a total of 13,579 Android devices back in January 2020.

The Download Device List button lets you download a text file (.csv) that also describes details like Form Factor (PC/Phone/Tablet), the System On Chip, Screen Sizes, Screen Densities and more.

The downloaded devicelist.csv can be found on GitHub here.

ARCore supported tablets

A quick filter of the ARCore supported devicelist brings up the tablets that currently support ARCore (September 2020):

  • Acer Chromebook Tab 10
  • LG G Pad 5 10.1 FHD
  • Samsung Galaxy Tab S3
  • Samsung Galaxy Tab S4
  • Samsung Galaxy Tab S5e
  • Samsung Galaxy Tab S6
  • Samsung Galaxy Tab S7
  • Samsung Galaxy Tab Active Pro

The tablets-only devicelist.csv can be found on GitHub here.

Depth API support

In June 2020 Google officially introduced the Depth API, which allows developers to retrieve a depth map from their phone. Depth maps can be useful for generating occlusions of virtual objects or for scanning your environment. Not all ARCore supported devices support the Depth API. To see which ones do, you can add a filter to the devices shown in the device catalogue in the Google Play Console: Add filter, select System Feature, then search for and select com.google.ar.core.depth.

The list of ARCore devices that also support the Depth API can be found on GitHub here.


Softening the HoloLens FOV border

To hide the limited Field of View of a Mixed Reality headset like the HoloLens or the Magic Leap you can fade out the holograms at the border of the view. I will discuss three possible techniques with different advantages and disadvantages.

Note that all techniques have the same visual result in the HoloLens. However, when recorded with Mixed Reality Capture the border effect seems to fall largely outside of the MRC camera’s field of view.

Postprocessing effect

The most modern solution is applying a post-processing effect. Post-processing effects in Unity can be a heavy hit on fillrate, which is why Microsoft advises against using them on HoloLens. The Magic Leap has a bit more graphics power to spend, so it may be a viable solution on that device. Typically a post-processing effect works by rendering the scene to a texture and then re-rendering that texture on a screen-aligned quad with a filter effect (e.g. grayscale, bloom, vignette) applied during that re-render. This can be done multiple times in succession, but since each re-render means touching every screen pixel, it will cost you fillrate.

Post-processing a scene in Unity needs two assets to work together:

  • a script that is attached to the camera and implements OnRenderImage
  • a shader that is applied when we rerender the scene image in OnRenderImage

The biggest advantage of this technique is that it doesn’t require changes to the content of your scene, but it may be a bit overkill for just adding a fading border.

Postprocessing shader with red/green output

Postprocessing final result

BorderFadePostProcess.cs

using UnityEngine;
public class BorderFadePostProcess : MonoBehaviour
{
    [Range(0, 500)]
    public float borderWidth = 100;
    private Material material;
    void Awake()
    {
        // Create a Material using the FadeBorder shader
        material = new Material(Shader.Find("Hidden/FadeBorder"));
    }
    // Postprocess the image
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        material.SetFloat("_borderWidth", borderWidth);
        
        // Rerender the scene on a screen aligned quad with the given material/shader
        Graphics.Blit(source, destination, material);
    }
}

Fadeborder.shader

Shader "Hidden/FadeBorder"
{
 Properties
 {
  _MainTex ("Texture", 2D) = "white" {}
 }
 SubShader
 {
  // No culling or depth
  Cull Off ZWrite Off ZTest Always
  Pass
  {
   CGPROGRAM
   #pragma vertex vert
   #pragma fragment frag
   
   #include "UnityCG.cginc"
   struct appdata
   {
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
   };
   struct v2f
   {
    float2 uv : TEXCOORD0;
    float4 vertex : SV_POSITION;
   };
   v2f vert (appdata v)
   {
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
   }
   
   sampler2D _MainTex;
   
   // Width of the border in pixels
   fixed _borderWidth;
   float linearStep(float a, float b, float t)
   {
    return saturate((t - a) / (b - a));
   }
   fixed4 frag (v2f i) : SV_Target
   {
    fixed4 col = tex2D(_MainTex, i.uv);
    // Distance to border along x and y
    float2 distanceInPixels = (0.5 - abs(i.uv.xy - 0.5)) * _ScreenParams.xy;
    // Linear border fade
    float mask = linearStep(0, _borderWidth, min(distanceInPixels.x, distanceInPixels.y));
    //return col + lerp(fixed4(1, 0, 0, 1), fixed4(0, 1, 0, 1), mask);
    // Return masked color
    return col * mask;
   }
   ENDCG
  }
 }
}

Fading materials

Another solution is to let the material/shader do the fading out. The shader needs to calculate the screen position of each fragment and fade out the material near the edge of the screen. Each hologram in the scene needs a material with this edge fading in its shader.
Modifying the shaders you already use may improve performance, but this highly depends on what is visible in your scene. No separate post-processing render is needed, but each shader uses a few more instructions and all shaders in your scene have to be modified.

Material fade with red/green output

Material fade final result

See the GitHub project link for the source code of the Standard_BorderFade shader. It’s a modification of the Standard shader from the HoloToolkit.
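As a much simpler illustration of the idea (not the Standard_BorderFade source, just an unlit sketch with an assumed _BorderWidth property), a material-level fade could look like this:

Shader "Hidden/UnlitBorderFade"
{
 Properties
 {
  _MainTex ("Texture", 2D) = "white" {}
  _BorderWidth ("Border width in pixels", Float) = 100
 }
 SubShader
 {
  Tags { "Queue"="Transparent" "RenderType"="Transparent" }
  Blend SrcAlpha OneMinusSrcAlpha
  ZWrite Off
  Pass
  {
   CGPROGRAM
   #pragma vertex vert
   #pragma fragment frag
   #include "UnityCG.cginc"
   struct appdata
   {
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
   };
   struct v2f
   {
    float2 uv : TEXCOORD0;
    float4 screenPos : TEXCOORD1;
    float4 vertex : SV_POSITION;
   };
   sampler2D _MainTex;
   float _BorderWidth;
   v2f vert (appdata v)
   {
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    // Pass the screen position along so the fragment shader knows where it ends up on screen
    o.screenPos = ComputeScreenPos(o.vertex);
    o.uv = v.uv;
    return o;
   }
   fixed4 frag (v2f i) : SV_Target
   {
    fixed4 col = tex2D(_MainTex, i.uv);
    // Perspective divide gives normalized screen coordinates in [0,1]
    float2 screenUV = i.screenPos.xy / i.screenPos.w;
    // Distance to the nearest screen border in pixels, same idea as the post-processing shader
    float2 distanceInPixels = (0.5 - abs(screenUV - 0.5)) * _ScreenParams.xy;
    // Fade this material's alpha instead of the final image
    col.a *= saturate(min(distanceInPixels.x, distanceInPixels.y) / _BorderWidth);
    return col;
   }
   ENDCG
  }
 }
}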

Screenspace border

The third solution is rendering a screen-aligned border that fades from transparent black to opaque black. It may sound a bit old school, but it does the job with minimal performance impact and no need to modify the content of your scene.


Screenspace border with red/green output

Screenspace border final result

Borderdrawer.cs

using UnityEngine;
public class BorderDrawer : MonoBehaviour
{
    [Range(0, 500)]
    public float borderWidth = 100;
    private Material material;
    private Color OutsideColor = new Color(0, 0, 0, 1);
    private Color InsideColor = new Color(0, 0, 0, 0);
    private void Start()
    {
        // Create a Material using the Unlit/VertexColor shader
        material = new Material(Shader.Find("Unlit/VertexColor"));
    }

    void OnPostRender()
    {
        // HoloLens resolution 1280 x 720 
        // HoloLens screen/safe area 1268 x 720
        float left = (float)borderWidth / (float)Screen.width;
        float bottom = (float)borderWidth / (float)Screen.height;
        float right = 1 - left;
        float top = 1 - bottom;
        GL.PushMatrix();
        material.SetPass(0);
        GL.LoadOrtho();
        GL.Begin(GL.TRIANGLE_STRIP);
        GL.Color(OutsideColor);
        GL.Vertex3(0.0F, 0.0F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(left, bottom, 0);
        GL.Color(OutsideColor);
        GL.Vertex3(1F, 0F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(right, bottom, 0);
        GL.Color(OutsideColor);
        GL.Vertex3(1F, 1F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(right, top, 0);
        GL.Color(OutsideColor);
        GL.Vertex3(0F, 1F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(left, top, 0);
        GL.Color(OutsideColor);
        GL.Vertex3(0F, 0F, 0);
        GL.Color(InsideColor);
        GL.Vertex3(left, bottom, 0);
        GL.End();
        GL.PopMatrix();
    }
}

GitHub

The source code for these three techniques is available from this GitHub repository.


HoloLens scanning effect in Unity

In a previous blog post I talked about my attempt to rebuild the HoloLens scanning effect as shown in this video. After following the HoloLens Academy tutorials I decided to see how easily my existing shader could be integrated into Unity. It turned out that only a minimal amount of plumbing was needed.

HoloLens room scan

I took the project files from the HoloLens course on spatial mapping (Holograms 230). That course explains how you can apply your own material and a custom shader to the mesh that is generated by the spatial mapper. For quick iterations you can even load a previously saved room mesh. I added a new unlit shader and a material using it. This material is used by the Spatial Mapping Manager script to apply to the mesh coming from the spatial mapper.

Unity shader variables

Most of the plumbing came down to defining variables and using them in the shader. The main animation is driven by the global time. Unity provides this as a built-in vector variable _Time, where the y-component contains the elapsed time in seconds. I added a few variables to control the look and behavior of the effect, like Main Color, Speed, Triangles Scale and Range Scale.

The center of the effect is also a configurable variable. It could be updated by doing a raycast intersection as explained in the HoloLens Academy tutorials. Currently the effect keeps pulsating every 5 seconds. To only trigger the effect on an event, the global time could be replaced by a separately controlled progress variable.
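As a sketch, the pulse could be driven like this (the property names _Speed and _RangeScale are the ones mentioned above; the exact math in the actual shader may differ):

// Sketch of the time-driven pulse (actual shader code may differ).
float _Speed;        // pulses per second; 0.2 repeats the scan every 5 seconds
float _RangeScale;   // how far the scan ring expands in world units

float CurrentScanRange()
{
    // frac() turns the global time into a repeating 0..1 progress value.
    // Replace _Time.y * _Speed with a script-controlled progress variable
    // if the effect should only run when triggered by an event.
    return frac(_Time.y * _Speed) * _RangeScale;
}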

Differences with original effect

To create the effect of triangles walking across the floor and up the walls, the shader needs to calculate uv coordinates based on a world location, preferably with as few seams as possible. Instead of using the direct distance to the configured center point, I used the horizontal distance to that point plus the vertical distance. This works reasonably well on connected surfaces, but note that it is not a real walk across the topology of the mesh.
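In HLSL the distance I used looks roughly like this (a sketch with assumed names):

// Sketch of the distance that drives the walking-triangle effect (assumed names).
float ScanDistance(float3 worldPos, float3 center)
{
    // Horizontal distance across the floor plus the vertical distance up the walls,
    // instead of the straight-line 3D distance to the center point.
    float horizontal = length(worldPos.xz - center.xz);
    float vertical = abs(worldPos.y - center.y);
    return horizontal + vertical;
}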

The original video has a slightly different border effect with more distortion and a different color. I experimented with mimicking it, but decided to leave that out. I used the effect in an interactive installation where I preferred a stronger border that looked like a wave expanding outwards.

The source code of the project is available on GitHub.

HoloLens Shader Pack

A new version of this shader was optimized for running on the actual HoloLens device. This shader and many others are included in the HoloLens Shader Pack, which is available on the Unity Asset Store.

Interactive Luminous Carpets with Kinect

carpetheader

The boundaries between the digital and physical world around us are fading. Traditionally passive objects like mirrors, windows, coffee tables, clothes and even plants are now becoming interactive. One of the innovations in this area is Luminous Carpets. It is a carpet with embedded LED lights. The product is a collaboration between Philips and Dutch carpet maker Desso.

Making it interactive

Although the carpet itself does not contain sensors to detect who is standing where, it is possible to add external sensors to detect the presence of people. I was invited by Luminous Carpets to set up an experiment to see if their demo floor could become interactive by adding a Kinect sensor. Here’s a video that shows some of the results of the single-day trial.

The Kinect was positioned at the front of the floor at a relatively large distance (over 3 meters) so it could see the whole floor area. After setting up the equipment, a calibration had to be applied to line up the Kinect coordinate frame with the coordinate frame of the Luminous Carpets demo floor. The demo application was a WPF-based application that generated a small grayscale image of 112 x 56 pixels in the top left corner of the screen. The floor controller could then use this as an HDMI input for controlling the pixels of the luminous carpet.

With the Kinect in a frontal view, body tracking can be used to build interactivity that understands joint positions. Individual joints can be projected onto the floor, taking into account their height above it. This can for instance be used to give users the feeling that they are holding a lantern that lights up the floor. This positioning of the Kinect also implies that users should face the sensor for the best tracking results. For interaction with a floor, however, there is no inherent directional preference other than the one suggested by the surrounding walls.

Using the Kinect in a frontal setup may also cause multiple users to occlude each other. Typically this is prevented by using a sensor that looks down from a large height, combined with blob detection and tracking to build the interactivity.

Natural surface interactions

Blending the digital and the real world can go beyond hand-held devices like the iPad or head-mounted devices like the HoloLens. Any surface that surrounds us in our everyday lives can be made interactive by combining sensors with visual, tactile or auditory feedback. The surfaces with the highest potential for natural interaction are those that we are already used to interacting with on a daily basis. And if we don’t have to equip ourselves with wearables, it can feel even more natural.


Build your own holographic studio with RoomAlive Toolkit

holoheader

With the current wave of new Augmented Reality devices like the Microsoft HoloLens and the Meta glasses, the demand for holographic content is rising. Capturing a holographic scene can be done with 3D depth cameras like the Microsoft Kinect or the Intel RealSense. By combining multiple 3D cameras, a subject can be captured from different sides. One challenge in achieving this is calibrating the multiple 3D cameras so their recorded geometry can be aligned. In this article I will show you how I used the RoomAlive Toolkit to calibrate two Kinects. The resulting calibration is used for rendering a live 3D scene. The source code for this application is available on GitHub.

Calibration with RoomAlive Toolkit

Calibration of multiple Kinects typically requires a manual process of recording a collection of calibration points. With the RoomAlive Toolkit it is possible to automatically calibrate multiple projectors and Kinects. Since projectors are used to connect the Kinects during calibration, at least one projector is required, and the Kinects must be able to see a fair amount of the projected calibration images. When the calibration is done the projector is no longer needed.

Holographic studio setup

In the picture above you see the geometry and color image recorded by two Kinect V2s, with an overlap in the middle. Two magenta frustums indicate where the Kinects were positioned and a purple frustum shows the position of the projector. Note that both Kinects were placed in portrait mode for better coverage of the room. I placed a few extra items in the room to help the solver algorithm in the RoomAlive Toolkit solve the calibration.

Collecting data from multiple Kinects

Since each Kinect V2 needs to be attached to a separate PC, we need a way to collect the depth and color data on the main system. The RoomAlive Toolkit also contains a networking solution for this: each system runs a KinectServer WCF service that provides the raw Kinect color and depth data. All data is updated as it arrives, without any further attempt at synchronization, which results in some artifacts at the seams of the captured scenes.

Rendering live Kinect data

The simplest form of rendering is drawing the raw geometry from each Kinect. The RoomAlive Toolkit contains a SharpDX-based example that shows how to do this, including how to filter the depth data and how to apply lens distortion correction. Using this information I built a new application based on the SharpDX Toolkit (an XNA-like layer) to do the rendering. The application contains a few extras so you can tweak some of the rendering parameters at runtime. I added a clipping cylinder so you can easily reduce the rendered geometry to a single person or object, as seen in the video. And just for fun I added a shader to give the live 3D scene a more cliché holographic look. More info about the parameters and controls can be found on the Holographic Studio GitHub page.
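A clipping cylinder like that only takes a few lines in the pixel shader; the sketch below uses assumed parameter names and is not the actual Holographic Studio code:

// Hypothetical sketch of the clipping cylinder (plain HLSL, assumed parameter names).
float3 _CylinderCenter;   // point on the floor at the center of the cylinder
float  _CylinderRadius;   // radius in meters
float  _CylinderHeight;   // height in meters

void ClipToCylinder(float3 worldPos)
{
    // Discard fragments outside a vertical cylinder so only a single person or object remains
    float2 offset = worldPos.xz - _CylinderCenter.xz;
    bool outsideRadius = dot(offset, offset) > _CylinderRadius * _CylinderRadius;
    bool outsideHeight = worldPos.y < _CylinderCenter.y || worldPos.y > _CylinderCenter.y + _CylinderHeight;
    clip((outsideRadius || outsideHeight) ? -1.0 : 1.0);
}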

Other demos of real time video capture

Holoportation by Microsoft Research
3D Video Capture with Three Kinects by Oliver Kreylos


Keep your magic a secret; hiding your Kinect V2

header

Like any good magician trying to keep their tricks a secret, developers of interactive installations should try to hide the technology of their installation as well as possible. This will keep your audience wondering and make your installation more magical. In this article I explore how the Kinect can be masked or hidden behind a Magic Mirror.

Masking the Kinect

A bulky device like the Kinect V2 is hard to hide, especially if it needs a proper view of your audience. Since the color and depth sensors in the Kinect V2 only cover part of the front, it makes sense to hide the rest of the device behind a thin wall. If you only use the depth camera for body tracking, you can also cover the color camera and the microphone array. Since the front of the Kinect is covered with a flat piece of glass, it requires some tweaking to find out where exactly the sensors are. I created a template PDF for masking the Kinect V2: KinectV2Masks

depthcolor

Color camera, IR camera and lights

depthonly
IR camera and lights if you only need body tracking or depth data

Note that you can also buy predesigned skins for the Kinect for Xbox One online to let your Kinect blend into its environment better.

Hiding behind a Magic Mirror

With the rising popularity of magic mirror screens I also wondered if the Kinect could be hidden behind a two-way mirror (aka Magic Mirror). If you place the Kinect tightly against the glass it is possible to look through it, but the quality of the depth image is worse than that of an unobstructed Kinect. The quality of the image is highly dependent on the type of glass that is used; I tested this with a piece of acrylic and a few types of glass. Here are some screenshots from the Windows Store app Kinect Evolution.

reference

Regular Kinect view without glass

glass1

Kinect behind two way mirror type 1, fairly good color image, reasonable depth image and body recognized

glass2

Kinect behind two way mirror type 2, little darker color image, depth worse than previous, but body still recognized

acrylic

Kinect behind acrylic mirror, dark color image, no depth, no body

So Kinect-wise the first glass mirror type that I tested performed best and the acrylic performed worst. But there is a trade-off in the quality of the mirror reflection, as you can see when comparing the pictures below.

glass1mirror

Reflection of the glass mirror type 1 as seen from the front

acrylicmirror

Reflection of the acrylic mirror as seen from the front

The mirroring effect is a careful balance between the reflectivity and transparency of the mirror, the amount of light in front of the mirror and the brightness of the screen behind it. If you also want to hide a sensor behind the mirror, the transparency must be good enough.

Conclusion

User interfaces are most natural if they do not reveal their technological components. In the case of a Kinect-based interactive installation, hiding the hardware is not easily done. However, there are a few tricks for minimizing the visibility of the device, and which one is the most suitable depends on the specific needs of your installation. I am always interested to hear if you have other ways of hiding your Kinect.


Rebuilding the HoloLens scanning effect with RoomAlive Toolkit

The initial video that explains the HoloLens to the world contains a small clip that visualizes how it can see the environment. It shows a pattern of large and smaller triangles that gradually overlays the real-world objects seen in the video. I decided to try to rebuild this effect in real life using a projection mapping setup with a projector and a Kinect V2 sensor.

HoloLens room scan

Prototyping in Shadertoy

First I experimented with the idea by prototyping a pixel shader in Shadertoy. Shadertoy is an online tool that allows developers to prototype, experiment with, test and share pixel shaders using WebGL. I started with a raymarching example by Iñigo Quilez and set up a small scene with a floor, a wall and a bench. The calculated 3D world coordinates could then be used to overlay the triangle effect. The raymarched geometry would later be replaced by geometry scanned with the Kinect V2. The screenshot below shows what the effect looks like. The source code of this shader can be found on the Shadertoy website.

Shadertoy Room Scanning Shader

Projection mapping with RoomAlive Toolkit

During Build 2015 Microsoft open-sourced a library called the RoomAlive Toolkit, which contains the mathematical building blocks for building RoomAlive-like experiences. The library contains tools to automatically calibrate multiple Kinects and projectors so they can all use the same coordinate system. This means that each projector can project onto the correct location in a room, even on dynamic geometry. The toolkit also includes an example of reprojecting the recorded image with a pixel shader effect. I used this example to apply the earlier prototyped scan-effect pixel shader to live scanned 3D geometry.

Source code on GitHub

Bring Your Own Beamer

The installation was shown at the Bring Your Own Beamer event held on September 25th, 2015 in Utrecht, the Netherlands. For this event I made some small artistic adjustments. In the original video the scanning of the world seems to start from the location of the person wearing the HoloLens. In the installation shown at the event, people were able to trigger the scanning effect with their feet. The effect starts at the triggered location and expands across the floor, up their legs and over any other geometry in the room.

 

The distance from the camera determines the base color used for a particular scan. Multiple scans interfere with each other and generate a colorful experience. The video shows how part of the floor and part of the wall are mapped with a single vertically mounted projector. People seemed to particularly enjoy playing with the latency of the projection on their body by moving quickly.
