
Interactive Luminous Carpets with Kinect


The boundaries between the digital and physical world around us are fading. Traditionally passive objects like mirrors, windows, coffee tables, clothes and even plants are becoming interactive. One of the innovations in this area is Luminous Carpets, a carpet with embedded LED lights. The product is a collaboration between Philips and Dutch carpet maker Desso.

Making it interactive

Although the carpet itself does not contain sensors to detect who is standing where, it is possible to add external sensors that detect the presence of people. I was invited by Luminous Carpets to set up an experiment to see if their demo floor could become interactive by adding a Kinect sensor. Here’s a video that shows some of the results of the single-day trial.

The Kinect was positioned at the front of the floor at a relatively large distance (> 3 meters) so it could see the entire floor area. After setting up the equipment, a calibration had to be performed to line up the Kinect coordinate frame with the coordinate frame of the Luminous Carpets demo floor. The demo application was a WPF-based application that generated a small grayscale image of 112 x 56 pixels in the top left corner of the screen. The floor controller could then use this as an HDMI input for controlling the pixels of the luminous carpet.
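To give an idea of the calibration step, here is a minimal C# sketch that maps a Kinect camera-space floor position to a pixel in the 112 x 56 control image. The class name and the simple scale-and-offset model are assumptions for illustration; a real setup may need a full homography.

```csharp
using System;

// Minimal sketch of the calibration idea (not the code used in the
// trial): map a Kinect camera-space floor position (meters) to a pixel
// in the 112 x 56 grayscale control image. Assumes the carpet is
// axis-aligned after calibration, so a scale and offset per axis is
// enough.
public class CarpetCalibration
{
    const int GridWidth = 112;
    const int GridHeight = 56;

    // Hypothetical calibration values: carpet origin in Kinect
    // coordinates (meters) and carpet size in meters.
    readonly double originX, originZ, sizeX, sizeZ;

    public CarpetCalibration(double originX, double originZ,
                             double sizeX, double sizeZ)
    {
        this.originX = originX; this.originZ = originZ;
        this.sizeX = sizeX; this.sizeZ = sizeZ;
    }

    // Returns true if the position falls on the carpet, with the
    // corresponding pixel coordinates in the control image.
    public bool TryMapToPixel(double x, double z, out int px, out int py)
    {
        double u = (x - originX) / sizeX;   // 0..1 across the carpet
        double v = (z - originZ) / sizeZ;   // 0..1 along the carpet
        px = (int)(u * GridWidth);
        py = (int)(v * GridHeight);
        return u >= 0 && u < 1 && v >= 0 && v < 1;
    }
}
```

A tracked user's projected foot position could then be fed through TryMapToPixel to light up the carpet pixel under the user.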

With the Kinect in a frontal view, body tracking can be used to build interactivity that understands joint positions. Individual joints can be projected onto the floor, taking into account their height above it. This can, for instance, give the user the feeling of holding a lantern that lights up the floor. This positioning of the Kinect also implies that users should face the sensor for the best tracking results. For interaction with a floor, however, there is no inherent directional preference other than the one suggested by the surrounding walls.
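A minimal sketch of that projection, assuming the floor plane is known (in a real app it can come from the Kinect SDK's FloorClipPlane); the helper name and the plain-double signature are mine:

```csharp
using System;

// Sketch: project a tracked joint straight down onto the floor plane.
// In a real app the joint position comes from the Kinect SDK
// (Body.Joints[...].Position) and the plane from BodyFrame.FloorClipPlane;
// plain doubles keep the sketch self-contained.
public static class FloorProjection
{
    // Plane is (nx, ny, nz, d) with unit normal n, so that
    // n·p + d = 0 for points p on the floor (the Kinect convention).
    public static (double X, double Y, double Z) ProjectOntoFloor(
        double px, double py, double pz,
        double nx, double ny, double nz, double d)
    {
        // Signed height of the point above the plane.
        double height = nx * px + ny * py + nz * pz + d;
        // Move the point along the normal back onto the plane.
        return (px - nx * height, py - ny * height, pz - nz * height);
    }
}
```

The projected point can then be fed into a floor mapping like the one sketched above.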

Using the Kinect in a frontal setup also means that multiple users can occlude each other. Typically this is prevented by mounting a sensor high up, looking down, and using blob detection and tracking to build the interactivity.
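For completeness, here is a rough sketch of what such top-down blob detection could look like, assuming a depth frame where the floor sits at a known distance from the sensor; the thresholds, names and simple flood fill are illustrative, not production code:

```csharp
using System;
using System.Collections.Generic;

// Sketch of top-down blob detection: with a downward-looking depth
// sensor, anything significantly closer to the sensor than the floor
// is a person. Threshold the depth frame into a mask, then group
// neighbouring pixels into blobs whose centroids can be tracked.
public static class BlobDetector
{
    public static List<(double X, double Y)> FindBlobCentroids(
        ushort[] depth, int width, int height,
        ushort floorDepthMm, ushort minHeadroomMm = 300, int minPixels = 50)
    {
        var visited = new bool[depth.Length];
        var centroids = new List<(double, double)>();

        for (int start = 0; start < depth.Length; start++)
        {
            if (visited[start] || !IsPerson(depth[start])) continue;

            // Flood fill one blob, accumulating its pixel positions.
            long sumX = 0, sumY = 0; int count = 0;
            var stack = new Stack<int>();
            stack.Push(start); visited[start] = true;
            while (stack.Count > 0)
            {
                int i = stack.Pop();
                int x = i % width, y = i / width;
                sumX += x; sumY += y; count++;
                foreach (int n in new[] { i - 1, i + 1, i - width, i + width })
                {
                    if (n < 0 || n >= depth.Length || visited[n]) continue;
                    if (Math.Abs(n % width - x) > 1) continue; // no row wrap
                    if (!IsPerson(depth[n])) continue;
                    visited[n] = true; stack.Push(n);
                }
            }
            if (count >= minPixels)
                centroids.Add(((double)sumX / count, (double)sumY / count));
        }
        return centroids;

        bool IsPerson(ushort d) => d > 0 && d < floorDepthMm - minHeadroomMm;
    }
}
```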

Natural surface interactions

Blending the digital and the real world can go beyond hand-held devices like the iPad or head-mounted devices like the HoloLens. Any surface that surrounds us in our everyday lives can be made interactive by combining sensors with visual, tactile or auditory feedback. The surfaces with the highest potential for natural interaction are those we are already used to interacting with on a daily basis. And if we don’t have to equip ourselves with wearables, it can feel even more natural.


Keep your magic a secret; hiding your Kinect V2


Like any good magician trying to keep their tricks a secret, developers of interactive installations should try to hide the technology of their installation as well as possible. This will keep your audience wondering and make your installation more magical. In this article I explore how the Kinect can be masked or hidden behind a Magic Mirror.

Masking the Kinect

A bulky device like the Kinect V2 is hard to hide, especially since it needs a proper view of your audience. Since the color and depth sensors in the Kinect V2 only cover part of the front, it makes sense to hide the rest of the device behind a thin wall. If you only use the depth camera for body tracking, you can also cover the color camera and the microphone array. Because the front of the Kinect is covered with a flat piece of glass, it takes some tweaking to find out where exactly the sensors are. I created a template PDF for masking the Kinect V2: KinectV2Masks

[Image: Color camera, IR camera and lights]

[Image: IR camera and lights, if you only need body tracking or depth data]

Note that you can also buy predesigned skins for the Kinect for Xbox One online to let your Kinect blend into its environment better.

Hiding behind a Magic Mirror

With the rising popularity of magic mirror screens, I wondered whether the Kinect could also be hidden behind a two-way mirror (aka Magic Mirror). If you place the Kinect tightly against the glass, it is possible to look through it, but the quality of the depth image is worse than that of an unobstructed Kinect. The image quality depends heavily on the type of glass that is used. I tested this with a piece of acrylic and a few types of glass. Here are some screenshots from the Windows Store app Kinect Evolution.

[Image: Regular Kinect view without glass]

[Image: Kinect behind two-way mirror type 1: fairly good color image, reasonable depth image, body recognized]

[Image: Kinect behind two-way mirror type 2: slightly darker color image, depth worse than the previous type, but body still recognized]

[Image: Kinect behind acrylic mirror: dark color image, no depth, no body]

So, Kinect-wise, the first glass mirror type I tested performed best and the acrylic performed worst. But there is a trade-off in the quality of the mirror reflection, as you can see by comparing the pictures below.

[Image: Reflection of the glass mirror type 1, seen from the front]

[Image: Reflection of the acrylic mirror, seen from the front]

The mirroring effect is a careful balance between the reflectivity and transparency of the mirror, the amount of light in front of it, and the brightness of the screen behind it. If you also want to hide a sensor behind the mirror, the transparency must be good enough.

Conclusion

User interfaces feel most natural when they do not reveal their technological components. In the case of a Kinect-based interactive installation, hiding the sensor is not easy. There are, however, a few tricks for minimizing the visibility of the device. Which one is most suitable depends on the specific needs of your installation. I am always interested to hear about other ways of hiding your Kinect.


Rebuilding the HoloLens scanning effect with RoomAlive Toolkit

The initial video that introduced the HoloLens to the world contains a small clip visualizing how the device sees its environment. It shows a pattern of large and smaller triangles that gradually overlays the real-world objects in the video. I decided to try to rebuild this effect in real life with a projection mapping setup consisting of a projector and a Kinect V2 sensor.

[Image: HoloLens room scan]

Prototyping in Shadertoy

First I experimented with the idea by prototyping a pixel shader in Shadertoy. Shadertoy is an online tool that allows developers to prototype, experiment with, test and share pixel shaders using WebGL. I started with a raymarching example by Iñigo Quilez and set up a small scene with a floor, a wall and a bench. The calculated 3D world coordinates could then be overlaid with a triangle effect. The raymarched geometry would later be replaced by geometry scanned with the Kinect V2. The screenshot below shows what the effect looks like. The source code of this shader can be found on the Shadertoy website.
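The actual shader source is on Shadertoy; purely as an illustration of the pattern math, here is a C# sketch that assigns surface points to triangles in a regular grid and derives a per-triangle reveal delay. The simple right-triangle grid and the hash are my assumptions; the real effect mixes large and smaller triangles.

```csharp
using System;

// CPU sketch of the triangle-overlay idea (illustrative; the actual
// Shadertoy source differs). Each surface point is assigned to a
// triangle in a regular grid; the triangle's index can drive a
// pseudo-random reveal delay so triangles pop in one by one as the
// scan passes.
public static class TrianglePattern
{
    // Splits each square grid cell of the given size into two right
    // triangles (below and above the diagonal) and returns a stable
    // ID for the triangle containing (u, v).
    public static (int Col, int Row, bool AboveDiagonal) TriangleAt(
        double u, double v, double cellSize)
    {
        int col = (int)Math.Floor(u / cellSize);
        int row = (int)Math.Floor(v / cellSize);
        double fu = u / cellSize - col;  // fractional position, 0..1
        double fv = v / cellSize - row;
        return (col, row, fv > fu);      // above the diagonal or not
    }

    // Cheap deterministic hash of the triangle ID, mapped to 0..1,
    // usable as a per-triangle reveal delay.
    public static double RevealDelay((int Col, int Row, bool AboveDiagonal) id)
    {
        int h = unchecked(id.Col * 73856093 ^ id.Row * 19349663
                          ^ (id.AboveDiagonal ? 83492791 : 0));
        return (h & 0xFFFF) / 65535.0;
    }
}
```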

[Image: Shadertoy room scanning shader]

Projection mapping with RoomAlive Toolkit

During Build 2015, Microsoft open-sourced a library called the RoomAlive Toolkit that contains the mathematical building blocks for RoomAlive-like experiences. The library contains tools to automatically calibrate multiple Kinects and projectors so they all use the same coordinate system. This means that each projector can project onto the correct location in a room, even on dynamic geometry. The toolkit also includes an example of reprojecting the recorded image with a pixel shader effect. I used this example to apply the scan effect pixel shader prototyped earlier onto live-scanned 3D geometry.
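The heart of such a setup is the calibrated mapping from Kinect coordinates to projector pixels. Here is a generic pinhole-model sketch of that step, with hypothetical calibration matrices; it illustrates the math, not the RoomAlive Toolkit API.

```csharp
using System;

// Sketch of the core projection-mapping step that calibration enables
// (hypothetical matrices, not the RoomAlive Toolkit API): a 3D point
// in Kinect camera space is moved into projector space with the
// calibrated pose, then mapped to a projector pixel with the
// projector's intrinsics, so content lands exactly on the geometry.
public static class ProjectionMapping
{
    // pose: 4x4 row-major Kinect-to-projector transform from calibration.
    // fx, fy, cx, cy: projector intrinsics (focal lengths, principal point).
    public static (double U, double V) WorldToProjectorPixel(
        double[,] pose, double fx, double fy, double cx, double cy,
        double x, double y, double z)
    {
        // Transform the point into the projector's coordinate frame.
        double px = pose[0, 0] * x + pose[0, 1] * y + pose[0, 2] * z + pose[0, 3];
        double py = pose[1, 0] * x + pose[1, 1] * y + pose[1, 2] * z + pose[1, 3];
        double pz = pose[2, 0] * x + pose[2, 1] * y + pose[2, 2] * z + pose[2, 3];

        // Perspective projection with a simple pinhole model
        // (assumes the point lies in front of the projector, pz > 0).
        return (fx * px / pz + cx, fy * py / pz + cy);
    }
}
```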

Source code on GitHub

Bring Your Own Beamer

The installation was shown at the Bring Your Own Beamer event held on September 25th, 2015 in Utrecht, The Netherlands. For this event I made some small artistic adjustments. In the original video, the scanning of the world seems to start from the location of the person wearing the HoloLens. In the installation shown at the event, people were able to trigger the scanning effect with their feet. The effect starts at the triggered location and expands across the floor, up their legs, and over any other geometry in the room.
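The trigger-and-expand behaviour boils down to comparing each surface point's distance from the trigger location with an expanding front. A small illustrative sketch follows (the names and the fade width are assumptions, not the installation's actual code):

```csharp
using System;

// Sketch of the trigger-and-expand behaviour: each scan stores where
// and when it was triggered; a surface point lights up when the front,
// expanding at a fixed speed, reaches its distance from the trigger.
public struct Scan
{
    public double OriginX, OriginY, OriginZ; // trigger position (meters)
    public double StartTime;                 // seconds
    public double Speed;                     // front speed in m/s

    // Brightness contribution of this scan at a point; full at the
    // front, fading out over a 2-meter band behind it.
    public double Intensity(double x, double y, double z, double now)
    {
        double dx = x - OriginX, dy = y - OriginY, dz = z - OriginZ;
        double dist = Math.Sqrt(dx * dx + dy * dy + dz * dz);
        double front = (now - StartTime) * Speed;
        if (dist > front) return 0;                 // front not here yet
        return Math.Max(0, 1 - (front - dist) / 2); // trailing fade
    }
}
```

Summing the intensities of several simultaneous scans, each with its own base color, gives the interference effect described below.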


The distance from the camera determines the base color used for a particular scan. Multiple scans interfere with each other and generate a colorful experience. The video shows how part of the floor and part of the wall are mapped with a single vertically mounted projector. People seemed to particularly enjoy playing with the latency of the projection onto their bodies by moving quickly.
