
Interactive Luminous Carpets with Kinect


The boundaries between the digital and physical world around us are fading. Traditionally passive objects like mirrors, windows, coffee tables, clothes and even plants are now becoming interactive. One of the innovations in this area is Luminous Carpets. It is a carpet with embedded LED lights. The product is a collaboration between Philips and Dutch carpet maker Desso.

Making it interactive

Although the carpet itself does not contain sensors to detect who is standing where, it is possible to add external sensors to detect the presence of people. I was invited by Luminous Carpets to set up an experiment to see if their demo floor could become interactive by adding a Kinect sensor. Here’s a video that shows some of the results of the single-day trial.

The Kinect was positioned at the front of the floor at a relatively large distance (> 3 meters) so it could see the whole floor area. After setting up the equipment, a calibration step was needed to line up the Kinect coordinate frame with the coordinate frame of the Luminous Carpets demo floor. The demo application was a WPF-based application that generated a small grayscale image of 112 x 56 pixels in the top left corner of the screen. The floor controller could then use this as an HDMI input for controlling the pixels of the luminous carpet.
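The calibration step can be illustrated with a short sketch. This is not the code from the trial: the corner coordinates are made-up values, and a least-squares affine fit stands in for whatever mapping was actually used to line up Kinect floor coordinates with the 112 x 56 control image.

```python
import numpy as np

# Hypothetical corner correspondences measured during setup:
# Kinect floor positions (x, z) in metres -> carpet pixels on the
# 112 x 56 control image.
kinect_pts = np.array([[-1.6, 3.0], [1.6, 3.0], [1.6, 5.4], [-1.6, 5.4]])
carpet_pts = np.array([[0, 0], [111, 0], [111, 55], [0, 55]], dtype=float)

# Least-squares affine fit: [px, py] = [x, z, 1] @ A
X = np.hstack([kinect_pts, np.ones((4, 1))])          # 4 x 3
A, *_ = np.linalg.lstsq(X, carpet_pts, rcond=None)    # 3 x 2

def to_carpet(x, z):
    """Map a Kinect floor position (metres) to a carpet pixel."""
    px, py = np.array([x, z, 1.0]) @ A
    return int(round(px)), int(round(py))
```

Because both sets of corners form rectangles, the affine fit reproduces the corners exactly; in practice a homography would also handle a slightly tilted sensor.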

With the Kinect in a frontal position, body tracking can be used to build interactivity that understands joint positions. Individual joints can be projected onto the floor, taking into account their height above it. This can, for instance, give the user the feeling of holding a lantern that lights up the floor. This positioning of the Kinect also implies that users should face the sensor for the best tracking results. For interaction with a floor, however, there is no inherent directional preference other than what the surrounding walls suggest.
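The lantern effect could be sketched like this; the function name, cone angle and exact mapping are my own assumptions, not the demo’s actual code. It uses the Kinect convention of x right, y up, z away from the sensor, in metres.

```python
import math

def lantern_spot(joint_x, joint_y, joint_z, cone_angle_deg=30.0):
    """Hypothetical lantern effect: project a tracked joint straight
    down onto the floor plane (y = 0) and derive a light radius from
    its height, as if the joint held a cone-shaped light source."""
    height = max(joint_y, 0.0)
    # A higher-held lantern lights a wider circle on the floor.
    radius = height * math.tan(math.radians(cone_angle_deg))
    return (joint_x, joint_z), radius
```

The returned floor position can then be fed through the floor calibration to light the corresponding carpet pixels.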

Using the Kinect in a frontal setup also means that multiple users can occlude each other. Typically this is prevented by mounting a sensor that looks down from a large height and using blob detection and tracking to build the interactivity.
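A minimal sketch of such overhead blob detection, assuming a downward-looking depth sensor; the thresholds are made-up values and SciPy’s connected-component labelling stands in for a real tracking pipeline.

```python
import numpy as np
from scipy import ndimage

def detect_people(depth_mm, floor_mm=3000, min_height_mm=800, min_area=50):
    """Sketch of overhead blob detection: anything sufficiently far
    above the floor becomes foreground, and each connected foreground
    region is treated as one person.
    depth_mm: 2-D array of depth values (mm) from a downward-looking sensor."""
    foreground = depth_mm < (floor_mm - min_height_mm)
    labels, n = ndimage.label(foreground)
    people = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if len(xs) >= min_area:              # ignore small noise blobs
            people.append((xs.mean(), ys.mean()))  # blob centroid (x, y)
    return people
```

A real installation would additionally track centroids over time so each person keeps a stable identity between frames.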

Natural surface interactions

Blending the digital and the real world can go beyond using hand-held devices like the iPad or head-mounted devices like HoloLens. Any surface that surrounds us in our everyday lives can be made interactive by combining sensors with visual, tactile or auditory feedback. The surfaces with the highest potential for natural interaction are those we are already used to interacting with on a daily basis. And if we don’t have to equip ourselves with wearables, it can feel even more natural.


Keep your magic a secret; hiding your Kinect V2


Like any good magician trying to keep his tricks a secret, developers of interactive installations should try to hide the technology of their installation as well as possible. This will keep your audience wondering and make your installation more magical. In this article I explore how the Kinect can be masked or hidden behind a Magic Mirror.

Masking the Kinect

A bulky device like the Kinect V2 is hard to hide, especially if it needs a proper view of your audience. Since the color and depth sensors in the Kinect V2 only cover part of the front, it makes sense to hide the rest of the device behind a thin wall. If you only use the depth camera for body tracking, you can also cover the color camera and the microphone array. Since the front of the Kinect is covered with a flat piece of glass, it takes some tweaking to find out exactly where the sensors are. I created a template PDF for masking the Kinect V2: KinectV2Masks


Color camera, IR camera and lights

IR camera and lights, if you only need body tracking or depth data

Note that you can also buy predesigned skins for the Kinect for Xbox One online to let your Kinect blend into its environment better.

Hiding behind a Magic Mirror

With the rising popularity of magic mirror screens, I also wondered if the Kinect could be hidden behind a two-way mirror (aka Magic Mirror). If you place the Kinect tightly against the glass, it is possible to look through it, but the quality of the depth image is worse than that of an unblocked Kinect. The quality of the image is highly dependent on the type of glass that is used. I tested this with a piece of acrylic and a few types of glass. Here are some screenshots from the Windows Store app Kinect Evolution.


Regular Kinect view without glass


Kinect behind two-way mirror type 1: fairly good color image, reasonable depth image and body recognized


Kinect behind two-way mirror type 2: a little darker color image, depth worse than the previous, but body still recognized


Kinect behind acrylic mirror: dark color image, no depth, no body

So, Kinect-wise, the first glass mirror type that I tested performed best and the acrylic performed worst. But there is a trade-off in the quality of the mirror reflection, as can be seen by comparing the pictures below.


Reflection of the glass mirror type 1 as seen from the front


Reflection of the acrylic mirror as seen from the front

The mirroring effect is a careful balance between the reflectivity and transparency of the mirror, the amount of light in front of it, and the brightness of the screen behind it. If you also want to hide a sensor behind the mirror, the transparency must be good enough.

Conclusion

User interfaces are most natural if they do not reveal their technological components. In the case of a Kinect-based interactive installation, hiding the technology is not easily done. However, there are a few tricks for minimizing the visibility of the device, and which one is most suitable depends on the specific needs of your installation. I am always interested to hear about other ways of hiding your Kinect.


Real life Portal; a holographic window using Kinect

The game Portal (released in 2007 by Valve) is known for its gameplay, in which portals can be used to teleport between different locations. Portals were rendered as virtual windows into the connected location, with the well-known orange and blue rings around them. The game spawned a huge number of memes, fan art and YouTube videos that used elements from the game.

Portal by Valve

Real life portals without trickery

The Kinect V2 is a sensor that can record a 3D view of the world in real time. It can also track users and their body poses. This can be used to perform head tracking and reconstruct a camera view into a 3D world as if looking through a virtual window. By using one Kinect for head tracking and another Kinect for reconstructing a 3D view, the virtual window effect of a portal can be created in reality. By using both Kinects for 3D world reconstruction as well as head tracking, a two-way portal effect can be achieved.
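The virtual window effect relies on an off-axis projection driven by the tracked head position. Here is a minimal sketch, not the project’s actual code, that computes asymmetric frustum bounds for a window assumed to be parallel to the XY plane with the viewer on the +Z side:

```python
def window_frustum(head, window_center, window_w, window_h, near=0.1):
    """Off-axis frustum for a head-tracked 'holographic window'.
    head, window_center: (x, y, z) positions in metres.
    Returns (left, right, bottom, top) at the near plane, usable with a
    glFrustum-style projection matrix."""
    hx, hy, hz = head
    cx, cy, cz = window_center
    d = hz - cz                     # distance from eye to window plane
    scale = near / d                # shrink window edges onto near plane
    left   = (cx - window_w / 2 - hx) * scale
    right  = (cx + window_w / 2 - hx) * scale
    bottom = (cy - window_h / 2 - hy) * scale
    top    = (cy + window_h / 2 - hy) * scale
    return left, right, bottom, top
```

When the head is centered, the frustum is symmetric; as the head moves sideways, the frustum skews so the rendered view stays anchored to the physical window, which is exactly what makes the portal feel like an opening in the wall.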

Hardware setup

In the setup used for recording the video, two Kinect V2 sensors were used. My laptop was connected to a projector that projected onto the wall. The PC displayed the other, much smaller portal. Smaller portals hide more of the other side of the scene and allow for larger head movements. With a bigger portal you will run into the limitations of the Kinect’s field of view much earlier.

Instead of transferring the recorded 3D scene, I swapped the Kinects and only transferred the recorded body frames over a network connection to minimize latency. This limits the maximum distance at which the portals can be placed from each other (about 7 meters when using USB 3 extension cables).

A portal opens as soon as a user is detected by the Kinect, so proper head tracking can be done. For the video I used the right hand joint to control the camera, so the viewer would experience what it looks like when head tracking is applied.

Holographic window

Holography is quickly becoming the buzzword of 2015. It’s getting harder to keep a clear understanding of what holographic actually means (and yes, I’ve abused the term too). I like Oliver Kreylos’ view on the term holography. (See: What is holographic, and what isn’t?)

Since both worlds are already rendered in 3D, it is a small step to add stereo rendering, for instance with a Microsoft HoloLens. This brings us closer to a holographic window.
Here’s the checklist that Oliver Kreylos uses in his article:

  1. Perspective foreshortening: farther away objects appear smaller
    Check, due to perspective projection used in rendering
  2. Occlusion: nearer objects hide farther objects
    Check, due to the 3D reconstructed world, but objects can be occluded from the sensor’s viewpoint
  3. Binocular parallax / stereopsis: left and right eyes see different views of the same objects
    Check, when using a stereo display
  4. Monocular (motion) parallax: objects shift depending on how far away they are when head is moved
    Check, due to head tracking
  5. Convergence: eyes cross when focusing on close objects
    Check, when using a stereo display
  6. Accommodation: eyes’ lenses change focus depending on objects’ distances
    No

Natural user interface

Looking through a window is a familiar experience for almost everyone. Imagine being in a Skype conversation and having the ability to move your head to see who your caller is looking at or talking to (when it’s not you). A holographic window has the power to give people the feeling of being in the same space and allows for interesting new interactions. Anyone care for a game of portal tennis?
