
Interfaces with the Kinect

Walking in 3D

Here, the Kinect is used to navigate through Google Street View (a rough mapping sketch in Processing follows the list):

  • body orientation => rotate sideways
  • body tilt => rotate up/down
  • walking (on the spot) => move forward
  • jump => move forward by some distance
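The original implementation isn't shown here, but the first and third mappings (body orientation => rotate, walking on the spot => move forward) are easy to sketch with SimpleOpenNI in Processing. In the fragment below the SimpleOpenNI calls are real (assuming version 1.96 of the library); the thresholds are made up, and rotateView()/moveForward() are hypothetical stand-ins for whatever actually drives Street View:

```java
import SimpleOpenNI.*;

SimpleOpenNI context;
float prevLeftKneeY, prevRightKneeY;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();   // skeleton tracking (signature as of SimpleOpenNI 1.96)
}

void draw() {
  context.update();
  int[] users = context.getUsers();
  if (users.length == 0 || !context.isTrackingSkeleton(users[0])) return;
  int userId = users[0];

  PVector lShoulder = new PVector(), rShoulder = new PVector();
  PVector lKnee = new PVector(), rKnee = new PVector();
  context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, lShoulder);
  context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rShoulder);
  context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_KNEE, lKnee);
  context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, rKnee);

  // Body orientation: angle of the shoulder line in the horizontal (x/z) plane.
  // Twisting the upper body left/right maps to sideways rotation.
  float yaw = atan2(rShoulder.z - lShoulder.z, rShoulder.x - lShoulder.x);
  if (abs(yaw) > radians(15)) {      // made-up dead zone of 15 degrees
    rotateView(yaw);
  }

  // Walking on the spot: the knees move up and down alternately; any vertical
  // knee movement beyond a (made-up) per-frame threshold counts as a step.
  if (abs(lKnee.y - prevLeftKneeY) > 30 || abs(rKnee.y - prevRightKneeY) > 30) {
    moveForward();
  }
  prevLeftKneeY  = lKnee.y;
  prevRightKneeY = rKnee.y;
}

// SimpleOpenNI callback: start skeleton tracking for every new user.
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}

// Hypothetical stand-ins for whatever actually drives Google Street View
// (e.g. key events sent to the browser).
void rotateView(float yaw) { println("rotate by " + degrees(yaw)); }
void moveForward()         { println("step forward"); }
```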

Also see post on developkinect.com

In-Air Interaction with ARCADE

ARCADE combines several interaction techniques to create and control virtual 3D objects that are placed into the live video of the presenter:

  • selection/picking: hold hand over object (see the dwell-time sketch after this list)
  • menu: browse with finger position, select with swipe gesture
  • drawing: two finger touching switches to draw mode, go to 3D rotate mode with a key press
  • rotate/scale: rotate with finger up/down/left/right, scale with two fingers
  • delete: wave gesture over object
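ARCADE's own code is not reproduced here, but the first technique, selecting an object by holding the hand over it, is essentially dwell-time selection. Here is a minimal Processing sketch of that idea, with the mouse standing in for the tracked hand; the object bounds and the 800 ms threshold are made-up values, not taken from ARCADE:

```java
// Dwell-time selection: an object is "picked" once the hand (here: the mouse)
// has hovered over it for a given time span.
int dwellMillis = 800;                      // assumed threshold, not from ARCADE
float objX = 300, objY = 200, objR = 60;    // made-up object bounds
int hoverStart = -1;
boolean selected = false;

void setup() {
  size(640, 480);
}

void draw() {
  background(30);
  boolean over = dist(mouseX, mouseY, objX, objY) < objR;

  if (over && hoverStart < 0) hoverStart = millis();   // start the dwell timer
  if (!over) { hoverStart = -1; selected = false; }    // reset when the hand leaves
  if (over && hoverStart >= 0 && millis() - hoverStart > dwellMillis) selected = true;

  fill(selected ? color(0, 200, 0) : color(200));
  ellipse(objX, objY, objR * 2, objR * 2);
}
```

With a Kinect, the mouse position would simply be replaced by the projected hand joint (see the Processing/Kinect resources further down).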

Also see post on developkinect.com

Gesture Recognition: Kinetic Space 2.0

The toolkit allows you to define your own 3D gestures.

Also see post on developkinect.com

Real-Time Facial Animation

Photorealism is slowly reaching a stage where the “uncanny valley” does not apply anymore. Activision just showed an impressive video of a face animated in real time at GDC 2013 (Game Developers Conference):

One of the collaborators is Paul Debevec (USC – ICT, California), one of the superstars of the computer graphics community and inventor of the “light stage”, a device that can recreate a huge variety of lighting moods for a piece of video. Another of his projects was “Digital Emily” (2008), which is probably the base technology behind the Activision demo. Here’s a talk at TEDx:

Anybody interested in facial animation should have a look at the German Animationsinstitut of the Filmakademie Baden-Württemberg, which developed the Facial Animation Toolset based on scientific findings on facial expressions (most notably by researcher Paul Ekman, who developed the Facial Action Coding System, aka FACS).

References

Alexander O., Rogers M., Lambeth W., Chiang M., Debevec P. Creating a Photoreal Digital Actor: The Digital Emily Project. IEEE European Conference on Visual Media Production (CVMP), November 2009. (Also to appear in IEEE Computer Graphics and Applications.)

Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, and Mark Sagar. 2000. Acquiring the reflectance field of a human face. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques (SIGGRAPH 2000). ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 145-156.

 

Finger Tracking with DUO: Competition for the Leap Motion

Today I saw a video on the NUI Group’s channel that featured a DIY device for close-range finger tracking, not unlike the Leap Motion device. It is called DUO, and here’s what it can do:


The Leap Motion device should ship soon (April 2013), whereas the DUO is just about to launch a Kickstarter project to collect funding. The main difference between the two projects is the open-source, DIY philosophy of the DUO versus the strictly commercial licensing of the Leap Motion. Technically, the two devices also seem to differ (see a forum thread by the DUO makers).

Homepage: http://duo3d.com

Processing and Kinect: Resources

Java programmers can use the Kinect quite comfortably via the Processing language. On codasign there are a number of articles that’ll teach you how to do this quickly. The approach is based on the SimpleOpenNI package.

The Kinect has become so popular because it can track a person in space by inferring a “skeleton” in 3D space (using a depth image). This means that the human user is not only detected in space but that his/her rough body structure is reconstructed, so the system then knows where certain key body parts (hands, feet, hips …) are located in space.

This can be used to react to movement in space (approaching, retreating…), body orientation (facing the system or not …), hand gestures (wave, swipe, cross …) and even body posture (leaning over, sitting …).

In the following linked-up pages, you can learn how to set up the Kinect with Processing and how to obtain skeleton information in 3D space. Gesture detection is yet another topic.

Installing OpenNi for Processing: shows you how to get started.

Using the Kinect with Processing: Overview of Kinect-related pages in codasign.

Skeleton Tracking with the Kinect: Explains how to access the skeleton information.

Getting Joint Position in 3D Space from the Kinect: This is interesting because it not only shows how to retrieve joint locations but also how to easily obtain their projection onto screen coordinates.

Reference for Simple-OpenNI and the Kinect: Some potentially interesting information about the SimpleOpenNI class.
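The pages above boil down to a handful of calls. As a rough, condensed sketch (assuming the SimpleOpenNI 1.96 API and a connected Kinect; the choice of joint and the drawing code are only for illustration), tracking the right hand in 3D and projecting it onto the depth image looks roughly like this:

```java
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();          // the depth image is the basis for skeleton tracking
  context.enableUser();           // enable user detection / skeleton tracking
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  int[] users = context.getUsers();
  for (int userId : users) {
    if (!context.isTrackingSkeleton(userId)) continue;

    // 3D position of the right hand in real-world coordinates (millimeters)
    PVector hand3d = new PVector();
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, hand3d);

    // Project the 3D point onto the 2D depth image (screen coordinates)
    PVector hand2d = new PVector();
    context.convertRealWorldToProjective(hand3d, hand2d);

    fill(255, 0, 0);
    ellipse(hand2d.x, hand2d.y, 20, 20);
  }
}

// Called by SimpleOpenNI when a new user enters the scene
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```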

Processing: How to link up the (local) Processing API reference

Processing has a nice API reference page at http://processing.org/reference. You also have this locally (i.e. on your hard drive) once you install Processing, and can open it from the main menu under Help > Reference.


Under Windows, your Processing directory is probably in your C:\Programs directory. Go there and then to processing-2.0b7 (or whatever it’s called in your case) > modes > java > reference. Open index.html.

On a Mac, you go to your Applications folder, find Processing.app, right-click it and choose “Show Package Contents”. Then go to Contents > Resources > Java > modes > java > reference. Open index.html.

Creative Coding

It’s fascinating to see how many coding platforms are dedicated to making programming accessible specifically to artists.

The following video presents three such projects. It features Processing (a Java derivative), Cinder (a C++-based framework) and openFrameworks (also C++). All of them are free and open source.

Let’s use this opportunity to post two examples of “creative coding”, both dealing with transformations of the human body. The first one is a video called “unnamed soundsculpture”, a work by onformative.

They used Kinects to record a dancer and particle systems to transform the result. The making-of is at least as interesting as the final result:

The second example is “Future Self”, a light sculpture that reacts to sensor input about the position and pose of the observer.

Gaze Interaction

While the current focus in HCI is sensor-based interaction (à la Kinect), recent developments could foster interaction with the eyes. The Danish company EyeTribe (formerly Senseye) is building a very nice tracking system with $2.3 million support from the Danish government. Partnering companies include the IT University of Copenhagen, DTU Informatics, LEGO and Serious Games Interactive.

EyeTribe plans to release an SDK for app development next year.
