Tag: Kinect

Interaction Engineering (2021/22)

Last winter, my course Interaction Engineering was held on campus for the first part of the semester (autumn 2021), which really helped the students connect. In November we had to switch to virtual meetings via Zoom, but by that point most students already had a good idea of which project to pursue and with which partner.

This year we had a large variety of interaction modalities, from tangible interaction via gaze and gesture control to full-body interaction. All projects resulted in working prototypes that were presented live on 2 Feb 2022.

Click on the following screenshot to get to our project page, where you can browse all projects. Each project comes with a short video, a written report and a set of slides.

A new Kinect (version 3)

Microsoft’s Kinect sensor can recognize human bodies and track the movement of body joints in 3D space. In 2019 Microsoft released an entirely new version called Azure Kinect DK, the third major version of the Kinect.

Originally, the Kinect was released in 2010 (version 1, Xbox) and 2013 (version 2, Xbox One), but production was discontinued in 2017. However, Kinect technology was integrated for gesture control into the HoloLens (2016). While the Kinect failed to become a mainstream gaming controller, it was widely used for research and prototyping in the area of human-computer interaction.

The camera looks quite different from its earlier cousins.

In early 2022 we acquired the new Azure Kinect for the Interaction Engineering course, at a cost of around 750 € here in Germany.

Setting up the Kinect

The camera has two cables: a power supply and a USB connection to a PC. You have to download and install two software packages:

  • Azure Kinect SDK
  • Azure Kinect Body Tracking SDK

It feels a bit archaic because you need to run executables in the console. For instance, it is recommended to perform a firmware update on the sensor. To do this, go into the directory of the Azure Kinect SDK and call “AzureKinectFirmwareTool.exe -Update <path to firmware>”. The firmware is located in another directory of this package.

As a next step, go into the Azure Kinect Body Tracking SDK directory, where you can start the 3D viewer (in the /tools directory). It takes one parameter, so you cannot simply launch it from the file explorer. Type “k4abt_simple_3d_viewer.exe CPU” or “k4abt_simple_3d_viewer.exe CUDA” to start the viewer.

This is what you see (with the CPU version it is very slow).

Differences between Kinect versions

The new Kinect obviously improves on various aspects of the older ones. The two most relevant aspects are the field of view (how wide-angled the camera view is) and the number of skeleton joints that are reconstructed.

Feature             Kinect 1    Kinect 2    Kinect 3
Camera resolution   640×480     1920×1080   3840×2160
Depth camera        320×240     512×424     640×576 (narrow), 512×512 (wide)
Field of view       H: 57°      H: 70°      H: 75° (narrow), 120° (wide)
                    V: 43°      V: 60°      V: 65° (narrow), 120° (wide)
Skeleton joints     20          25          32
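To get a feel for what these field-of-view numbers mean in practice, here is a small back-of-the-envelope sketch (my own addition, simple pinhole geometry, not from the specs): the width of the scene covered at distance d is 2·d·tan(FOV/2).

```python
import math

def coverage_width(distance_m: float, fov_deg: float) -> float:
    """Width of the area covered at a given distance for a camera
    with the given horizontal field of view (pinhole model)."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# Horizontal coverage at 3 m distance for the three Kinect generations
for name, fov in [("Kinect 1", 57), ("Kinect 2", 70), ("Kinect 3 (wide)", 120)]:
    print(f"{name}: {coverage_width(3, fov):.2f} m")
```

At 3 m, this gives roughly 3.3 m of horizontal coverage for the Kinect 1 and over 10 m for the Kinect 3 in wide mode, which explains why the new camera can track full-body interaction in much smaller rooms.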

There is an open-access publication dedicated to the comparison between the three Kinects:

Michal Tölgyessy, Martin Dekan, Ľuboš Chovanec and Peter Hubinský (2021) Evaluation of the Azure Kinect and Its Comparison to Kinect V1 and Kinect V2. In: Sensors 21 (2). Download here.

Skeleton

Here is a schematic view of the joints that are recognized. In practice, it turns out that one has to pay special attention to the robustness of the signal for hands, feet and head orientation.

(Source: https://docs.microsoft.com/bs-latn-ba/azure/kinect-dk/body-joints)
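Since hand and foot joints tend to be noisy, a common and simple remedy is to smooth joint positions over time. The following sketch is my own illustration (not part of any Kinect SDK; the joint name and tuple format are hypothetical) of exponential smoothing:

```python
class JointSmoother:
    """Exponential smoothing for noisy 3D joint positions.
    alpha close to 1 follows the raw signal; close to 0 smooths heavily."""
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.state = {}  # joint name -> last smoothed (x, y, z)

    def update(self, joint: str, pos: tuple) -> tuple:
        prev = self.state.get(joint, pos)  # first frame: no smoothing yet
        smoothed = tuple(self.alpha * p + (1 - self.alpha) * q
                         for p, q in zip(pos, prev))
        self.state[joint] = smoothed
        return smoothed

s = JointSmoother(alpha=0.5)
s.update("HAND_RIGHT", (0.0, 0.0, 1.0))
print(s.update("HAND_RIGHT", (0.2, 0.0, 1.0)))  # halfway: (0.1, 0.0, 1.0)
```

The trade-off is latency: heavy smoothing makes fast hand gestures feel sluggish, so alpha has to be tuned per joint.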

To integrate the Kinect with a JavaScript program, e.g. using p5js, I recommend looking at the Kinectron project.

Links

Microsoft’s Azure Kinect product page

Azure Kinect documentation page

Course chapter on the Kinect (Interaction Engineering) in German

Wikipedia on the Kinect (very informative)

For developers

Kinectron (JavaScript, including p5js)

Azure Kinect DK Code Samples Repository

Azure Kinect Library for Node / Electron (JavaScript)

Azure Kinect for Python (Python 3)

Student project: .life (2015)

.life is an interactive media stage performance about digital traces and the loss of autonomy in the net. It is a 6th semester student project in the Interactive Media programme at Augsburg University of Applied Sciences (summer 2015).

Student team: Florian Lehmann, Sophie Kellner, David Marquardt, Nicolas Hofmair, Simon Abbt, Philipp Hoffmann, Karina Kraus, Marion Imhof

Supervision: Prof. Robert Rose, Prof. Dr. Michael Kipp

Making-of video

Complete performance

Abstract (translated from German)

In the media networks we lose our autonomy and become transparent citizens. Our dance performance .life visualizes this phenomenon with a dynamic stage set that the dancer generates live through his movements.

Every day we unknowingly leave data traces on the net and thus deliver ourselves to forces that we cannot control, indeed cannot even perceive.

Can our hero free himself from this entanglement, or will he perhaps himself become a perpetrator who abuses the data of others?

Downloads and Links

Student project: Whiteout (2014)

Whiteout is an interactive media stage performance inspired by Peter Pan. It is a 6th semester student project in the Interactive Media programme at Augsburg University of Applied Sciences (summer 2014).

Making-Of Video:

Complete performance:

“WHITEOUT – Disconnect to Neverland” is a media stage project realized in the summer semester of 2014 by a group of eight sixth-semester students of the Interactive Media programme. The team conceived, organized and finally performed an “interactive staging”.

Student team: Madita Herpich, Matthias Zeug, Iris Hefele, Lisa Wölfel, Carina Nusser, Elisabeth Hönig, Juleen Schurr, Barbara Gschwill

Supervision: Prof. Robert Rose, Prof. Dr. Michael Kipp

Final report (in German)

Website

Facebook

More videos

Student project: LOOMOX (2014)

LOOMOX is a playful interactive installation in the form of a window which allows contact, collaboration and community building. It is a 6th semester student project in the Interactive Media programme at Augsburg University of Applied Sciences (summer 2014).

LOOMOX is an interactive installation, created at the library of Augsburg University of Applied Sciences, that encourages playing together in a public place. Your LOOMOX is projected onto a rear-projection foil on the glass facade of the Augsburg university library. Thanks to a Kinect mounted there as well, it recognizes when you approach the screen and greets you cheerfully. You can then steer your LOOMOX with hand gestures through one of several games, which you play against the computer or another person. Your LOOMOX is especially happy if you take it home with you afterwards.

Student team: Jonathan Irschl, Sebastian Harter, Sebastian Antosch, Christian Baur, Lukas Fornaro, Patrick Schroeter, Christian Reichart, Susanne Rauchbart, Stephan Reichinger

Supervision: Prof. Daniel Rothaug, Prof. Dr. Michael Kipp

Final report (in German)

Website

Facebook

Student project: make a move (2013)

make a move is a serious game installation about spatial behavior in the context of a flirt. It is a 6th semester student project in the Interactive Media programme at Augsburg University of Applied Sciences (summer 2013).

MAKE A MOVE is a “serious game”. The goal of the game is to sensitize players to the nonverbal factors of human-to-human interaction (e.g. flirting) and thus train them in consciously using and perceiving these factors in everyday life.

The learning factor plays a decisive role here. However, it is wrapped in a playful and charming way, so the player acquires practice-relevant “interaction know-how” unconsciously and with fun, through gamification.

Student team: Stefan Brand, Dominik Frosch, Andi Brosche, Katrin Maier, Dominik Baumann, Christoph Ott, Markus Benndorff, Janne Müller

Supervision: Prof. Jens Müller, Prof. Dr. Michael Kipp

Final report (PDF, in German)

Website

Facebook

Newspaper article in the Augsburger Allgemeine (in German)

More videos

Student project: Reäktor (2012)

Reäktor is a media-enhanced dance performance. It is a 6th semester student project in the Interactive Media programme at Augsburg University of Applied Sciences (summer 2012).

“Light Visual Art Dance Performance”: What happens when you combine light, performance art, dance and modern technologies such as mapping and tracking? One answer is Reäktor, a dance performance conceived and developed at the Faculty of Design of Augsburg University of Applied Sciences that creates a symbiosis of real-time 3D tracking, abstract visuals, modern electronic music and classical dance. It aims to show that these different elements are by no means mutually exclusive.

The performance is based on a story about encounters between people, and how their encounters and touches leave traces in their lives and surroundings. These traces are represented by abstract plays of light and shadow, created experimentally using modern 3D tracking methods.

The piece is aimed at a culturally educated audience and media-art lovers with a particular interest in dance and new forms of performance. The ambitious group of eight students, supervised by Prof. Rose and Prof. Kipp of Augsburg University of Applied Sciences, has been working for half a year on the concept and realization of this dance performance, which will also feature dancers of the Augsburg theatre ensemble.

Student team: Matthias Lohscheidt, Christian Unterberg, Sven ten Pas, Thomas Ott, Benjamin Knöfler, Michael Klein, Dennis Schnurer, Florian Krapp

Supervision: Prof. Robert Rose, Prof. Dr. Michael Kipp

Interfaces with the Kinect

Walking in 3D

Here, the Kinect is used to navigate through Google Street View:

  • body orientation => rotate sideways
  • body tilt => rotate up/down
  • walking (on the spot) => move forward
  • jump => move forward by some distance
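A mapping like “body orientation => rotate sideways” can be sketched by computing the yaw of the shoulder line from two tracked joints. This is my own illustration (camera-space coordinates in metres are assumed), not the code of the original project:

```python
import math

def body_yaw(left_shoulder, right_shoulder):
    """Yaw angle (degrees) of the shoulder line in the horizontal
    (x, z) plane; 0 means the user squarely faces the camera,
    negative values mean the right shoulder is closer."""
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[2] - left_shoulder[2]
    return math.degrees(math.atan2(dz, dx))

# Facing the camera: both shoulders at the same depth
print(body_yaw((-0.2, 1.4, 2.0), (0.2, 1.4, 2.0)))            # 0.0
# Turned: right shoulder 0.4 m closer to the camera than the left
print(round(body_yaw((-0.2, 1.4, 2.2), (0.2, 1.4, 1.8)), 1))  # -45.0
```

The resulting angle can then be fed into the view rotation, typically after applying a dead zone so that small natural sway does not cause drift.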

Also see post on developkinect.com

In-Air Interaction with ARCADE

A combination of interaction techniques to create and control virtual 3D objects that are placed into the live video of the presenter.

  • selection/picking: hold hand over object
  • menu: browse with finger position, select with a swipe gesture
  • drawing: touching two fingers together switches to draw mode; a key press switches to 3D rotate mode
  • rotate/scale: rotate with finger up/down/left/right, scale with two fingers
  • delete: wave gesture over object
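The “hold hand over object” selection above is typically implemented as a dwell timer. Here is a minimal sketch of the idea (my own, independent of the actual ARCADE implementation):

```python
class DwellSelector:
    """Select an object when the hand stays over it for dwell_time seconds."""
    def __init__(self, dwell_time: float = 1.0):
        self.dwell_time = dwell_time
        self.target = None   # object currently hovered
        self.since = 0.0     # timestamp when hovering started

    def update(self, hovered, now: float):
        """hovered: id of the object under the hand (or None); now: timestamp.
        Returns the object's id once the dwell time is reached, else None."""
        if hovered != self.target:
            self.target, self.since = hovered, now  # restart the timer
            return None
        if hovered is not None and now - self.since >= self.dwell_time:
            self.target = None   # reset so we fire only once per dwell
            return hovered
        return None

sel = DwellSelector(dwell_time=1.0)
print(sel.update("cube", 0.0))   # None (timer starts)
print(sel.update("cube", 0.5))   # None (still dwelling)
print(sel.update("cube", 1.2))   # cube (selected)
```

Dwell selection trades speed for robustness: it avoids accidental picks while the hand merely passes over an object.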

Also see post on developkinect.com

Gesture Recognition: Kinetic Space 2.0

The toolkit allows you to define your own 3D gestures.

Also see post on developkinect.com

Processing and Kinect: Resources

Java programmers can use the Kinect quite comfortably via the Processing language. On codasign there are a number of articles that will quickly teach you how to do this. They are based on the SimpleOpenNI package.

The Kinect has become so popular because it can track a person in space by inferring a “skeleton” in 3D space (using a depth image). This means that the human user is not only detected in space but that their rough body structure is reconstructed, so that the system knows where certain key body parts (hands, feet, hips, …) are located in space.

This can be used to react to movement in space (approaching, retreating…), body orientation (facing the system or not …), hand gestures (wave, swipe, cross …) and even body posture (leaning over, sitting …).
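Reacting to approaching or retreating, for example, boils down to watching the change of a torso joint's depth value over a short time window. A minimal sketch (my own, with a hand-picked threshold):

```python
def movement_direction(z_history, threshold=0.05):
    """Classify movement from a short history of torso depth values
    (metres, camera coordinates). A decreasing depth means the user
    is moving towards the camera."""
    if len(z_history) < 2:
        return "unknown"
    delta = z_history[-1] - z_history[0]
    if delta < -threshold:
        return "approaching"
    if delta > threshold:
        return "retreating"
    return "standing"

print(movement_direction([2.5, 2.4, 2.3]))   # approaching
print(movement_direction([2.3, 2.3, 2.31]))  # standing
```

The threshold acts as a dead zone against sensor noise; a real installation would tune it to the frame rate and window length.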

On the following linked pages, you can learn how to set up the Kinect with Processing and how to obtain skeleton information in 3D space. Gesture detection is yet another topic.

Installing OpenNi for Processing: shows you how to get started.

Using the Kinect with Processing: Overview of Kinect-related pages in codasign.

Skeleton Tracking with the Kinect: Explains how to access the skeleton information.

Getting Joint Position in 3D Space from the Kinect: This is interesting because it not only shows how to retrieve joint locations but also how to easily obtain the projection onto screen coordinates.
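That projection can be approximated with a simple pinhole model, deriving the focal length from the field of view. This is my own sketch using the Kinect 1 geometry from the table above, not the SimpleOpenNI implementation:

```python
import math

def project(joint, img_w=640, img_h=480, fov_h=57.0, fov_v=43.0):
    """Project a 3D joint (x, y, z in metres, camera coordinates) onto
    pixel coordinates using a pinhole model with Kinect 1 geometry."""
    x, y, z = joint
    # focal lengths in pixels, derived from the field of view
    fx = (img_w / 2) / math.tan(math.radians(fov_h) / 2)
    fy = (img_h / 2) / math.tan(math.radians(fov_v) / 2)
    u = img_w / 2 + fx * x / z
    v = img_h / 2 - fy * y / z   # image y axis points down
    return u, v

# A joint straight ahead of the camera lands in the image centre
print(project((0.0, 0.0, 2.0)))  # (320.0, 240.0)
```

This is handy for drawing skeleton overlays on the RGB image; SimpleOpenNI's own conversion additionally accounts for lens distortion and the calibration between depth and colour camera.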

Reference for Simple-OpenNI and the Kinect: Some potentially interesting information about the SimpleOpenNI class.

Copyright © 2022 Michael Kipp's Blog
