Last update: 21 July 2025
Also see my blog post about the new Kinect Azure (Kinect v3).
The Kinect is a camera system that can recognize and track a human body and outputs data points in the form of a "skeleton": a number of connected joints. The Kinect combines infrared emitters and infrared cameras with a software package to perform this task.
Today, the capabilities of the Kinect can often be matched by a simple webcam in conjunction with powerful software, usually based on machine learning. To learn about such Kinect alternatives, have a look at the chapter Camera-based Interaction.
For implementing gestural interaction, consider the software Wekinator.
Introduction
This chapter deals with programming the Kinect in Processing on Windows or macOS.
The most important code snippets can be found on GitHub:
- for Kinect v1 under Kinect1Processing
- for Kinect v2 under Kinect2Processing
Student Projects
To see some of the possibilities of designing interaction with the Kinect, have a look at some Kinect-based projects in my Interaction Engineering course:
- Endless Runner: Jump'n Run game projected on the floor
- Who Am I: Interaction in a museum
- TrampTroller: A Trampoline controller
- NINGU: The Ninja Way of presenting slides
- Occlusion-Aware Presenter: A solution for the occlusion problem when presenting
- KinectScroll: Gesture-based scrolling
Links and Alternatives
- Kinectron is a project that enables streaming data from the Kinect 2 (even from multiple devices simultaneously) into the browser using JavaScript.
- For Java check out the library J4K, University of Florida.
About the Kinect
The Kinect sensor can track the human body in 3D space as a skeleton, making it suitable for applications like gesture control. It works by projecting an infrared pattern into the room and capturing it with an infrared camera. By analyzing the distortion of the pattern, the Kinect generates a depth map. This depth data is then used to identify the presence and position of human bodies, especially the joints.
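The depth computation rests on triangulation: the farther away a surface is, the smaller the apparent shift (disparity) of the projected pattern between emitter and camera. A rough illustration of the underlying relation, with hypothetical numbers rather than the Kinect's actual calibration values:

```java
public class DepthFromDisparity {
    // Classic triangulation: depth = baseline * focalLength / disparity.
    // baselineMm: distance between IR emitter and IR camera in millimeters,
    // focalPx: focal length in pixels, disparityPx: measured pixel shift.
    static double depthMm(double baselineMm, double focalPx, double disparityPx) {
        return baselineMm * focalPx / disparityPx;
    }

    public static void main(String[] args) {
        // Hypothetical values roughly in the Kinect 1's range:
        // 75 mm baseline, 580 px focal length, 20 px disparity.
        System.out.println(depthMm(75, 580, 20) + " mm");
        // A larger disparity means the object is closer.
        System.out.println(depthMm(75, 580, 40) + " mm");
    }
}
```

This is only the geometric core; the real pipeline also calibrates, filters, and interpolates the raw measurements.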
The image below gives you an idea of the type of data Kinect can capture.

Three Versions
The Kinect was developed by Microsoft for the Xbox 360 gaming console and released in 2010. We refer to this first version as Kinect 1. It is possible to use two Kinect devices on a single computer, as long as it has two separate USB controllers.

Since 2013, a new version has been available: the Kinect One. Online, it is also referred to as Kinect 2 or Kinect v2. A key technical difference: Kinect 2 requires a USB 3.0 connection and does not support using two Kinect 2 devices on a single computer.

In 2019 a third version called Azure Kinect DK was released by Microsoft. See my blog post about the new Kinect Azure (Kinect v3).
Kinect 2
To use the Kinect 2, your computer must have a USB 3 port (the newer USB-C also works with an adapter; USB 2 definitely does not). Unfortunately, skeleton tracking (joints) is only supported on Windows. On a Mac, however, you can still access the raw data (webcam image, IR image, and — most interestingly — the depth map).

Kinect 1+2 on Mac
Mac users should check out the Open Kinect for Processing library by Daniel Shiffman and Thomas Lengeling. As mentioned, skeleton data is currently not available on a Mac. The library supports both Kinect 1 and Kinect 2.
Kinect 2 on Windows
Make sure your computer has a USB 3 port.
First, install the Kinect SDK from Microsoft: Kinect for Windows SDK 2.0. After installation, you can launch Kinect Studio to test your Kinect 2.
Next, start Processing 3 and install the KinectPV2 library by Thomas Lengeling via Sketch > Import Library... as usual.
Important (as of 12/2017): The code shown here currently does not work. Instead, open the sample program "SkeletonColor", which comes bundled with the library.
You can find "SkeletonColor" under File > Examples..., and in the window that appears, go to Contributed Libraries > Kinect v2 for Processing.
On the KinectPV2 website, you'll find a demo video and a version history. In an English tutorial, the author lists all features of the library along with code examples. The library’s source code is available on the KinectPV2 GitHub page.
Overview
With the library, you can do the following, among other things:
- Capture camera images
- Perform skeleton tracking
- Perform face detection
In the following, we will take a closer look at these features:
Capture Camera Images
The Kinect includes a regular webcam (1080p) and an infrared camera (512x424 pixels, 30 Hz). You can access and display the images from both cameras.
You can also access the computed depth image (512x424). A depth image provides a “depth” value for each pixel. You can imagine a ray being cast from the camera through each pixel; as soon as the ray hits an obstacle, the depth at that point is recorded. A depth image is usually displayed as a grayscale image: the brighter a pixel, the closer the corresponding object.
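This grayscale mapping can be sketched in isolation. The helper below is my own illustration, not a library function; it assumes depth values in millimeters within the Kinect 2's nominal working range of roughly 500 to 4500 mm:

```java
public class DepthToGray {
    // Map a raw depth value (mm) to a brightness 0..255:
    // nearer objects become brighter, as in the depth-image display.
    static int brightness(int depthMm, int minMm, int maxMm) {
        if (depthMm <= 0) return 0; // 0 means "no reading" for this pixel
        int clamped = Math.max(minMm, Math.min(maxMm, depthMm));
        return 255 - (clamped - minMm) * 255 / (maxMm - minMm);
    }

    public static void main(String[] args) {
        System.out.println(brightness(500, 500, 4500));  // closest: 255
        System.out.println(brightness(4500, 500, 4500)); // farthest: 0
    }
}
```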
Detect Skeleton Data
In computer graphics, a skeleton refers to a structure made up of joint points (also called joints) and bones. The joints are the more important part, as the position and angles of the joints define a person’s pose.
Skeletons can vary in complexity, with more or fewer joints (e.g., the spine might be represented with 3 or 15 points). The Kinect uses a fixed set of 25 joint points.
Here, "skeleton" refers exclusively to human skeletons. The Kinect cannot detect animals. In every depth image, it tries to detect one or more human skeletons and correctly position the 25 joints. It can detect and track up to 6 skeletons simultaneously.
Additionally, it performs basic hand shape detection: open, closed, and lasso. However, the reliability of this information is limited.
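One common way to work around this unreliability is to smooth the raw readings over several frames, for example with a majority vote. A minimal sketch (my own, not part of any Kinect library):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class HandStateFilter {
    private final int windowSize;
    private final Deque<String> window = new ArrayDeque<>();

    HandStateFilter(int windowSize) { this.windowSize = windowSize; }

    // Feed the raw per-frame state ("open", "closed", "lasso") and
    // return the state seen most often in the last windowSize frames.
    String update(String rawState) {
        window.addLast(rawState);
        if (window.size() > windowSize) window.removeFirst();
        Map<String, Integer> counts = new HashMap<>();
        for (String s : window) counts.merge(s, 1, Integer::sum);
        String best = rawState;
        int bestCount = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) { best = e.getKey(); bestCount = e.getValue(); }
        }
        return best;
    }

    public static void main(String[] args) {
        HandStateFilter filter = new HandStateFilter(5);
        String[] raw = {"open", "open", "closed", "open", "open"}; // one noisy frame
        String result = "";
        for (String s : raw) result = filter.update(s);
        System.out.println(result); // the single flicker to "closed" is voted away
    }
}
```

The price of the smoothing is latency: with a window of 5 frames at 30 Hz, a genuine state change needs up to about 100 ms to win the vote.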
The quality of the detection strongly depends on the user’s position relative to the Kinect. The user must be within the Kinect’s capture volume (distance and angle). Detection works best when the user is facing the Kinect and their arms are not obstructed. Poor results occur when standing sideways, crossing arms, or placing arms behind the back. Detection also struggles when the user is facing away from the Kinect — even if the arms are clearly visible.
Face Recognition
The Kinect can detect faces of up to 6 users simultaneously. For each detected face, the following information is available:
- Region (bounding rectangle)
- 5 key points: eyes, nose, mouth corners
- State: happy, engaged, left/right eye closed, looking away, mouth moved, mouth open, wearing glasses
- Wireframe (mesh / vertex points)
Skeleton
Here is an example of a tracked skeleton:

Generated by the following code:
import KinectPV2.KJoint;
import KinectPV2.*;

KinectPV2 kinect;

float zVal = 300;
float rotX = PI;
boolean showVideo = true;

void setup() {
  size(1024, 768, P3D);
  kinect = new KinectPV2(this);
  kinect.enableColorImg(true);
  kinect.enableSkeleton3DMap(true);
  kinect.init();
}

void draw() {
  background(255);
  if (showVideo) {
    image(kinect.getColorImage(), 0, 0, 320, 240);
  }
  // translate the scene to the center
  translate(width/2, height/2, 0);
  scale(zVal);
  rotateX(rotX);
  ArrayList skeletonArray = kinect.getSkeleton3d();
  for (int i = 0; i < skeletonArray.size(); i++) {
    KSkeleton skeleton = (KSkeleton) skeletonArray.get(i);
    if (skeleton.isTracked()) {
      KJoint[] joints = skeleton.getJoints();
      stroke(skeleton.getIndexColor());
      drawSkeleton(joints);
    }
  }
}

void drawSkeleton(KJoint[] joints) {
  // Head and Spine
  showJoint(joints[KinectPV2.JointType_Head]);
  showJoint(joints[KinectPV2.JointType_Neck]);
  showJoint(joints[KinectPV2.JointType_SpineShoulder]);
  showJoint(joints[KinectPV2.JointType_SpineMid]);
  showJoint(joints[KinectPV2.JointType_SpineBase]);
  // Right Arm
  showJoint(joints[KinectPV2.JointType_ShoulderRight]);
  showJoint(joints[KinectPV2.JointType_ElbowRight]);
  showJoint(joints[KinectPV2.JointType_WristRight]);
  showJoint(joints[KinectPV2.JointType_HandRight]);
  showJoint(joints[KinectPV2.JointType_HandTipRight]);
  showJoint(joints[KinectPV2.JointType_ThumbRight]);
  // Left Arm
  showJoint(joints[KinectPV2.JointType_ShoulderLeft]);
  showJoint(joints[KinectPV2.JointType_ElbowLeft]);
  showJoint(joints[KinectPV2.JointType_WristLeft]);
  showJoint(joints[KinectPV2.JointType_HandLeft]);
  showJoint(joints[KinectPV2.JointType_HandTipLeft]);
  showJoint(joints[KinectPV2.JointType_ThumbLeft]);
  // Right Leg
  showJoint(joints[KinectPV2.JointType_HipRight]);
  showJoint(joints[KinectPV2.JointType_KneeRight]);
  showJoint(joints[KinectPV2.JointType_AnkleRight]);
  showJoint(joints[KinectPV2.JointType_FootRight]);
  // Left Leg
  showJoint(joints[KinectPV2.JointType_HipLeft]);
  showJoint(joints[KinectPV2.JointType_KneeLeft]);
  showJoint(joints[KinectPV2.JointType_AnkleLeft]);
  showJoint(joints[KinectPV2.JointType_FootLeft]);
}

void showJoint(KJoint joint) {
  strokeWeight(10);
  point(joint.getX(), joint.getY(), joint.getZ());
}

void keyPressed() {
  if (key == ' ') {
    showVideo = !showVideo;
  }
}
Recording Motion
On Kinect2Processing (GitHub), you'll find the sketch "KinectRecorder", which allows you to record motion.
Recording means that in each "frame" (typically 1/60th of a second), a skeleton pose is stored in a list. A skeleton pose consists of the positions (3D points) of all the joints.
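The scheme amounts to a list of per-frame poses, each pose holding one 3D point per joint. A minimal sketch of such a data structure (hypothetical class and method names; the actual KinectRecorder sketch may be organized differently):

```java
import java.util.ArrayList;
import java.util.List;

public class PoseRecorder {
    static final int JOINT_COUNT = 25; // the Kinect 2 tracks 25 joints

    // One recording = list of frames; one frame = 25 joints x (x, y, z).
    private final List<float[][]> frames = new ArrayList<>();

    // Called once per draw() with the current joint positions.
    void record(float[][] jointPositions) {
        // Copy the values so later frames do not overwrite stored ones.
        float[][] copy = new float[JOINT_COUNT][3];
        for (int j = 0; j < JOINT_COUNT; j++)
            System.arraycopy(jointPositions[j], 0, copy[j], 0, 3);
        frames.add(copy);
    }

    int frameCount() { return frames.size(); }
    float[][] frame(int i) { return frames.get(i); }

    public static void main(String[] args) {
        PoseRecorder rec = new PoseRecorder();
        float[][] pose = new float[JOINT_COUNT][3];
        pose[0][1] = 1.7f; // e.g. head joint at y = 1.7 m
        rec.record(pose);
        pose[0][1] = 1.6f; // mutating the source must not affect the stored frame
        System.out.println(rec.frame(0)[0][1]); // still 1.7
    }
}
```

The defensive copy in record() is the important detail: reusing the same joint array across frames without copying would silently turn every stored pose into the latest one.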

Kinect 1
This section focuses on skeleton tracking with Kinect 1. The techniques used here work on both Mac and Windows using Processing version 2 and the SimpleOpenNI library.
The code for this section is also available on Kinect1Processing (GitHub).

There is another library called Open Kinect for Processing by Daniel Shiffman, but it does not support skeleton tracking.
Setting Up OpenNI
We use the OpenNI package here. In Processing, this is available through the SimpleOpenNI library.
Installation
Windows users must install the Kinect for Windows SDK v1.8 from Microsoft (download page). Be careful not to install v2.
We need to use the older Processing 2 version to work with the SimpleOpenNI library. If you normally use Processing 3, you should additionally install Processing 2.1.1. Check the Sketchbook folder location under “Preferences” (on Mac: Documents/Processing). Open this folder in your file explorer or Finder — you’ll find a subfolder called libraries. This is where the SimpleOpenNI library must be placed.
Download the ZIP file “SimpleOpenNI-1.96.zip” from the project’s download page. Unzip it and move the “SimpleOpenNI” folder into the libraries folder inside your Sketchbook directory (see above).
Now you're ready for the first test.
First Test
Connect the Kinect (power and USB to your computer) and launch Processing 2.
Load a sample program by selecting File > Examples.... A window with folder names will appear. Navigate to Contributed Libraries > SimpleOpenNI > OpenNI and open DepthImage (double-click).
This program displays the depth image next to the RGB camera image.
Below is a snippet of the program (by Max Rheiner):
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup()
{
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    println("Can't init SimpleOpenNI");
    exit();
    return;
  }
  // mirror is by default enabled
  context.setMirror(true);
  // enable depthMap generation
  context.enableDepth();
  // enable rgb image generation
  context.enableRGB();
}

void draw()
{
  // update the cam
  context.update();
  background(200, 0, 0);
  // draw depthImageMap
  image(context.depthImage(), 0, 0);
  // draw rgbImageMap
  image(context.rgbImage(), context.depthWidth() + 10, 0);
}
Constructing the SimpleOpenNI object and enabling the image streams in setup(), together with calling context.update() in draw(), is the minimal setup required to run the Kinect.
After starting the program, you'll see the depth image as a monochrome display. You'll also see the webcam image (not shown in the screenshot).

Class SimpleOpenNI
The SimpleOpenNI class provides the basic functionality for controlling the camera and image processing. The Kinect contains two cameras: an infrared (IR) camera and a standard RGB webcam.
As shown above, you declare a SimpleOpenNI object as a global variable, instantiate it in setup(), and call its update() method inside draw().
SimpleOpenNI sees the world from the Kinect’s perspective, which means the resulting image appears mirrored to you. To correct this, use:
context.setMirror(true);
Webcam (RGB)
To view the webcam image, you need to enable it:
context.enableRGB();
You can use rgbHeight() and rgbWidth() to match the display size precisely to the webcam resolution.
To display the image, draw the current frame to the screen in draw():
image(context.rgbImage(), 0, 0);
Depth Image
To enable the depth image, use the following:
context.enableDepth();
context.depthWidth();  // width of the depth image in pixels
context.depthHeight(); // height of the depth image in pixels
To draw the depth image in draw():
image(context.depthImage(), 0, 0);
To display the depth image with a different color tone:
context.setDepthImageColor(100, 150, 200); // blueish tone...
Example: User3D
To see a combination of different visualizations, launch the User3D example:

You will see a depth image, the skeleton, and the Kinect represented as a box. You can draw the Kinect with:
context.drawCamFrustum();
Detecting and Tracking a User Skeleton
First, enable user tracking in setup():
context.enableUser();
Then prepare to draw the skeleton and set the window size to match the depth image resolution:
void setup()
{
  // instantiate a new context
  context = new SimpleOpenNI(this);
  // enable depthMap generation
  context.enableDepth();
  // enable skeleton generation for all joints
  context.enableUser();
  background(200, 0, 0);
  stroke(0, 0, 255);
  strokeWeight(3);
  smooth();
  // create a window the size of the depth information
  size(context.depthWidth(), context.depthHeight());
}
Each detected person is assigned an ID between 1 and 10.
In each call to draw(), you can check whether an ID has been assigned — and thus whether a person with that ID is currently being tracked.
// for all users from 1 to 10
int i;
for (i=1; i<=10; i++)
{
  // check if the skeleton is being tracked
  if (context.isTrackingSkeleton(i))
  {
    drawSkeleton(i);
  }
}
Now we need a function to draw the skeleton for the user with ID i:
void drawSkeleton(int userId)
{
}
Drawing the Skeleton
OpenNI detects 15 different joints. Each joint has an ID stored in a constant. These include:
SimpleOpenNI.SKEL_HEAD
SimpleOpenNI.SKEL_NECK
SimpleOpenNI.SKEL_LEFT_SHOULDER
SimpleOpenNI.SKEL_LEFT_ELBOW
SimpleOpenNI.SKEL_LEFT_HAND
SimpleOpenNI.SKEL_RIGHT_SHOULDER
SimpleOpenNI.SKEL_RIGHT_ELBOW
SimpleOpenNI.SKEL_RIGHT_HAND
SimpleOpenNI.SKEL_TORSO
SimpleOpenNI.SKEL_LEFT_HIP
SimpleOpenNI.SKEL_LEFT_KNEE
SimpleOpenNI.SKEL_LEFT_FOOT
SimpleOpenNI.SKEL_RIGHT_HIP
SimpleOpenNI.SKEL_RIGHT_KNEE
SimpleOpenNI.SKEL_RIGHT_FOOT
A dedicated function is even provided to draw a “bone” — that is, the line between two joints:
context.drawLimb(USER_ID, JOINT_1, JOINT_2);
To draw the entire skeleton, we write:
void drawSkeleton(int userId)
{
  context.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
}
Tracking Events
Events are triggered when a person enters or leaves the visible area.
void onNewUser(SimpleOpenNI curContext, int userId)
{
  println("New User Detected - userId: " + userId);
  context.startTrackingSkeleton(userId);
}

void onLostUser(int userId)
{
  println("User Lost - userId: " + userId);
}
Individual Joints in 3D
Above, we used the built-in drawLimb() function to render the skeleton.
If you want to recognize gestures, you’ll need to know — for each frame — where specific joints are located, such as the left hand. The left hand is represented as a joint in the skeleton. Here’s how to get joint positions.
Joint Position
You can get the position of a joint using the getJointPositionSkeleton method of the Context class.
You pass in a vector, which will be filled with the joint’s position:
context.getJointPositionSkeleton(USER_ID, JOINT_ID, VECTOR_FOR_POSITION);
For example:
PVector pos = new PVector(); // vector to hold the joint position
context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, pos);
println(pos); // print the position
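Once joint positions are available each frame, simple gestures reduce to coordinate comparisons. For instance, "hand raised above head" is just a y-comparison, assuming real-world coordinates where y increases upwards. A hypothetical helper (not part of SimpleOpenNI), with a threshold to avoid flicker near the boundary:

```java
public class GestureCheck {
    // True if the hand joint is above the head joint by more than
    // thresholdMm. Assumes real-world coordinates (millimeters, y up).
    static boolean handRaised(float handY, float headY, float thresholdMm) {
        return handY > headY + thresholdMm;
    }

    public static void main(String[] args) {
        System.out.println(handRaised(1900, 1700, 100)); // 200 mm above head: true
        System.out.println(handRaised(1750, 1700, 100)); // within threshold: false
    }
}
```

Note that in the projective (screen) coordinates introduced below, y increases downwards, so the comparison would flip.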
Projecting to 2D Screen Coordinates
The code is also available on Kinect1Processing (GitHub).
If you want to control 2D elements on the screen using the skeleton,
or draw markers for the head or hands, you’ll need the projected 2D coordinates of a joint.
Use the function:
context.convertRealWorldToProjective(VECTOR_3D_POSITION, VECTOR_2D_PROJECTION);
Again, prepare a vector to hold the result:
PVector projPos = new PVector();
context.convertRealWorldToProjective(pos, projPos);
Now you can draw a circle at the joint’s 2D position — e.g., for the head:
float size = 100;
fill(255, 0, 0);
ellipse(projPos.x, projPos.y, size, size);
The circle keeps the same size, even if the user moves closer or farther away.
To adapt the size to perspective, use the z component of projPos:
float distScale = 500 / projPos.z;
ellipse(projPos.x, projPos.y, distScale*size, distScale*size);
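The scaling works because projected size falls off inversely with distance: at twice the distance, an object appears half as large. The constant 500 simply fixes the reference distance at which the scale is 1. The formula in isolation:

```java
public class DistanceScale {
    // Same formula as in the sketch above: scale = 500 / z,
    // where z is the depth component of the projected position.
    static float distScale(float z) {
        return 500f / z;
    }

    public static void main(String[] args) {
        System.out.println(distScale(500));  // at the reference distance: 1.0
        System.out.println(distScale(1000)); // twice as far away: 0.5
    }
}
```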
Below is a complete program that lets you color any joint you want:

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup()
{
  context = new SimpleOpenNI(this);
  context.enableDepth();   // enable depth image
  context.enableUser();    // enable skeleton detection
  context.setMirror(true); // currently not working
  stroke(0);
  strokeWeight(2);
  size(context.depthWidth(), context.depthHeight());
}

void draw()
{
  background(200);
  context.update();
  for (int i=1; i<=10; i++)
  {
    if (context.isTrackingSkeleton(i))
    {
      drawSkeleton(i);
      highlightJoint(i, SimpleOpenNI.SKEL_HEAD, color(#FFBE43), 250);
      highlightJoint(i, SimpleOpenNI.SKEL_RIGHT_HAND, color(#3EFF45), 100);
      highlightJoint(i, SimpleOpenNI.SKEL_LEFT_HAND, color(#FF3C1A), 100);
    }
  }
}

void highlightJoint(int userId, int jointID, color col, float size)
{
  // get 3D position of a joint
  PVector jointPos = new PVector();
  context.getJointPositionSkeleton(userId, jointID, jointPos);
  // convert real world point to projective space
  PVector jointPos_Proj = new PVector();
  context.convertRealWorldToProjective(jointPos, jointPos_Proj);
  // create a distance scalar related to the depth (z dimension)
  float distanceScalar = (500 / jointPos_Proj.z);
  // set the fill colour
  fill(col);
  // draw the circle at the joint position, scaled by the distance scalar
  ellipse(jointPos_Proj.x, jointPos_Proj.y, distanceScalar*size, distanceScalar*size);
}
// draw the skeleton with the selected joints
void drawSkeleton(int userId)
{
  context.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
}

// Event-based Methods

// when a person ('user') enters the field of view
void onNewUser(SimpleOpenNI curContext, int userId)
{
  println("New User Detected - userId: " + userId);
  context.startTrackingSkeleton(userId);
}

// when a person ('user') leaves the field of view
void onLostUser(int userId)
{
  println("User Lost - userId: " + userId);
}
Interaction
We use the program from above to interact with our interactive 2D elements:
When a hand touches a 2D object on the screen, the element is selected.
The code is also available on Kinect1Processing (GitHub).

Main program:
import SimpleOpenNI.*;
import java.util.*;

List<InteractiveThing> things = new ArrayList<InteractiveThing>();
List<PVector> points = new ArrayList<PVector>();

SimpleOpenNI context;

void setup()
{
  context = new SimpleOpenNI(this);
  context.enableDepth();   // enable depth image
  context.enableUser();    // enable skeleton detection
  context.setMirror(true); // currently not working
  size(context.depthWidth(), context.depthHeight());

  things.add(new InteractiveRect(50, 50, 140, 100));
  things.add(new InteractiveRect(50, 300, 140, 100));
  things.add(new InteractiveRect(450, 50, 140, 100));
  things.add(new InteractiveRect(450, 300, 140, 100));
  things.add(new InteractiveCircle(150, 240, 100));
  things.add(new InteractiveCircle(520, 250, 80));
}

void draw()
{
  background(200);
  context.update();
  for (int i=1; i<=10; i++)
  {
    if (context.isTrackingSkeleton(i))
    {
      points.clear();
      drawSkeleton(i);
      highlightJoint(i, SimpleOpenNI.SKEL_HEAD, color(#FFBE43), 250);
      points.add(highlightJoint(i, SimpleOpenNI.SKEL_LEFT_HAND, color(#FF3C1A), 100));
      points.add(highlightJoint(i, SimpleOpenNI.SKEL_RIGHT_HAND, color(#3EFF45), 100));
      for (InteractiveThing thing : things) {
        thing.update(points);
      }
    }
  }
  // draw all objects
  for (InteractiveThing thing : things) {
    thing.draw();
  }
}

PVector highlightJoint(int userId, int jointID, color col, float size)
{
  stroke(0);
  strokeWeight(2);
  // get 3D position of a joint
  PVector jointPos = new PVector();
  context.getJointPositionSkeleton(userId, jointID, jointPos);
  // convert real world point to projective space
  PVector jointPos_Proj = new PVector();
  context.convertRealWorldToProjective(jointPos, jointPos_Proj);
  // create a distance scalar related to the depth (z dimension)
  float distanceScalar = (500 / jointPos_Proj.z);
  // set the fill colour
  fill(col);
  println("ellipse: " + jointPos_Proj.x + " " + jointPos_Proj.y);
  // draw the circle at the joint position, scaled by the distance scalar
  ellipse(jointPos_Proj.x, jointPos_Proj.y, distanceScalar*size, distanceScalar*size);
  return jointPos_Proj;
}

// draw the skeleton with the selected joints
void drawSkeleton(int userId)
{
  stroke(0);
  strokeWeight(2);
  context.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
}

// Event-based Methods

// when a person ('user') enters the field of view
void onNewUser(SimpleOpenNI curContext, int userId)
{
  println("New User Detected - userId: " + userId);
  context.startTrackingSkeleton(userId);
}

// when a person ('user') leaves the field of view
void onLostUser(int userId)
{
  println("User Lost - userId: " + userId);
}
Our InteractiveThing file has been slightly modified:
The objects now respond to a list of points, since an object should be touchable with either the right or the left hand.
interface InteractiveThing {
  void draw();
  void update(List<PVector> points);
}
class InteractiveRect implements InteractiveThing {
  int rx = 0;
  int ry = 0;
  int rwidth;
  int rheight;
  boolean selected = false;

  InteractiveRect(int x, int y, int w, int h) {
    rx = x;
    ry = y;
    rwidth = w;
    rheight = h;
  }

  void draw() {
    noStroke();
    if (selected)
      fill(255, 0, 0);
    else
      fill(100);
    rect(rx, ry, rwidth, rheight);
  }

  void update(List<PVector> points) {
    selected = false;
    for (PVector v : points) {
      if (rx <= v.x && v.x <= rx + rwidth &&
          ry <= v.y && v.y <= ry + rheight)
        selected = true;
    }
  }
}
class InteractiveCircle implements InteractiveThing {
  int cx = 0;
  int cy = 0;
  int diameter;
  boolean selected = false;

  InteractiveCircle(int x, int y, int d) {
    cx = x;
    cy = y;
    diameter = d;
  }

  void draw() {
    noStroke();
    if (selected)
      fill(255, 0, 0);
    else
      fill(255);
    ellipse(cx, cy, diameter, diameter);
  }

  void update(List<PVector> points) {
    selected = false;
    for (PVector v : points) {
      if (dist(v.x, v.y, cx, cy) < diameter/2)
        selected = true;
    }
  }
}
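Since the hit tests are plain geometry, they can be verified without a Kinect. A standalone version of the same logic (plain floats replace Processing's PVector):

```java
public class HitTest {
    // Same containment logic as InteractiveRect.update():
    // inside if the point lies within the axis-aligned rectangle.
    static boolean inRect(float px, float py, float rx, float ry, float w, float h) {
        return rx <= px && px <= rx + w && ry <= py && py <= ry + h;
    }

    // Same containment logic as InteractiveCircle.update():
    // inside if the distance to the center is below the radius.
    static boolean inCircle(float px, float py, float cx, float cy, float diameter) {
        float dx = px - cx, dy = py - cy;
        return Math.sqrt(dx * dx + dy * dy) < diameter / 2;
    }

    public static void main(String[] args) {
        System.out.println(inRect(60, 60, 50, 50, 140, 100));  // inside: true
        System.out.println(inCircle(150, 240, 150, 240, 100)); // at center: true
        System.out.println(inCircle(250, 240, 150, 240, 100)); // outside: false
    }
}
```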
Recording Motion
On Kinect1Processing (GitHub), you'll find code to record motion.
Recording means that in each "frame" (typically 1/60th of a second), a skeleton pose is stored in a list. A skeleton pose consists of all the positions (3D points) of the joints.

