Exploring the Kinect Studio v2


The Microsoft Kinect for Windows (K4W) team has done it again. They have released some new beta software and an SDK to go along with the new Kinect v2 device.

Note: This is based on preliminary software and/or hardware, subject to change.

In their most recent update to the Kinect v2 SDK (preview 1403), members of the Developer Preview program can check out the new Kinect Studio v2. What’s nice about this release is that Microsoft focused the majority of its efforts on implementing the much-anticipated Kinect Studio application for the Kinect v2 device.

Introduction

This post is about the capabilities of Kinect Studio for the Kinect v2 device and how the application works. It also discusses potential usage patterns and gives quick step-by-step instructions on how to use it with a custom Kinect v2 based application. If this sounds interesting, please read on.

Kinect Studio v2 allows a developer, tester, or enthusiast to test a custom Kinect v2 based application against multiple recorded samples. It also allows a developer to view the data that the Kinect v2 device sees on a per-pixel basis for a particular frame. For a quick snapshot, see the figure below.

[screenshot]

Capabilities of Kinect Studio v2

Let’s break down the current capabilities:

  • Record a sample clip of data from the Kinect v2 device covering:
    • Color, depth, IR, long IR exposure, body frame, body index, computer system info, system audio, camera settings, camera calibration
  • Play back a recorded sample clip of data covering:
    • Color, depth, IR, long IR exposure, body frame, body index, computer system info, system audio, camera settings, camera calibration
  • Play data from a live stream directly from a connected Kinect v2 device
  • View 3-D coordinates and data from recorded and played-back sample clips
    • Zoom, twist, and turn in 3-D space

[screenshot]

  • View 2-D coordinates and data from recorded and played-back sample clips
    • Zoom in
  • See different viewpoints:
    • Kinect View
    • Orientation Cube
    • Floor Plane (where the floor resides in the perspective view)
  • See Depth data through different point cloud representations:
    • Color Point, Grey Point
  • See Depth data through textures and different color shades (RGB and greyscale)

[screenshot]

  • See infrared data and values:
    • At a particular pixel (x, y) coordinate (see the code sketch after this list)

[screenshot]

    • See the data through a grey color scale
  • Open sample clips from a file
  • Open and connect to sample clips from a repository (network share)
  • See Frame information:
    • Frame #, Start Time, Duration

    [screenshot]

  • Zoom in on a particular frame
  • Choose which streams to record

[screenshot]
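
One handy way to use these per-pixel readouts is to cross-check them against what your own code sees. Below is a minimal native C++ sketch of how a custom application could read the infrared value at a single pixel for comparison with the value Kinect Studio displays. It assumes the Kinect for Windows SDK 2.0 native headers (Kinect.h, kinect20.lib); names may differ slightly in the 1403 preview, and the pixel coordinates are just an example.

    // Sketch: read the IR value at one pixel, to compare with the per-pixel
    // values Kinect Studio v2 displays. Assumes the Kinect for Windows SDK 2.0
    // native API; names may differ in the 1403 preview.
    #include <windows.h>
    #include <Kinect.h>
    #include <cstdio>

    int main()
    {
        IKinectSensor* sensor = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || !sensor) return 1;
        sensor->Open();

        IInfraredFrameSource* source = nullptr;
        IInfraredFrameReader* reader = nullptr;
        sensor->get_InfraredFrameSource(&source);
        source->OpenReader(&reader);

        const int x = 256, y = 212;   // example pixel in the 512x424 IR image

        // Poll until an IR frame arrives (an event-driven reader also works).
        IInfraredFrame* frame = nullptr;
        while (FAILED(reader->AcquireLatestFrame(&frame)) || !frame)
            Sleep(30);

        IFrameDescription* desc = nullptr;
        int width = 0;
        frame->get_FrameDescription(&desc);
        desc->get_Width(&width);

        UINT capacity = 0;
        UINT16* buffer = nullptr;     // buffer is owned by the frame, no copy needed
        frame->AccessUnderlyingBuffer(&capacity, &buffer);
        printf("IR intensity at (%d,%d): %d\n", x, y, buffer[y * width + x]);

        desc->Release();
        frame->Release();
        reader->Release();
        source->Release();
        sensor->Close();
        sensor->Release();
        return 0;
    }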

 

How does this tool work?

The Kinect Studio v2 application is a Windows Presentation Foundation application that hooks into managed and raw native C++ libraries for accessing the color, depth, and IR data streams. The tool leverages either a direct connection to a Kinect v2 device or a specially formatted .xef binary file, which has its roots in the .XTF Xbox files.

When connecting to a file through the File->Open command, you are presented with a limited feature set, such as playback controls for monitoring values within the sample .xef file and viewing frames of information.

When connecting to a live Kinect v2 device, or through the File->Open from repository command:

[screenshot]

you are presented with many more features, such as the ability to play back one or more sources of data as a live stream to a custom application.

The way this works is that Kinect Studio utilizes a proxy application called KinectStudioHostService.exe, which acts as a Kinect v2 device replica. It mimics the Kinect v2 device through named pipes to send data streams to KinectService.exe. When your custom Kinect v2 based application connects to the KinectService, both the KinectService and the custom app behave as if a real device were connected.
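
Because the proxy looks just like a real sensor to the runtime, the code in your custom application does not change at all. As a small illustration (a native C++ sketch assuming the Kinect for Windows SDK 2.0 API; names may differ in the 1403 preview), the same default-sensor call works whether a physical device or the KinectStudioHostService replica is feeding the KinectService:

    // Sketch: a custom app opens "the" Kinect v2 sensor the same way whether a
    // physical device or the KinectStudioHostService proxy is behind KinectService.
    // Assumes the Kinect for Windows SDK 2.0 native API.
    #include <windows.h>
    #include <Kinect.h>
    #include <cstdio>

    int main()
    {
        IKinectSensor* sensor = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || !sensor) return 1;
        sensor->Open();

        // IsAvailable flips to TRUE once KinectService reports a device,
        // real or replayed; the application cannot tell the difference.
        BOOLEAN available = FALSE;
        for (int i = 0; i < 100 && !available; ++i)
        {
            sensor->get_IsAvailable(&available);
            Sleep(100);
        }
        printf(available ? "Sensor (or Kinect Studio proxy) available.\n"
                         : "No sensor seen - is KinectService running?\n");

        sensor->Close();
        sensor->Release();
        return available ? 0 : 1;
    }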

Before you start thinking of ways to exploit this concept, I am almost certain Microsoft will only license this as a test bed, and it will probably only be available for test-based scenarios. In other words, I doubt Microsoft will release this mechanism as a production-time assistant for multiplying the number of Kinect devices by using this pseudo Kinect device proxy replica; however, we must wait and see what Microsoft decides to do with it.

Thus, in order to use this approach you need either a live Kinect v2 device, which sends live data and feeds to the Kinect Service, or you need to run the KinectStudioHostService application and open an .xef file for the service host to read in order to mimic the Kinect v2 device. The latter you do by clicking the “Connect” button to interact with an already running instance of KinectStudioHostService.exe:

[screenshot]

Once connected, and with the KinectService running, the remaining features mentioned earlier open up.

[screenshot]

Side note: Make sure you start KinectService.exe before you open a file from the repository. Having the KinectService already running will allow the KinectStudioHostService to communicate with the KinectService, which will in turn allow an application to connect to the Kinect v2 device or its pseudo replica, the KinectStudioHostService.

Usage Patterns:

There are many ways in which this application is intended to be used, and of course some that are not intended. Let me first say that this tool is not really set up for machine learning. The amount of data, computers, and repository girth needed for machine learning, or even big data analysis, far outreaches this tool. However, one of my friends and colleagues, Andreas, suggested that maybe we put together a big repository of recorded clips (.xef files) so that we can use it like a big test-bed repository. Well, maybe we could do some poor man’s version of machine learning…??? Anyway, with the have-nots out of the way, let’s continue with the haves…

  1. Functional testing of your Kinect v2 application.
  2. Support for multiple development environments (where there are not enough Kinect devices). One can record hundreds of samples and then share the repository over a network share, where developers can use the samples to test the application.
  3. Finding dead pixels in your Kinect v2 device.
  4. Viewing raw values from the Kinect v2 device.

There are also many usage patterns where I would personally like to see it used; however, for this release they are not available, and may never be unless we all speak up…

  1. Programmatic access to KinectStudio
    1. Automate unit tests or Functional tests for various parts of the application
      1. The idea here is that if you can programmatically control playback and recording, it opens the door to more opportunities. One such opportunity is the ability to create unit tests and have them launch with automated builds using Team Foundation Server. Picture this: a developer checks in some logic to test whether a hand is in a gripping motion. The automation can go through multiple recorded gripping samples, play the action against an automated running instance, and return a range of values. These values can determine whether the custom logic the developer created fits the criteria for a successful unit test. (A rough sketch of this idea follows the list below.)
    2. Automate recording of certain events.
      1. With security features in mind, when a particular event is raised, a script can start the recording process for later retrieval and monitoring, much as security cameras do
      2. Another idea is the ability to record certain events for athletic purposes, to show good posture versus bad posture and notify experts
  2. Release the application as a production tool, or as a separate SKU, and allow it to be skinned or have features removed, serving as a detail view for a custom Kinect v2 application for monitoring and debugging purposes
  3. Provide a way to view the raw details for reporting mechanisms against a custom Kinect v2 application.
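
To make the first wish above concrete, here is a rough native C++ sketch of what such an automated grip check could look like today. Since there is currently no programmatic way to drive Kinect Studio, it assumes a recorded gripping clip is already being played back through the KinectStudioHostService/KinectService pipeline; the five-second window, the right-hand check, and the pass criterion are all made up for illustration.

    // Rough sketch of an automated "grip" check fed by a Kinect Studio clip.
    // Assumes the clip is already playing back (no programmatic playback control
    // exists) and the Kinect for Windows SDK 2.0 native API; thresholds are
    // illustrative only.
    #include <windows.h>
    #include <Kinect.h>
    #include <cstdio>

    int main()
    {
        IKinectSensor* sensor = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || !sensor) return 1;
        sensor->Open();

        IBodyFrameSource* source = nullptr;
        IBodyFrameReader* reader = nullptr;
        sensor->get_BodyFrameSource(&source);
        source->OpenReader(&reader);

        int trackedFrames = 0, gripFrames = 0;
        const ULONGLONG endTick = GetTickCount64() + 5000;   // sample ~5 s of playback

        while (GetTickCount64() < endTick)
        {
            IBodyFrame* frame = nullptr;
            if (SUCCEEDED(reader->AcquireLatestFrame(&frame)) && frame)
            {
                IBody* bodies[BODY_COUNT] = {};
                frame->GetAndRefreshBodyData(BODY_COUNT, bodies);

                for (IBody* body : bodies)
                {
                    BOOLEAN tracked = FALSE;
                    if (body && SUCCEEDED(body->get_IsTracked(&tracked)) && tracked)
                    {
                        HandState right = HandState_Unknown;
                        body->get_HandRightState(&right);
                        ++trackedFrames;
                        if (right == HandState_Closed) ++gripFrames;
                    }
                }
                for (IBody* body : bodies) if (body) body->Release();
                frame->Release();
            }
            Sleep(15);
        }

        // Hypothetical pass criterion: a grip clip should show a closed right
        // hand in at least half of the tracked frames.
        const bool pass = trackedFrames > 0 && gripFrames * 2 >= trackedFrames;
        printf("%s: %d grip frames out of %d tracked frames\n",
               pass ? "PASS" : "FAIL", gripFrames, trackedFrames);

        reader->Release();
        source->Release();
        sensor->Close();
        sensor->Release();
        return pass ? 0 : 1;
    }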

Steps to send data to a custom application through Kinect Studio v2

The steps I would take are:

  1. Start the KinectStudioHostService.exe application. (If it's the first time you're using it, you must set the repository folder location using the /d switch.)
  2. Start the KinectService.exe application.
  3. Open Kinect Studio, then click on Connect.
  4. Open a sample clip or recording from the repository, or use a live device.
  5. Start a live stream (if chosen).
  6. Start up a custom application that expects the Kinect v2 device.
  7. Hit Play (for the .xef/.xrf file from the repository), or start recording from a live device.
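
For reference, the first two steps look roughly like this from a command prompt. The repository path is only an example, and the exact switch syntax and install locations of the two executables depend on your SDK preview installation:

    REM Run each in its own command prompt (or via "start").
    REM 1) Point the host service at a repository folder (first run only) and start it:
    KinectStudioHostService.exe /d "C:\KinectRepository"

    REM 2) Start the Kinect runtime service so apps (and the proxy) can talk to it:
    KinectService.exe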

Summary

In case you’re wondering what all of this sums up to, I’ll tell you. This tool will allow you to test custom applications that utilize the Kinect v2 device for Windows. You can record a person interacting with your application and play that clip back time and time again to test the functionality of your application. You can see depth values, IR values, and color pixel coordinates. The best part about all this is that once you have one or more recorded clips, you don’t need a physical device to test the custom application. You can simply link Kinect Studio v2 up to your Kinect Service and Kinect Host proxies, launch your custom application through Visual Studio .NET or by executing it directly, and sit back and monitor!

Watch the Musical Quick Video here

Watch the discussion part 1 here

Watch the discussion part 2 here

8 thoughts on “Exploring the Kinect Studio v2”

    1. Hi Daniel,

      No, the KinectStudioHostService does not depend on a real Kinect v2 device to work. It can work standalone from both the device and the KinectService. However, if you do have a device, you can connect to it and use its live feeds to record and play back data sources. If you don’t have a device, you can connect to the KinectService to have the proxy mimic a real device, so that custom-built Kinect applications will think there’s a real device connected. You just play the recorded data source through KinectStudio -> through KinectStudioHostService -> which sends the data and events to the KinectService, which in turn tells the custom application that a Kinect v2 device is connected (or pseudo-connected).


  1. Hi Dwight Goins,

    I have a Kinect v2 device.
    I can't connect to the KinectService to send a data source through the KinectStudioHostService.
    I can only do it when my real Kinect v2 is connected to my computer.

    I followed the steps you taught (start KinectStudioHostService.exe, start KinectService.exe, open Kinect Studio and click Connect, open a sample clip from the repository, and so on).

    On Kinect Studio for the Kinect v1, it depends on the real sensor. Correct?


  2. Hi Dwight,

    Great article!

    I’m pretty new to the Kinect development scene and am just trying to play with Kinect Studio. I want to follow your setup steps, but I can’t find any of the .exe applications you mention (KinectStudioHostService and KinectService). Could you point me in the right direction?

    Thanks!


  3. Hi Dwight,

    Thank you for your blogs; they have been very useful! Do you know how I could extract the video/RGB data from ‘.xef’ files which were already recorded with the Developer Preview version?

    Kind Regards,
    Varun


  4. Hi Dwight,

    I’m Ajay from Mumbai, India… Recently we purchased a Kinect for Xbox One (v2)… We want to use it for giving PPT presentations, so could you please help us out and guide us by providing any application if you have one?


  5. Hey, it’s me, Sami Khan, a student of software engineering. Can you refer me to any video tutorials or textual materials from which I can get a complete idea about Kinect Studio and how to work with it? Thanks! I am personally stuck at the initial stage….

