Kinect Hackathon Interview

I was fortunate enough to attend the New York City (NYC) Kinect Hackathon held June 21–22, 2014 @GrindSpaces. This posting is about my experiences, and a self-interview/review of the projects and the experimental hardware Microsoft allowed us to play with. If you’re interested, please read on…

Question: So exactly what is a Kinect Hackathon?

The Microsoft Kinect for Windows team has been touring the world to promote and expose the new Kinect for Windows (K4W) v2 device. They have gone to places like San Francisco, Germany, and China. More recently, they stopped off in New York City. At each location, the team introduced new features and hardware to the masses, showing off capabilities and potential. In NYC, the team did this by holding a Hackathon contest.

A Hackathon is simply a gathering of technically minded people, ranging from inventors to designers, enthusiasts, hobbyists, developers, architects, and just plain ole smart people who have an idea. The goal is to take this idea and see if the people in the room can make it a reality by building a Proof of Concept (POC).

The contest part is to see which team can come up with the best working POC for one or more ideas within 24 hours. Food and drinks are supplied all night, and team members are architecting, designing, developing, and testing right up until that cut-off time.

Question: Wow, that sounds like fun. What was it like?

It was very fun!!! Let me explain why. The day started off with Ben Lower, community manager for the Kinect for Windows team, introducing us to various members of the K4W team: Carmine, Chen, Smarth, Ragul, Kevin, David, and Lori. (Please excuse the name spellings, and if I missed anyone, I apologize.) He then explained the new experimental firmware update that gives the K4W v2 device near-mode support – down to a potential 1 cm, although the edition at the Hackathon worked down to 10 cm. Ben also talked about Kinect Ripple, a new framework which allows you to use the floor to map or calibrate a grid for a pseudo menu/command control system for an application, while still keeping the K4W’s normal functionality – body tracking, audio, etc.

The next thing that transpired was opening the floor for ideas and forming teams. A little feeling-slighted note… the winners of the contest were teams which were pre-planned and prepared prior to this event, but that was OK.

People took the microphone in droves… I wish I had recorded all the great ideas and thoughts people envisioned with this device, because I could just quit my day job and work on each idea, one project at a time. Each idea has the potential to turn a profit and benefit humanity. The few ideas I do remember ranged from tracking animals, plants, and people, to robots avoiding obstacles, field sobriety tests, automated CAD designs, virtual orchestras, playing instruments with your body, Oculus Rift + Kinect skeleton tracking, simple funny gestures, a move-the-virtual-egg-but-don’t-wake-the-rooster farm game, robotic hand tracking, Kinect Ripple based Minesweeper, a Kinect Ripple based match-that-shape game, and of course, last but not least, my idea for a Windows 8 Store app: Kinect Virtual Doctor.

After the idea pitching came the teams. I pitched my idea, others pitched theirs, and we went around forming teams for anyone who didn’t have one. At first I was afraid my heart rate idea (based on my initial code here) would just be a copy-and-paste app for Windows 8, until a young college student named Mansib Rahman decided to pair up with me.

We changed the game…

image

video: https://www.youtube.com/watch?v=IpGelIHlEsM&feature=youtu.be

We started binging (googling in reality – but binging sounds WAYYYY better) potential algorithms for various medical rates using the HD camera, IR camera, and the new near-mode firmware features of the Kinect. We learned a lot. We worked all night, and I re-imagined and realized that the potential for a medical library built on the K4W v2 device was huge. That’s when we decided to create the Kinect Virtual Doctor Windows 8 Store app. The application could potentially be placed inside your home, so you can stand in front of your mirror while the application tells you your breathing rate, O2 levels, pulse, blood pressure, stress, mood, alertness, and many other things. But first we needed to make sure it was plausible and doable. We took the rest of the night trying to determine which algorithms we could implement in 24 hours. It turns out the heart rate and breathing rate were the easiest, but we only ended up re-writing my heart rate sample for Windows 8 utilizing the algorithm posted here.
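For the technically curious: most camera-based pulse techniques boil down to tracking how the average brightness of the face region fluctuates frame to frame, then finding the dominant frequency in that signal. Below is a minimal, self-contained sketch of that frequency-search step. This is illustrative only – it is not our Hackathon code, and EstimateBpm and the synthetic test data are made up for the example.

```cpp
// Hypothetical sketch: estimate beats per minute from a series of per-frame
// mean face-region intensities captured at a known frame rate (fps).
#include <cmath>
#include <cstdio>
#include <vector>

double EstimateBpm(const std::vector<double>& samples, double fps)
{
    // Remove the DC component so the scan only sees the pulsatile signal.
    double mean = 0.0;
    for (double s : samples) mean += s;
    mean /= static_cast<double>(samples.size());

    // Scan candidate heart rates (45-240 bpm) and keep the frequency whose
    // DFT bin holds the most energy - a poor man's spectral peak search.
    const double pi = 3.14159265358979323846;
    double bestBpm = 0.0, bestPower = 0.0;
    for (double bpm = 45.0; bpm <= 240.0; bpm += 0.5)
    {
        const double hz = bpm / 60.0;
        double re = 0.0, im = 0.0;
        for (size_t n = 0; n < samples.size(); ++n)
        {
            const double angle = 2.0 * pi * hz * (static_cast<double>(n) / fps);
            re += (samples[n] - mean) * std::cos(angle);
            im += (samples[n] - mean) * std::sin(angle);
        }
        const double power = re * re + im * im;
        if (power > bestPower) { bestPower = power; bestBpm = bpm; }
    }
    return bestBpm;
}

int main()
{
    // Synthetic test: a 72 bpm (1.2 Hz) "pulse" sampled at 30 fps for 10 seconds.
    std::vector<double> samples;
    for (int n = 0; n < 300; ++n)
        samples.push_back(128.0 + 2.0 * std::sin(2.0 * 3.14159265358979323846 * 1.2 * n / 30.0));
    std::printf("Estimated heart rate: %.1f bpm\n", EstimateBpm(samples, 30.0));
    return 0;
}
```

In the real application the samples would come from Kinect color or IR frames averaged over the tracked face region, and the signal is far noisier, which is why real implementations typically add filtering and detrending on top of this basic idea.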

One of the funniest stories of the night, in my opinion, was the “Pied Piper” green T’s group, at least that’s what I call them. Kudos, by the way, for sticking it out and passing me a working audio sample – thanks to Andras (K4W MVP). Oh, and before I forget – thank you Velvart Andras and James Ashley (K4W MVPs) for helping me out with ideas and coding.

These “Pied Piper” guys started out with the idea of playing musical instruments with your body. For example, if you hit your arm, it plays a drum; if you hit your chest, it changes the octave or plays another instrument. Sitting next to these guys was painful because of the terrible sounds coming from that area of the room. Envision awkward musical notes with no melody constantly sounding off around 3 a.m.… Then on the other side of me were the roosters crowing “cock-a-doodle-doo” right afterwards. I swear I felt like Noah or something. In any case, the Pied Piper guys realized it was a little more difficult to do the playing-music-with-your-body routine. So they started to give up. A couple of them left and had some drinks – and in my opinion came back slightly wasted. That’s when the only logical thing for them to do appeared… “Let’s make a field sobriety test with the Kinect.” The app was simple – walk a straight line and repeat a tongue-twister phrase. If the Kinect tracked you walking the straight line and you said the phrase correctly, you didn’t go to jail.

This was hilarious and straight out of the HBO series “Silicon Valley” and its fake Pied Piper web site, mixed with the intoxication app from the Google-themed movie “The Internship”… We went from 3 a.m. bad music, to rooster crows, to “Ricky Rubio went around the reading rainbow wrong” or something like that – PRICELESS!!!

Question: So what was your experience with the experimental firmware for the Kinect?

I will simply say this: for my application, the 10 cm near mode worked better for obtaining the face and calculating the heart rate; however, not everyone had the same success with their applications during the event.

Question: What was your experience with the Kinect Ripple?

I thought this was another great implementation for the Kinect. I can see museums, science exhibits, schools, convention centers, and the like all utilizing this functionality. In case you’re wondering what exactly it does… here’s a quick image:

[image] and video: http://youtu.be/RfJggcO7zZ8


Question: So would you say the Kinect Hackathon was a success?

Yes, I most definitely would!

Kinect Hackathon in NY City

The Kinect team is sponsoring a terrific hackathon in New York City June 21-22, and will be there with plenty of pre-release v2 sensors for people to use to create interactive applications.

In addition to the Kinect v2 sensors & SDK, the team is going to be bringing two new, cutting-edge things with them: near-field sensors and Kinect Ripple (see below for more info).

Attendees will be able to build desktop or Windows Store apps using C++, C#, VB, or even pure HTML/JavaScript. Plus we’ll have support for getting Kinect data right into Unity3D, openFrameworks, and Cinder.

Quick Note: What’s New with Kinect v2 For Windows (Technical Dive)

A couple of weeks ago I did a user group session in NYC at Microsoft’s office in Times Square: http://www.meetup.com/AzureNYC/events/158352992/.

I have another one coming up on June 17th at the Colorado Springs .NET User’s Group – come join me to hear about what’s happening with the Kinect v2 for Windows!!!

http://www.southcolorado.net/

Exploring the Kinect Studio v2

The Microsoft Kinect for Windows (K4W) team has done it again. They have released some new beta software and an SDK to go along with the new Kinect v2 device.

Note: This is based on preliminary software and/or hardware, subject to change

In their most recent update to the Kinect v2 SDK (preview 1403), members of the developer preview program have the ability to check out the new Kinect Studio v2. What’s nice about this is that Microsoft focused the majority of their efforts on implementing the much-anticipated Kinect Studio application for the Kinect v2 device.

Introduction

This posting is about the capabilities of Kinect Studio for version 2 Kinect devices and how the application works. It also discusses potential usage patterns, and gives quick step-by-step instructions on how to use it with a custom Kinect v2 based application. If this sounds interesting, please read on.

Kinect Studio v2 allows developers, testers, and enthusiasts to test custom Kinect v2 based applications against multiple recorded samples. It also allows a developer to view the data that the Kinect v2 device sees on a per-pixel basis for a particular frame. For a quick snapshot, see the figure below.

image


Capabilities of Kinect Studio v2

Let’s break down the current capabilities:

  • Record a sample clip of data from the Kinect v2 device covering:
    • Color, depth, IR, long exposure IR, body frame, body index, computer system info, system audio, camera settings, camera calibration
  • Play back a recorded sample clip of data covering:
    • Color, depth, IR, long exposure IR, body frame, body index, computer system info, system audio, camera settings, camera calibration
  • Play data from a live stream directly from a connected Kinect v2 Device
  • View 3-D coordinates and data from recorded and played back sample clips
    • Zoom in, twist, turn, in 3-D space

image

  • View 2-D coordinates and data from recorded and played back sample clips
    • Zoom in
  • See different viewpoints:
    • Kinect View
    • Orientation Cube
    • Floor Plane (where the floor resides in the perspective view)
  • See Depth data through different point cloud representations:
    • Color Point, Grey Point
  • See Depth data through textures and different color shades (RGB and greyscale)

image

  • See infrared data and values:
    • At a particular pixel x,y coordinate

image

    • See through a grey color scale
  • Open sample clips from a file
  • Open and connect to sample clips from a repository (network share)
  • See Frame information:
    • Frame #, Start Time, Duration

    image

  • Zoom in on a particular frame
  • Choose which streams to record

image


How does this tool work?

The Kinect Studio v2 application is a Windows Presentation Foundation application that hooks into managed and raw native C++ libraries for accessing the Color, Depth, and IR streams of data. The tool leverages either a direct connection to a Kinect v2 device, or a specially formatted .xef binary file, which has its roots in the Xbox .xtf files.

When connecting to a file through the File->Open command, you are presented with a limited feature set, such as playback controls for monitoring values within the sample .xef file and viewing frames of information.

When connecting to a live Kinect v2 Device, or through the File->Open from repository command:

image

you are presented with many more features, such as the ability to play back one or more sources of data from the live stream to a custom application.

The way this works is that Kinect Studio utilizes a proxy application called KinectStudioHostService.exe, which acts as a Kinect v2 device replica. It mimics the Kinect v2 device, using named pipes to send data streams to KinectService.exe. When your custom Kinect v2 based application connects to the KinectService, both the KinectService and the custom app behave as if a real device were connected.
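To make the proxy idea concrete, here is a tiny, generic Win32 named-pipe client. It only illustrates the mechanism: the consumer opens a pipe path and reads bytes without knowing which process is feeding it. The pipe name below is invented for the example – the actual pipe names and wire protocol used by the Kinect services are undocumented.

```cpp
// Generic named-pipe client sketch - NOT the Kinect services' actual protocol.
#include <windows.h>
#include <cstdio>

int main()
{
    // Hypothetical pipe name for illustration purposes only.
    HANDLE hPipe = CreateFileW(L"\\\\.\\pipe\\ExampleFrameFeed",
        GENERIC_READ, 0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (hPipe == INVALID_HANDLE_VALUE)
    {
        std::printf("Could not open pipe: %lu\n", GetLastError());
        return 1;
    }

    // The reader cannot tell whether a live device or a playback host
    // (like KinectStudioHostService) sits on the other end of the pipe.
    BYTE buffer[4096];
    DWORD bytesRead = 0;
    while (ReadFile(hPipe, buffer, sizeof(buffer), &bytesRead, nullptr) && bytesRead > 0)
    {
        std::printf("Received %lu bytes\n", bytesRead);
    }

    CloseHandle(hPipe);
    return 0;
}
```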

Before you go thinking of ideas about how to exploit this concept, I am almost certain Microsoft will only license this as a test bed, and it will probably only be available for test-based scenarios. In other words, I doubt Microsoft releases this mechanism as a production-time assistant for multiplying the number of Kinect devices via this pseudo Kinect device proxy replica; however, we must await what Microsoft decides to do with this.

Thus, in order to use this approach, you need either a live Kinect v2 device – which sends live data and feeds to the Kinect Service – or you need to run the KinectStudioHostService application and open an .xef file for the service host to read to mimic the Kinect v2 device. The latter you do by clicking the “Connect” button to interact with an already running instance of KinectStudioHostService.exe:

image

Once connected, AND the KinectService is running, the remaining features mentioned earlier open up.

image

Side note: Make sure you start KinectService.exe before you open a file from the repository. Having the KinectService already running will allow the KinectStudioHostService to communicate with the KinectService, which will in turn allow an application to connect to the Kinect v2 device or its pseudo-replica, the KinectStudioHostService.

Usage Patterns:

There are many ways in which this application was intended to be used, and of course some that are not intended. Let me first say that this tool is not really set up for machine learning. The amount of data, computers, and repository girth needed for machine learning, or even big data analysis, far outreaches this tool. However, one of my friends and colleagues, Andreas, suggested maybe we put together a big repository of recorded clips (.xef files) so that we can use it like a big test bed repository. Well, maybe we could do some poor man’s version of machine learning…??? Anyway, with the have-nots out of the way, let’s continue with the haves…

  1. Functional testing of your Kinect v2 application
  2. Support for multiple development environments (where there are not enough Kinect devices). One can record hundreds of samples and then share the repository using a network share, where developers can use the samples to test the application
  3. Finding dead pixels in your Kinect v2 device
  4. Viewing raw values from the Kinect v2 device

There are also many usage patterns where I would personally like to see it used; however, for this release they are not available – and may not be unless we all speak up…

  1. Programmatic access to Kinect Studio
    1. Automate unit tests or functional tests for various parts of the application
      1. The idea here is that if you can programmatically control playback and recording, it opens the door to more opportunities. One such opportunity is the ability to create unit tests and have them launch with automated builds using Team Foundation Server. Picture this: a developer checks in some logic to test whether a hand is in a gripping motion. The automation can play multiple recorded gripping samples against an automated running instance and return a range of values. These values can determine if the custom logic the developer created meets the criteria for a successful unit test.
    2. Automate recording of certain events
      1. With security features in mind, when a particular event is raised, a script can start the recording process for later retrieval and monitoring, much as security cameras do
      2. Another idea is the ability to record certain events for athletic purposes, to show good posture versus bad posture and notify experts
  2. Release the application as a production tool, or as a separate SKU, and allow it to be skinned or have features removed, serving as a detail view for a custom Kinect v2 application for monitoring and debugging purposes
  3. Provide a way to view the raw details for reporting mechanisms against a custom Kinect v2 application

Steps to send data to a custom application through KinectStudio v2

The steps I would take are:

  1. Start the KinectStudioHostService.exe application. (If it’s the first time you’re using it, you must set the repository folder location using the /d switch.)
  2. Start the KinectService.exe application.
  3. Open Kinect Studio, then click on Connect.
  4. Open a sample clip or recording from the repository – or use a live device.
  5. Start a live stream (if chosen).
  6. Start up a custom application that expects the Kinect v2 device (see the sketch below).
  7. Hit play (for the .xef/.xrf file from the repository), or start recording from a live device.
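A side note on step 6: the custom application needs no special code path for playback, because the proxy mimics a real sensor. A minimal client sketch (standard Kinect v2 C++ API calls, error handling trimmed for brevity) receives frames the same way in both cases:

```cpp
// Minimal Kinect v2 client - the same code receives frames whether a physical
// sensor or Kinect Studio's playback proxy is behind the Kinect Service.
#include <windows.h>
#include <Kinect.h>
#include <cstdio>

int main()
{
    IKinectSensor* pSensor = nullptr;
    if (FAILED(GetDefaultKinectSensor(&pSensor)) || !pSensor) return 1;
    pSensor->Open();

    IInfraredFrameSource* pSource = nullptr;
    IInfraredFrameReader* pReader = nullptr;
    pSensor->get_InfraredFrameSource(&pSource);
    pSource->OpenReader(&pReader);

    // Poll for a while; during Studio playback these frames come from the .xef clip.
    for (int i = 0; i < 1000; ++i)
    {
        IInfraredFrame* pFrame = nullptr;
        if (SUCCEEDED(pReader->AcquireLatestFrame(&pFrame)) && pFrame)
        {
            TIMESPAN relativeTime = 0;
            pFrame->get_RelativeTime(&relativeTime);
            std::printf("Frame at relative time %lld\n", static_cast<long long>(relativeTime));
            pFrame->Release();
        }
        Sleep(15);
    }

    pReader->Release();
    pSource->Release();
    pSensor->Close();
    pSensor->Release();
    return 0;
}
```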

Summary

In case you’re wondering what all this sums up to, I’ll tell you. This tool will allow you to test custom applications which utilize the Kinect v2 device for Windows. You can record a person interacting with your application, and play that clip back time and time again to test the functionality of your application. You can see depth values, IR values, and color pixel coordinates. The best part about all this is that once you have one or more recorded clips, you don’t need a physical device to test the custom application. You can simply link Kinect Studio v2 up to your Kinect Service and Kinect Host proxies, launch your custom application through Visual Studio .NET or run it directly, and sit back and monitor!

Watch the Musical Quick Video here

Watch the discussion part 1 here

Watch the discussion part 2 here


Kinect Heart Rate Detector

As my brother so nicely puts it… “the first Goins Collaboration…” presents to you the Kinect Heart Rate Detector sample application. In the coming days I will blog in detail about how this application works and provide you with insight on how to make the Kinect v2 device measure your heart rate. For now, just view the video here: http://youtu.be/LnX0qko-OOk and get the sample application from the link here: https://k4wv2heartrate.codeplex.com/

Happy Kinecting!!!

Working with Kinect v2 Events in Modern C++


I am currently in the process of trying to determine rates of change of various data points, such as infrared, color, and depth values, from the Kinect for Windows v2 device. As I wrote the code to interact with the Kinect v2 application programming interface (API), I utilized a “gamers” loop to poll for frames of data coming from the device.

By nature of the polling architecture, I am constantly checking for frame data from the Kinect device, as fast as the loop can spin. As I get the frame data, I run it through some mathematical calculations to get the rates of change. I sat back and wondered whether the rate-of-change values I calculate would be the same if I utilized the event-based architecture of the Kinect v2 API.
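For context, the polling version is just a tight loop that repeatedly asks the reader for the latest frame and derives timing deltas from the frames’ relative timestamps. A trimmed sketch (not my full measurement code) looks something like this:

```cpp
#include <Kinect.h>

// Trimmed polling sketch: grab the latest IR frame as fast as the loop can
// spin, and derive timing deltas from the frames' relative timestamps,
// which the Kinect reports in 100-nanosecond units.
void PollInfrared(IInfraredFrameReader* pReader)
{
    TIMESPAN lastTime = 0;
    while (true)
    {
        IInfraredFrame* pFrame = nullptr;
        if (SUCCEEDED(pReader->AcquireLatestFrame(&pFrame)) && pFrame)
        {
            TIMESPAN now = 0;
            pFrame->get_RelativeTime(&now);
            if (lastTime != 0)
            {
                double deltaMs = (now - lastTime) / 10000.0;
                // ... feed deltaMs and the frame buffer into the
                //     rate-of-change calculations here ...
                (void)deltaMs;
            }
            lastTime = now;
            pFrame->Release();
        }
    }
}
```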

The event-based architecture that the Kinect v2 API supports allows the Kinect v2 device to notify your application when a frame is ready for processing. So instead of continuously checking for a frame, I could let the device send a signal to let me know when one was ready to be processed. All is cool; now I wonder whether the time it takes for the signal to be recognized, plus the time it takes to process the frame (aka latency), would cause any differences in the rate-of-change values between the polling design and this one.

Currently I am in the developer preview program for the Kinect for Windows v2 device, which means I was lucky enough to get my hands on a pre-production device sooner rather than later. I will circle back around once I have the final production-ready device and post production-ready results here. Alas, this article is not about the latency differences, if any, but rather about the journey I took to learn how to work with Kinect v2 events in modern C++ applications.

I decided to seek out an example of how to use the event-based architecture of the Kinect v2 API. I wanted to know exactly how to implement something like this using modern C++. What I learned is that the Kinect for Windows team did a great job of explaining the steps required. The only issue was that there was no code example anywhere. All I had were some code snippets from them and a quick five-minute explanation of the high-level steps. I guess if I had been a 20-year C++ veteran who had been writing only C++ apps for the past 20 years, I would laugh at this blog post…

Well, obviously that’s not the case. I started my development days as a C++ developer, moved into Java, J++, and Visual Basic, then the C# and VB.NET programming languages. This move caused me to put all my C++ programming habits on the back burner until now. I needed to dust off that C++ hat and go back to the thing that started my developer enthusiasm; hence the purpose of this article.

What I learned is that working with the event model in modern C++ was a delight and pretty much straightforward. You can find the results of my steps and learning here (https://k4wv2eventsample.codeplex.com/). My steps to accomplish this follow below.

Steps:

1. Create a new Visual Studio 2013 C++ project based on the Win32 project template. Compile and run the application to make sure you get a basic Windows desktop application running with the defaults.

2. Next I’m just going to add a menu item to the resource file for the purpose of adding a click command to launch the Kinect v2 process:

3. In the Solution Explorer view, double-click the [projectname].rc file and locate the menu resource. Add an entry inside the menu for “Start Kinect”.

4. [screenshot]

5. [screenshots]

6. With the new menu item added and selected, navigate to the Properties window and add a new ID value:

7. [screenshot]

8. Save, compile, and run your project (Ctrl+S, F5).

9. Verify that the menu item is now in your application.

10. Open the [ProjectName].cpp source file. Add an entry inside the switch statement of the WndProc procedure that listens for the new menu item command:

```cpp
LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    int wmId, wmEvent;
    PAINTSTRUCT ps;
    HDC hdc;

    switch (message)
    {
    case WM_COMMAND:
        wmId = LOWORD(wParam);
        wmEvent = HIWORD(wParam);
        // Parse the menu selections:
        switch (wmId)
        {
        case IDM_ABOUT:
            DialogBox(hInst, MAKEINTRESOURCE(IDD_ABOUTBOX), hWnd, About);
            break;
        case IDM_STARTKINECT:
            StartKinect();
            break;
        case IDM_EXIT:
            DestroyWindow(hWnd);
            break;
        default:
            return DefWindowProc(hWnd, message, wParam, lParam);
        }
        break;
    case WM_PAINT:
        hdc = BeginPaint(hWnd, &ps);
        // TODO: Add any drawing code here...
        EndPaint(hWnd, &ps);
        break;
    case WM_DESTROY:
        PostQuitMessage(0);
        break;
    default:
        return DefWindowProc(hWnd, message, wParam, lParam);
    }
    return 0;
}
```

11. Also in the same source file, change the message loop inside the application’s main entry point to be a “gamers loop” using the while (true) { … PeekMessage() … } design:


```cpp
// Note: msg is the MSG declared earlier in the entry point; ke is an
// application-wide KinectEvents instance (declared in the header below).
while (true)
{
    while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
    {
        DispatchMessage(&msg);
    }

    if (ke.hIREvent)
    {
        //TRACE(L"Kinect Event ID: %d", (int)ke.hIREvent);

        // Now check for IR events
        HANDLE handles[] = { reinterpret_cast<HANDLE>(ke.hIREvent) }; // , reinterpret_cast<HANDLE>(ke.hMSEvent) };

        switch (MsgWaitForMultipleObjects(_countof(handles), handles, false, 1000, QS_ALLINPUT))
        {
        case WAIT_OBJECT_0:
        {
            IInfraredFrameArrivedEventArgs* pArgs = nullptr;
            TRACE(L"IR Frame Event Signaled.");

            if (ke.pReader)
            {
                HRESULT hr = ke.pReader->GetFrameArrivedEventData(ke.hIREvent, &pArgs);
                TRACE(L"Retrieve Frame Arrived Event Data - HR: %d", hr);

                if (SUCCEEDED(hr))
                {
                    TRACE(L"Retrieved Frame Arrived Event Data");
                    ke.InfraredFrameArrived(pArgs);
                    pArgs->Release();
                    TRACE(L"Frame Arrived Event Data Released");
                }
            }
        }
        break;
        }
    }
    if (WM_QUIT == msg.message)
    {
        break;
    }
}

return (int) msg.wParam;
```

12. Add the following KinectEvents struct and StartKinect() declaration to your [projectName].h header file:


```cpp
#pragma once
#include "resource.h"
#include "common.h"
#include <Kinect.h>
#include <memory>
#include <algorithm>

using namespace std;

struct KinectEvents
{
public:
    std::unique_ptr<IKinectSensor> pKinect;
    std::unique_ptr<IInfraredFrameSource> pSource;
    std::unique_ptr<UINT16*> pInfraredData;
    std::unique_ptr<IInfraredFrameReader> pReader;
    WAITABLE_HANDLE hIREvent;
    UINT mLengthInPixels;
    bool mIsStarted;
    std::unique_ptr<IMultiSourceFrameReader> pMultiSourceFrameReader;
    WAITABLE_HANDLE hMSEvent;

    KinectEvents() : pKinect(nullptr),
        pSource(nullptr),
        pInfraredData(nullptr),
        pReader(nullptr),
        hIREvent(NULL),
        mLengthInPixels(0),
        mIsStarted(false),
        pMultiSourceFrameReader(nullptr),
        hMSEvent(NULL)
    {
        TRACE(L"KinectEvents Constructed");
        // Initialize Kinect
        IKinectSensor * pSensor = pKinect.get();
        HRESULT hr = GetDefaultKinectSensor(&pSensor);
        if (SUCCEEDED(hr))
        {
            TRACE(L"Default Kinect Retrieved - HR: %d", hr);
            // We have a Kinect sensor
            pKinect.reset(pSensor);
            KinectStatus status;
            hr = pKinect->get_Status(&status);
            TRACE(L"Kinect is valid device - status: %d\n", status);
        }
    }

    ~KinectEvents()
    {
        TRACE(L"KinectEvents Destructed");
        if (hIREvent)
        {
            TRACE(L"Handle %d - being released...", hIREvent);
            HRESULT hr = pReader->UnsubscribeFrameArrived(hIREvent);
            if (SUCCEEDED(hr))
                TRACE(L"Handle to InfraredFrame Event Successfully Released");
            else
                TRACE(L"Handle to InfraredFrame Event Not Released");
        }
        hIREvent = NULL;
        TRACE(L"Handle to InfraredFrame set to NULL");
        if (hMSEvent)
        {
            TRACE(L"Handle %d - being released...", hMSEvent);
            HRESULT hr = pMultiSourceFrameReader->UnsubscribeMultiSourceFrameArrived(hMSEvent);
            if (SUCCEEDED(hr))
                TRACE(L"Handle to MultiSource Frame Event Successfully Released");
            else
                TRACE(L"Handle to MultiSource Frame Event Not Released");
        }
        hMSEvent = NULL;
        TRACE(L"Handle to MultiSource Frame Event set to NULL");
        pReader.release();
        pReader = nullptr;
        TRACE(L"InfraredFrame Reader Released");
        pInfraredData.release();
        pInfraredData = nullptr;
        TRACE(L"InfraredFrame Data buffer Released");
        pSource.release();
        pSource = nullptr;
        TRACE(L"InfraredFrameSource Released");
        pMultiSourceFrameReader.release();
        pMultiSourceFrameReader = nullptr;
        TRACE(L"Multi Source Frame Reader Released");
        if (pKinect)
        {
            HRESULT hr = pKinect->Close();
            TRACE(L"Closing Kinect - HR: %d", hr);
            HR(hr);
            TRACE(L"HR : %d", hr);
            pKinect.release();
            pKinect = nullptr;
            TRACE(L"Kinect resources released.");
        }
    }

    void Start()
    {
        ASSERT(pKinect);
        if (!mIsStarted)
        {
            ICoordinateMapper * m_pCoordinateMapper = nullptr;
            HRESULT hr = pKinect->get_CoordinateMapper(&m_pCoordinateMapper);
            TRACE(L"Retrieved CoordinateMapper - HR: %d", hr);
            IBodyFrameSource* pBodyFrameSource = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pKinect->get_BodyFrameSource(&pBodyFrameSource);
                TRACE(L"Retrieved Body Frame Source - HR: %d", hr);
            }
            IBodyFrameReader * pBodyFrameReader = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pBodyFrameSource->OpenReader(&pBodyFrameReader);
                TRACE(L"Opened Kinect Reader - HR: %d", hr);
            }
            IInfraredFrameSource * pIRSource = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pKinect->get_InfraredFrameSource(&pIRSource);
                TRACE(L"Retrieved IR Frame Source - HR: %d", hr);
            }
            if (SUCCEEDED(hr))
            {
                TRACE(L"Kinect has not started yet... Opening");
                hr = pKinect->Open();
                TRACE(L"Opened Kinect - HR: %d", hr);
            }
            // Allocate a buffer
            IFrameDescription * pIRFrameDesc = nullptr;
            if (SUCCEEDED(hr))
            {
                pSource.reset(pIRSource);
                hr = pIRSource->get_FrameDescription(&pIRFrameDesc);
                TRACE(L"Retrieved IR Frame Description - HR: %d", hr);
            }
            UINT lengthInPixels = 0;
            if (SUCCEEDED(hr))
            {
                // pSource.reset(pIRSource);
                hr = pIRFrameDesc->get_LengthInPixels(&lengthInPixels);
                TRACE(L"Retrieved IR Frame Description Pixel Length", hr);
            }
            auto ret = pIRFrameDesc->Release();
            TRACE(L"IR FrameDescription Released %d", ret);
            IInfraredFrameReader * pIRReader = nullptr;
            if (SUCCEEDED(hr))
            {
                TRACE(L"Length In Pixels: %d", lengthInPixels);
                mLengthInPixels = lengthInPixels;
                pInfraredData = make_unique<UINT16*>(new UINT16[lengthInPixels]);
                hr = pSource->OpenReader(&pIRReader);
                TRACE(L"Opened IR Reader");
            }
            if (SUCCEEDED(hr))
            {
                pReader.reset(pIRReader);
                hr = pReader->SubscribeFrameArrived(&hIREvent);
                TRACE(L"Reader Accessed Successfully");
                TRACE(L"Subscribe to Frame Arrived Event call - HR: %d", hr);
            }
            if (SUCCEEDED(hr))
            {
                TRACE(L"Successfully Subscribed to Frame Arrived EventID: %d", (UINT)hIREvent);
            }
            mIsStarted = true;
        }
    }

    void InfraredFrameArrived(IInfraredFrameArrivedEventArgs* pArgs)
    {
        TRACE(L"IR Frame event arrived");
        ASSERT(pArgs);
        IInfraredFrameReference * pFrameRef = nullptr;
        HRESULT hr = pArgs->get_FrameReference(&pFrameRef);
        if (SUCCEEDED(hr))
        {
            // We have a frame reference; now acquire the frame
            TRACE(L"We have a frame reference - HR: %d", hr);
            bool processFrameValid = false;
            IInfraredFrame* pFrame = nullptr;
            TIMESPAN relativeTime = 0;
            hr = pFrameRef->AcquireFrame(&pFrame);
            if (SUCCEEDED(hr))
            {
                TRACE(L"We have acquired a frame - HR : %d", hr);
                // Now copy the frame's data to the buffer
                hr = pFrame->CopyFrameDataToArray(mLengthInPixels, *pInfraredData);
                if (SUCCEEDED(hr))
                {
                    TRACE(L"We have successfully copied IR frame data to buffer");
                    processFrameValid = true;
                    hr = pFrame->get_RelativeTime(&relativeTime);
                    TRACE(L"Relative Time: - HR: %d\t Time: %d", hr, relativeTime);
                }
                auto ret = pFrame->Release();
                TRACE(L"IR Frame released: %d", ret);
            }
            auto ret = pFrameRef->Release();
            TRACE(L"IR Frame Reference released: %d", ret);
            if (processFrameValid)
                ProcessFrame(mLengthInPixels, *pInfraredData, relativeTime);
        }
    }

    void ProcessFrame(UINT length, UINT16 * pBuffer, TIMESPAN relativeTime)
    {
        TRACE(L"Process Frame Called.\nBufferLength: %d\n\tTimeSpan: %d", length, relativeTime);
    }
};

void StartKinect();
```

13. Add a Common.h header file to your project which contains the following:


```cpp
#pragma once

#include <wrl.h>
#include <algorithm>

#pragma warning(disable: 4706)
#pragma warning(disable: 4127)

namespace wrl = Microsoft::WRL;
using namespace std;
using namespace wrl;

#define ASSERT(expression) _ASSERTE(expression)

#ifdef _DEBUG
#define VERIFY(expression) ASSERT(expression)
#define HR(expression) ASSERT(S_OK == (expression))
inline void TRACE(WCHAR const * const format, ...)
{
    va_list args;
    va_start(args, format);
    WCHAR output[512];
    vswprintf_s(output, format, args);
    OutputDebugString(output);
    va_end(args);
}

#else

#define VERIFY(expression) (expression)

struct ComException
{
    HRESULT const hr;
    ComException(HRESULT const value) : hr(value) {}
};

inline void HR(HRESULT const hr)
{
    if (S_OK != hr) throw ComException(hr);
}

#define TRACE __noop
#endif

#if WINAPI_FAMILY_DESKTOP_APP == WINAPI_FAMILY

#include <atlbase.h>
#include <atlwin.h>

using namespace ATL;

template <typename T>
void CreateInstance(REFCLSID clsid, wrl::ComPtr<T> & ptr)
{
    _ASSERT(!ptr);
    CoCreateInstance(clsid, nullptr, CLSCTX_INPROC_SERVER,
        __uuidof(T), reinterpret_cast<void **>(ptr.GetAddressOf()));
}

struct ComInitialize
{
    ComInitialize()
    {
        CoInitialize(nullptr);
    }
    ~ComInitialize()
    {
        CoUninitialize();
    }
};

// Safe release for interfaces
template<class Interface>
inline void SafeRelease(ComPtr<Interface> pInterfaceToRelease)
{
    if (pInterfaceToRelease)
    {
        pInterfaceToRelease.Reset();
        pInterfaceToRelease = nullptr;
    }
}

// Safe release for interfaces
template<class Interface>
inline void SafeRelease(Interface *& pInterfaceToRelease)
{
    if (pInterfaceToRelease != nullptr)
    {
        pInterfaceToRelease->Release();
        pInterfaceToRelease = nullptr;
    }
}

template <typename T>
struct WorkerThreadController
{
public:
    WorkerThreadController() {}
    ~WorkerThreadController() {}
    static DWORD WINAPI StartMainLoop(LPVOID pwindow)
    {
        MSG msg = { 0 };
        while (pwindow)
        {
            T * pSkeleton = reinterpret_cast<T *>(pwindow);
            TRACE(L"Calling Update in worker thread main loop");
            pSkeleton->Update();
            Sleep(10);
        }
        return 0;
    }
};
#endif
```

14. Now it’s time to compile; however, we have to make sure our C++ project has access to all the header files and libraries required to compile a Kinect v2 project.

15. First, open the project properties and navigate to the C/C++ All Options tab. Choose the Active(x64) platform, as the Kinect v2 SDK currently only comes in 64-bit. Set Additional Include Directories to point to the location where the Kinect v2 SDK is installed and select the …inc\ folder:

16. [screenshot]

17. Next, select the Linker All Options tab, choose the folder where the Kinect20.lib file can be found, and add Kinect20.lib to Additional Dependencies:

18. [screenshot]

19. Compile the solution (Ctrl+Shift+B).

20. Plug in your Kinect v2 device and start the KinectService.exe proxy application.

21. Open an application that supports viewing debug output (VS.NET, Sysinternals DebugView, etc.).

22. Run DebugView

23. Navigate to your debug folder and double click on the executable (KinectEvents_Sample.exe in my case)

24. [screenshot]

25. Once the application starts, click Start Kinect on the menu.

26. Watch the events fly in as new frames are detected and the device notifies your application.

27. [screenshot]