Working with Kinect v2 Events in Modern C++

I am currently in the process of trying to determine the rates of change of various data points, such as the infrared, color, and depth values, coming from the Kinect for Windows v2 device. As I wrote the code to interact with the Kinect v2 application programming interface (API), I used a "gamers" loop to poll for frames of data coming from the device.

By nature of the polling architecture, I am constantly checking for frame data from the Kinect device, roughly every nanosecond. As I get the frame data, I run through some mathematical calculations to get the rates of change. I sat back and wondered whether the rate-of-change values I calculate would be the same if I used the event-based architecture of the Kinect v2 API instead.
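To make that concrete, the calculation I have in mind is roughly the following shape (a simplified sketch, not the exact math from my project; it assumes two successive infrared buffers and the elapsed time between them expressed in the API's TIMESPAN units, which are 100-nanosecond ticks):

#include <Windows.h>   // UINT, UINT16
#include <Kinect.h>    // TIMESPAN (an INT64 count of 100-nanosecond ticks)
#include <cmath>

// Average per-pixel rate of change between two successive infrared frames,
// expressed in intensity units per second.
double AverageRateOfChange(const UINT16* previous, const UINT16* current,
                           UINT lengthInPixels, TIMESPAN elapsedTicks)
{
    if (lengthInPixels == 0 || elapsedTicks == 0)
        return 0.0;

    double totalDelta = 0.0;
    for (UINT i = 0; i < lengthInPixels; ++i)
        totalDelta += std::abs(static_cast<double>(current[i]) - previous[i]);

    const double seconds = elapsedTicks / 10000000.0; // 100 ns ticks -> seconds
    return (totalDelta / lengthInPixels) / seconds;
}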

The event-based architecture that the Kinect v2 API supports allows the Kinect v2 device to notify your application when a frame is ready for processing. So instead of checking for a frame every nanosecond, I could let the device send a signal to tell me when a frame was ready to be processed. All is cool; now I wonder whether the time it takes for the signal to be recognized, plus the time it takes to process the frame (a.k.a. latency), would cause any differences in the rate-of-change values between the polling design and this one.
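For contrast, the polling version never waits on anything; each pass of the loop simply asks the reader for the latest frame and moves on if none is ready yet. A minimal sketch, assuming an already-opened IInfraredFrameReader* named pReader:

#include <Kinect.h>

// Polling style: called on every pass of the "gamers" loop.
// pReader is assumed to be an IInfraredFrameReader* whose source is open.
void PollForInfraredFrame(IInfraredFrameReader* pReader)
{
    IInfraredFrame* pFrame = nullptr;
    HRESULT hr = pReader->AcquireLatestFrame(&pFrame);
    if (SUCCEEDED(hr) && pFrame)
    {
        // ...copy the data out and update the rate-of-change math...
        pFrame->Release();
    }
    // If no new frame was ready, hr is simply a failure code and we loop again.
    // The event-based design in the steps below replaces this with
    // SubscribeFrameArrived() and a wait on the returned WAITABLE_HANDLE.
}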

Currently I am in the developer preview program for the Kinect for Windows v2 device, which means I was lucky enough to get my hands on a pre-production device sooner rather than later. I will circle back around once I have the final production device and post production-ready results here. Alas, this article is not about the latency differences, if any, but rather about the journey I took to learn how to work with Kinect v2 events in modern C++ applications.

I decided to seek out an example of how to use the event-based architecture of the Kinect v2 API. I wanted to know exactly how to implement something like this using modern C++. What I learned is that the Kinect for Windows team did a great job of explaining the steps required. The only issue was that there was no complete code example anywhere. All I had were some code snippets from them and a quick five-minute explanation of the high-level steps for doing such a thing. I guess if I had been a C++ veteran who had been writing only C++ apps for the past 20 years, I would laugh at this blog post…

Well, obviously that's not the case. I started my development days as a C++ developer, moved into Java, J++, and Visual Basic, and then the C# and VB.NET programming languages. This move caused me to put all my C++ programming habits on the back burner until now. I needed to dust off that C++ hat and go back to the thing that started my developer enthusiasm, hence the purpose of this article.

What I learned is that working with the event model in modern C++ was a delight and pretty much straightforward. You can find the results of my steps and learning here (https://k4wv2eventsample.codeplex.com/). The steps I followed are below.

Steps:

1. Create a new Visual Studio 2013 C++ project based on the Win32 Project template. Compile and run the application to make sure you get a basic Windows desktop application running with the defaults.

2. Next, I'm going to add a menu item to the resource file in order to provide a click command that launches the Kinect v2 processing:

3. In the Solution Explorer view, double-click the [ProjectName].rc file to edit it and locate the menu resource. Add a "Start Kinect" entry inside the menu.

4. [screenshot: the menu resource open in the resource editor]

5. [screenshots: adding the "Start Kinect" menu item]

6. With the new menu item added and selected, navigate to the Properties window and add a new ID value, for example:
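Behind the scenes, the resource editor typically writes a matching command ID into resource.h as an ordinary #define. The name below matches the one used in WndProc in step 10; the numeric value is only illustrative, since the editor assigns whatever is free:

// resource.h -- added by the resource editor when the new ID is created.
// The value 111 is just an example; use whatever the editor generates.
#define IDM_STARTKINECT                 111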

7. [screenshot: the new menu item's ID shown in the Properties window]

8. Save, compile, and run your project (Ctrl+S, F5).

9. Verify that the menu item is now in your application.

10. Open the [ProjectName].cpp source file. Add an entry inside the WndProc procedure's switch statement that listens for the new menu item command:

LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    int wmId, wmEvent;
    PAINTSTRUCT ps;
    HDC hdc;

    switch (message)
    {
    case WM_COMMAND:
        wmId = LOWORD(wParam);
        wmEvent = HIWORD(wParam);
        // Parse the menu selections:
        switch (wmId)
        {
        case IDM_ABOUT:
            DialogBox(hInst, MAKEINTRESOURCE(IDD_ABOUTBOX), hWnd, About);
            break;
        case IDM_STARTKINECT:
            StartKinect();
            break;
        case IDM_EXIT:
            DestroyWindow(hWnd);
            break;
        default:
            return DefWindowProc(hWnd, message, wParam, lParam);
        }
        break;
    case WM_PAINT:
        hdc = BeginPaint(hWnd, &ps);
        // TODO: Add any drawing code here...
        EndPaint(hWnd, &ps);
        break;
    case WM_DESTROY:
        PostQuitMessage(0);
        break;
    default:
        return DefWindowProc(hWnd, message, wParam, lParam);
    }
    return 0;
}

 

11. Also in the same source file, change the message loop inside the application's entry point (the _tWinMain procedure generated by the template) to a "gamers loop" using the while (true) { … PeekMessage() … } design:

 

while (true)
{
    while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
    {
        DispatchMessage(&msg);
    }

    if (ke.hIREvent)
    {
        //TRACE(L"Kinect Event ID: %d", (int)ke.hIREvent);

        // Now check for IR events
        HANDLE handles[] = { reinterpret_cast<HANDLE>(ke.hIREvent) }; // , reinterpret_cast<HANDLE>(ke.hMSEvent) };

        switch (MsgWaitForMultipleObjects(_countof(handles), handles, false, 1000, QS_ALLINPUT))
        {
        case WAIT_OBJECT_0:
        {
            IInfraredFrameArrivedEventArgs* pArgs = nullptr;
            TRACE(L"IR Frame Event Signaled.");

            if (ke.pReader)
            {
                HRESULT hr = ke.pReader->GetFrameArrivedEventData(ke.hIREvent, &pArgs);
                TRACE(L"Retrieve Frame Arrived Event Data - HR: %d", hr);

                if (SUCCEEDED(hr))
                {
                    TRACE(L"Retrieved Frame Arrived Event Data");
                    ke.InfraredFrameArrived(pArgs);
                    pArgs->Release();
                    TRACE(L"Frame Arrived Event Data Released");
                }
            }
        }
        break;
        }
    }
    if (WM_QUIT == msg.message)
    {
        break;
    }
}

return (int) msg.wParam;

 

12. Add the following StartKinect() declaration and KinectEvents struct to your [ProjectName].h header file (a short sketch of the StartKinect() definition itself follows the header):

 

#pragma once
#include "resource.h"
#include "common.h"
#include <Kinect.h>
#include <memory>
#include <algorithm>

using namespace std;

struct KinectEvents
{
public:
    std::unique_ptr<IKinectSensor> pKinect;
    std::unique_ptr<IInfraredFrameSource> pSource;
    std::unique_ptr<UINT16*> pInfraredData;
    std::unique_ptr<IInfraredFrameReader> pReader;
    WAITABLE_HANDLE hIREvent;
    UINT mLengthInPixels;
    bool mIsStarted;
    std::unique_ptr<IMultiSourceFrameReader> pMultiSourceFrameReader;
    WAITABLE_HANDLE hMSEvent;

    KinectEvents() : pKinect(nullptr),
        pSource(nullptr),
        pInfraredData(nullptr),
        pReader(nullptr),
        hIREvent(NULL),
        mLengthInPixels(0),
        mIsStarted(false),
        pMultiSourceFrameReader(nullptr),
        hMSEvent(NULL)
    {
        TRACE(L"KinectEvents Constructed");
        // Initialize Kinect: grab the default sensor and check its status
        IKinectSensor * pSensor = pKinect.get();
        HRESULT hr = GetDefaultKinectSensor(&pSensor);
        if (SUCCEEDED(hr))
        {
            TRACE(L"Default Kinect Retrieved - HR: %d", hr);
            // We have a Kinect sensor
            pKinect.reset(pSensor);
            KinectStatus status;
            hr = pKinect->get_Status(&status);
            TRACE(L"Kinect is valid device - status: %d\n", status);
        }
    }

    ~KinectEvents()
    {
        TRACE(L"KinectEvents Destructed");
        // Unsubscribe from any frame-arrived events before tearing down
        if (hIREvent)
        {
            TRACE(L"Handle %d - being released...", hIREvent);
            HRESULT hr = pReader->UnsubscribeFrameArrived(hIREvent);
            if (SUCCEEDED(hr))
                TRACE(L"Handle to InfraredFrame Event Successfully Released");
            else
                TRACE(L"Handle to InfraredFrame Event Not Released");
        }
        hIREvent = NULL;
        TRACE(L"Handle to InfraredFrame set to NULL");
        if (hMSEvent)
        {
            TRACE(L"Handle %d - being released...", hMSEvent);
            HRESULT hr = pMultiSourceFrameReader->UnsubscribeMultiSourceFrameArrived(hMSEvent);
            if (SUCCEEDED(hr))
                TRACE(L"Handle to MultiSource Frame Event Successfully Released");
            else
                TRACE(L"Handle to MultiSource Frame Event Not Released");
        }
        hMSEvent = NULL;
        TRACE(L"Handle to MultiSource Frame Event set to NULL");
        pReader.release();
        pReader = nullptr;
        TRACE(L"InfraredFrame Reader Released");
        pInfraredData.release();
        pInfraredData = nullptr;
        TRACE(L"InfraredFrame Data buffer Released");
        pSource.release();
        pSource = nullptr;
        TRACE(L"InfraredFrameSource Released");
        pMultiSourceFrameReader.release();
        pMultiSourceFrameReader = nullptr;
        TRACE(L"Multi Source Frame Reader Released");
        if (pKinect)
        {
            HRESULT hr = pKinect->Close();
            TRACE(L"Closing Kinect - HR: %d", hr);
            HR(hr);
            TRACE(L"HR : %d", hr);
            pKinect.release();
            pKinect = nullptr;
            TRACE(L"Kinect resources released.");
        }
    }

    void Start()
    {
        ASSERT(pKinect);
        if (!mIsStarted)
        {
            ICoordinateMapper * m_pCoordinateMapper = nullptr;
            HRESULT hr = pKinect->get_CoordinateMapper(&m_pCoordinateMapper);
            TRACE(L"Retrieved CoordinateMapper - HR: %d", hr);
            IBodyFrameSource* pBodyFrameSource = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pKinect->get_BodyFrameSource(&pBodyFrameSource);
                TRACE(L"Retrieved Body Frame Source - HR: %d", hr);
            }
            IBodyFrameReader * pBodyFrameReader = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pBodyFrameSource->OpenReader(&pBodyFrameReader);
                TRACE(L"Opened Kinect Reader - HR: %d", hr);
            }
            IInfraredFrameSource * pIRSource = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pKinect->get_InfraredFrameSource(&pIRSource);
                TRACE(L"Retrieved IR Frame Source - HR: %d", hr);
            }
            if (SUCCEEDED(hr)){
                TRACE(L"Kinect has not started yet... Opening");
                hr = pKinect->Open();
                TRACE(L"Opened Kinect - HR: %d", hr);
            }
            // Allocate a buffer sized from the IR frame description
            IFrameDescription * pIRFrameDesc = nullptr;
            if (SUCCEEDED(hr)){
                pSource.reset(pIRSource);
                hr = pIRSource->get_FrameDescription(&pIRFrameDesc);
                TRACE(L"Retrieved IR FRAME Source - HR: %d", hr);
            }
            UINT lengthInPixels = 0;
            if (SUCCEEDED(hr)){
                // pSource.reset(pIRSource);
                hr = pIRFrameDesc->get_LengthInPixels(&lengthInPixels);
                TRACE(L"Retrieved IR FRAME Description Pixel Length", hr);
            }
            auto ret = pIRFrameDesc->Release();
            TRACE(L"IR FrameDescription Released %d", ret);
            IInfraredFrameReader * pIRReader = nullptr;
            if (SUCCEEDED(hr)){
                TRACE(L"Length In Pixels: %d", lengthInPixels);
                mLengthInPixels = lengthInPixels;
                pInfraredData = make_unique<UINT16*>(new UINT16[lengthInPixels]);
                hr = pSource->OpenReader(&pIRReader);
                TRACE(L"Opened IR Reader");
            }
            if (SUCCEEDED(hr)){
                // Subscribe for frame-arrived notifications; hIREvent is the
                // waitable handle the message loop in step 11 waits on.
                pReader.reset(pIRReader);
                hr = pReader->SubscribeFrameArrived(&hIREvent);
                TRACE(L"Reader Accessed Successfully");
                TRACE(L"Subscribe to Frame Arrived Event call - HR: %d", hr);
            }
            if (SUCCEEDED(hr)){
                TRACE(L"Successfully Subscribed to Frame Arrived EventID: %d", (UINT)hIREvent);
            }
            mIsStarted = true;
        }
    }

    void InfraredFrameArrived(IInfraredFrameArrivedEventArgs* pArgs)
    {
        TRACE(L"IR Frame event arrived");
        ASSERT(pArgs);
        IInfraredFrameReference * pFrameRef = nullptr;
        HRESULT hr = pArgs->get_FrameReference(&pFrameRef);
        if (SUCCEEDED(hr)){
            // We have a frame reference; now acquire the frame
            TRACE(L"We have a frame reference - HR: %d", hr);
            bool processFrameValid = false;
            IInfraredFrame* pFrame = nullptr;
            TIMESPAN relativeTime = 0;
            hr = pFrameRef->AcquireFrame(&pFrame);
            if (SUCCEEDED(hr)){
                TRACE(L"We have acquired a frame - HR : %d", hr);
                // Now copy the frame's data to the buffer
                hr = pFrame->CopyFrameDataToArray(mLengthInPixels, *pInfraredData);
                if (SUCCEEDED(hr)){
                    TRACE(L"We have successfully copied ir frame data to buffer");
                    processFrameValid = true;
                    hr = pFrame->get_RelativeTime(&relativeTime);
                    TRACE(L"Relative Time: - HR: %d\t Time: %d", hr, relativeTime);
                }
                auto ret = pFrame->Release();
                TRACE(L"IR Frame released: %d", ret);
            }
            auto ret = pFrameRef->Release();
            TRACE(L"IR Frame Reference released: %d", ret);
            if (processFrameValid)
                ProcessFrame(mLengthInPixels, *pInfraredData, relativeTime);
        }
    }

    void ProcessFrame(UINT length, UINT16 * pBuffer, TIMESPAN relativeTime)
    {
        TRACE(L"Process Frame Called.\nBufferLength: %d\n\tTimeSpan: %d", length, relativeTime);
    }
};

void StartKinect();
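The header above only declares StartKinect(); the post does not show its definition. A minimal sketch of what goes in [ProjectName].cpp, assuming a file-scope KinectEvents instance named ke (the same ke that the message loop in step 11 reads), might look like this:

// [ProjectName].cpp -- hypothetical sketch; the sample's exact definition
// is not shown in this post.
KinectEvents ke;   // the instance the message loop in step 11 checks for events

void StartKinect()
{
    // The constructor already grabbed the default sensor; Start() opens it,
    // opens the infrared reader, and subscribes to the frame-arrived event,
    // which populates ke.hIREvent for the message loop to wait on.
    ke.Start();
}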

 

13. Add a Common.h header file to your project which contains the following:

 

#pragma once

#include <wrl.h>
#include <algorithm>

#pragma warning(disable: 4706)
#pragma warning(disable: 4127)

namespace wrl = Microsoft::WRL;
using namespace std;
using namespace wrl;

#define ASSERT(expression) _ASSERTE(expression)

#ifdef _DEBUG
#define VERIFY(expression) ASSERT(expression)
#define HR(expression) ASSERT(S_OK == (expression))
inline void TRACE(WCHAR const * const format, ...)
{
    va_list args;
    va_start(args, format);
    WCHAR output[512];
    vswprintf_s(output, format, args);
    OutputDebugString(output);
    va_end(args);
}

#else

#define VERIFY(expression) (expression)

struct ComException
{
    HRESULT const hr;
    ComException(HRESULT const value) : hr(value) {}
};

inline void HR(HRESULT const hr)
{
    if (S_OK != hr) throw ComException(hr);
}

#define TRACE __noop
#endif

#if WINAPI_FAMILY_DESKTOP_APP == WINAPI_FAMILY

#include <atlbase.h>
#include <atlwin.h>

using namespace ATL;

template <typename T>
void CreateInstance(REFCLSID clsid, wrl::ComPtr<T> & ptr)
{
    _ASSERT(!ptr);
    CoCreateInstance(clsid, nullptr, CLSCTX_INPROC_SERVER,
        __uuidof(T), reinterpret_cast<void **>(ptr.GetAddressOf()));
}

struct ComInitialize
{
    ComInitialize()
    {
        CoInitialize(nullptr);
    }
    ~ComInitialize()
    {
        CoUninitialize();
    }
};

// Safe release for interfaces
template<class Interface>
inline void SafeRelease(ComPtr<Interface> pInterfaceToRelease)
{
    if (pInterfaceToRelease)
    {
        pInterfaceToRelease.Reset();
        pInterfaceToRelease = nullptr;
    }
}

// Safe release for interfaces
template<class Interface>
inline void SafeRelease(Interface *& pInterfaceToRelease)
{
    if (pInterfaceToRelease != nullptr)
    {
        pInterfaceToRelease->Release();
        pInterfaceToRelease = nullptr;
    }
}

template <typename T>
struct WorkerThreadController
{
public:
    WorkerThreadController() { }
    ~WorkerThreadController() { }
    static DWORD WINAPI StartMainLoop(LPVOID pwindow)
    {
        MSG msg = { 0 };
        while (pwindow)
        {
            T * pSkeleton = reinterpret_cast<T *>(pwindow);
            TRACE(L"Calling Update in worker thread main loop");
            pSkeleton->Update();
            Sleep(10);
        }
        return 0;
    }
};
#endif

 

14. Now it's time to compile; however, we have to make sure our C++ project has access to all the header files and libraries required to compile a Kinect v2 project.

15. First, open the project properties and navigate to the C/C++ > All Options tab. Choose the Active (x64) platform, as the Kinect v2 SDK currently ships only in 64-bit. Set Additional Include Directories to point to the location where the Kinect v2 SDK is installed and select its …inc\ folder:

16. [screenshot: Additional Include Directories pointing to the Kinect v2 SDK's inc folder]

17. Next, select the Linker > All Options tab, point the additional library directories to the folder where the Kinect20.lib file can be found, and add Kinect20.lib to Additional Dependencies:

18. [screenshot: Kinect20.lib listed under Additional Dependencies]

19. Compile the solution (Ctrl+Shift+B).

20. Plug in your Kinect v2 device and start the KinectService.exe proxy application.

21. Open an application that can display debug output (the VS.NET Output window, Sysinternals DebugView, etc.).

22. Run DebugView

23. Navigate to your Debug folder and double-click the executable (KinectEvents_Sample.exe in my case).

24. [screenshot: the executable in the Debug folder]

25. Once the application starts, click Start Kinect on the File menu.

26. Watch the events fly in as new frames are detected and the device notifies your application.

27. [screenshot: frame-arrived trace messages appearing in DebugView]

Presenter Quirks: the Kinect Office Pack Plugin

 

Last year was a good year, and this year will be even better for techies such as myself. To start the year off right I want to talk about my newest adventure and project.

The adventure deals with the Kinect. Not just any Kinect, but the new Kinect for the Xbox One, currently known as the Kinect for Windows v2.

My team and I are on a new project… We’re calling it internally:

“Presenter Quirks”

It is a suite of applications and add-ins for Microsoft software, Windows 8 devices, and especially Microsoft Office applications like PowerPoint. The suite will help you become a great presenter, orator, lecturer, or speaker. It works by way of the Kinect for Windows v2 device: it measures your speech, movement, gestures, and body statistics such as heart rate and speech rate, along with colloquial terms (slang, for the hip generation). As a quick example, let's say you use Microsoft Office PowerPoint 2013. Perhaps you would like to observe and perfect your presentation skills by seeing how many times you say words like "Umm", "Eh", or "Ah", or even phrases like "like…", "you know what I mean", or "you follow?". Perhaps you are a beginning English student and you want to perfect your English persona.

Maybe you even want to cut down on how much you talk with your hands and keep them within a certain physical interaction zone (the "phiz"). Maybe you want to track your body language as you pace across the stage while speaking, and cut down on that too. Maybe you're nervous as heck, your heart is beating too fast, and you'd like an animation to play to keep the crowd entertained and lighten the pressure. Or perhaps you want to randomly monitor your audience to see if you are holding their attention while speaking…

If any one or all of these apply to you, then you may be interested in finding out more about Presenter Quirks. All of this, plus a whole lot more, can be done with the new application suite and software we are producing.

I am excited to introduce a sneak peek at phase 1 of Presenter Quirks.

Listed below are some screenshots of the PowerPoint Add-In, with the Kinect enabled that will be available with Presenter Quirks.

 

Here is the application running with PowerPoint 2013 with the PIP (picture-in-picture) feature turned on. This feature puts you, the presenter, inside your own presentation.

Another feature is controlling the presentation by pointing and using PowerPoint's laser pointer feature.

Those are just a few of the features Presenter Quirks will offer.

Presenter Quirks will also support controlling PowerPoint with voice and hand gestures.

Now, there are other samples that have managed to make PowerPoint slides go forward and back using hand gestures, such as here (http://www.kinecthacks.com/microsoft-demos-kinect-powerpoint/) and here (http://kinectpowerpoint.codeplex.com/), and these are pretty cool. But sorry guys, they still don't compare to Presenter Quirks.

Presenter Quirks can record your body metrics and report back statistics that give you the opportunity to become a better presenter, lecturer, speaker, news bearer, and so on. Controlling PowerPoint is easy; what's hard is making sure your audience is entertained and focused on the message you're delivering. The best way to do that is to become a great speaker and have the tools by your side to verify it. Plus, a little help from PowerPoint automation doesn't hurt…

 

Well, I don't want to reveal all the capabilities in one post, so stay tuned. We have way more stuff planned and coming up!!!