Exploring the Kinect Studio v2

The Microsoft Kinect for Windows (K4W) team has done it again. They have released some new beta software, and an SDK to go along with the new Kinect v2 device.

Note: This is based on preliminary software and/or hardware, and is subject to change.

In their most recent update to the Kinect v2 SDK (preview 1403), members of the developer preview program have the ability to check out the new Kinect Studio v2. What’s nice about this is that Microsoft focused the majority of its efforts on implementing the much-anticipated Kinect Studio application for the Kinect v2 device.


This posting is about the capabilities of Kinect Studio for Kinect v2 devices and how the application works. It also discusses potential usage patterns, and gives quick step-by-step instructions on how to use it with a custom Kinect v2 based application. If this sounds interesting, please read on.

KinectStudio v2 allows developers, testers, and enthusiasts to test custom Kinect v2 based applications against multiple recorded samples. It also allows a developer to view the data that the Kinect v2 device sees on a per-pixel basis for a particular frame. For a quick snapshot, see the figure below.



Capabilities of Kinect Studio v2

Let’s break down the current capabilities:

  • Record sample clip of data from the Kinect v2 device covering:
    • Color, depth, IR, long IR exposure, body frame, body index, computer system info, system audio, camera settings, camera calibration
  • Playback a recorded sample clip of data covering:
    • Color, depth, IR, long IR exposure, body frame, body index, computer system info, system audio, camera settings, camera calibration
  • Play data from a live stream directly from a connected Kinect v2 Device
  • View 3-D coordinates and data from recorded and played back sample clips
    • Zoom in, twist, and turn in 3-D space


  • View 2-D coordinates and data from recorded and played back sample clips
    • Zoom in
  • See different viewpoints:
    • Kinect View
    • Orientation Cube
    • Floor Plane (where the floor resides in the perspective view)
  • See Depth data through different point cloud representations:
    • Color Point, Grey Point
  • See Depth data through textures and different color shades (RGB and greyscale)


  • See infrared data and values:
    • At a particular pixel x,y coordinate


    • See through a grey color scale
  • Open sample clips from a file
  • Open and connect to sample clips from a repository (network share)
  • See Frame information:
    • Frame #, Start Time, Duration


  • Zoom in on a particular frame
  • Choose which streams to record



How does this tool work?

The KinectStudio v2 application is a Windows Presentation Foundation (WPF) application that hooks into some managed and raw native C++ libraries for accessing the Color, Depth, and IR streams of data. The tool leverages either a direct connection to a Kinect v2 device, or a specially formatted .xef binary file, which has its roots in the .xtf Xbox files.

When connecting to a file through the File->Open command, you are presented with limited features, such as playback for monitoring values within the sample .xef file and viewing frames of information.

When connecting to a live Kinect v2 device, or through the File->Open from repository command, you are presented with many more features, such as the ability to play back one or more sources of data from the stream to a custom application.

The way this works is that Kinect Studio utilizes a proxy application called KinectStudioHostService.exe, which acts as a Kinect v2 device replica. It mimics the Kinect v2 device over named pipes, sending data streams to KinectService.exe. When your custom Kinect v2 based application connects to the KinectService, both the KinectService and the custom app behave as if a real device were connected.
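To make the proxy idea concrete, here is a minimal, hypothetical C++ sketch (none of these types come from the Kinect SDK; they are invented for illustration) of why a replay service can stand in for real hardware: the application codes against a frame-source interface, so a recorded clip and a live device look identical to it.

```cpp
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// A frame of sensor data paired with its relative timestamp.
struct Frame
{
    std::int64_t relativeTime;       // 100ns ticks, like the SDK's TIMESPAN
    std::vector<std::uint16_t> data; // e.g. raw IR intensities
};

// The application only sees this interface, so it cannot tell
// whether frames come from hardware or from a recorded clip.
struct IFrameSource
{
    virtual ~IFrameSource() = default;
    virtual bool TryGetNextFrame(Frame& frame) = 0;
};

// Stands in for the KinectStudioHostService side: it feeds
// previously recorded frames back out, in recorded order.
class ReplayFrameSource : public IFrameSource
{
public:
    explicit ReplayFrameSource(std::queue<Frame> recorded)
        : mClip(std::move(recorded)) {}

    bool TryGetNextFrame(Frame& frame) override
    {
        if (mClip.empty()) return false;
        frame = std::move(mClip.front());
        mClip.pop();
        return true;
    }

private:
    std::queue<Frame> mClip;
};

// A consumer written against IFrameSource works identically with
// a live source or a replayed clip.
int CountFrames(IFrameSource& source)
{
    int count = 0;
    Frame f;
    while (source.TryGetNextFrame(f)) ++count;
    return count;
}
```

The real service does this across process boundaries via named pipes, but the design principle is the same: the consumer never knows the difference.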

Before you go thinking of ideas about how to exploit this concept, I am almost certain Microsoft will only license this as a test bed, and it will probably only be available for test-based scenarios. In other words, I doubt Microsoft will release this mechanism as a production-time assistant that multiplies the number of Kinect devices via this pseudo Kinect device proxy replica; however, we must wait and see what Microsoft decides to do with it.

Thus, in order to use this approach, you need either a live Kinect v2 device – which sends live data and feeds to the Kinect Service – or you need to run the KinectStudioHostService application and open an .xef file for the service host to read in order to mimic the Kinect v2 device. The latter you do by clicking the “Connect” button to interact with an already running instance of KinectStudioHostService.exe:


Once connected, and with the KinectService running, the remaining features mentioned earlier open up.


Side note: Make sure you start KinectService.exe before you open a file from the repository. Having the KinectService already running allows the KinectStudioHostService to communicate with the KinectService, which in turn allows an application to connect to the Kinect v2 device or its pseudo-replica, the KinectStudioHostService.

Usage Patterns:

There are many ways in which this application was intended to be used, and of course some that are not intended. Let me first say that this tool is not really set up for machine learning. The amount of data, computers, and repository storage needed for machine learning, or even big data analysis, far outreaches this tool. However, one of my friends and colleagues, Andreas, suggested that maybe we put together a big repository of recorded clips (.xef files) so that we can use it like a big test bed repository. Well, maybe we could do some poor man’s version of machine learning…??? Anyway, with the have-nots out of the way, let’s continue with the haves…

  1. Functional Testing your Kinect v2 Application.
  2. Supporting multiple development environments (where there are not enough Kinect devices). One can record hundreds of samples and then share the repository using a network share, where developers can use the samples to test the application
  3. Finding dead pixels in your Kinect v2 device
  4. Viewing raw values from Kinect v2 device

There are also many usage patterns where I would personally like to see it used; however, for this release they are not available – and may not be unless we all speak up…

  1. Programmatic access to KinectStudio
    1. Automate unit tests or Functional tests for various parts of the application
      1. The idea here is that if you can programmatically control playback and recording, it opens the door to more opportunities. One such opportunity is the ability to create unit tests and have them launch with automated builds using Team Foundation Server. Picture this: a developer checks in some logic to test whether a hand is in a gripping motion. The automation can go through multiple recorded gripping samples, play the action against an automated running instance, and return a range of values. These values can determine whether the custom logic the developer created fits the criteria for a successful unit test.
    2. Automate recording of certain events.
      1. With the use of security features in mind, when a particular event is raised a script can start the recording process for later retrieval and monitoring such as security cameras do
      2. Another idea is the ability to record certain events for athletic purposes to show good posture, versus bad posture and notify experts
  2. Release the application as a production tool, or as a separate SKU, and allow it to be skinned or have features removed, serving as a detail view for a custom Kinect v2 application for monitoring and debugging purposes
  3. Provide a way to view the raw details for reporting mechanisms against a custom Kinect v2 application.
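To illustrate the automated-testing idea from point 1 above, here is a small hypothetical C++ sketch. The `HandSample`, `IsGripping`, and `CountGripFrames` names (and the 3 cm threshold) are all invented for illustration; in a real setup the samples would come from replaying recorded .xef clips through the Kinect Studio proxy.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for one frame's worth of hand data; in a
// real harness this would be derived from a replayed body frame.
struct HandSample
{
    double fingertipSpread; // metres between thumb and index fingertip
};

// The "custom logic the developer created": flag a grip when the
// fingertips close within an (illustrative) threshold.
bool IsGripping(const HandSample& s)
{
    return s.fingertipSpread < 0.03;
}

// Replay a recorded sequence of samples and report how many frames
// the detector classified as gripping; a unit test can then assert
// the count falls in the expected range for that clip.
int CountGripFrames(const std::vector<HandSample>& clip)
{
    int grips = 0;
    for (const auto& s : clip)
        if (IsGripping(s)) ++grips;
    return grips;
}
```

A gated build could run dozens of such assertions, one per recorded clip, and fail the check-in when the detector's output drifts.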

Steps to send data to a custom application through KinectStudio v2

The steps I would take are:

  1. Start the KinectStudioHostService.exe application. (If it’s the first time you’re using it, you must set the repository folder location using the /d switch.)
  2. Start KinectService.exe application.
  3. Open KinectStudio then click on Connect
  4. Open a sample clip or recording from the Repository – or Use a live device
  5. Start a live stream (if chosen)
  6. Start up a custom application that expects the Kinect v2 Device
  7. Hit play (for the .xef/.xrf file from the repository), or start recording from a live device.


In case you’re wondering what all this sums up to, I’ll tell you. This tool will allow you to test custom applications which utilize the Kinect v2 device for Windows. You can record a person interacting with your application, and play that clip back time and time again to test the functionality of your application. You can see depth values, IR values, and color pixel coordinates. The best part about all this is that once you have one or more recorded clips, you don’t need a physical device to test the custom application. You can simply link KinectStudio v2 up to your Kinect Service and Kinect Host proxies, then launch your custom application through VisualStudio.Net or by executing it directly, and sit back and monitor!

Watch the Musical Quick Video here

Watch the discussion part 1 here

Watch the discussion part 2 here


Kinect Heart Rate Detector


As my brother so nicely puts it… ”The first Goins Collaboration…” presents to you the Kinect Heart Rate Detector sample application. In the coming days I will blog in detail about how this application works and provide you with insight into how to make the Kinect v2 device measure your heart rate. For now, just view the video here: http://youtu.be/LnX0qko-OOk and get the sample application from the link here: https://k4wv2heartrate.codeplex.com/

Happy Kinecting!!!

Working with Kinect v2 Events in Modern C++

This post was republished to D Goins Espiriance at 4:35:52 PM 1/30/2014

Working with Kinect v2 Events in Modern C++

I am currently in the process of trying to determine rates of change of various data points, such as infrared, color, and depth values, from the Kinect for Windows v2 device. As I wrote the code to interact with the Kinect v2 application programming interface (API), I utilized a “gamer’s” loop to poll for frames of data coming from the device.

By nature of the polling architecture, I am constantly checking for frame data from the Kinect device, roughly every nanosecond. As I get the frame data, I run through some mathematical calculations to get the rates of change. I sat back and wondered whether the rate-of-change values I calculate would be the same if I utilized the event-based architecture of the Kinect v2 API.

The event-based architecture that the Kinect v2 API supports allows the Kinect v2 device to notify your application when a frame is ready for processing. So instead of just checking for a frame every nanosecond, I could let the device send a signal to let me know when a frame was ready to be processed. All is cool; now I wonder if the time it takes for the signal to be recognized, plus the time it takes to process the frame (aka latency), would cause any rate-of-change differences between the polling design and this one.

Currently I am in the developer preview program for the Kinect for Windows v2 device, which means I was lucky enough to get my hands on a pre-production device sooner rather than later. I will circle back around once I have the final production-ready device and post production-ready results here. Alas, this article is not about the latency differences, if any, but rather my journey learning how to work with Kinect v2 events in modern C++ applications.

I decided to seek out an example of how to use the event-based architecture of the Kinect v2 API. I wanted to know exactly how to implement something like this using modern C++. What I learned is that the Kinect for Windows team did a great job of explaining the steps required. The only issue was that there was no code example anywhere. All I had was some code snippets from them and a quick 5-minute explanation of the high-level steps. I guess if I were a 20-year C++ veteran who had been writing only C++ apps for the past 20 years, I would laugh at this blog post…

Well, obviously that’s not the case. I started my development days as a C++ developer, moved into Java, J++, and Visual Basic, then the C# and VB.Net programming languages. This move caused me to put all my C++ programming habits on the back burner until now. I needed to dust off that C++ hat and go back to the thing that started my developer enthusiasm, hence the purpose of this article.

What I learned is that working with the event model in modern C++ was a delight and pretty much straightforward. You can find the results of my steps and learning here (https://k4wv2eventsample.codeplex.com/). Following below are my steps to accomplish this.


1. Create a new Visual Studio 2013 C++ project based on the Win32 project template. Compile and run the application to make sure you get a basic Windows desktop application running with the defaults.

2. Next I’m just going to add a menu item to the resource file for the purpose of adding a click command to launch the Kinect v2 process:

3. In the Solution Explorer view, double-click the [projectname].rc file to edit it and locate the menu resource. Add an entry inside the menu for “Start Kinect”.

4. (screenshot)

5. (screenshots)

6. With the new menu item added and selected navigate to the properties window and add a new ID value:

7. (screenshot)

8. Save, compile, and run your project (Ctrl+S, F5).

9. Verify that the menu item is now in your application.

10. Open the [ProjectName].cpp source file. Add an entry to the WndProc procedure, inside the switch statement, that listens for the new menu item command:

LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    int wmId, wmEvent;
    PAINTSTRUCT ps;
    HDC hdc;

    switch (message)
    {
    case WM_COMMAND:
        wmId = LOWORD(wParam);
        wmEvent = HIWORD(wParam);
        // Parse the menu selections:
        switch (wmId)
        {
        case IDM_ABOUT:
            DialogBox(hInst, MAKEINTRESOURCE(IDD_ABOUTBOX), hWnd, About);
            break;
        case IDM_STARTKINECT:
            StartKinect();
            break;
        case IDM_EXIT:
            DestroyWindow(hWnd);
            break;
        default:
            return DefWindowProc(hWnd, message, wParam, lParam);
        }
        break;
    case WM_PAINT:
        hdc = BeginPaint(hWnd, &ps);
        // TODO: Add any drawing code here...
        EndPaint(hWnd, &ps);
        break;
    case WM_DESTROY:
        PostQuitMessage(0);
        break;
    default:
        return DefWindowProc(hWnd, message, wParam, lParam);
    }
    return 0;
}


11. Also in the same source file, change the message loop inside the _tWinMain procedure (the Win32 template’s entry point) to be a “gamer’s loop” using the while (true) { … PeekMessage() … } design:


while (true)
{
    while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
    {
        DispatchMessage(&msg);
    }

    if (ke.hIREvent)
    {
        //TRACE(L"Kinect Event ID: %d", (int)ke.hIREvent);

        // now check for IR Events
        HANDLE handles[] = { reinterpret_cast<HANDLE>(ke.hIREvent) }; // , reinterpret_cast<HANDLE>(ke.hMSEvent) };

        switch (MsgWaitForMultipleObjects(_countof(handles), handles, false, 1000, QS_ALLINPUT))
        {
        case WAIT_OBJECT_0:
        {
            IInfraredFrameArrivedEventArgs* pArgs = nullptr;
            TRACE(L"IR Frame Event Signaled.");

            if (ke.pReader)
            {
                HRESULT hr = ke.pReader->GetFrameArrivedEventData(ke.hIREvent, &pArgs);
                TRACE(L"Retrieve Frame Arrived Event Data - HR: %d", hr);

                if (SUCCEEDED(hr))
                {
                    TRACE(L"Retrieved Frame Arrived Event Data");
                    ke.InfraredFrameArrived(pArgs);
                    pArgs->Release();
                    TRACE(L"Frame Arrived Event Data Released");
                }
            }
        }
        break;
        }
    }
    if (WM_QUIT == msg.message)
    {
        break;
    }
}

return (int) msg.wParam;


12. Add the following StartKinect() declaration and struct to your [projectName].h header file:


#pragma once
#include "resource.h"
#include "common.h"
#include <Kinect.h>
#include <memory>
#include <algorithm>

using namespace std;

struct KinectEvents
{
public:
    std::unique_ptr<IKinectSensor> pKinect;
    std::unique_ptr<IInfraredFrameSource> pSource;
    std::unique_ptr<UINT16*> pInfraredData;
    std::unique_ptr<IInfraredFrameReader> pReader;
    WAITABLE_HANDLE hIREvent;
    UINT mLengthInPixels;
    bool mIsStarted;
    std::unique_ptr<IMultiSourceFrameReader> pMultiSourceFrameReader;
    WAITABLE_HANDLE hMSEvent;

    KinectEvents() : pKinect(nullptr),
        pSource(nullptr),
        pInfraredData(nullptr),
        pReader(nullptr),
        hIREvent(NULL),
        mLengthInPixels(0),
        mIsStarted(false),
        pMultiSourceFrameReader(nullptr),
        hMSEvent(NULL)
    {
        TRACE(L"KinectEvents Constructed");
        // Initialize Kinect
        IKinectSensor * pSensor = pKinect.get();
        HRESULT hr = GetDefaultKinectSensor(&pSensor);
        if (SUCCEEDED(hr))
        {
            TRACE(L"Default Kinect Retrieved - HR: %d", hr);
            // we have a kinect sensor
            pKinect.reset(pSensor);
            KinectStatus status;
            hr = pKinect->get_Status(&status);
            TRACE(L"Kinect is valid device - status: %d\n", status);
        }
    }

    ~KinectEvents()
    {
        TRACE(L"KinectEvents Destructed");
        if (hIREvent)
        {
            TRACE(L"Handle %d - being released...", hIREvent);
            HRESULT hr = pReader->UnsubscribeFrameArrived(hIREvent);
            if (SUCCEEDED(hr))
                TRACE(L"Handle to InfraredFrame Event Successfully Released");
            else
                TRACE(L"Handle to InfraredFrame Event Not Released");
        }
        hIREvent = NULL;
        TRACE(L"Handle to InfraredFrame set to NULL");
        if (hMSEvent)
        {
            TRACE(L"Handle %d - being released...", hMSEvent);
            HRESULT hr = pMultiSourceFrameReader->UnsubscribeMultiSourceFrameArrived(hMSEvent);
            if (SUCCEEDED(hr))
                TRACE(L"Handle to MultiSource Frame Event Successfully Released");
            else
                TRACE(L"Handle to MultiSource Frame Event Not Released");
        }
        hMSEvent = NULL;
        TRACE(L"Handle to MultiSource Frame Event set to NULL");
        pReader.release();
        pReader = nullptr;
        TRACE(L"InfraredFrame Reader Released");
        pInfraredData.release();
        pInfraredData = nullptr;
        TRACE(L"InfraredFrame Data buffer Released");
        pSource.release();
        pSource = nullptr;
        TRACE(L"InfraredFrameSource Released");
        pMultiSourceFrameReader.release();
        pMultiSourceFrameReader = nullptr;
        TRACE(L"Multi Source Frame Reader Released");
        if (pKinect)
        {
            HRESULT hr = pKinect->Close();
            TRACE(L"Closing Kinect - HR: %d", hr);
            HR(hr);
            TRACE(L"HR : %d", hr);
            pKinect.release();
            pKinect = nullptr;
            TRACE(L"Kinect resources released.");
        }
    }

    void Start()
    {
        ASSERT(pKinect);
        if (!mIsStarted)
        {
            ICoordinateMapper * m_pCoordinateMapper = nullptr;
            HRESULT hr = pKinect->get_CoordinateMapper(&m_pCoordinateMapper);
            TRACE(L"Retrieved CoordinateMapper - HR: %d", hr);
            IBodyFrameSource* pBodyFrameSource = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pKinect->get_BodyFrameSource(&pBodyFrameSource);
                TRACE(L"Retrieved Body Frame Source - HR: %d", hr);
            }
            IBodyFrameReader * pBodyFrameReader = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pBodyFrameSource->OpenReader(&pBodyFrameReader);
                TRACE(L"Opened Kinect Reader - HR: %d", hr);
            }
            IInfraredFrameSource * pIRSource = nullptr;
            if (SUCCEEDED(hr))
            {
                hr = pKinect->get_InfraredFrameSource(&pIRSource);
                TRACE(L"Retrieved IR Frame Source - HR: %d", hr);
            }
            if (SUCCEEDED(hr)) {
                TRACE(L"Kinect has not started yet... Opening");
                hr = pKinect->Open();
                TRACE(L"Opened Kinect - HR: %d", hr);
            }
            //// Allocate a buffer
            IFrameDescription * pIRFrameDesc = nullptr;
            if (SUCCEEDED(hr)) {
                pSource.reset(pIRSource);
                hr = pIRSource->get_FrameDescription(&pIRFrameDesc);
                TRACE(L"Retrieved IR FRAME Source - HR: %d", hr);
            }
            UINT lengthInPixels = 0;
            if (SUCCEEDED(hr)) {
                // pSource.reset(pIRSource);
                hr = pIRFrameDesc->get_LengthInPixels(&lengthInPixels);
                TRACE(L"Retrieved IR FRAME Description Pixel Length", hr);
            }
            auto ret = pIRFrameDesc->Release();
            TRACE(L"IR FrameDescription Released %d", ret);
            IInfraredFrameReader * pIRReader = nullptr;
            if (SUCCEEDED(hr)) {
                TRACE(L"Length In Pixels: %d", lengthInPixels);
                mLengthInPixels = lengthInPixels;
                pInfraredData = make_unique<UINT16*>(new UINT16[lengthInPixels]);
                hr = pSource->OpenReader(&pIRReader);
                TRACE(L"Opened IR Reader");
            }
            if (SUCCEEDED(hr)) {
                pReader.reset(pIRReader);
                hr = pReader->SubscribeFrameArrived(&hIREvent);
                TRACE(L"Reader Accessed Successfully");
                TRACE(L"Subscribe to Frame Arrived Event call - HR: %d", hr);
            }
            if (SUCCEEDED(hr)) {
                TRACE(L"Successfully Subscribed to Frame Arrived EventID: %d", (UINT)hIREvent);
            }
            mIsStarted = true;
        }
    }

    void InfraredFrameArrived(IInfraredFrameArrivedEventArgs* pArgs)
    {
        TRACE(L"IR Frame event arrived");
        ASSERT(pArgs);
        IInfraredFrameReference * pFrameRef = nullptr;
        HRESULT hr = pArgs->get_FrameReference(&pFrameRef);
        if (SUCCEEDED(hr)) {
            // we have a frame reference; now acquire the frame
            TRACE(L"We have a frame reference - HR: %d", hr);
            bool processFrameValid = false;
            IInfraredFrame* pFrame = nullptr;
            TIMESPAN relativeTime = 0;
            hr = pFrameRef->AcquireFrame(&pFrame);
            if (SUCCEEDED(hr)) {
                TRACE(L"We have acquired a frame - HR: %d", hr);
                // Now copy the frame's data to the buffer
                hr = pFrame->CopyFrameDataToArray(mLengthInPixels, *pInfraredData);
                if (SUCCEEDED(hr)) {
                    TRACE(L"We have successfully copied ir frame data to buffer");
                    processFrameValid = true;
                    hr = pFrame->get_RelativeTime(&relativeTime);
                    TRACE(L"Relative Time: - HR: %d\t Time: %d", hr, relativeTime);
                }
                auto ret = pFrame->Release();
                TRACE(L"IR Frame released: %d", ret);
            }
            auto ret = pFrameRef->Release();
            TRACE(L"IR Frame Reference released: %d", ret);
            if (processFrameValid)
                ProcessFrame(mLengthInPixels, *pInfraredData, relativeTime);
        }
    }

    void ProcessFrame(UINT length, UINT16 * pBuffer, TIMESPAN relativeTime)
    {
        TRACE(L"Process Frame Called.\nBufferLength: %d\n\tTimeSpan: %d", length, relativeTime);
    }
};

void StartKinect();


13. Add a Common.h header file to your project which contains the following:


#pragma once

#include <wrl.h>
#include <algorithm>

#pragma warning(disable: 4706)
#pragma warning(disable: 4127)

namespace wrl = Microsoft::WRL;
using namespace std;
using namespace wrl;

#define ASSERT(expression) _ASSERTE(expression)

#ifdef _DEBUG
#define VERIFY(expression) ASSERT(expression)
#define HR(expression) ASSERT(S_OK == (expression))
inline void TRACE(WCHAR const * const format, ...)
{
    va_list args;
    va_start(args, format);
    WCHAR output[512];
    vswprintf_s(output, format, args);
    OutputDebugString(output);
    va_end(args);
}

#else

#define VERIFY(expression) (expression)

struct ComException
{
    HRESULT const hr;
    ComException(HRESULT const value) : hr(value) {}
};

inline void HR(HRESULT const hr)
{
    if (S_OK != hr) throw ComException(hr);
}

#define TRACE __noop
#endif

#if WINAPI_FAMILY_DESKTOP_APP == WINAPI_FAMILY

#include <atlbase.h>
#include <atlwin.h>

using namespace ATL;

template <typename T>
void CreateInstance(REFCLSID clsid, wrl::ComPtr<T> & ptr)
{
    _ASSERT(!ptr);
    CoCreateInstance(clsid, nullptr, CLSCTX_INPROC_SERVER,
        __uuidof(T), reinterpret_cast<void **>(ptr.GetAddressOf()));
}

struct ComInitialize
{
    ComInitialize()
    {
        CoInitialize(nullptr);
    }
    ~ComInitialize()
    {
        CoUninitialize();
    }
};

// Safe release for interfaces
template<class Interface>
inline void SafeRelease(ComPtr<Interface> pInterfaceToRelease)
{
    if (pInterfaceToRelease)
    {
        pInterfaceToRelease.Reset();
        pInterfaceToRelease = nullptr;
    }
}

// Safe release for interfaces
template<class Interface>
inline void SafeRelease(Interface *& pInterfaceToRelease)
{
    if (pInterfaceToRelease != nullptr)
    {
        pInterfaceToRelease->Release();
        pInterfaceToRelease = nullptr;
    }
}

template <typename T>
struct WorkerThreadController
{
public:
    WorkerThreadController() { }
    ~WorkerThreadController() { }
    static DWORD WINAPI StartMainLoop(LPVOID pwindow)
    {
        MSG msg = { 0 };
        while (pwindow)
        {
            T * pSkeleton = reinterpret_cast<T *>(pwindow);
            TRACE(L"Calling Update in worker thread main loop");
            pSkeleton->Update();
            Sleep(10);
        }
        return 0;
    }
};
#endif


14. Now it’s time to compile; however, we first have to make sure our C++ project has access to all the header files and libraries required for compiling a Kinect v2 project.

15. First, open the project properties and navigate to the C/C++ All Options tab. Choose the Active(x64) platform, as the Kinect v2 SDK currently comes in 64-bit only. Set Additional Include Directories to point to the location where the Kinect v2 SDK is installed and select the …inc\ folder:

16. (screenshot)

17. Next, select the Linker All Options tab, choose the folder where the Kinect20.lib file can be found, and add Kinect20.lib to Additional Dependencies:

18. (screenshot)

19. Compile the solution (Ctrl+Shift+B).

20. Plug in your Kinect v2 device and start the KinectService.exe proxy application.

21. Open an application that supports viewing debug output (VS.Net, Sysinternals DebugView, etc.)

22. Run DebugView

23. Navigate to your debug folder and double-click the executable (KinectEvents_Sample.exe in my case)

24. (screenshot)

25. Once the application starts, on the File menu click Start Kinect

26. Watch the events fly in as new frames are detected and the device notifies your application.

27. (screenshot)

Presenter Quirks: through the Kinect Office Pack Plugin


Last year was a good year, and this year will be even better for techies such as myself. To start the year off right I want to talk about my newest adventure and project.

The adventure deals with the Kinect. Not just any Kinect, the new Kinect for the Xbox One, currently known as the Kinect for windows v2.

My team and I are on a new project… We’re calling it internally:

“Presenter Quirks”

It is a suite of applications and add-ins implemented for Microsoft software, Windows 8 devices, and especially Microsoft Office applications like PowerPoint. This suite will assist you in becoming a great presenter, orator, lecturer, or speaker. It works by way of the Kinect for Windows v2 device, measuring your speech, movement, gestures, and body statistics such as heart and speech rate, along with colloquial terms and (for the hip generation) slang. As a quick example, let’s say you use Microsoft Office PowerPoint 2013. Perhaps you would like to observe and perfect your presentation skills by seeing how many times you say words like “Umm”, “Eh”, or “Ah”, or even phrases like “like…”, “you know what I mean”, or “you follow?”. Perhaps you are a beginning English student and you want to perfect your English persona.
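As a toy illustration of the filler-word counting idea (this is not the actual Presenter Quirks code; the function and its word list are invented for the example), counting “Umm”-style fillers in a speech transcript might look like this in C++:

```cpp
#include <cctype>
#include <map>
#include <sstream>
#include <string>

// Count how many times each filler word appears in a transcript.
// The filler list here is illustrative only.
std::map<std::string, int> CountFillers(const std::string& transcript)
{
    const std::string fillers[] = { "umm", "eh", "ah" };
    std::map<std::string, int> counts;
    std::istringstream words(transcript);
    std::string word;
    while (words >> word)
    {
        // Normalize: keep alphabetic characters only, lowercased,
        // so "Umm," and "umm" count as the same word.
        std::string w;
        for (char c : word)
            if (std::isalpha(static_cast<unsigned char>(c)))
                w += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        for (const auto& f : fillers)
            if (w == f) ++counts[f];
    }
    return counts;
}
```

In the real product the transcript would come from live speech recognition rather than a string, but the tallying step is conceptually this simple.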

Maybe you even want to cut down on how much you talk with your hands and keep them within a certain Physical Interaction zone (phiz…). Maybe you want to track your body language as you walk across a stage while you’re speaking and you’d like to cut down on that. Maybe you’re nervous as heck, and your heart rate is beating too fast, and you’d like an animation to play to keep the crowd entertained, to lighten the pressure. Or perhaps you want to randomly monitor your audience to see if you are holding their attention while speaking…

If any one or all of these apply to you, then you may be interested in finding out more about Presenter Quirks. All of this can be done with this new application suite and software we are producing plus a whole lot more.

I am excited to introduce a sneak peek at phase 1 of Presenter Quirks.

Listed below are some screenshots of the PowerPoint Add-In, with the Kinect enabled that will be available with Presenter Quirks.


Here is the application running with PowerPoint 2013 with the PIP (picture-in-picture) feature turned on…

The above feature is the ability to put you as the presenter inside your presentation:

Another feature is actually controlling the presentation by pointing and utilizing the Laser feature of PowerPoint:

The above picture just shows some other features Presenter Quirks will offer.

Presenter Quirks will also support controlling PowerPoint with voice and hand gestures.

Now there are other samples that have managed to make the PowerPoint slides go forward and back using hand gestures such as here (http://www.kinecthacks.com/microsoft-demos-kinect-powerpoint/ ) and here (http://kinectpowerpoint.codeplex.com/ ), and these are pretty cool. But sorry guys, this still doesn’t compare to Presenter Quirks.

Presenter Quirks has the ability to record your body metrics and report back statistics that give you the opportunity to become a better presenter, lecturer, speaker, or news bearer. Controlling PowerPoint is easy; what’s hard is making sure your audience is entertained and focused on the message you’re delivering. The best way to do this is to become a great speaker, and have the tools by your side to verify it. Plus, a little help from PowerPoint automation doesn’t hurt…


Well, I don’t want to reveal all the capabilities in one post, so stay tuned. We have way more stuff planned and coming up!!!

Build Your First Kinect for Windows Application

Ok, so many of you have asked where I have been and what I have been doing over the past few months… well, here it is:

Recently I have been working on some new training videos dealing with the Kinect for Windows. The first video is published and you all can take a look, as well as learn some other KOOL things too…

Go to this site: http://www.wintellectnow.com/
My Video can be found here: http://www.wintellectnow.com/Videos/Watch/build-your-first-kinect-for-windows-app

Your 14-day promotional code is Goins-13; it will give you full access to the site for 14 days. To use the code, follow these instructions:

1. go to http://www.wintellectNOW.com
2. click on the Sign up Now button
3. Select “Individual Plan”
4. Enter “Goins-13” in the Promo code box.
5. Fill out the rest of the form

If you get a chance, check it out… I’d love to hear your feedback. Also, please feel free to pass this on to your friends, family, brotherhood, and sisterhood. Thank you.

Custom Binding Elements and Custom Behaviors won’t Show In BizTalk WCF Custom Adapter?


Hey all, this is a quick troubleshooting post on how to get your custom binding elements and custom behavior elements showing in BizTalk 2010 and 2013. This post also serves as a reminder for me :)

The other day I was confronted with getting some WCF Custom adapters configured. In one scenario, I needed to use a custom WCF binding I created. In another, I needed to use a custom WCF behavior. So I went through the normal steps of using C# and Visual Studio 2012 to create a custom binding and a custom behavior. You can read about that here http://msdn.microsoft.com/en-us/magazine/cc163302.aspx and here http://trycatch.me/adding-custom-message-headers-to-a-wcf-service-using-inspectors-behaviors/.

Once I did this, I figured all I needed to do was register it in the machine.config file and BizTalk would be able to see it… “Hold your horses, Tonto,” sayeth the Lone Ranger.

This is true; however, it depends on what you register. For BizTalk to see the custom binding and behavior, you need to register the binding element or behavior element, not the specific bindings or behaviors themselves. You also need to make sure you register it in both the 32-bit and 64-bit config files. In my opinion, the easiest way to register these extensions is with the graphical WCF Configuration tool shown below.
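If you prefer to edit the config by hand, the registration boils down to a fragment like the following under `system.serviceModel` in machine.config (in both the 32-bit and 64-bit copies). This is only a sketch: the extension names, type names, assembly name, and public key token are hypothetical placeholders you would swap for your own.

```xml
<!-- machine.config (both Framework and Framework64 copies), inside <system.serviceModel> -->
<system.serviceModel>
  <extensions>
    <bindingElementExtensions>
      <!-- Register the BindingElement's configuration element, not the binding itself -->
      <add name="myCustomTransport"
           type="MyCompany.Wcf.MyCustomTransportElement, MyCompany.Wcf, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
    </bindingElementExtensions>
    <behaviorExtensions>
      <!-- Register the BehaviorExtensionElement that exposes the behavior -->
      <add name="myMessageInspector"
           type="MyCompany.Wcf.MyInspectorBehaviorElement, MyCompany.Wcf, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
    </behaviorExtensions>
  </extensions>
</system.serviceModel>
```

The WCF Configuration tool writes essentially this same fragment for you, which is why it is the less error-prone route: the type attribute must be the full assembly-qualified name, and a typo there silently keeps the extension out of the adapter's list.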

Once the behavior or binding is registered properly, you can see it inside the BizTalk Administration Console.


This means that you can use practically any properly registered WCF behavior or binding with the BizTalk WCF-Custom adapters… including the ability to send data to Azure Bridges, Topics, Queues, Azure ACS, and Azure EDI Bridges.





Never Ending TFS 2010 Unit Tests

This title is purposely misleading. Rather, it should be “How to get Visual Studio 2012 unit tests to run with TFS Build Server 2010.” However, the problem was the result of not completely configuring my build environment, as outlined in the text below…

Have you ever created a TFS 2010/2012 build definition and tried to make sure that unit tests were executing correctly, only to check it and notice that the unit tests were not actually executing, even though TFS Build Server displayed that they were? To make matters worse, the unit tests never ended, and the TFS build sat there waiting for the tests to finish. To top it off, these builds were “gated” check-in builds, which basically meant no one could check in their code until the build completed successfully. In my case that wasn't happening, because the unit tests never completed; heck, they never truly started.

You tried everything, and not once did it occur to you that you were running Visual Studio 2012 unit tests with TFS Build Server 2010. TFS Build Server 2010 is designed to work with Visual Studio 2010. Even though it will work with VS 2012, TFS Build Server 2010 knows nothing about VS 2012, including its unit tests.

So… I searched… http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/ab53380d-36fd-40e5-8494-3cb9560578b5 and I asked http://social.msdn.microsoft.com/Forums/en-US/tfsbuild/thread/252f96cd-4b67-4075-ba82-57aa322f69ec

Luckily, one of the MS TFS Experts: John Qiao, pointed me in the right direction and gave me the answer…

The first thing I had to do was create a new build process template based on the default template, so that I wouldn't mess up any default settings and build definitions.

The next thing I had to do was modify the section that invokes the unit testing engine, MSTest.

That was easy enough: just open the custom template I created, which happens to be a WF 4.0 workflow, navigate to the “Run MSTest for Test xxx” activity, and modify its ToolPath property setting:
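For reference, on a default install the VS 2012 tooling lives under the “Microsoft Visual Studio 11.0” folder, so the ToolPath would point at something like the directory below (a sketch only; your path will differ if, as on my build machine, the install location was customized):

```text
C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE
```

Pointing ToolPath at the VS 2012 folder makes the build invoke the VS 2012 MSTest, which understands the new test project format, instead of the 2010 copy that hangs on them.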

That was it… my Visual Studio 2012 unit tests were executing in TFS Build Server 2010 successfully… (BTW, I know my path is misspelled. When VS was installed, the path was manually configured and typed in wrong… I didn't do it, promise! :)



Automating Windows Azure Virtual Machines with System Center 2012 Service Manager

Greetings all,

Today I take a break from the more technical underpinnings of development tasks to peek into the infrastructure and operations management side of things. This brief post will explain how easy it is to use Microsoft System Center 2012 Service Manager, Orchestrator runbooks, and Windows Azure Virtual Machines together to integrate and automate the provisioning of new VM instances.

Before I get into specifics, let's discuss these topics at a high level to make sure everyone understands the subject at hand.


Overview of System Center 2012 SP1

System Center 2012 SP1 is a suite of applications produced by Microsoft for managing enterprise servers, client machines, inventory and asset tracking, enterprise applications, request offerings, problem management, incident reports, and much more. System Center 2012 comprises Operations Manager, Virtual Machine Manager, AppController, Data Protection Manager, Configuration Manager, Service Manager, and Orchestrator.

Operations Manager is used to monitor servers such as SQL Server databases, BizTalk Servers, and IIS web servers. It can detect when these servers go offline and when errors occur on them, and it can notify the groups of individuals responsible for the health and maintenance of these servers.

Virtual Machine Manager is an application used to manage, create, and monitor virtual machines created with Hyper-V. A virtual machine is a virtualized computer that runs in the memory process of a host hypervisor application (Hyper-V). The hypervisor divides up its hardware, memory, and other assets to provide resources for the virtual machine image.

AppController is an application that allows companies to manage their virtual machines created in Virtual Machine Manager from a self-provisioning perspective. A web site can be configured to allow internal employees to request and self-provision virtual machines across a private internal network, or even the public network known nowadays as the public cloud.

Data Protection Manager is an application that allows you to back up and restore virtual machine hard drives, databases, and applications stored on network drives and network shares. It helps automate the backup and restore process for large files and systems.

Configuration Manager is an application that allows companies to keep track of all hardware and all versions of software installed on a network. It can also be used as an inventory and asset-tracking repository.


System Center Service Manager 2012

Service Manager is an application that allows enterprises to manage activity, change, incident, problem, release, and service request items. It also allows you to create your own forms of item management. These items are usually referred to as work items and can include something called a request offering, which is simply where an IT department offers various capabilities to other internal departments. These capabilities, referred to in System Center as “services,” can be anything from purchasing new laptops to fixing the office telephone.

Typically, when these services are offered, there is a cost associated with them, and that cost must be managed so the services can be billed to the appropriate requesting department. This is where Service Manager comes in. It allows enterprises to create one or more of these services, as they apply to the business, and create a request form to expose to internal departments so they can request the capability. There is also a self-service aspect to this application, similar to AppController for virtual machines: employees can request a service from a web site configured to host these self-service forms.

Which leads us to the purpose of this posting. Service Manager easily allows you to create forms that expose the “services” or capabilities an IT department offers to the co-workers and departments it must serve. AppController and VMM allow employees to create virtual machines and provision them within their private network (private cloud) or an external network (public cloud). With these applications alone, however, there is no bridge that allows you to provision Windows Azure public resources, such as Mobile Services, Azure Virtual Machines, Azure storage accounts and containers, Azure networks, and so on.

With the rise of public cloud offerings such as Windows Azure, it only makes sense to support them from a powerful suite of applications like System Center 2012 SP1. To do this, we must configure and develop business logic that can automate workflows and call out to the public cloud APIs.


Overview System Center Orchestrator 2012

Enter the world of Orchestrator. System Center Orchestrator is an application that allows developers to further extend the System Center suite to call out into the external world of public cloud offerings such as Windows Azure. It contains something called “runbooks.” A runbook is a workflow, built in a graphical designer, that lets a developer configure and create some very fancy and complex automations. It can do things like send an email when a request for a new telephone line is generated, or even automatically provision a new Windows Azure Virtual Machine instance when a request is generated. For the BizTalk devs out there, think of a runbook as similar to a WF workflow or a BizTalk orchestration, except that it has its own editor rather than being hosted inside Visual Studio.



Overview of Windows Azure

Windows Azure is Microsoft's public cloud service offering. It provides a multitude of capabilities: web sites, virtual machines, Mobile Services, cloud services, SQL databases, storage containers, big data processing, virtual networks, SQL Reporting, Media Services, Active Directory, and more.


Windows Azure Virtual Machines is a service that allows you to create your own virtualized computer and have it hosted and maintained in Windows Azure. This means that if you have a company and need to hire a contractor or employee to do some work, you can create a virtualized computer, configured on your internal network, to access your sensitive information. The data never has to leave your controlled and monitored environment. It's very easy to set up a server or a workstation for virtualization. Below is a screenshot of the new VM wizard showing just how easy it is.


How it all fits together to Automate Windows Azure Virtual Machines with System Center 2012 Service Manager

So now that you overstand what System Center 2012 SP1 is, and you overstand Windows Azure Virtual Machines, let's discuss how we can integrate the two to automate the Windows Azure wizard. The first thing we need is a way for Service Manager to maintain and monitor our requests for provisioning a Windows Azure virtual machine.

Service Manager gives you the ability to create a service request item. The only catch is that the out-of-the-box service request item does not contain any field entries for DNS name, image size, image name, password, affinity group, or location, as the Windows Azure wizard does. This means we will need to extend the service request item in Service Manager. Extending it requires that you create your own class and derive from the base class. This is a topic for another post; however, I will say it is very easy to do, and once it's done you can create a custom form to display inside Service Manager to keep track of these special fields and more.

After you have created a custom form, you can use System Center Orchestrator to create a custom runbook that retrieves the filled-out entries from the request form and automates the process of provisioning Windows Azure virtual machines. Below is an example of a runbook created to monitor the request form being filled out and created, and then invoke the Windows Azure runbook activities to provision the virtual machines.
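For context, the provisioning step the runbook performs can also be sketched with the Windows Azure PowerShell cmdlets of that era. This is an illustration only, not the runbook itself: the publish-settings path, service name, image name, credentials, and location below are hypothetical placeholders, and the exact parameter set of `New-AzureQuickVM` varied between SDK releases.

```powershell
# Illustrative sketch only: provision a new Windows Azure VM from the values
# captured on the extended service request form. All names are placeholders.
Import-AzurePublishSettingsFile "C:\Secure\MySubscription.publishsettings"

$vmParams = @{
    Windows       = $true
    ServiceName   = "contoso-svc"       # cloud service / DNS name from the form
    Name          = "contoso-vm01"      # VM name from the form
    ImageName     = "Windows-Server-2012-Datacenter"  # gallery image chosen on the form
    InstanceSize  = "Small"             # image size from the form
    AdminUsername = "vmadmin"
    Password      = "P@ssw0rd!"         # password captured on the form
    Location      = "East US"           # location / affinity group from the form
}
New-AzureQuickVM @vmParams
```

In the runbook itself, each of these values would come out of the data bus from the “Monitor Object” activity watching the extended service request class, and the provisioning call is made by the Windows Azure Integration Pack activity rather than a hand-written script.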


In a future post I'll drill deeper into how I created the runbook and the custom request form, and how I extended the service request item in Service Manager. Until then, happy reading…!

Kinect For Windows v1.7 SDK Download Available

Greetings All,


Just a quick note…

The Kinect for Windows SDK v1.7 is now available for download as of March 18th, 2013…

You can get the download here. You can also read about it from fellow MVP and colleague Tim Huckaby here.

I’ll post more details about what you can do with it in the following days…


Let’s Start ‘Kinecting’ Things Together, Shall We?