My Kinect told me I have Dark Olive Green Skin…


Did you know the Kinect for Windows v2 has the ability to determine your skin pigmentation and your hair color? Yes, I'm telling you the truth. Among the many features of the Kinect device is the ability to read the skin complexion and hair color of a person being tracked by the device.

If you ever need the ability to read a person's skin complexion or determine the color of the hair on their head, this post will show you how to do just that.


The steps are rather quick and simple. Determining the skin color requires you to access Kinect’s HD Face features.

Kinect can detect facial features in 3-D. This is known as "HD Face". It can detect depth, height, and width. The Kinect can also use its high-definition camera to detect the red, green, and blue intensities that reflect back, and infer the actual skin tone of a tracked face. Along with the skin tone, the Kinect can also detect the hair color on top of a person's head…

So What’s Your Skin Tone? Click Here to download the source code and try it out.

If you want to include this feature inside your application, the steps you must take are:

1. Create a new WPF or Windows 8.1 WPF application

2. Inside the new application, add references to the Microsoft.Kinect and Microsoft.Kinect.Face assemblies.

[Screenshot: the project's References showing Microsoft.Kinect and Microsoft.Kinect.Face]

3. Let's also make sure we set this up for the proper processor architecture. HD Face supports both 32-bit and 64-bit; I want to use 64-bit. Change your build settings to use an x64 configuration in the project properties in Visual Studio:

[Screenshot: the build configuration set to x64 in the project properties]

The above step is very important. You must choose either the x86 (32-bit) or x64 (64-bit) architecture and build accordingly. "Any CPU" won't work as-is here. The reason is that the Kinect assemblies are named exactly the same, but they are compiled separately for each architecture. You can easily get a BadImageFormatException if you use the x86 assemblies with a 64-bit build, and vice versa.
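If you want a quick guard against an accidental mismatch, you can assert the process architecture at startup. This is my own addition, not part of the original sample; Environment.Is64BitProcess is a standard .NET property:

// Hypothetical guard at the top of MainWindow's constructor: fail fast if the
// process architecture doesn't match the x64 Kinect assemblies, instead of
// hitting a BadImageFormatException later.
if (!Environment.Is64BitProcess)
{
    MessageBox.Show("This sample expects an x64 build to match the x64 Kinect assemblies.");
    Application.Current.Shutdown();
}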

4. Next, copy the correct version of the NuiDatabase folder from the Kinect Redist folder into your project's \bin\x64\Debug output folder. This step is also important. If you mismatch versions, by copying the x86 NuiDatabase into a 64-bit compiled application, you'll see weird errors at runtime, such as the Kinect.Face assembly not being found, and BadImageFormatException errors. So make sure you choose the correct architecture.

[Screenshot: the NuiDatabase folder copied into the build output folder]
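One way to catch a missing or misplaced NuiDatabase early (again my own addition, not from the original post) is a quick existence check next to the executable, since HD Face loads its model data from that folder:

// Optional guard: warn early if the NuiDatabase copy step above was missed.
var nuiDatabasePath = System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "NuiDatabase");
if (!System.IO.Directory.Exists(nuiDatabasePath))
{
    MessageBox.Show("NuiDatabase folder not found next to the executable; " +
                    "copy it from the Kinect Redist folder (step 4).");
}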

Note: Optionally, you can use the Kinect NuGet packages, which will basically do the right thing for you. However, you can't mix and match. You can't manually add references and then go back and add NuGet packages; things will quickly get out of sync:

[Screenshot: the Kinect packages in the NuGet Package Manager]

5. Inside your code, add the namespaces for Kinect and Kinect HD Face:

using Microsoft.Kinect;
using Microsoft.Kinect.Face;

6. Create some member variables to hold references to the Kinect objects:

        private KinectSensor m_sensor;
        private BodyFrameReader m_bodyReader;
        private HighDefinitionFaceFrameReader m_hdFaceReader;
        private HighDefinitionFaceFrameSource m_hdFaceSource;
        private FaceModel m_faceModel;
        private FaceAlignment m_faceAlignment;
        private FaceModelBuilder m_faceBuilder;
        private ulong m_trackedBodyId;
        private bool m_faceBuilderStarted;
        private bool m_faceBuildComplete;

Here's what each of these is for:

- m_sensor holds a reference to the Kinect device itself. We use it to get access to body frames, HD Face frames, the FaceModel, the FaceModelBuilder, and the tracked person.
- m_bodyReader is a frame reader for detecting a tracked body. The Kinect delivers 30 frames per second, and each body frame tells us whether a person was found within that frame of data.
- m_hdFaceSource is the HD Face source. It keeps track of body tracking IDs and gives us access to the 30-frames-per-second stream of HD Face frames.
- m_hdFaceReader is used as each HD Face frame is processed. It lets us get the 3-D face information (the FaceModel) and listen for events that help us build a complete 180-degree view of the face.
- m_faceModel holds the 3-D face measurements.
- m_faceBuilder builds the 180-degree HD face model that ends up in m_faceModel. Internally it accumulates a matrix of 3-D face depth values from the IR and color (RGB) data, which is what ultimately yields the skin color and hair color. It also raises events that tell us when the tracked face needs to rotate left, rotate right, or tilt up so the complete matrix can be captured.
- m_trackedBodyId is a tracking ID that synchronizes the tracked body with the HD Face source. Without a synchronized tracked person, HD Face cannot perform its work.
- m_faceBuilderStarted and m_faceBuildComplete are flags that track when the face builder process has started and when it has completed.

Game Plan:

Overall, the application will initialize the Kinect sensor and set the variables to default values. It then sets up the BodyFrameReader to listen for body frames from the Kinect. When a body frame arrives, we determine whether a body is within the frame and whether that body is tracked. If it is, we take the body's trackingId and assign it to the HD face source. Once the tracking ID is set on the HD face source, HD face frame events start arriving. When a valid HD face frame arrives, we start the face builder process, asking it to build the 180-degree face model matrix. At this point the tracked user needs to turn their head slowly left and back to center, right and back to center, then up and down and back to center, until the face builder notifies us that it has finished building the matrix. Once complete, we ask the face builder to produce the 3-D face model, which gives us access to the skin color, hair color, and the 3-D depth data.

7. Initialize the sensor: get an instance of your Kinect sensor, then initialize your bodyReader, hdFaceReader, faceModel, trackingId, and faceAlignment variables:

public MainWindow()
{
    InitializeComponent();
    InitializeKinect();
}

public void InitializeKinect()
{
    // Grab the default sensor and listen for body frames.
    m_sensor = KinectSensor.GetDefault();
    m_bodyReader = m_sensor.BodyFrameSource.OpenReader();
    m_bodyReader.FrameArrived += m_bodyReader_FrameArrived;

    // Open the HD Face source/reader pair, plus a model builder that
    // collects hair color and skin color.
    m_hdFaceSource = new HighDefinitionFaceFrameSource(m_sensor);
    m_hdFaceReader = m_hdFaceSource.OpenReader();
    m_hdFaceReader.FrameArrived += m_hdFaceReader_FrameArrived;
    m_faceModel = new FaceModel();
    m_faceBuilder = m_hdFaceReader.HighDefinitionFaceFrameSource.OpenModelBuilder(
        FaceModelBuilderAttributes.HairColor | FaceModelBuilderAttributes.SkinColor);
    m_faceBuilder.CollectionCompleted += m_faceBuilder_CollectionCompleted;
    m_faceBuilder.CaptureStatusChanged += m_faceBuilder_CaptureStatusChanged;
    m_faceBuilder.CollectionStatusChanged += m_faceBuilder_CollectionStatusChanged;

    m_faceAlignment = new FaceAlignment();
    m_trackedBodyId = 0;
    m_faceBuilderStarted = false;
    m_faceBuildComplete = false;
    m_sensor.Open();
}
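One thing the sample never shows is teardown. Here's a minimal cleanup sketch (my own addition, assuming the fields above); you could wire it to the Window's Closing event, for example via Closing="MainWindow_Closing" on the Window element:

private void MainWindow_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
    // Release the readers, builder, and model, then close the sensor.
    if (m_faceBuilder != null) { m_faceBuilder.Dispose(); m_faceBuilder = null; }
    if (m_faceModel != null) { m_faceModel.Dispose(); m_faceModel = null; }
    if (m_hdFaceReader != null) { m_hdFaceReader.Dispose(); m_hdFaceReader = null; }
    if (m_bodyReader != null) { m_bodyReader.Dispose(); m_bodyReader = null; }
    if (m_sensor != null) { m_sensor.Close(); m_sensor = null; }
}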

8. Inside the BodyReader_FrameArrived event handler, add code to determine when the Kinect tracks a body. Once Kinect finds a tracked body, set the trackingId on the hdFaceReader's source.

void m_bodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var bodyFrame = e.FrameReference.AcquireFrame())
    {
        if (null != bodyFrame)
        {
            Body[] bodies = new Body[bodyFrame.BodyCount];
            bodyFrame.GetAndRefreshBodyData(bodies);
            foreach (var body in bodies)
            {
                if (body.IsTracked)
                {
                    // Hand the tracked body's id to the HD Face source;
                    // this is what starts the HD face frames flowing.
                    m_trackedBodyId = body.TrackingId;
                    m_hdFaceReader.HighDefinitionFaceFrameSource.TrackingId = m_trackedBodyId;
                }
            }
        }
    }
}
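A small optional refinement (my own, not from the original post): if you only care about one person, stop scanning once the first tracked body is found, so a second tracked body can't steal the face source:

foreach (var body in bodies)
{
    if (body.IsTracked)
    {
        m_trackedBodyId = body.TrackingId;
        m_hdFaceSource.TrackingId = m_trackedBodyId;  // same source object the reader was opened from
        break;  // bind the HD Face source to the first tracked body only
    }
}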

9. Once the trackingId is set on the hdFaceReader's source, the HD Face FrameArrived event handler will start firing. Check the flag and start the face builder process, then set the flag so we only start it once:

void m_hdFaceReader_FrameArrived(object sender, HighDefinitionFaceFrameArrivedEventArgs e)
{
    if (!m_faceBuilderStarted)
    {
        m_faceBuilder.BeginFaceDataCollection();
        m_faceBuilderStarted = true;  // only kick the collection off once
    }
}
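If you want to be more defensive (again my own variation, not from the original sample), acquire the frame and only begin collecting once a face is actually being tracked. HighDefinitionFaceFrame exposes IsFaceTracked and can refresh the FaceAlignment we created earlier:

void m_hdFaceReader_FrameArrived(object sender, HighDefinitionFaceFrameArrivedEventArgs e)
{
    using (var faceFrame = e.FrameReference.AcquireFrame())
    {
        // Wait for a frame that actually contains a tracked face.
        if (faceFrame == null || !faceFrame.IsFaceTracked)
            return;

        // Keep the alignment current (handy if you later render the face mesh).
        faceFrame.GetAndRefreshFaceAlignmentResult(m_faceAlignment);

        if (!m_faceBuilderStarted)
        {
            m_faceBuilder.BeginFaceDataCollection();
            m_faceBuilderStarted = true;
        }
    }
}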

 

10. In the faceBuilder_CollectionStatusChanged event handler, listen for the Complete status. This lets us set the flag indicating that all the face views have been captured and we can ask the face builder for the model. Note that the handler reads the builder's current CollectionStatus rather than the previous status from the event args:

void m_faceBuilder_CollectionStatusChanged(object sender, FaceModelBuilderCollectionStatusChangedEventArgs e)
{
    // Check the builder's current status, not e.PreviousCollectionStatus:
    // we want to know when the collection has just become Complete.
    var collectionStatus = m_faceBuilder.CollectionStatus;
    switch (collectionStatus)
    {
        case FaceModelBuilderCollectionStatus.Complete:
            lblCollectionStatus.Text = "CollectionStatus: Complete";
            m_faceBuildComplete = true;
            break;
    }
}
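Step 7 also subscribed to CaptureStatusChanged, but the handler isn't shown above (and the code won't compile without it). Here's a minimal sketch of one; the exact prompts are my own, and I read the builder's current CaptureStatus to drive the FrameStatus label:

void m_faceBuilder_CaptureStatusChanged(object sender, FaceModelBuilderCaptureStatusChangedEventArgs e)
{
    // Tell the user how to move so the builder can capture every view.
    switch (m_faceBuilder.CaptureStatus)
    {
        case FaceModelBuilderCaptureStatus.FaceTooFar:
            lblStatus.Text = "FrameStatus: Move closer to the sensor";
            break;
        case FaceModelBuilderCaptureStatus.FaceTooNear:
            lblStatus.Text = "FrameStatus: Move back from the sensor";
            break;
        case FaceModelBuilderCaptureStatus.MovingTooFast:
            lblStatus.Text = "FrameStatus: Move more slowly";
            break;
        case FaceModelBuilderCaptureStatus.OtherViewsNeeded:
            lblStatus.Text = "FrameStatus: Slowly turn and tilt your head";
            break;
        default:
            lblStatus.Text = "FrameStatus: " + m_faceBuilder.CaptureStatus;
            break;
    }
}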

11. In the faceBuilder_CollectionCompleted event handler, check the collection status to make sure it's complete, check that our flag is set, and then ask the face builder to produce the FaceModel via the event argument. The face model exposes the skin color and hair color as unsigned integers (UINTs). To turn each of those into an actual drawing color, we need to convert the UINT to a Color structure. That can be done with some old-school bit shifting, see below.

private void m_faceBuilder_CollectionCompleted(object sender, FaceModelBuilderCollectionCompletedEventArgs e)
{
    var status = m_faceBuilder.CollectionStatus;
    if (status == FaceModelBuilderCollectionStatus.Complete && m_faceBuildComplete)
    {
        try
        {
            m_faceModel = e.ModelData.ProduceFaceModel();
        }
        catch (Exception ex)
        {
            // Producing the model can fail (e.g., the face was lost mid-build);
            // reset the flags and restart the sensor to try again.
            lblCollectionStatus.Text = "Error: " + ex.ToString();
            lblStatus.Text = "Restarting...";
            m_faceBuildComplete = false;
            m_faceBuilderStarted = false;
            m_sensor.Close();
            System.Threading.Thread.Sleep(1000);
            m_sensor.Open();
            return;
        }

        var skinColor = UIntToColor(m_faceModel.SkinColor);
        var hairColor = UIntToColor(m_faceModel.HairColor);

        var skinBrush = new SolidColorBrush(skinColor);
        var hairBrush = new SolidColorBrush(hairColor);

        skinColorCanvas.Background = skinBrush;
        lblSkinColor.Text += " " + skinBrush.ToString();

        hairColorCanvas.Background = hairBrush;
        lblHairColor.Text += " " + hairBrush.ToString();

        m_faceBuilderStarted = false;
        m_sensor.Close();
    }
}

private Color UIntToColor(uint color)
{
    // The packed value is laid out as 0xAABBGGRR (alpha, blue, green, red),
    // not the 0xAARRGGBB layout WPF's Color uses, so unpack each byte and
    // reassemble. Alpha is forced to 250 (nearly opaque) for display.
    byte a = (byte)(color >> 24);   // unpacked but unused; alpha is hardcoded below
    byte b = (byte)(color >> 16);
    byte g = (byte)(color >> 8);
    byte r = (byte)(color >> 0);
    return Color.FromArgb(250, r, g, b);
}
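As a quick sanity check on the byte order, take DarkOliveGreen, which is R=85 (0x55), G=107 (0x6B), B=47 (0x2F). Packed in the 0xAABBGGRR layout above, that's 0xFF2F6B55 (a made-up input value, just for illustration):

// Hypothetical example value: DarkOliveGreen packed as 0xAABBGGRR.
var c = UIntToColor(0xFF2F6B55);
// c.R == 0x55 (85), c.G == 0x6B (107), c.B == 0x2F (47), c.A == 250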

 

12. Lastly, add the WPF TextBlock and Canvas elements to your app so you can actually see something:

<Window x:Class="KinectFindingSkinTone.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Window.Resources>
        <SolidColorBrush x:Key="MediumGreyBrush" Color="#ff6e6e6e" />
        <SolidColorBrush x:Key="KinectPurpleBrush" Color="#ff52318f" />
        <SolidColorBrush x:Key="KinectBlueBrush" Color="#ff00BCF2" />
    </Window.Resources>

    <Grid Background="White" Margin="10 0 10 0">

        <StackPanel Margin="20">
            <TextBlock x:Name="lblCollectionStatus"  Text="CollectionStatus: " Foreground="{StaticResource KinectBlueBrush}" FontSize="20" />
            <TextBlock x:Name="lblStatus"  Text="FrameStatus: " Foreground="{StaticResource KinectBlueBrush}" FontSize="20" />

            <TextBlock x:Name="lblSkinColor"  Text="Skin Color: " Foreground="{StaticResource KinectBlueBrush}" FontSize="20" />
                       <Border BorderBrush="Black"><Canvas Width="300" Height="100"  x:Name="skinColorCanvas" Background="DarkGray"></Canvas></Border>
            
            <TextBlock x:Name="lblHairColor"  Text="Hair Color: " Foreground="{StaticResource KinectBlueBrush}" FontSize="20" />
                <Border BorderBrush="Black">
            <Canvas Width="300" Height="100" x:Name="hairColorCanvas" Background="DarkGray"></Canvas>
                </Border>
        </StackPanel>
    </Grid>
</Window>

Once your application runs, it should look similar to this (minus the FrameStatus):

[Screenshot: the running application showing the detected skin and hair color swatches]

Try it out on your own.
