HoloLens Error DEP6200: Bootstrapping ‘Device’ failed. Device cannot be found.

I was lucky enough to get my HoloLens device flashed to RS4. I was delighted to see some of the cool new benefits, which I’m not going to mention until the release becomes public.

I will mention, however, that once this was done I was informed that taking on this super early preview release, and the manual way it was done, would reset and clear my device. All my applications would be gone, and the device would essentially be brand new with the updated OS build.

I was OK with this since it was my dev machine, and I was itching to get my hands on the latest build so I could start experimenting with all the latest changes. My initial impression is that it is fast and very fluid. My second impression: if you have at least two devices, make sure you DO NOT flash the second one. Keep one on the official release, especially if you need to continue development of production-ready, commercial applications. Flashing to RS4 is for experimental purposes only right now, until it is officially released. Heed my warning…

Now on to the topic. After the flash, my device was brand new, so I had to rename the device, recalibrate and configure it, connect to the network, and sign in. After all this was done, I decided to go ahead and deploy a quick C# DirectX application onto the device using Visual Studio .Net.

As I was trying to deploy, I ran into the infamous VS.Net error DEP6200: Bootstrapping ‘Device’ failed. Device cannot be found… I tried everything. I followed various advice from Stack Overflow and the MSDN forums, but nothing would work. After reading a little more about what this error means, I started thinking. I luckily had another device (which was also RS4 flashed), and I noticed how that device behaved compared to the one yielding the issue. The second device had never been paired with my laptop, and that is when it hit me: this error appears to occur because the device connection/device name is cached and needs a refresh.

So… I went ahead and removed and deleted all instances of the old HoloLens device from Device Manager and Control Panel. I did this with the device plugged in. First, delete the device from Control Panel -> Hardware and Sound -> Devices and Printers:


Then unplug the device and restart the computer. After restarting, plug the device back in; you should see the device behave as if it had never been paired or connected. Once you see this, you’re good. If you don’t, try it again, removing it from Device Manager as well. Basically, you need to reconnect the device as if it had never been connected before.

Everything started working again, and I could successfully deploy from Visual Studio .Net to my RS4-flashed HoloLens device.


R Tools for Visual Studio .Net in Public Preview

Microsoft has just released its R language plugin for Visual Studio .Net 2015, and it’s no joke: https://www.visualstudio.com/en-us/features/rtvs-vs.aspx

For those .Net developers and researchers out there, there’s another outstanding R language integrated development environment (IDE): Visual Studio .Net 2015. R Tools for Visual Studio .Net (RTVS) is a plugin, project template, and extension for Visual Studio .Net 2015 Update 1 which allows developers to write R scripts, and to test, debug, prototype, and research data.

It has all the same features as RStudio and RGui, with the exception of the package manager, but that’s coming soon. You can check out its features in this video here:

If you try it out and have some other ideas, you can be one of the first to suggest new features at the RTVS GitHub repo.

In addition to RTVS, Microsoft is continuing to make good on its promise to support more open source initiatives, the R language being just another one of them. Not only has Microsoft created this plugin, it has also added the ability to run R scripts in an asynchronous, multithreaded, multicore concurrent engine: the Microsoft R Server and the Microsoft R Open engine.


R Server and R Open are server and client updates, respectively, to CRAN R (the R Project) which give it the ability to run over multiple cores, taking advantage of modern multi-core CPU processing power. Both of these products are open source (GNU license), which means their source code is freely available to view, along with versions for multiple platforms: Mac, Linux, and Windows.

So head over to this link to download the bits and start writing R Scripts inside Visual Studio .Net 2015.

Cortana and IoT: Demo of the Day on Channel 9

Hey all,

I’m featured on Jerry Nixon’s Demo of the Day post on Channel 9.


Jerry Nixon is a Developer Evangelist for Microsoft for the Colorado, Utah, Wyoming, and I think Nevada areas…

He and I connected on a Skype call one evening, and he caught me working on the Home Automation demo. He recorded it as he and I were talking. The demo is about how to automate your home using Windows 10, Cortana, LIFX bulbs, WeMo switches, and the Sonos Wi-Fi sound system. You can check out the phone conversation/demo here: https://channel9.msdn.com/Shows/demooftheday/cortana-iot

The Hidden Future Reality: Room 2 Room

With all the buzzwords around Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality, we think of goggles, glasses, HoloLenses, and the like.




Well, imagine complete rooms where we can project holograms right in front of you, and you can interact and talk without the need for glasses, goggles, visors, screens, HoloLenses, and the like… Imagine the scene below:


(Thanks for the pic Josh…)

Enter Room 2 Room.

Image from the published paper



It seems our friends over at Microsoft Research (shoutout to Andrew – thanks for helping us with our Holodeck simulator during the MVP conference…) are at it again. “Room2Room uses projected augmented reality to enable co-present interaction between remote participants: (a, d) remote participants are represented as life-size virtual copies projected into the physical space”

Those are fancy-schmancy words that basically mean the Microsoft researchers have taken Microsoft Kinects and projectors and set them up in two different rooms running some custom software. This software basically allows you to talk to a virtual person in the other room through the Kinects and projectors.

Welcome to the new world!!!

Part 2: Building the Home Automation App – Tell Windows 10 Cortana to Control your Lights

This weekend I decided to take on a personal endeavour and step through my adventures of automating my home with Windows 10 and IoT (the Internet of my Things). You can read a general overview in part 1 here.

This post is about my steps and experiences using Windows 10, an IoT device called the Philips Hue lighting system, and some custom code to control my office lights. Let’s first see it in action:

HomeAutomationTurnOnOffLights from Dwight Goins on Vimeo.

In the above video clip, I speak to my Windows 10 personal assistant: Cortana. I tell Cortana to turn my office lights on and off. I can even tell it to change the light colors:

HomeAutomationChangeLightColors from Dwight Goins on Vimeo.

This is great!!! So how did you do it?

I created a Windows 10 UWP application and used the Philips Hue lighting REST API to control the lights and light colors.

OK, that’s the simple answer, so let me expound on that some more. Windows 10 allows you to create Universal Windows Platform applications which target the Windows 10 operating system. The Windows 10 operating system has some core components which make it fairly easy to work with IoT devices. One example of this is controlling the Hue lights. Controlling the Hue lights is accomplished by way of a wireless signal sent over the local network, which Windows 10 can send to the lighting system.
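To make that concrete, here is a minimal Python sketch of the kind of call the app makes. The Hue bridge exposes a local REST endpoint of the form `http://<bridge_ip>/api/<username>/lights/<id>/state`; the bridge IP and username token below are placeholders you would get from your own bridge, and this is an illustration in Python rather than the actual UWP C# code:

```python
import json
from urllib import request

def build_light_state(on, brightness=None):
    """Build the JSON body the Hue bridge expects for a light state change."""
    state = {"on": on}
    if brightness is not None:
        # Hue brightness is an integer in the range 1-254; clamp it.
        state["bri"] = max(1, min(254, brightness))
    return state

def set_light(bridge_ip, username, light_id, state):
    """PUT the state to the bridge's local REST endpoint."""
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    req = request.Request(url, data=json.dumps(state).encode(), method="PUT")
    with request.urlopen(req) as resp:  # bridge replies with a success/error array
        return json.loads(resp.read())

# Example (placeholder bridge IP and app token):
# set_light("192.168.1.10", "your-app-token", 1, build_light_state(True, 200))
```

The same endpoint with `{"on": false}` turns the light off, which is essentially all the “turn on/off” voice commands boil down to.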

Windows 10 also has a built-in personal assistant like Apple’s Siri on the iPhone. The personal assistant is called Cortana. Cortana comes with all Windows 10 devices capable of processing speech and accessing the internet. Cortana can do everything Siri can do and a lot more. As you’ve already seen, one can tell Cortana to perform new and custom actions based on speech, custom code, and body gestures. Cortana even supports multiple languages; for those ancient languages that aren’t a part of Cortana, you’ll have to get creative and “englibic” it. Here’s an example:

HomeAutomation_AncientAfricanLanguage from Dwight Goins on Vimeo.

Thus, inside the Home Automation app, I’m going to tell Cortana to turn the lights on and off and change the light colors. Cortana will process my speech commands and inform my Home Automation app which actions it should take. The Home Automation app will then send the commands over the network to the Philips Hue lighting system.
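Custom Cortana commands like these are declared in a Voice Command Definition (VCD) file that the app registers at startup. For flavor, here is roughly what such a file looks like; this is an illustrative sketch, not the actual file from my app, and the command set name and phrases are made up:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative VCD sketch: names and phrases are examples only -->
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="HomeAutomationCommandSet">
    <AppName>Home Automation</AppName>
    <Example>turn on office lights</Example>
    <Command Name="toggleLights">
      <Example>turn on office lights</Example>
      <ListenFor>turn {state} office lights</ListenFor>
      <Feedback>Turning your office lights {state}</Feedback>
      <Navigate/>
    </Command>
    <PhraseList Label="state">
      <Item>on</Item>
      <Item>off</Item>
    </PhraseList>
  </CommandSet>
</VoiceCommands>
```

When Cortana matches one of the `ListenFor` phrases, it launches or notifies the app with the command name and the recognized phrase-list value, and the app takes it from there.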


  1. I first started by downloading and installing Visual Studio .Net 2015 on my Windows 10 computer.
  2. Next, I downloaded and installed the Windows 10 SDK.
  3. After my environment was set up, I opened Visual Studio .Net 2015 and created a new Universal project for Windows 10. (To learn how to do this, view Getting Started.)
  4. Next, I started researching exactly how to teach Cortana new speech commands, and how to have Cortana tell my Home Automation app what to do. What I found was a sample project on GitHub and a nice video explaining how to include Cortana in your UWP apps.
  5. Next, I researched how to turn the lights in the Hue system on and off from here.
  6. I then created my custom speech commands and invoked the Hue REST APIs to turn the lights on and off.
  7. Lastly, I looked at the hue, brightness, and saturation fields from the Hue system to get a range of colors and added those colors into my Home Automation app to support changing colors.
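The color step (7) boils down to mapping friendly color names onto the Hue bridge’s hue/saturation/brightness fields. A minimal Python sketch of that mapping, where the specific numbers are approximations for illustration (the bridge expresses hue as 0–65535 around the color wheel, and sat/bri as 0–254):

```python
# Rough mapping of friendly color names to Hue bridge state values.
# Hue is 0-65535 around the color wheel; sat and bri are 0-254.
# The specific values here are approximations chosen for illustration.
COLORS = {
    "red":    {"hue": 0,     "sat": 254, "bri": 200},
    "green":  {"hue": 25500, "sat": 254, "bri": 200},
    "blue":   {"hue": 46920, "sat": 254, "bri": 200},
    "purple": {"hue": 50000, "sat": 254, "bri": 200},
}

def color_state(name):
    """Return a JSON-ready state dict for a named color, with the light on."""
    state = dict(COLORS[name.lower()])
    state["on"] = True
    return state
```

A dict like this gets PUT to the light’s `/state` endpoint exactly the same way as the on/off payload.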

Overall, this took about 4-5 hours to get working, and I was impressed by how easy it all was.

Now on to my next adventure: controlling the Sonos wireless stereo system. I suspect this is going to be harder, because I know for a fact that the Sonos system does not provide a documented API to control it, so that means I’ll have to hack it.

Stay Tuned for part 3: Automating a Sonos Stereo system with Windows 10 and IoT

Windows 10 and IoT: How to automate your home, Part 1

IoT stands for the Internet of Things. One of the concepts of IoT is that devices can connect to the internet, communicate with other devices, and transmit and receive events and data, all through the internet. Another concept of IoT is that these devices produce telemetry data and information, and this data can be analyzed and learned from, providing insight into the events and daily operating usage of the device. If these devices are home-based and personal, the idea is that learning from this data should allow me to better understand how these devices affect my life, and allow me to make better choices about how to use them to better my life.

Windows 10 is Microsoft’s latest operating system, with core functionality to use a single platform for multiple devices that connect to the internet. For developers, it utilizes a new platform called the Universal Windows Platform (UWP). This new platform hides the gory details of how to connect devices and get them on the internet. Instead, developers focus on the core functionality of their application. This means it should be easy to get devices connected, sending, and receiving, which leads to analyzing and learning.

So why mention IoT and Windows 10 together?

Microsoft touts that we can use Windows 10 to quickly and easily build an IoT solution. Let’s see if we can test this claim. The goal: use Windows 10 to build a home automation solution. Basically, I want to turn lights on, change music, and look at a security camera feed at night. Once I’ve accomplished this, I want to see what my favorite music is over time, see what security events I should be aware of happening at night around my house, such as movement and blob detection, and lastly, figure out whether my electric bills are rising due to elongated usage of my office lights.

This post is part 1 of a multi-part series about Windows 10 and IoT. With all these devices, and with all this data, I should be able to make a quick and easy home automation solution and learn from my daily routine to make better decisions about my office, music, security, and lighting conditions.

OK, this is a lot of reading; I want to get started now. How do we start?

To get started, I figured I would take some time and talk a little about where the IoT industry is going in light of the many announcements made by big-name companies like Microsoft, Amazon, Google, and Apple. The obvious move is that these tech giants are trying to push more and more connected devices toward consumers. If you notice, these devices are integrated into our homes, schools, and offices, and have even made it into our daily living routines.

What devices are you referring to?

For example, we have the new Microsoft Band 2, which is getting ready to come out and which monitors your health and lifestyle. From a home décor standpoint, we have the Philips Hue lighting system, which allows you to control the lights in your home. From an entertainment perspective, we have the Sonos stereo system, which allows you to control your entertainment system and music. From a security camera standpoint, we have infrared cameras and depth sensors like the Intel RealSense camera and the Kinect for Windows v2 camera, which can easily provide security video feeds around your house. Lastly, we have the Windows 10 operating system, software that can run with and on various devices to connect and bring them all together.

As we venture through this home automation solution, I’ll post video snippets to show my progress and timings.

Do you have a diagram of how all these things will work together?

OK, with all that out of the way, let’s draw up the architecture around how all this will work together.

Windows 10 and IOT diagram

In the above diagram, the user (me) can say commands such as: “Hey Cortana, Home Automation, turn on office lights”, “Hey Cortana, Home Automation, play music from India Arie”, or “Hey Cortana, Home Automation, view the security video from last night”. A Home Automation Windows 10 application will process the commands and send and receive data from the connected devices. As the automation runs, telemetry data and information are sent to Cortana Analytics. After a few days of automation, I can query the data from Cortana Analytics and analyze and learn from my daily usage habits. The theory is I should be able to tell what my favorite music was the previous week. I can also figure out what weird security events, such as movement detection and blob detection, have occurred at night, and get a running log of how long I keep my lights on in my office for electric billing purposes. Groovy, huh???
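One way to picture the app’s role in that diagram is as a dispatcher: Cortana hands it a recognized phrase, and the app routes it to the right device handler. A toy Python sketch of that routing, where the handler functions are stand-ins (in the real app they would call the Hue, Sonos, or camera APIs and log telemetry):

```python
# Toy dispatcher: route a recognized voice command to a device handler.
# The handlers just return a description here; they stand in for real
# calls to the lighting, music, and camera systems.
def turn_on_lights():
    return "lights: on"

def play_music(artist):
    return f"music: playing {artist}"

def show_security_video(when):
    return f"security: showing video from {when}"

HANDLERS = {
    "turn on office lights": lambda: turn_on_lights(),
    "play music from india arie": lambda: play_music("India Arie"),
    "view the security video from last night":
        lambda: show_security_video("last night"),
}

def dispatch(command):
    """Look up the recognized phrase and run the matching handler."""
    handler = HANDLERS.get(command.lower())
    return handler() if handler else "unrecognized command"
```

The telemetry side would simply be each handler appending a timestamped record before returning, which is what makes the later “analyze my habits” queries possible.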

Stay Tuned for part 2

Stay tuned for part 2… Building the Home Automation App – Getting Cortana to understand my commands and control the Philips Hue lighting system.

Windows Hello Rocks!!! Now Why can’t the Kinect for Windows Do This???

Let me start by repeating “Windows Hello Rocks!!!”.

For those of you who don’t quite know what this is: one of the new features of Windows 10 is to get rid of passwords and use biometric sensors to recognize who you are.

Biometric sensors being fingerprint readers, iris scanners, and of course depth cameras such as the Intel RealSense F200. The Intel depth camera is eerily similar to the Kinect for Windows cameras (v1 and v2), so hopefully those with Kinects can use this feature too in the near future. I know some of you may be asking what the big deal is… Fingerprint readers have been around for 15 years or more. I know I used to have one when I worked for the Air Force Reserves as a Crystal Reports developer.

Well, the big news is now you can use embedded cameras like the Intel RealSense F200 to simply have your face recognized securely so you don’t need your finger anymore!!!

But I digress; currently the Kinect is not supported, so I ask: why?

My only guess is that doing this requires changes in the driver architecture. The current driver is designed to run in user mode, and user-mode drivers load AFTER a user is logged in. Using the Kinect for sign-in would therefore require creating a driver which runs in kernel mode – which loads before the user is logged in – thus allowing the device to operate outside the realm of user mode and be used for facial recognition.

Well, I just got my Intel RealSense Development Kit in the mail. It contains the F200 camera along with the SDK and drivers for Windows 8.1 and Windows 10.

I installed the drivers and SDK on both my Windows 8.1 machine and my Surface Pro 3, which has Windows 10 build 10240, and I attached the device. Windows recognized it perfectly. I followed the steps here.

Once complete, Windows Hello was working, and – “Look Ma, no hands” – no more passwords. Windows 10 recognized my face and only my face. I can sign in just by getting in front of the camera.

Great work Microsoft!!!