Does Microsoft Kinect herald the start of a sensor revolution?

Last week, Microsoft officially announced a software development kit for the Kinect sensor. Whilst there’s no commercial licensing model yet, it sounds like the start of a journey towards official support for gesture-based interaction with Windows PCs.

There’s little doubt that Kinect, Microsoft’s natural user interface for the Xbox games console, has been phenomenally successful. It has even been recognised by Guinness World Records as the fastest-selling consumer device on record. I bought one for my family (and I’m not really a gamer) – but before we go predicting potential business uses for this technology, it’s worth stopping and taking stock. Isn’t this really just another technology fad?

Kinect is not the first new user interface for a computer – I’ve written so much about touch-screen interaction recently that even I’m bored of hearing about tablets! We can also interact with our computers using speech if we choose to – and the keyboard and mouse are still hanging on in there too (in a variety of forms). All of these technologies sound great, but each has to be applied in the right context: my iPad’s touch screen is great for flicking through photos, but an external keyboard is better for composing text; Kinect is a fantastic way to interact with games but, frankly, it’s pretty poor as a navigational tool.

What we’re really seeing here is a proliferation of sensors: keyboard, mouse, trackpad, microphone, camera(s), GPS, compass, heart-rate monitor – the list goes on. Kinect is really just an advanced, and very consumable, sensor.

Interestingly, sensors typically start out as separate peripherals and, over time, they become embedded into devices. The mouse and keyboard morphed into a trackpad and a (smaller) keyboard. Microphones and speakers were once external but are now built into our personal computers. Our smartphones contain a wealth of sensors, including GPS, cameras and more. Will we see Kinect built into PCs? Quite possibly – after all, it’s really just a depth sensor, a camera and a microphone array!

What’s really exciting is not Kinect per se but what it represents: a sensor revolution. Much has been written about the Internet of Things, but imagine a dynamic network of sensors in which the nodes automatically re-route messages based on their awareness of the other nodes. Such networks could be quickly and easily deployed (perhaps even dropped from a plane) and would be highly resilient to accidental or deliberate damage because of their “self-healing” properties. In agriculture, for example, sensors could automatically monitor soil condition and moisture levels and trigger the application of nutrients or water. We’re used to hearing about RFID tags in retail and logistics, but those really are just the tip of the iceberg.
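To make the “self-healing” idea a little more concrete, here’s a minimal sketch in Python (not part of the original post – the mesh topology, node names and route function are purely illustrative): each node knows only its immediate neighbours, and when a node fails the remaining nodes simply find a route around it.

```python
# Illustrative sketch of a "self-healing" sensor mesh (hypothetical topology):
# each node knows only its immediate neighbours, and routing simply works
# around any nodes that have failed.
from collections import deque

# Hypothetical mesh: node -> the neighbours it can reach directly
mesh = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def route(mesh, source, target, failed=frozenset()):
    """Breadth-first search for a path from source to target, skipping failed nodes."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbour in mesh[node] - set(failed):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route left - the damage was too severe

print(route(mesh, "A", "E"))                # e.g. ['A', 'B', 'D', 'E']
print(route(mesh, "A", "E", failed={"B"}))  # node B lost: ['A', 'C', 'D', 'E']
```

A real sensor network would do this with a distributed routing protocol rather than a central picture of the whole mesh, but the principle is the same: no single node is critical, so the network routes around damage.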

Exciting times…

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was jointly authored with Ian Mitchell.]
