You use it on your phone, and with your favorite video game console, but you might not even realize what this technology is. Welcome to the future: it’s gesture recognition.
Just when you think we can’t make any more breakthroughs in technology, we do. And while we’ve been using gesture recognition for years (the Xbox Kinect was released to the public in 2010), there’s been a push in the past year or so for more applications to use it.
Gesture recognition — a perceptual computing interface where a computer captures and translates gestures into commands — extends beyond just video games and mobile device usage. Here are a few exciting examples:
Custom Gestures: If you’ve watched Minority Report, you’ll be excited to learn that you too can act like Tom Cruise and manipulate on-screen data with hand gestures, using Seemove software. The software can be customized to assign tasks to whatever gestures you want.
Healthcare: Developers at Siemens Healthcare are developing gesture recognition tools that would allow surgeons to manipulate digital images without compromising sterile procedures.
Automotive: Google and Ford have paired up to work on gesture-based controls that let drivers adjust the air conditioning, windows, or windshield wipers.
How it Works
Just as touch screens made us more efficient on our computers and mobile devices, gesture recognition tools can do the same.
If you have an Xbox Kinect or a phone like the Samsung Galaxy S5, you’re already familiar with how gesture recognition works from the consumer point of view. If you want to start a new Dance Central game, you swipe in front of your body. If you want to scroll down a website on your phone, you run your hand alongside the phone.
In a given device, a motion sensor sees a human gesture, such as a hand wave, and interprets it based on the data that’s been input. So if programmers set up the system to recognize a hand wave as a signal that the user wants to turn on his computer, that’s what it’ll do.
In setting up the software, programmers create a library of gestures tied to specific commands. That, by the way, is a great example of the kind of data we can store on our M2 servers. While some libraries are relatively small, others will be much larger, and that data is important to house somewhere it can be accessed with minimal delay.
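At its simplest, that gesture library is a lookup table from recognized gestures to commands. Here is a minimal sketch of the idea in Python; the gesture names and commands are made up for illustration, and a real system would be matching noisy sensor data rather than clean strings.

```python
# Hypothetical gesture library: each recognized gesture maps to a command.
GESTURE_LIBRARY = {
    "hand_wave": "power_on",
    "swipe_left": "previous_page",
    "swipe_right": "next_page",
}

def interpret(gesture: str) -> str:
    """Translate a detected gesture into a command; unknown gestures are ignored."""
    return GESTURE_LIBRARY.get(gesture, "ignore")

print(interpret("hand_wave"))   # power_on
print(interpret("shrug"))       # ignore
```

The important property is exactly the one described above: the system only does what its library has been programmed to recognize, so a hand wave turns on the computer only because someone mapped it that way.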
It gets pretty complex, even with your kid’s Xbox Kinect, which handles not only hand gestures but also skeletal and facial tracking, voice recognition, and a user’s height.
The Drawbacks of Gesture Recognition
Like any emerging technology, gesture recognition has its limitations, though given how fast the field is moving, those are likely to be remedied quickly. For one, the systems are sensitive: if there’s “noise,” meaning it’s difficult for a device to discern a person from the furniture around her, the system may not know what to pay attention to in order to receive commands. A user also has to be within a specific range of the sensor: too far away or too close, and it can’t read gestures.
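That operating-range constraint is easy to picture as a simple window check. The sketch below uses made-up threshold values purely for illustration; actual sensors publish their own minimum and maximum tracking distances.

```python
# Illustrative thresholds only -- real sensors specify their own range.
MIN_RANGE_M = 0.8   # too close: the sensor can't frame the user
MAX_RANGE_M = 4.0   # too far: gestures become too small to resolve

def in_tracking_range(distance_m: float) -> bool:
    """Return True only when the user sits inside the sensor's usable window."""
    return MIN_RANGE_M <= distance_m <= MAX_RANGE_M

print(in_tracking_range(2.5))  # True
print(in_tracking_range(0.3))  # False
```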
And there are still several players in this new space, which means there’s varying quality among them, and they don’t necessarily play well together. I expect we’ll see some consolidation as bigger entities (surely Google’s on that list, and maybe Amazon) snatch up the more successful gesture recognition companies and others run out of funds for R&D.
For the time being, I look at gesture recognition as being full of potential, and look forward to seeing new applications of it in industries like retail, transportation, and education.