If you have ever seen movies like “The Matrix” and “Iron Man,” then you must have wondered what it would be like to control the TV, computer and other devices in your home with just a wave of your hand. These sci-fi dreams are quickly becoming a reality as gesture recognition technology matures.
Gesture recognition is a type of perceptual computing user interface that allows a computer to capture and interpret human gestures as commands. Most consumers are familiar with the concept through motion-based games on the Wii, Xbox and PlayStation, such as “Just Dance” and “Kinect Sports.”
In order to understand how gesture recognition works, it is important to understand how the word “gesture” is defined. In its most general sense, a gesture is any non-verbal movement intended to communicate a specific message. In the world of gesture recognition, a gesture is defined as any physical movement, large or small, that can be interpreted by a motion sensor. That may include anything from the pointing of a finger to a roundhouse kick, or from a nod of the head to a pinch or wave of the hand. Gestures can be broad and sweeping or small and contained. In some cases, the definition of “gesture” may also include voice or verbal commands.
Gesture recognition is an alternative user interface for providing real-time data to a computer. Instead of typing on keys or tapping on a touch screen, a motion sensor perceives and interprets movements as the primary source of data input. That interpretation is what happens between the moment a gesture is made and the moment the computer reacts.
For instance, Kinect looks at a range of human characteristics to provide the best command recognition based on natural human inputs. It provides both skeletal and facial tracking in addition to gesture recognition, voice recognition and in some cases the depth and color of the background scene. Kinect reconstructs all of this data into printable three-dimensional (3D) models. The latest Kinect developments include an adaptive user interface that can detect a user’s height.
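To make the idea concrete, here is a minimal sketch of how a stream of motion-sensor samples might be turned into a command. This is a hypothetical illustration, not Kinect's actual algorithm: it assumes the sensor reports a hand's horizontal position over time, and classifies the motion as a left or right swipe based on net displacement.

```python
def detect_swipe(x_positions, threshold=0.3):
    """Classify a sequence of hand x-coordinates (normalized 0..1)
    as a "left" swipe, a "right" swipe, or no gesture (None).

    Hypothetical example: real systems like Kinect work from much
    richer skeletal and depth data, but the principle is the same --
    raw sensor samples in, a discrete command out.
    """
    if len(x_positions) < 2:
        return None
    # Net horizontal movement from the first sample to the last.
    displacement = x_positions[-1] - x_positions[0]
    if displacement > threshold:
        return "right"
    if displacement < -threshold:
        return "left"
    return None
```

A hand tracked moving from x = 0.2 to x = 0.7, for example, would register as a right swipe, which an application could then map to an action such as advancing a slide.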
Microsoft is leading the charge with Kinect, a gesture recognition platform that allows humans to communicate with computers entirely through speaking and gesturing. Kinect gives computers “eyes, ears, and a brain.” There are a few other players in the space, such as SoftKinetic, GestureTek, PointGrab, eyeSight and PrimeSense, an Israeli company recently acquired by Apple. Emerging technologies from companies such as eyeSight go far beyond gaming to allow for a new level of fine motor precision and depth perception.
Gesture recognition has huge potential for creating interactive, engaging live experiences. Here are five gesture recognition examples that illustrate its potential to educate, simplify user experiences and delight consumers.
Gesture recognition has the power to deliver an exciting, seamless in-store experience. This example uses Kinect to create an engaging retail experience by immersing the shopper in relevant content, helping her to try on products and offering a game that allows the shopper to earn a discount incentive.
Last year, a company named Leap Motion introduced the Leap Motion Controller, a gesture-based interaction system for PC and Mac. A USB device roughly the size of a Swiss army knife, the controller allows users to operate traditional computers with gesture control. It is easy to see the live-experience applications of this technology.
Companies such as Microsoft and Siemens are working to redefine the way that everyone from motorists to surgeons accomplishes highly sensitive tasks. These companies have been refining gesture recognition technology for fine motor manipulation of images, enabling a surgeon to virtually grasp and move an object on a monitor.
Google and Ford are also reportedly working on a system that allows drivers to control features such as air conditioning, windows and windshield wipers with gesture controls. The Cadillac CUE system recognizes some gestures such as tap, flick, swipe and spread to scroll lists and zoom in on maps.
Seeper, a London-based startup, has created a technology called Seemove that has gone beyond image and gesture recognition to object recognition. Ultimately, Seeper believes that their system could allow people to manage personal media, such as photos or files, and even initiate online payments using gestures.
There are several examples of gesture recognition being used to bridge the gap between deaf users and hearing people who may not know sign language. This example from Dani Martinez Capilla, showing how Kinect can understand and translate sign language, explores the notion of breaking down communication barriers with gesture recognition.
To learn more about AR interactive displays with gesture recognition for live events and experiences, contact us at any time. Email Beck Besecker or call 727-851-9522.