August 30, 2005
Hi-tech hope for deaf
From: Australian IT, Australia - Aug 30, 2005
Barbara Gengler
TWO US universities are working on a turnkey system to bridge the communication gulf between deaf and hearing people.
Researchers at the Georgia Institute of Technology and the University of Rochester are developing a gesture recognition system that consists of a miniature videocamera and wrist-mounted accelerometers to pick up a person's signs, and software-based machine learning techniques that interpret them.
The system, called TeleSign, was inspired by VoxTec International's Phraselator system, a one-way device that recognises a set of defined phrases and plays a recorded translation of them.
That device can easily be adapted to other languages, requiring only a translation of the phrases and a set of recorded sentences.
In gesture recognition technology, a camera reads the movements of the human body and passes the data to a computer, which uses the gestures as input to control devices or applications.
For example, a person clapping their hands together in front of a camera can produce the sound of cymbals being clashed when the gesture is fed through a computer.
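To make the idea concrete, a recogniser's output can be treated as just another input event. The following sketch is hypothetical and not part of TeleSign: the gesture labels, the play_sound helper and the actions are invented for illustration.

# A minimal sketch of driving an application from recognised gesture labels.
# Labels and actions here are made up; a real system would call a sound
# library or window toolkit rather than printing.

def play_sound(name):
    # Placeholder for an audio call.
    print(f"playing sound: {name}")

# Map recognised gestures to application actions.
GESTURE_ACTIONS = {
    "clap": lambda: play_sound("cymbal_crash"),
    "point_left": lambda: print("moving cursor left"),
    "point_right": lambda: print("moving cursor right"),
}

def handle_gesture(label):
    """Dispatch a gesture label produced by the recogniser to an action."""
    action = GESTURE_ACTIONS.get(label)
    if action is not None:
        action()

handle_gesture("clap")  # -> playing sound: cymbal_crash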
One use of gesture recognition is to help physically impaired people interact with computers, for example by interpreting sign language.
The technology could also change the way users interact with computers by eliminating input devices such as joysticks, mice and keyboards, and allowing the unencumbered body to give signals to the computer through gestures such as finger pointing.
Gesture recognition does not require the user to wear any special equipment or attach any devices to the body.
The gestures are read by a camera rather than by sensors attached to the body, such as those in a data glove.
In addition to hand and body movement, gesture recognition technology can also be used to read facial expressions, speech and eye movements.
The prototype is similar to the VoxTec device but rather than using a microphone to detect the user's speech, it uses a camera mounted on the brim of a hat, and accelerometers worn on wristbands, to detect the user's signing.
A person sets the system in motion by pressing a button on a wristband, which sends a signal to a computer processor over the Bluetooth wireless protocol.
After signing a phrase, the person presses the button again, alerting the system to stop capturing the hand movements and to begin searching its database for the closest English-language matches.
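The following is a rough sketch of that press-to-sign workflow under simplified assumptions: the sensor frames, phrase templates and distance measure are stand-ins invented for illustration, not the researchers' actual code.

# Hypothetical press-to-sign loop: one button press starts capture, a second
# press stops it and looks up the closest stored phrase.

import random

PHRASE_TEMPLATES = {
    "where is the bathroom": [0.2, 0.8, 0.5],
    "i need a doctor": [0.9, 0.1, 0.4],
    "thank you": [0.3, 0.3, 0.9],
}

def read_sensor_frame():
    # Stand-in for a camera/accelerometer reading delivered over Bluetooth.
    return [random.random() for _ in range(3)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class SignCapture:
    def __init__(self):
        self.capturing = False
        self.frames = []

    def button_pressed(self):
        """First press starts capture; second press stops it and returns the match."""
        if not self.capturing:
            self.capturing = True
            self.frames = []
            return None
        self.capturing = False
        return self._closest_phrase()

    def add_frame(self, frame):
        if self.capturing:
            self.frames.append(frame)

    def _closest_phrase(self):
        # Collapse the captured frames to a mean feature vector, then pick
        # the phrase whose stored template is nearest to it.
        n = len(self.frames)
        mean = [sum(f[i] for f in self.frames) / n for i in range(3)]
        return min(PHRASE_TEMPLATES, key=lambda p: distance(mean, PHRASE_TEMPLATES[p]))

capture = SignCapture()
capture.button_pressed()                     # first press: start capturing
for _ in range(30):
    capture.add_frame(read_sensor_frame())   # frames arrive while the user signs
print(capture.button_pressed())              # second press: stop and match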
Thad Starner, director of the contextual computing group at Georgia Tech, says the vocabulary is limited to about 20 phrases, which is sufficient for a variety of situations.
According to Starner, to simplify the system the research team worked with George Washington University on another interface design, which uses accelerometers placed inside a glove to provide orientation and acceleration data.
Other sensors fixed to the elbow and shoulder determine the hand position in relation to the body.
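The article does not say how that position is computed; one common approach, shown below purely as a hypothetical illustration, is a simple two-link arm model driven by the shoulder and elbow angles. The link lengths and angles are made-up example values.

# Illustrative two-link planar arm: from shoulder and elbow angles to a hand
# position relative to the body. Not the researchers' actual computation.

import math

def hand_position(shoulder_angle, elbow_angle, upper_arm=0.30, forearm=0.28):
    """Return (x, y) of the hand relative to the shoulder, in metres.

    Angles are in radians, measured in the plane of movement: shoulder_angle
    from the body, elbow_angle as the bend of the forearm against the upper arm.
    """
    elbow_x = upper_arm * math.cos(shoulder_angle)
    elbow_y = upper_arm * math.sin(shoulder_angle)
    hand_x = elbow_x + forearm * math.cos(shoulder_angle + elbow_angle)
    hand_y = elbow_y + forearm * math.sin(shoulder_angle + elbow_angle)
    return hand_x, hand_y

# Example: arm raised 45 degrees at the shoulder, elbow bent a further 30 degrees.
print(hand_position(math.radians(45), math.radians(30)))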
Starner says the glove interface works because many signs are recognised by differences in their beginning hand shapes, intervening movements and ending hand shapes.
According to the researchers, the matching is done with algorithms known as hidden Markov models, similar to those used in speech recognition.
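As a rough illustration of that kind of matching, the sketch below scores a toy observation sequence against small left-to-right hidden Markov models, one per phrase, and picks the best; the three states loosely mirror the beginning-shape, movement and ending-shape structure Starner describes. All symbols and probabilities are invented example values, not the researchers' models.

# Score an observation sequence against per-phrase HMMs with the forward
# algorithm and return the best-matching phrase.

import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log P(obs | model) for a discrete-observation HMM via the forward algorithm."""
    n_states = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * trans[p][s] for p in range(n_states)) * emit[s][o]
            for s in range(n_states)
        ]
    total = sum(alpha)
    return math.log(total) if total > 0 else float("-inf")

# Three-state left-to-right models over a toy alphabet of hand-shape symbols
# 0, 1, 2. The phrases differ mainly in which symbols their states emit.
PHRASE_MODELS = {
    "thank you": dict(
        start=[1.0, 0.0, 0.0],
        trans=[[0.6, 0.4, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],
        emit=[[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]],
    ),
    "where is the bathroom": dict(
        start=[1.0, 0.0, 0.0],
        trans=[[0.6, 0.4, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],
        emit=[[0.1, 0.8, 0.1], [0.8, 0.1, 0.1], [0.1, 0.1, 0.8]],
    ),
}

def closest_phrase(obs):
    return max(PHRASE_MODELS, key=lambda p: forward_log_likelihood(obs, **PHRASE_MODELS[p]))

# A toy sequence that starts on symbol 0, moves through 1 and ends on 2.
print(closest_phrase([0, 0, 1, 1, 2, 2]))  # -> "thank you"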
Researchers worldwide are also working on technology that will allow translation between sign language and spoken languages.