“Sign language has helped the hearing-impaired communicate for many centuries, way before it was formalised and officially recognised, but this long-standing language of gestures has now been given a 21st-century technological upgrade. Saudi designer and media artist Hadeel Ayoub has invented a smart glove that recognises hand movements and converts them into the relevant text.”
“This new smart glove can turn sign language into text and speech”
Interesting. I wonder if it is able to judge a baseline to interpret signs that make use of three-dimensional space and not just hand and finger shape. In ASL, where a sign is gestured can significantly change the meaning of what is being conveyed, for example whether the sign is formed at the crown, the chin, or another part of the body. Also, repetition of a sign in space can impart significant meaning.
Well, if the glove incorporates an IMU (accelerometer, compass, and/or gyroscope), it is certainly possible to build a 3D image of the movements being performed. Does the syntax need to know where the hand is in relation to specific other body parts? If so, an optical interpreter would likely be a better solution.
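To give a sense of what "building a 3D image" from an IMU would mean in practice, here is a minimal sketch (not from the article, and simplified) of double-integrating accelerometer samples into a rough hand trajectory. A real glove would also need gyroscope/compass fusion and drift correction, since raw double integration drifts quickly:

```python
def integrate_trajectory(accel_samples, dt):
    """Double-integrate (ax, ay, az) samples into positions.

    accel_samples: list of (ax, ay, az) in m/s^2, with gravity
        already removed (a real IMU pipeline would subtract it
        using the orientation estimate).
    dt: sample interval in seconds.
    Returns a list of (x, y, z) positions, starting at the origin.
    """
    vx = vy = vz = 0.0
    x = y = z = 0.0
    positions = [(0.0, 0.0, 0.0)]
    for ax, ay, az in accel_samples:
        # First integration: acceleration -> velocity
        vx += ax * dt
        vy += ay * dt
        vz += az * dt
        # Second integration: velocity -> position
        x += vx * dt
        y += vy * dt
        z += vz * dt
        positions.append((x, y, z))
    return positions

# Example: constant 1 m/s^2 acceleration along x for 1 s at 10 Hz.
path = integrate_trajectory([(1.0, 0.0, 0.0)] * 10, dt=0.1)
```

This illustrates why relative motion (e.g. "hand moves outward from the body") is recoverable, while absolute position relative to the chin or forehead is much harder: any sensor bias accumulates quadratically through the two integrations.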
It can. For example, the same hand shape is used for mother, father, grandmother, and grandfather. The hand shape at the chin indicates mother, and a repetition of the hand shape moving outward away from the body indicates grandmother. The male signs use the identical hand shape at the forehead.
Well, the chin and forehead are close enough together that I doubt a glove with an IMU could distinguish the positions. The movement away from the body could be reliably detected, though.
As a social worker, even if it were only “close enough” it would still be worth it. Right now I have to type on a computer, write on paper, or arrange a week in advance for an interpreter.