myVoice

The project is a prototype glove that translates American Sign Language (ASL) into text or speech. It uses five flex sensors, a gyroscope, an accelerometer, and contact sensors to capture the motion and posture of the hand. The sensor readings are matched against the project's database, and the corresponding English word is displayed in a mobile phone application. A support vector machine (SVM) classifier interprets the signs by mapping the values read from the sensors to specific English words stored in that database. Dimensionality reduction is applied so that the system can make its predictions with minimal processing time and computational complexity.
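The sketch below illustrates the kind of pipeline described above: sensor readings are standardized, reduced in dimensionality, and classified by an SVM into words. It is a minimal illustration using scikit-learn, assuming a particular sensor layout (five flex values, three gyroscope axes, three accelerometer axes, two contact sensors), PCA as the dimensionality reduction step, and made-up training data; none of the variable names or values come from the project itself.

```python
# Minimal sketch of a glove-to-word classification pipeline (illustrative only).
# The feature layout, PCA choice, and sample data are assumptions, not the
# project's actual code or dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training samples: each row is one hand reading
# [flex1..flex5, gyro_x, gyro_y, gyro_z, accel_x, accel_y, accel_z, contact1, contact2],
# and each label is the English word stored in the database.
X_train = np.array([
    [512, 498, 530, 505, 520, 0.1, -0.2, 0.0, 0.0, 9.8, 0.1, 0, 1],
    [515, 501, 526, 508, 517, 0.0, -0.1, 0.1, 0.1, 9.7, 0.0, 0, 1],
    [300, 310, 295, 305, 290, 0.0,  0.0, 0.3, 0.2, 9.7, 0.0, 1, 0],
    [305, 308, 298, 302, 293, 0.1,  0.1, 0.2, 0.2, 9.8, 0.1, 1, 0],
])
y_train = np.array(["hello", "hello", "thanks", "thanks"])

# Standardize the features, reduce dimensionality with PCA, then classify with an SVM.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=2),   # keep only the most informative components
    SVC(kernel="rbf"),     # support vector classifier
)
model.fit(X_train, y_train)

# Classify a new glove reading; the predicted word would be sent to the phone app.
new_reading = np.array([[510, 500, 528, 507, 518, 0.1, -0.1, 0.0, 0.1, 9.8, 0.0, 0, 1]])
print(model.predict(new_reading)[0])  # -> "hello"
```

In a deployed glove the same trained model would typically be exported and evaluated on the embedded side or on the phone; the pipeline above only shows how sensor vectors map to words through dimensionality reduction and an SVM.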

Project Details

[photo]