on Oct 1 2014
MotionSavvy works by combining a gesture sensor that tracks your hands and arms, a tablet, a custom case, and our software. The user simply launches the application and starts to sign. Both the deaf and the hearing user can see what is being said in real time.
The typical flow of a conversation involves sign language translated to voice, and then voice recognition to text (and eventually signing avatars). Both sides of the conversation can see what is being said via text displayed in the chat area. As a last resort, users can also tap the screen to bring up an on-screen keyboard for manual input. Note that all text, whether typed or signed, comes out in grammatically correct English.
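The flow above can be sketched as a shared chat log fed by three input paths. This is a minimal illustration, not MotionSavvy's actual code; the class and method names are hypothetical, and the recognizers themselves are assumed to exist upstream.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChatMessage:
    speaker: str  # "deaf" or "hearing"
    text: str     # already rendered as grammatically correct English

@dataclass
class Conversation:
    messages: List[ChatMessage] = field(default_factory=list)

    def add_signed(self, english_text: str) -> None:
        # Sign-language input, translated to English by the recognition engine
        self.messages.append(ChatMessage("deaf", english_text))

    def add_spoken(self, english_text: str) -> None:
        # Voice input, transcribed by the on-device speech recognizer
        self.messages.append(ChatMessage("hearing", english_text))

    def add_typed(self, speaker: str, english_text: str) -> None:
        # Last-resort manual input via the on-screen keyboard
        self.messages.append(ChatMessage(speaker, english_text))

convo = Conversation()
convo.add_signed("I want that t-bone steak.")
convo.add_spoken("Sure, coming right up.")
```

Both parties read from the same `messages` list, which is what makes the conversation visible to deaf and hearing users alike.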
UNI contains two important pieces of software that enable real-time recognition: the SignBuilder training program and the real-time recognition program.
1. MotionSavvy employs a dedicated team of ASL signers. This team populates the ASL dictionaries with as many variations as possible (10 signers can record over 9,000 samples a week). They use the SignBuilder software, which lets them simply record and upload signs to our cloud infrastructure for learning.
2. MotionSavvy takes all those recorded samples and efficiently generates a model of ASL, which is then distributed via updates to the individual tablets.
3. When the user launches or updates the software, the latest model is loaded into the recognition engine for real-time recognition.
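The three steps above can be sketched as a record/train/sync loop. This is a hedged, illustrative sketch only: the function names and data shapes are assumptions, and the "training" step here is a stand-in grouping-by-gloss rather than a real machine-learning pipeline.

```python
import hashlib
import json

def record_sign(signer_id, gloss, frames):
    """A SignBuilder-style sample: hand-tracking frames labelled with an ASL gloss."""
    return {"signer": signer_id, "gloss": gloss, "frames": frames}

def upload(samples, cloud_store):
    """Step 1: signers upload labelled samples to the cloud store."""
    cloud_store.extend(samples)

def train_model(cloud_store):
    """Step 2: build a recognition model from all uploaded samples.
    (Stand-in: group samples by gloss; the real system trains on them.)"""
    glosses = {}
    for sample in cloud_store:
        glosses.setdefault(sample["gloss"], []).append(sample["frames"])
    version = hashlib.md5(
        json.dumps(glosses, sort_keys=True).encode()).hexdigest()[:8]
    return {"version": version, "glosses": glosses}

def sync_on_launch(device, latest_model):
    """Step 3: on launch/update, load the newest model into the device."""
    if device.get("model_version") != latest_model["version"]:
        device["model"] = latest_model
        device["model_version"] = latest_model["version"]
    return device

# Example round trip through all three steps
cloud = []
upload([record_sign("signer-01", "HELLO", [[0.1, 0.2, 0.3]])], cloud)
model = train_model(cloud)
device = sync_on_launch({}, model)
```

Versioning the model lets tablets skip the download when they are already up to date.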
Interested users can view the SignBuilder software in action at the following blog post:
Fortunately, we can add other sign languages largely with technology. For instance, French Sign Language is quite similar to ASL, in the same way Spanish is to English. Many signs will already match or be very similar. By leveraging the crowd, we can improve the real-time analysis and comparison of stored signs to build out new languages.
MotionSavvy built its SignBuilder tool for this very reason:
Speed isn’t an issue, and will only improve over time as our database is perfected. The real pain point for us technically is assigning blame for mistakes when signing: it is difficult to discern whether the user “misspoke” or whether the LEAP tracked the hand incorrectly.
To address this, we’re providing visual feedback on screen in the form of 3D hands, so the user can see a physical representation of what the LEAP sees as they sign. All the while, MotionSavvy is recognizing, learning, and storing their movements and vocabulary. It’s a win-win.
The second biggest technical issue is covering the many different variations and regional signing styles. This is handled via machine learning and our SignBuilder program. One example that comes to mind is a deaf individual who comes into a Wegmans store and wants to get a steak. In ASL the signs for steak, content, and meat are all the same, and are understood from context. A plaintext recognition of that ASL phrase in ASL grammar would be "want (meat/steak/content) t-bone", which our system converts to "I want that t-bone steak".
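The disambiguation step can be illustrated with a toy sketch: pick the English reading of an ambiguous gloss from its neighbours, then apply a small grammar rewrite. The gloss names, the neighbour heuristic, and the rewrite rule are all hypothetical stand-ins for the real language-processing engine.

```python
# Glosses whose handshape is shared by several English words in ASL
AMBIGUOUS = {"MEAT/STEAK/CONTENT"}

def resolve(gloss, context):
    """Pick one English reading for a gloss, using neighbouring glosses."""
    if gloss not in AMBIGUOUS:
        return gloss.lower()
    if "T-BONE" in context or "RIBEYE" in context:
        return "steak"      # cut-of-beef context
    if "HAPPY" in context or "SATISFIED" in context:
        return "content"    # emotional context
    return "meat"           # default reading

def to_english(glosses):
    """Toy ASL-grammar-to-English rewrite for the 'WANT X Y' pattern."""
    words = [resolve(g, glosses) for g in glosses]
    if words[0] == "want":
        return "I want that " + words[-1] + " " + words[1] + "."
    return " ".join(words)
```

With the example from the text, `to_english(["WANT", "MEAT/STEAK/CONTENT", "T-BONE"])` produces the English sentence rather than the raw gloss sequence.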
MotionSavvy's product appeals to many businesses because of its ability to help them meet ADA compliance. MotionSavvy charges these businesses on a per-subscription basis (1 device is 1 subscription). A subscription can be purchased at a $350 price point. Additionally, MotionSavvy can be hired on a consulting basis to produce use cases and custom dictionaries for the business.
MotionSavvy has built-in analytics that various parties can use to determine the accuracy of the current dictionary. To date our accuracy is about 96%! Adding more signs and variations will only push this up.
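An accuracy metric like the one quoted can be computed from a recognition log of predicted versus confirmed signs. The log format and function below are an assumption for illustration, not MotionSavvy's actual analytics code.

```python
def accuracy(recognition_log):
    """Fraction of recognitions the user accepted without correction.
    Each entry is (predicted_sign, confirmed_sign); the format is hypothetical."""
    if not recognition_log:
        return 0.0
    correct = sum(1 for predicted, confirmed in recognition_log
                  if predicted == confirmed)
    return correct / len(recognition_log)

log = [("HELLO", "HELLO"), ("THANK-YOU", "THANK-YOU"), ("STEAK", "MEAT")]
score = accuracy(log)
```

Breaking the same metric down per sign would show exactly which dictionary entries need more recorded variations.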
MotionSavvy has built a custom language-processing engine that uses the standard offline (non-networked, non-data-dependent) voice recognition from Intel / Nuance, packaged onto the tablets we use. This means no audio is sent to a cloud for processing; all recognition is done in real time on the device.
We want MotionSavvy to fill the gaps in everyday one-on-one communication. 38 million people in the U.S. are hard of hearing, and all of these individuals have experienced traveling through airports, going to the hospital, or doing some type of banking.
There are 360 million deaf people worldwide and 38 million here in the U.S. ASL is the third most used “language” in America, and $10B is spent every year on deaf and hard-of-hearing services such as interpreters, video relay systems, and equipment. MotionSavvy's product is useful for anybody looking to better serve deaf / ASL customers in fields including, but not limited to, medical, government, education, banking, transportation, fast food, and retail.
Yes. Educational applications that use the LEAP and our software to detect how you sign would let you practice your signing with real feedback, versus just seeing an image on a screen or in a book.
Additionally, we have been approached by various groups that want our recognition and training programs for use in cars.