Major tech companies including Microsoft, Apple, Google, Meta, and Amazon are working to improve speech recognition technologies for people with disabilities. The companies have partnered with the University of Illinois Urbana-Champaign (UIUC) under the Speech Accessibility Project.
The project aims to improve speech recognition for users with conditions that can alter speech patterns, such as ALS (more commonly known as Lou Gehrig's disease), cerebral palsy, Down syndrome, and Parkinson's disease. Microsoft recently used artificial intelligence to improve Teams calls on Windows, and Amazon launched new Echo devices, such as the Echo Dot and Alexa Voice Remote Pro, with improved audio architecture. The project could help extend such products to people with diverse speech patterns in the future.
Non-profit organizations such as the Davis Phinney Foundation for Parkinson's and Team Gleason have also joined the project. The effort addresses the fact that, despite advances in voice recognition and translation tools, these technologies remain difficult to use for people with atypical speech patterns.
Mark Hasegawa-Johnson, a professor at UIUC, stated:
“Speech interfaces should be available to everybody, and that includes people with disabilities. This task has been difficult because it requires a lot of infrastructure, ideally the kind that can be supported by leading technology companies, so we’ve created a uniquely interdisciplinary team with expertise in linguistics, speech, AI, security and privacy."
The initiative will collect speech samples from users with diverse speech patterns or speech disabilities to build a dataset for training machine learning models. Volunteers will be paid for their contributions, with recruitment initially focused on American English speakers.
Source: Engadget