Cognitive hearing aids could change how people hear.

Cognitive strain often accompanies hearing loss, because hearing impairment makes it difficult to understand what others are saying. As you can imagine, this makes it hard to hold a conversation. When a person with hearing loss is in a crowded or noisy place, understanding speech becomes increasingly taxing. To address this problem, scientists are developing cognitive hearing aids. The hope is that these devices can filter out background noise and amplify the right voice, making speech easier to understand.

How Hearing Aids Work

Hearing aids are the primary medical devices used to improve hearing. They are used by people who have hearing damage or who have developed hearing loss at some point in their lives. Around 48 million Americans report having hearing loss, yet only about 20 percent of them actually use hearing aids.

While these devices cannot fully correct hearing loss, they make it easier to understand and process sound. Some hearing aids can suppress background and wind noise, focus on sound coming from a particular direction, and emphasize voices. Still, these devices have a long way to go: even models that suppress background noise well have trouble isolating the speech of one specific person.

Working on Cognitive Hearing Aids

The goal of a cognitive hearing aid is to focus on one speaker among the voices of many others. A hearing aid that can lock onto a single person would make it much easier to understand someone in a crowded place. To know where to focus, a cognitive hearing aid would have to read signals from the wearer's brain, which would be quite an achievement.

At the Columbia University School of Engineering and Applied Science, researchers set out to determine how to achieve this kind of auditory focus with hearing aids. The scientists turned to deep neural network models, which let them separate multiple voices and determine which one the brain is focusing on, a process known as auditory attention decoding (AAD). The attended speaker is then amplified so the user can hear them better. Ultimately, this approach makes AAD practical in real-world listening conditions.
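To make that pipeline concrete, here is a minimal Python sketch of the final selection-and-amplification step. It assumes the separation network has already produced one waveform per speaker, and that an envelope of the attended speech has been decoded from the listener's neural recordings (a sketch of such a decoder appears after O'Sullivan's description below). The function names, frame size, and 9 dB gain are illustrative assumptions, not details from the study.

```python
import numpy as np

def envelope(audio, frame=160):
    """Crude amplitude envelope: mean absolute value per frame."""
    n = len(audio) // frame
    return np.abs(audio[:n * frame]).reshape(n, frame).mean(axis=1)

def select_attended(separated, neural_envelope, frame=160):
    """Pick the separated speaker whose envelope best matches the
    envelope decoded from the listener's neural signals."""
    scores = []
    for speech in separated:
        env = envelope(speech, frame)
        m = min(len(env), len(neural_envelope))
        # Pearson correlation between acoustic and neural envelopes
        scores.append(np.corrcoef(env[:m], neural_envelope[:m])[0, 1])
    return int(np.argmax(scores))

def remix(separated, attended_idx, gain_db=9.0):
    """Amplify the attended speaker relative to the other voices."""
    gain = 10 ** (gain_db / 20)
    out = sum(s * (gain if i == attended_idx else 1.0)
              for i, s in enumerate(separated))
    return out / max(1.0, np.max(np.abs(out)))  # keep output from clipping
```

In a real device this comparison would run continuously, so the aid could follow the wearer's attention as it shifts from one speaker to another.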

“This work combines the state-of-the-art from two disciplines: speech engineering and auditory attention decoding,” says Nima Mesgarani, associate professor of electrical engineering, who led the study. “We were able to develop this system once we made the breakthrough in using deep neural network models to separate speech.”

Previous studies helped the research team develop this new method. “Translating these findings to real-world applications poses many challenges,” notes James O’Sullivan, a postdoctoral research scientist working with Mesgarani and lead author of the study.

“Our study takes a significant step towards automatically separating an attended speaker from the mixture,” O’Sullivan continues. “To do so, we built deep neural network models that can automatically separate specific speakers from a mixture. We then compare each of these separated speakers with the neural signals to determine which voice the subject is listening to, and then amplify that specific voice for the listener.”
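O'Sullivan's description leaves open exactly how the neural signals are compared with each separated voice. A common technique in the AAD literature, and one plausible reading of the quote, is a linear "stimulus reconstruction" decoder: a regression model maps time-lagged EEG to an estimate of the attended speech envelope, which can then be correlated with each candidate envelope as in the earlier sketch. The lag count and ridge penalty below are illustrative assumptions, not values reported by the study.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel.
    eeg has shape (samples, channels)."""
    T, C = eeg.shape
    X = np.zeros((T, C * n_lags))
    for k in range(n_lags):
        X[k:, k * C:(k + 1) * C] = eeg[:T - k]
    return X

def train_decoder(eeg, attended_env, n_lags=32, ridge=1e3):
    """Fit weights so that lagged EEG projects onto the envelope of
    the speech the subject attended to during training."""
    X = lagged(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ attended_env)

def decode_envelope(eeg, weights, n_lags=32):
    """Reconstruct an envelope estimate from new EEG recordings."""
    return lagged(eeg, n_lags) @ weights
```

The ridge penalty keeps the weights stable despite correlated EEG channels; once trained, decode_envelope supplies the neural_envelope input used by select_attended above.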

Final Thoughts on the Study

“Our system demonstrates a significant improvement in both subjective and objective speech quality measures — almost all of our subjects said they wanted to continue to use it,” Mesgarani says. “Our novel framework for AAD bridges the gap between the most recent advancements in speech processing technologies and speech prosthesis research and moves us closer to the development of realistic hearing aid devices that can automatically and dynamically track a user’s direction of attention and amplify an attended speaker.”

Hopefully, cognitive hearing aids will convince more people with hearing loss to use these devices. Age-related hearing impairment often sets in around the age of 65, and these devices can significantly improve quality of life.