Cognitive hearing aid uses AI to pick out single voices in crowded rooms


Why it matters to you

A smart cognitive hearing aid could make life significantly easier for people who are deaf or hard of hearing.

Whether it’s Apple’s smart cochlear implant collaboration or tools designed to make sign language communication easier, there is no shortage of cutting-edge gadgetry to make life easier for people who are deaf or hard of hearing. New technology from Columbia University’s School of Engineering and Applied Science could improve things further: a hearing aid designed to read brain activity, determine which voice the wearer is most interested in, and then focus on it. The resulting “cognitive hearing aid” could be transformative in settings like crowded rooms where multiple people are speaking at once.

“My research has been focused on understanding how speech is processed in the brain, and to create models of it that can be used in automatic speech-recognition technologies,” Nima Mesgarani, an associate professor of electrical engineering, told Digital Trends. “Working at the intersection of brain science and engineering, I saw a unique opportunity to combine the latest advances from both fields, to create a solution for decoding the attention of a listener to a specific speaker in a crowded scene which can be used to amplify that speaker relative to others.”


Mesgarani says that, until now, no hearing aid on the market has addressed this specific problem. While the latest hearing aids include technology to suppress background noise, they have no way of knowing which voices a wearer wants to hear and which are distractions.

The device Mesgarani and team came up with constantly monitors the brain activity of the wearer to solve this issue. To do this, it uses a deep neural network which automatically separates each of the speakers from the background hubbub and compares each speaker with the neural data from the user’s brain. The speaker who best matches the neural data is then amplified to assist the user.
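The article doesn’t publish the team’s actual algorithm, but the pipeline it describes can be sketched in simplified form: separate the speakers, compare each one against a signal decoded from the wearer’s brain activity, and boost the best match. The sketch below is a minimal illustration under strong assumptions — it stands in for the deep-network separation with pre-separated audio streams, stands in for neural decoding with a ready-made "attention envelope" array, and uses a plain Pearson correlation as the matching step; the function and variable names are hypothetical, not from the paper.

```python
import numpy as np

def select_attended_speaker(speaker_streams, neural_envelope, boost=1.0, suppress=0.2):
    """Toy attention-decoding step: pick the separated speaker whose
    amplitude envelope best correlates with the envelope decoded from
    the listener's brain activity, then remix with that speaker boosted.

    speaker_streams: list of 1-D float arrays, one per separated speaker
                     (in the real system these come from a deep neural
                     network that separates the mixture)
    neural_envelope: 1-D float array, the envelope decoded from neural data
    Returns (index of attended speaker, remixed audio).
    """
    scores = []
    for stream in speaker_streams:
        envelope = np.abs(stream)  # crude amplitude envelope
        n = min(len(envelope), len(neural_envelope))
        r = np.corrcoef(envelope[:n], neural_envelope[:n])[0, 1]
        scores.append(r)

    attended = int(np.argmax(scores))

    # Remix: amplify the attended speaker, attenuate the distractors.
    n = min(len(s) for s in speaker_streams)
    mix = np.zeros(n)
    for i, stream in enumerate(speaker_streams):
        gain = boost if i == attended else suppress
        mix += gain * stream[:n]
    return attended, mix
```

A real system would of course run this continuously on streaming audio and EEG-like recordings, and the matching step in the literature is typically a trained stimulus-reconstruction model rather than a raw correlation — this sketch only shows the overall shape of the compare-and-amplify idea.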

It’s a great concept — although it may still be a while before a finished product reaches wearers. Next, the team hopes to develop better algorithms for performing the task under all possible conditions, and to make the neural recording process less intrusive.

“Many researchers have been developing techniques for measuring the brain signal from inside the ear,” Mesgarani continued. “Imagine an earbud with electrodes placed around it. [Another solution might include] C-shape grids placed around the ear, similar to a [regular] hearing aid.”

A paper describing this work was recently published in the Journal of Neural Engineering.