Our Brains Have Built-In Noise Reduction

[Image: Ferret in dramatic lighting]

Ferrets give us a better understanding of how the brain boosts relevant sounds and suppresses noise.

Working with ferrets – whose auditory system is a lot like ours – a team of researchers at the University of Maryland has shown how a form of noise reduction happens in the brain: speech and other dynamic signals get boosted while noise gets dampened, leading to better clarity for relevant signals.

A new study shows how the mammalian brain can distinguish the signal from the noise. Brain cells in the primary auditory cortex can both turn down the noise and increase the gain on the signal. The results show how the brain processes sound in noisy environments, and might eventually help in the development of better voice recognition devices, including improvements to cochlear implants for those with hearing loss.

The team, led by Nima Mesgarani, recorded from electrodes in the ferrets’ primary auditory cortex (A1) while playing ferret vocalizations and human speech over a variety of background noises. When they analyzed the electrode data and reconstructed the signals as perceived by the ferret, they found a much cleaner signal, with the noise reduced.
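Reconstructing a stimulus from population responses is often done with a linear decoder. Here is a minimal, self-contained sketch in Python/NumPy; the sizes, tuning weights, and noise level are invented for illustration and are not values or methods from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a "spectrogram" stimulus S drives simulated neurons linearly,
# and we learn the inverse map from responses back to the stimulus.
T, F, N = 2000, 16, 40                          # time bins, freq channels, neurons
S = rng.standard_normal((T, F))                 # clean stimulus
W = rng.standard_normal((F, N)) / np.sqrt(F)    # each neuron's spectral tuning
R = S @ W + 0.1 * rng.standard_normal((T, N))   # noisy neural responses

# Fit a ridge-regression decoder G so that R @ G approximates S,
# then reconstruct the stimulus from the responses alone.
lam = 1.0
G = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ S)
S_hat = R @ G

# How well does the reconstruction match the original stimulus?
corr = np.corrcoef(S_hat.ravel(), S.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.3f}")
```

In the actual experiment the "responses" come from recorded neurons rather than a simulated linear population, but the decoding idea – learn a mapping from neural activity back to the sound's spectrogram – is the same.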

The ferret vocalizations and speech were presented alone, against white noise, against pink noise (noise with equal energy per octave, which sounds lower-pitched than white noise), and against reverberation. The researchers then fed the neural signals recorded from the electrodes into a computer model to reconstruct the sounds the animal was hearing.
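For readers curious about the difference between the two noise types, one common recipe synthesizes pink noise by shaping white noise to a 1/f power spectrum (a generic technique, not necessarily how the study's stimuli were made):

```python
import numpy as np

def pink_noise(n, rng):
    """Shape white noise to a 1/f power spectrum (equal energy per octave)."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])  # amplitude ~ 1/sqrt(f) -> power ~ 1/f
    pink = np.fft.irfft(spectrum * scale, n)
    return pink / np.std(pink)            # normalize to unit variance

rng = np.random.default_rng(0)
white = rng.standard_normal(16384)  # equal power at all frequencies
pink = pink_noise(16384, rng)       # power falls off as 1/f
```

Because power falls off with frequency, pink noise concentrates its energy at the low end of the spectrum, which is why it sounds lower-pitched than white noise.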

The key takeaway from the research is that the brain actively suppresses noise, rather than merely boosting the signal. From the paper’s abstract:

“We show that the auditory system maintains a robust representation of speech in noisy and reverberant conditions by preserving the same statistical distribution of responses in all conditions. Reconstructed stimulus from population of cortical neurons resembles more the original clean than the distorted signal. We show that a linear spectrotemporal receptive field model of neurons with a static nonlinearity fails to account for the neural noise reduction. Although replacing static nonlinearity with a dynamic model of synaptic depression can account for the reduction of additive noise, only the combined model with feedback gain normalization is able to predict the effects across both additive and reverberant conditions.”
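The "gain normalization" idea can be illustrated with a generic divisive-normalization toy model (a simplified sketch for intuition, not the paper's actual feedback model): each channel's drive is divided by the pooled population activity, so overall level changes are compressed while the relative pattern across channels is preserved.

```python
import numpy as np

def normalize(drive, sigma=1.0):
    # Divide each channel by the pooled (mean) population activity plus a
    # constant, so a uniform increase in input is largely cancelled out.
    pooled = drive.mean()
    return drive / (sigma + pooled)

rng = np.random.default_rng(2)
pattern = np.abs(rng.standard_normal(8))   # hypothetical channel drives

quiet = normalize(0.5 * pattern)
loud = normalize(5.0 * pattern)            # same pattern, 10x louder

gain = loud.max() / quiet.max()
print(f"10x louder input -> only {gain:.1f}x larger output")
```

The relative pattern across channels comes out identical in the quiet and loud conditions; only the overall gain changes, and by much less than the 10x change in the input.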

The abstract of the paper can be read here (subscription required for the full paper).

Image by Flickr user Tequiua