Machine-learning system processes sounds like humans do


Anne Trafton


Using a machine-learning system known as a deep neural network, MIT researchers have created the first model that can replicate human performance on auditory tasks such as identifying a musical genre.

“What these models give us, for the first time, is machine systems that can perform sensory tasks that matter to humans and that do so at human levels,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT and the senior author of the study.

Modeling the brain

When deep neural networks were first developed in the 1980s, neuroscientists hoped that such systems could be used to model the human brain. The MIT researchers trained their neural network to perform two auditory tasks, one involving speech and the other involving music. To see whether the model's stages might replicate how the human auditory cortex processes sound information, the researchers used functional magnetic resonance imaging (fMRI) to measure activity in different regions of the auditory cortex as the brain processes real-world sounds.
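To give a rough sense of what "a deep neural network performing an auditory task" means in practice, here is a minimal NumPy sketch of a forward pass through a tiny 1-D convolutional classifier over a random waveform. This is an illustration only, not the architecture from the study: the layer sizes, pooling stride, and genre labels are all invented for the example.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution: x (in_ch, T), w (out_ch, in_ch, k), b (out_ch,)."""
    out_ch, in_ch, k = w.shape
    T = x.shape[1] - k + 1
    y = np.zeros((out_ch, T))
    for o in range(out_ch):
        for t in range(T):
            y[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return y

def relu(x):
    return np.maximum(x, 0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
waveform = rng.standard_normal((1, 256))  # stand-in for a mono audio snippet

# Convolutional layer: slides small filters along the waveform,
# analogous to early stages that extract local acoustic features.
w1, b1 = rng.standard_normal((8, 1, 9)) * 0.1, np.zeros(8)
h = relu(conv1d(waveform, w1, b1))
h = h[:, ::4]  # stride-4 subsampling (crude pooling)

# Linear readout over hypothetical genre labels.
genres = ["classical", "jazz", "rock", "hip-hop"]
w_out = rng.standard_normal((len(genres), h.size)) * 0.01
probs = softmax(w_out @ h.ravel())
print(genres[int(np.argmax(probs))])
```

In a real system the filter weights would be learned from labeled audio rather than drawn at random, and many such convolution–pool stages would be stacked, which is what makes the intermediate stages candidates for comparison against successive regions of auditory cortex.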

