What neuroscience can learn from computer science

Marta Kryven, Public Library of Science

After visiting the International Conference on Perceptual Organization (ICPO) in June 2015, I made a list of trends in neuroscience-inspired computer applications that I will explore in more detail in this post:

- Computer vision based on features of early vision
- Gestalt-based image segmentation (Levinshtein, Sminchisescu, Dickinson, 2012)
- Shape from shading and highlights, which is described in more detail in a recent PLOS Student Blog post
- Foveated displays (Jacobs et al. 2015)
- Perceptually plausible formal shape representations

My favorite example of how neuroscience and computer science interlock is computer vision based on features of early vision. During the first 80-150 ms, before awareness of the object has emerged, your brain is hard at work assembling shapes from short and long edges in various orientations, which are coded by location-specific neurons in the primary visual area, V1.

Image credit: Jim.belk, Public Domain via Wikimedia Commons

At about the time that Hubel and Wiesel made their breakthrough, mathematicians were looking for new tools for signal processing, to separate data from noise in a signal.

What do computers see?

So, how far are computer scientists from modeling the brain?
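The V1 story above maps directly onto a standard computer-vision building block: a bank of oriented Gabor filters, a common mathematical model of V1 simple-cell receptive fields. The sketch below is a minimal illustration in NumPy, not anything from the post itself; the image, filter sizes, and angles are all arbitrary choices for demonstration.

```python
import numpy as np

def gabor_kernel(theta, size=15, sigma=3.0, wavelength=6.0):
    """Oriented Gabor filter: a sinusoid under a Gaussian envelope,
    often used to model V1 simple-cell receptive fields."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the filter prefers intensity changes along theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()  # zero-mean: uniform brightness gives no response

def filter_response(image, kernel):
    """Total squared response of the kernel slid over the image
    (valid positions only, no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    total = 0.0
    for i in range(ih - kh + 1):
        for j in range(iw - kw + 1):
            total += float(np.sum(image[i:i + kh, j:j + kw] * kernel)) ** 2
    return total

# Synthetic stimulus: a vertical edge (dark left half, bright right half)
img = np.zeros((32, 32))
img[:, 16:] = 1.0

# A tiny "orientation column": filters tuned to 0, 45, 90, 135 degrees
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
responses = [filter_response(img, gabor_kernel(a)) for a in angles]
best = angles[int(np.argmax(responses))]
print(f"strongest response at {np.degrees(best):.0f} degrees")
```

As in V1, each filter only fires for edges near its preferred orientation; the vertical edge excites the filter whose carrier varies horizontally, while the others stay nearly silent.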
