Distinguished Lecture Series: Andreas Tolias, Baylor College of Medicine
Abstract: Despite major advances in artificial intelligence through deep learning methods, computer algorithms remain vastly inferior to mammalian brains and lack a fundamental feature of animal intelligence: they generalize poorly outside the domain of the data they have been trained on. This results in brittleness (e.g., susceptibility to adversarial attacks) and poor performance in transfer learning, few-shot learning, causal reasoning, and scene understanding, as well as difficulty with lifelong and unsupervised learning – all important hallmarks of human intelligence. We conjecture that this gap arises because current deep learning architectures are severely under-constrained, lacking key model biases found in the brain that are instantiated by the multitude of cell types, pervasive feedback, innately structured connectivity, specific non-linearities, and local learning rules. There is ample behavioral evidence that the brain performs approximate Bayesian inference under a generative model of the world (also known as inverse graphics or analysis by synthesis), so the brain must have evolved a strong and useful model bias that allows it to efficiently learn such a generative model. Our goal, therefore, is to learn the brain's model bias in order to engineer less artificial, and more intelligent, neural networks. Experimental neuroscience now has technologies that enable us to analyze how brain circuits work in great detail and with impressive breadth. Using tour-de-force experimental methods, we have been collecting an unprecedented volume of neural responses (more than 1.5 million neuron-hours) from the visual cortex, and we have developed computational models that we use to extract principles of the functional organization of the brain and learn the brain's model biases.
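The "analysis by synthesis" idea mentioned above can be illustrated with a toy sketch: invert a simple forward (generative) model by scoring candidate latent causes against an observation. The model, parameter values, and function names below are purely illustrative assumptions, not details from the talk.

```python
import math

def generative_model(z):
    # Forward ("synthesis") model: a latent cause z predicts the observation.
    # This linear toy model is an assumption for illustration only.
    return 2.0 * z + 1.0

def posterior(x, z_grid, sigma=0.5):
    # Approximate Bayesian inference by grid evaluation:
    # p(z | x) is proportional to p(x | z) p(z), with a uniform prior over z_grid
    # and a Gaussian likelihood around the model's prediction.
    likelihoods = [math.exp(-(x - generative_model(z)) ** 2 / (2 * sigma ** 2))
                   for z in z_grid]
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

z_grid = [i / 10 for i in range(0, 21)]   # candidate latent causes 0.0 .. 2.0
p = posterior(x=3.0, z_grid=z_grid)
z_map = z_grid[p.index(max(p))]           # most probable latent cause
print(z_map)                              # peaks at z = 1.0, since generative_model(1.0) == 3.0
```

The point of the sketch is the direction of computation: perception is framed as searching for the latent cause whose synthesized prediction best explains the observed data, rather than mapping data directly to labels.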
Biography: Dr. Andreas Tolias' research goal is to decipher the brain's mechanisms of intelligence. He studies how networks of neurons are structurally and functionally organized to process information. Research in his lab combines computational and machine learning approaches with electrophysiological (whole-cell and multi-electrode extracellular), multi-photon imaging, molecular, and behavioral methods. He received his Ph.D. from MIT in Computational and Systems Neuroscience. The current focus of research in his lab is to reverse engineer neocortical intelligence. To this end, his lab is deciphering the structure of microcircuits in the visual cortex (defining cell types and connectivity), elucidating the computations they perform, and applying these principles to develop novel machine learning algorithms. He has trained numerous graduate students and postdoctoral fellows and enjoys mentoring immensely.