These days, our lives are run by genius gadgets: iPhones, iPads, computers, robots. Did you ever stop to wonder who put the “Siri” in Siri? And what does R2-D2’s brain — if you can call it that — look like? As technology and computers make their presence more and more known in every facet of our lives, wouldn’t it be good to understand how it all works?
Michael A. Arbib knows a thing or two about computers, math, and the brain. He is the Fletcher Jones Professor of Computer Science, as well as a professor of biological sciences, biomedical engineering, electrical engineering, neuroscience, and psychology at the University of Southern California, which he joined in September 1986.
Born in England in 1940, Arbib grew up in Australia (earning a degree in pure mathematics from Sydney University) and then moved to MIT, attracted by the leading figures in cybernetics there, including the mathematician Norbert Wiener. After getting his Ph.D. in 1963, he spent two years roaming the world and lecturing in the U.S., Europe, the Soviet Union, and Australia before taking a position at Stanford. After five years there, Arbib became chairman of the Department of Computer and Information Science at the University of Massachusetts at Amherst in 1970, and remained in that department until his move to USC.
His first book, “Brains, Machines, and Mathematics,” set the stage for a career based on the argument that we can learn much about machines from studying brains, and much about brains from studying machines. Arbib has always promoted an interdisciplinary environment in which computer scientists and engineers can talk to neuroscientists and cognitive scientists, and this interplay has led him to work in computer science, linguistics, computational neuroscience, and neuroinformatics. He was also highly involved in providing the first computational model of mirror neurons and conducting some of the key initial imaging studies of the human mirror system.
2012 saw the publication of Arbib’s 40th book, “How the Brain Got Language: The Mirror System Hypothesis.” “Language, Music and the Brain: A Mysterious Relationship” (based on a Strüngmann Forum he organized in Frankfurt in May 2011) will follow from the MIT Press in June 2013. He is currently a board member of the Academy of Neuroscience for Architecture. We caught up with the busy Arbib to delve into the complex circuitry of computational neuroscience, artificial intelligence, and neuromorphic architecture.
Brain World: How is the brain like a computer, and how is it different?
Michael Arbib: The brain is like a computer in the sense that it stores information and processes it. But it has several characteristics that are very distinctive. One is that it is more like a robot than a computer in that it is continually receiving information from the world while continually acting in the world. We have eyes and ears and are continually monitoring what’s going on around us, and we have our limbs so we can move around in the world and interact with it. One of the slogans people use is that we are “embodied” — interactive with the world both physically and socially. So that’s one big difference from a personal computer, which has no will of its own but just sits there passively until we send a request.
The second thing is that the classic model of a single computer has passive storage of information and a central processing unit, which pulls up one instruction, finds the data for it, goes to the next instruction, and so on, one process at a time. It’s a serial computer, whereas our brain has hundreds of different brain regions and about a hundred billion neurons, each of which is an active processor. So computation in the brain proceeds by this highly distributed interaction between all these brain regions, and these brain regions themselves are products of all the interactions of their neurons.
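Arbib's contrast between a serial processor and many simple units all computing at once can be sketched in a few lines of Python. This is a toy illustration, not a model from the interview: the weights, the three-unit "network," and the tanh response function are all invented for the example.

```python
import math

# Toy illustration (invented for this article): a tiny "network" of
# three units, each responding to a weighted sum of the others'
# activity. A serial computer would update one unit per step; a
# brain-like system lets every unit read the same state at once.

weights = [[0.0, 0.5, -0.3],
           [0.2, 0.0, 0.4],
           [-0.1, 0.6, 0.0]]   # hypothetical connection strengths
state = [1.0, 1.0, 1.0]        # current activity of each unit

def unit_output(i, state):
    """One unit's response to the current activity of all the others."""
    total = sum(w * s for w, s in zip(weights[i], state))
    return math.tanh(total)

# "Parallel" step: every unit computes from the same old state
# simultaneously, rather than waiting its turn in a serial loop.
new_state = [unit_output(i, state) for i in range(3)]
print([round(x, 3) for x in new_state])  # → [0.197, 0.537, 0.462]
```

The point of the sketch is the scheduling, not the numbers: in the brain, nothing plays the role of the central processing unit pulling up one instruction at a time.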
Another important difference is that the brain is adaptive — it learns — whereas somebody external to the computer has to write a program telling it step by step what to do, though now we begin to write programs designed to use “machine learning” to extract patterns from masses of data. Our synapses, the connection points between the neurons, are all plastic in the sense that their values can change as a function of experience. A child develops not because the parents insert a USB stick into a slot in the head that downloads how to behave in that society. It’s their embodied interaction with the physical and social world that over time changes their neural connections so the child can function successfully as a member of society.
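The idea that a synapse's "value" changes with experience can be made concrete with a Hebbian-style learning rule — the classic "neurons that fire together wire together" principle. This sketch is invented for illustration (the rule, rate, and numbers are not from the interview): a single connection weight strengthens each time the two neurons it joins are active at the same time.

```python
# Toy illustration (not from the interview): a single "synapse" whose
# weight grows under a Hebbian-style rule -- it strengthens whenever
# the pre- and post-synaptic neurons are active together.

def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen the connection in proportion to correlated activity."""
    return weight + rate * pre * post

weight = 0.0
# Simulated "experience": the two neurons repeatedly fire together.
for _ in range(20):
    pre, post = 1, 1              # correlated activity on this trial
    weight = hebbian_update(weight, pre, post)

print(round(weight, 2))  # → 2.0: the connection strengthened through use
```

Nothing external programmed the final weight; it emerged from the history of activity, which is the sense in which the child's interactions, not a downloaded program, shape the brain.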
BW: What is “artificial intelligence”?
MA: In 1956, there was a conference at Dartmouth College where John McCarthy (who died recently) coined the term “artificial intelligence.” He was simply saying that there are things like playing a game of chess, recognizing objects in the world, or using language that all require intelligence, and so the question is “How can we program computers of the current day to exhibit aspects of that intelligence?”
I think what has happened since then is that we have more and more examples of something that approaches intelligence. We have pretty good speech-recognition systems like Siri, the question-answering system on the iPhone. We are not trying to create a whole human intelligence. We’re just asking what aspects of human intelligence it would be useful to have a machine do for us, and then seeing if and how we can use today’s computers to achieve it. Getting the computer to do it is what makes it artificial. And then, depending on the particular system, it may be inspired by the study of how the brain does it, or it may just be the accumulated skills of computer programmers finding the way to do a particular job.
Let me give a simple example. A vending machine can recognize dollar bills. It does that crudely by recognizing the pattern of green and black and white. It doesn’t recognize the face of George Washington. So there’s an example of, if you will, a very small packet of intelligence, recognizing American currency by a method that uses a computer but is nothing like the way a human would do it.
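The vending-machine example can be sketched as a crude color-proportion check. Everything here is hypothetical — the color labels, the expected proportions, and the tolerance are invented to illustrate the point that the machine matches a rough statistical pattern and has no concept of George Washington's face.

```python
from collections import Counter

# Hypothetical sketch of the vending-machine idea: classify a "bill"
# by its rough mix of colors, with no notion of faces or fine detail.
# All names, proportions, and thresholds are invented for illustration.

def color_profile(pixels):
    """Fraction of each color label in a list of pixel labels."""
    counts = Counter(pixels)
    total = len(pixels)
    return {color: counts[color] / total for color in counts}

def looks_like_dollar(pixels, tolerance=0.1):
    """Crude match against an assumed green/black/white mix."""
    expected = {"green": 0.6, "black": 0.2, "white": 0.2}
    profile = color_profile(pixels)
    return all(abs(profile.get(color, 0.0) - share) <= tolerance
               for color, share in expected.items())

bill = ["green"] * 60 + ["black"] * 20 + ["white"] * 20
print(looks_like_dollar(bill))  # → True: the color mix fits the template
```

A human recognizes the engraving and the portrait; this "small packet of intelligence" passes anything with roughly the right colors, which is exactly the gap between the machine's method and ours.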
BW: What are “mirror neurons,” and how do they impact social neuroscience?
MA: The original discovery was made by a group in Parma, Italy, led by Giacomo Rizzolatti. By recording the electrical activity of neurons in the brains of macaque monkeys, they found some neurons that behaved similarly when the monkey carried out an action and when he saw a human carry out the same action, and they called them mirror neurons. That was the starting point, and since then the Parma group has done many more experiments. A few other people are doing animal experiments, but the majority of studies have used human brain imaging.
The catch with human brain imaging is you just see activity averaged over big chunks of brain. You don’t see what individual neurons are doing. This is where computational neuroscience comes in, by trying to describe explicitly the process that links neural circuitry to activity in brain regions, showing how this activity relates to the animal’s activity in its world. This is the way we try to reconcile what we learn from single-cell studies in the monkey with brain imaging in humans.
We have to go “beyond the mirror” to understand the mirror neurons as part of a larger system. How does visual or auditory input get to the mirror neurons? How does the activity of the mirror neurons get out to control behavior? How does it link to different memory systems? Understanding that larger system will have a big impact on developing the neuroscience of social behavior in animals and humans. One result of that effort is my new book, “How the Brain Got Language,” which posits a special role for mirror neurons in the evolution of the human brain that gave our species its unique capacity for language.