BW: What is the connection between computational neuroscience and artificial intelligence?
MA: Artificial intelligence gets computers to do “something intelligent” without necessarily worrying about how humans do it. Computational neuroscience is the effort to find explicit formulas that tell us how neurons in brain regions interact with each other, how they receive information from the environment, how they move the muscles, and how they change with time. The idea is to have a loop with the experimentalists. They will give us some data, we will try to come up with these explicit models, and then, if those models are precise enough, they can be simulated on a computer. So even though we’re using a serial computer, we can simulate parallel processes. We take many tiny steps on the serial computer to simulate one time step of the parallel computation.
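To make the serial-versus-parallel point concrete, here is a minimal sketch in Python of the kind of simulation being described; the network, weights, and parameters are invented for illustration, not taken from any particular model. A serial loop computes each neuron’s next state from the frozen previous state, so many small serial steps reproduce one parallel time step of the whole network.

```python
import math
import random

# Toy network (invented for illustration): N "rate" neurons obeying
#   tau * dx_i/dt = -x_i + sum_j W[i][j] * sigmoid(x_j) + input_i
# In the brain all neurons update in parallel; here a serial loop
# computes every neuron's next state from the *previous* state, so
# many tiny serial steps simulate one parallel time step.

N = 5
DT, TAU = 0.01, 0.1                      # integration step and time constant
random.seed(0)
W = [[random.gauss(0, 0.5) for _ in range(N)] for _ in range(N)]
x = [0.0] * N                            # initial firing-rate state

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def step(x, external):
    """Advance the whole network by one (parallel) time step, serially."""
    old = list(x)                        # freeze the previous state
    new = []
    for i in range(N):                   # serial loop over neurons
        drive = sum(W[i][j] * sigmoid(old[j]) for j in range(N))
        dx = (-old[i] + drive + external[i]) / TAU
        new.append(old[i] + DT * dx)     # Euler update
    return new

for t in range(1000):                    # 10 simulated seconds
    x = step(x, external=[1.0] + [0.0] * (N - 1))
print([round(v, 3) for v in x])
```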
A good example of computer simulation (but not neuroscience) is the Apollo 13 space flight and the whole problem of getting the spacecraft back to Earth. It’s a very difficult problem. If you worry about oxygen running out, you have to go as quickly as possible. But if you worry about the fuel running out, you’d better go more slowly and not burn up all the fuel. It took many different computer simulations to find a scenario that would thankfully get the crew back before either the oxygen or the fuel ran out. Clearly, the computer is not a spacecraft, but, thanks to Isaac Newton, engineers could come up with equations which describe the motion of the spacecraft in sufficient detail to get reliable understanding from running programs on the serial computer.
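In the same spirit, a toy version of that fuel-versus-oxygen tradeoff might look like the sketch below. All physics and numbers are invented, but it shows how sweeping candidate burn rates through a simulated Newtonian trajectory picks out the scenarios in which neither resource runs out.

```python
# Toy "Apollo 13" parameter sweep (all physics and numbers invented):
# a 1-D return trajectory integrated step by step, swept over candidate
# burn rates to find one where neither oxygen nor fuel runs out.

DT = 1.0                    # time step (hours)
DISTANCE = 400_000.0        # km left to travel (illustrative)
OXYGEN_HOURS = 90.0         # hours of oxygen remaining
FUEL, RESERVE = 100.0, 5.0  # fuel on board; minimum needed for re-entry

def simulate(burn_rate):
    """Serially integrate the trajectory for one candidate burn rate."""
    pos, vel, fuel, t = 0.0, 4000.0, FUEL, 0.0   # start coasting at 4000 km/h
    while pos < DISTANCE:
        if t > OXYGEN_HOURS:
            return "oxygen exhausted en route"
        if fuel > 0.0:
            vel += 20.0 * burn_rate * DT         # burning fuel accelerates us
            fuel -= burn_rate * DT
        pos += vel * DT                          # Euler step of the motion
        t += DT
    if fuel < RESERVE:
        return "too little fuel left for re-entry"
    return f"home in {t:.0f} h with {fuel:.1f} fuel to spare"

for rate in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"burn rate {rate:.1f}: {simulate(rate)}")
```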
Likewise for computational neuroscience — we analyze the brain to the point that we have our own version, if you will, of Newton’s equations for systems in the brain, and then we can use an ordinary computer to carry out computations that will show us how the system will behave with different assumptions. Computational neuroscience describes the actual processes whereby neurons in brain regions interact with each other with such detailed precision that we can then explore how the brain works through computer simulation.
That’s part one. Part two would be how we can use that as an inspiration for artificial intelligence in controlling a robot, or doing machine learning, or things of that kind. But only some work in artificial intelligence is inspired by studying the brain.
BW: What is “neuromorphic architecture”?
MA: I just have to give you one warning, which is that the term “neuromorphic architecture” is used in another sense by some people. Computer scientists now use chips that are specially designed to operate in parallel, working a little bit like, for instance, the retina doing visual processing. So they might use “neuromorphic architecture” to mean the design of brain-inspired computer circuitry.
However, we’re now switching from a context framed by computer science, artificial intelligence, or neuroscience to one framed by architecture and the traditional sense of architecture as the design of the built environment. When I first went to a meeting of the Academy of Neuroscience for Architecture (ANFA), there was a presentation which emphasized the idea of monitoring signs of brain activity when people were exposed to different features of architecture, perhaps using virtual reality. I call that the neuroscience of the architectural experience.
But I had a different perspective based on the interplay between brain theory and artificial intelligence with my students at USC. I gave them a course project, which got them to think about building rooms where there were cameras and different types of things the room could do and a network to integrate the sensory and the motor experience. So when I was at the ANFA meeting, it inspired me to coin the term “neuromorphic architecture” in this new sense of what happens when you have a building that includes perception, action, control, and memory that is inspired by a study of the brain.
BW: Are you trying to give a building a human brain?
MA: Let me remind you of artificial intelligence. We don’t try to create a complete human intelligence; rather, we try to come up with packets of intelligence: getting a robot to navigate effectively might be one example, having Siri answer questions from a database might be another. But we’re not trying to get them to do everything a human does. They’re going to be much better than a human at doing arithmetic and perhaps much worse at face recognition.
Similarly with neuromorphic architecture. It’s not that we want the building to be human. It’s that we want to ask what new things could we want the building to have that can be inspired by both artificial intelligence and computational neuroscience.
It could simply be that you have a security system that does face recognition so that a door is locked unless the face is recognized. That would be one simple packet of intelligence you could give the building.
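As a sketch of what such a packet might look like in code (everything here is hypothetical; the recognizer is a stub standing in for a real camera and face-recognition model):

```python
# Hypothetical "packet of intelligence": a door that stays locked
# unless an enrolled face is recognized. The recognizer is a stub;
# a real system would put a camera and a face-recognition model
# (for instance one built with OpenCV) behind the same interface.

ENROLLED = {"alice", "bob"}            # occupants allowed through this door

def recognize_face(frame):
    """Stub recognizer: returns an identity string or None."""
    return frame.get("label")          # our fake camera frames carry a label

def door_controller(camera_feed):
    """Yield the door state for each camera frame."""
    for frame in camera_feed:
        identity = recognize_face(frame)
        yield "unlocked" if identity in ENROLLED else "locked"

# Simulated feed: nobody, a stranger, then an enrolled occupant.
feed = [{"label": None}, {"label": "mallory"}, {"label": "alice"}]
print(list(door_controller(feed)))     # ['locked', 'locked', 'unlocked']
```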
Another has to do with way-finding: how, for example, you find the toilets in a new building. There are various standard patterns, such as placing them symmetrically on either side of the elevator, or putting the men’s and women’s rooms in the same place on alternate floors. So we have certain ways of providing cues through the static layout of the building itself, based on knowledge of societal customs. We also have signage, where we may decide that the building layout by itself can’t tell the story, but that by adding signs in appropriate places we can help people find their way.
The next level up is designing a system to help you find something in the building. Maybe you want the office of Mr. X. Today you would start by looking for a receptionist and asking for directions, or you would just knock on somebody’s door and hope they had heard of Mr. X. With neuromorphic architecture, you would be able to talk to the building, and the building would have enough Siri-like capacity for speech understanding and question answering to understand your question. It would know enough about the building to direct you to the place and, depending on the technology you envisage, it could perhaps put up little flashing arrows on the wall for you to follow. That’s fine in a low-density building, i.e., one without much foot traffic, but if there are 20 people in the corridor, each trying to find their way, you might have to give them 20 different colored arrows. That could become hopeless.
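One way to picture the building’s way-finding knowledge is as a graph of spaces, with the directions it gives computed as a shortest path. A minimal sketch, with an invented floor plan, using breadth-first search:

```python
from collections import deque

# Hypothetical floor plan as a graph: nodes are spaces, edges are
# doorways and corridors. The building answers "where is office 201?"
# by finding a shortest path from the visitor's location, which it
# could then turn into step-by-step cues (e.g. arrows on the wall).

FLOOR_PLAN = {
    "lobby": ["corridor_a", "elevator"],
    "elevator": ["lobby", "corridor_b"],
    "corridor_a": ["lobby", "office_101", "toilets"],
    "corridor_b": ["elevator", "office_201"],
    "office_101": ["corridor_a"],
    "office_201": ["corridor_b"],
    "toilets": ["corridor_a"],
}

def shortest_route(start, goal):
    """Breadth-first search: returns the list of spaces to walk through."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in FLOOR_PLAN[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# "Where is office 201?" asked from the lobby:
print(" -> ".join(shortest_route("lobby", "office_201")))
# lobby -> elevator -> corridor_b -> office_201
```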
One idea being developed by several different companies is smart spectacles. We’re talking here about augmented reality. Today you hold your phone up, and it recognizes where you are by using GPS, so instead of looking directly at the building, you’re looking at the building as seen by your smartphone. One of the developments now in prototype is glasses where a segment of your visual field is taken up by a heads-up display, which could augment reality for you. In this scenario, the neuromorphic architecture could work on two levels: one for someone without special technology, the other for a person with these smart glasses who can interact with the building in this direct fashion.
So what I’m adding to an artificial-intelligence scenario is the notion that we are increasingly understanding how the brain region called the hippocampus plays into the animal’s ability to do way-finding for itself and how architects might build on that understanding.
Another effort is to think through the ways mirror neurons allow us to link our own actions to the actions of others and ponder what it might take to allow a building to respond constructively to the behavior of its inhabitants.
Michael A. Arbib’s “Brains, Machines, and Buildings: Towards a Neuromorphic Architecture” explores these ideas in more depth.
This article was originally published in the Fall 2013 issue of Brain World Magazine.