The Transposed Mind: How Self-Aware Are We?

(Editor’s note: This article from Drew Turney is from the Fall 2015 issue of Brain World magazine.)

In the movie “Self/less,” Ben Kingsley plays a wealthy industrialist who has only a few months left to live. So he hires an enigmatic medical organization to transfer his consciousness into the body of a younger man, one they tell him has been grown in a lab, an empty vessel of youth, strength, and vigor waiting for him to enjoy. As plot lines go, the idea of transferring the mind into a new body — or even another substrate, like an electronic machine — isn’t new. We saw it in the highest-grossing film of all time, James Cameron’s “Avatar,” as well as in the classic anime “Ghost in the Shell,” and even in last year’s “Transcendence.” But with advances in neuroscience and computing power that would have seemed like science fiction just a decade ago, might such a transposition actually be possible?

Some of the breakthroughs of the late 20th century in both data processing and cognition were achieved following the realization that brains and computers are actually quite similar. The on/off states of a computer bit are uncannily like the firing/dormant states of neurons.

If such mechanics gave rise to something as detailed and rich as human consciousness in the latter, couldn’t this awareness somehow be captured and transposed onto the digital body of the former?


When it comes to capturing and transferring the activity of the mind, imagine some sort of neural scanner that can read the on/off states of every neuron in your brain, and is then able to transpose these into a computer program or even into another human brain. Might the “I” you can feel living in your body shake its new head, blink its eyes, and say “I think, therefore I am”?

Even if we’re not at that stage yet, surely we have the computing power to isolate and transport a certain “piece” of “mind stuff” — an individual instance of subjective, conscious experience, what analytic philosophy calls a “quale” (the plural is “qualia”).

Why isn’t there an iPhone app that can scan and send me the knowledge of your spouse’s birthday you hold inside your head, or send you a few seconds of my experience of the immediate environment I was in while writing this very paragraph?

Baroness Susan Greenfield is a British scientist and author who’s been writing about minds and brains for decades. Beyond her controversial views on how exposure to technology affects development in the young, her research focuses on Parkinson’s and Alzheimer’s diseases — and she reminds us that the brain isn’t a Google database from which something like a fact can simply be downloaded and transmitted.

“Say you know the word for ‘table’ in French,” she says. “That’s called semantic memory, which is memory for facts that aren’t really personal to you. Now say you went to the seaside with Auntie Flo when you were 5 years old. Each time you recall it, it will be from a different perspective colored by changing attitudes to Auntie Flo, changing attitudes to holidays and other associations. Every memory you have is nested in other memories and other values, so it’s not as if you can take a snapshot and download it.”

As such, the memory of that seaside holiday when you were 5 years old isn’t a discrete “piece” of information, and, as it turns out, “qualia” is only a philosophical term. In the world of physics and biochemistry, a subjective memory or any other quale (and the changes it undergoes) might be located in neural networks or maps spread across the physical brain, an organ that itself changes its shape, position, and configuration all the time.

As a matter of fact, this is one way a brain really is like a computer — the icon for a file on your desktop may look like a singular entity, but the bits and bytes that comprise it are spread all over your computer’s hard disk as magnetic impulses, and moreover, the file’s position and arrangement shift every time you edit and re-save it or any other file on your system.


All of which means that no memory, thought, or emotion exists in isolation. Omit one neural impulse that fires when a memory is recalled or an incident experienced and you’re likely to miss or mix up critical data in the transfer. Maybe, as a result, you’ll end up with a memory of a holiday with Uncle Max, even though he wasn’t there. Maybe you’ll be certain that it was Martha’s Vineyard and not Laguna Beach that you visited. Maybe you’ll recall the ocean being purple.

So might the obvious solution be — as we wondered above — to scan the entire brain and take a “snapshot” of the brain’s state? If the particular arrangement of your brain in any given moment gives rise to a specific state of mind, won’t capturing this neural arrangement theoretically let you move your state of consciousness somewhere else?

We’ve entered the realm of functionalism, a school of thought holding that everything we think, feel, remember, and know arises simply from the mechanical architecture of brain cells and the electrical signals passing between their synapses.

Let’s say we had the technology to replace a single neuron with a futuristic version of a vacuum tube or microprocessor that did its job exactly the same way. As you got older and succumbed to the inevitable physical breakdown of aging, this exchange would be performed on more and more components, until eventually your entire biological brain had been replaced with bits of machinery. Would the “I” you can feel living inside you still be there?

Functionalism says it would, and it’s a conviction we already extend to the rest of the body with surprising ease. If you lost an arm in an accident and received a prosthetic replacement, would you feel any less “you”?

As such, functionalism contends that if we can create a brain from a lab-grown sample or computer program and then move the “brain state snapshot” of your mind onto it, it will go on to feel, emote, and experience everything to the same degree you can.


A follow-on effect of functionalism that many find disturbing is that the “I” inside us is just a byproduct of the brain reacting instinctively to the environment, no differently than we assume it does in an earthworm or a flea.

Computer scientist Anthony Simola explores the subject in his book, “The Roving Mind: A Modern Approach to Cognitive Enhancement.” Cognitive scientists Marvin Minsky and Steven Pinker have both argued that consciousness is an illusion constructed by the brain’s subcomponents, and that, as such, there is no real “I” inside our heads making independent decisions: “Indeed, one of the longstanding tenets of neuroscience and philosophy is that minds are what brains do — in other words, our consciousness arises from electrical signals and there is no soul or ghost in the machine, so to speak. Consciousness, then, is a mere hallucination — we only feel like we are in charge and make free decisions, whereas in reality our decisions are dictated to us by the laws of physics and the motion of particles that were put in place 13.7 billion years ago at the birth of our Universe.”

Along these lines, Simola explains that if functionalism works, it’s not because we can replace bits of your brain with technology and expect “you” to remain inside it; it’s because there’s no “you” to begin with. “What does exist — and only as an abstraction — is a pattern of continuation that can be stored somewhere as information, including your DNA, your memories, and the infinite number of iterations of your persona that followed each other.”
