BOOK: Connectome
Authors: Sebastian Seung

Carving, codebreaking, and comparing connectomes, as I envisioned them in Part IV, will depend on computers for analyzing large amounts of data, but they will not require the simulation of neural spiking. Having done some simulations myself, I regard this as a virtue. Analyzing data is less likely to lead us astray. Starting from the data, we extract what knowledge we can, with a minimum of assumptions. Simulation, in contrast, starts from the wish to reproduce an interesting phenomenon and tries to find the data necessary to do it. Wishful thinking can be dangerous if it's not based on reality. In the past, we've had to incorporate all kinds of assumptions into our models that are not backed up by empirical data. But connectomics and other methods of measuring from real brains are becoming more sophisticated. With better data, we'll be able to make our brain models more realistic. There's no denying that simulation will be a powerful way of doing neuroscience, when we can do it right.

Earlier I described how we might someday read a memory from a connectome by unscrambling its neurons to find a synaptic chain. This would enable us to guess the order in which neurons spike during recall of a sequential memory. An alternative approach is to use the connectome to build a computer simulation of the spiking of neurons in a network, then run the simulation and watch the neurons to see the order in which they spike during memory recall. It's only natural to dream of scaling up this approach to an entire brain. Uploading is the ultimate way of testing the hypothesis “You are your connectome.”

Researchers have engaged in protracted debates over the proper way to simulate the brain. The discussion of uploading in this chapter will raise all of the same conceptual difficulties, though—I hope—in more vivid form. Let's consider the first question that any modeler must answer: What constitutes success?

 

The promises of Alcor, resurrection and eternal youth, are easy to imagine. But uploading is a different story. What would it be like to live as a simulation inside a computer? Would you feel bored and lonely?

This question has been explored by the "brain in a vat" scenario, a staple of science fiction and college philosophy courses. Suppose that a mad scientist captures you, removes your brain, and manages to keep it alive and functioning in a vat of chemicals. Neural activity would still come and go, but would have no relation to the external world because of your brain's disembodiment. The isolation would far exceed lying in bed and closing your eyes. Severed from your sense organs and muscles, you would be enclosed in the darkest, most solitary confinement possible.

It's not a pretty picture, but uploaders need not worry. Any future civilization advanced enough to create a brain simulation would also be able to handle its input and output. Actually, input and output would be easy in comparison, because the connections between the brain and the external world are far less numerous than the connections within the brain. The optic nerve, which connects the eye to the brain, carries visual input through its million axons. That may sound like a lot, but there are many more axons running within the brain. (Most of the brain's 100 billion neurons have axons.) On the output side, the pyramidal tract carries signals from the motor cortex to the spinal cord, so that the brain can control movement of the body. Like the optic nerve, the pyramidal tract contains a million axons. Therefore, our future civilization could hook the simulation up to cameras and other sensors, or to an artificial body. If these "peripherals" were well crafted, the uploaded would be able to smell a rose and enjoy all the other pleasures of the real world.

But why stop at simulating the brain? Why not the world too? The uploaded could smell a virtual rose and pal around with other simulated brains. Many people seem to prefer virtual worlds these days anyway, judging from the time and money spent on computer games. And who knows? Maybe our physical world is actually a virtual world. If it were, would we have any way of knowing? Some physicists and philosophers—and those modern-day sages known as movie directors—suggest that we and the entire universe are actually simulations running on a gigantic computer. We may dismiss this idea as absurd, but logical reasoning cannot exclude it.

If the simulation feels exactly the same as reality, then living as a simulation will be just as much fun as real living. (Or for those who don't like the latter much, let's put it this way: Living as a simulation won't be any worse.) Audiophiles attempt to achieve “high fidelity” through electronic systems that faithfully reproduce a live musical performance. Uploaders will be obsessed by verisimilitude of a much more important kind. They can only hope for a very good approximation, not an exact replica. How accurate is accurate enough?

Most problems in computer science are straightforward to define. If we want to multiply two numbers, it's clear what success means. The goal of artificial intelligence (AI) is more difficult to state precisely. The mathematician Alan Turing provided an operational definition in 1950. He imagined a test in which an examiner interrogates a human and a machine. The examiner's task is to decide which is which. This might sound easy, but there is a catch: The interrogation is conducted by typing and reading text, in the style of Internet "chat." This prevents the examiner from distinguishing by appearance, sound, or other properties that Turing deemed irrelevant to intelligence. Now suppose that many examiners attempt the task. If this panel cannot come to the correct consensus, then we can declare the machine a successful example of AI.

Turing proposed his test to evaluate generic AI. We can easily refine it to measure success at simulating a specific person. Just restrict the examiners to friends and family, those who know the person best. If they are unable to distinguish between reality and simulation, then uploading has been successful.
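The panel criterion can be made concrete with a toy sketch. The majority-vote rule below is an illustrative assumption for the sake of the example, not Turing's exact formulation:

```python
from collections import Counter

def panel_verdict(guesses, truth="machine"):
    """Return True if the panel fails to reach a correct consensus,
    i.e. the machine (or the upload) passes the test.

    guesses: each examiner's verdict about which chat partner is
    the machine ("machine" or "human").
    """
    tally = Counter(guesses)
    top, count = tally.most_common(1)[0]
    # A correct consensus requires a strict majority on the truth.
    majority_correct = (top == truth) and (count > len(guesses) / 2)
    return not majority_correct

# A split panel cannot convict the machine, so it passes:
print(panel_verdict(["machine", "human", "human", "machine"]))  # True
# A unanimous correct panel means the simulation failed the test:
print(panel_verdict(["machine"] * 5))  # False
```

On this reading, success is not fooling every examiner individually, but preventing the panel as a whole from converging on the right answer.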

Should sight and sound be barred from the specific Turing test, as they were from the generic version? You might balk at this, since voice and smile seem integral to the experience of loving someone. But people have fallen in love through Internet chat and email, before ever meeting each other. The surgical procedure of tracheotomy, which cuts a hole into the windpipe to relieve obstructed breathing, has the side effect of damaging the voice, yet everyone agrees that it's the same person afterward. A final reason to exclude the body from the test is that uploaders hope to escape their bodies. It's only their minds they care about preserving.

Will friends and family be vigilant enough to detect all differences between the simulation and the real person? Historical cases of impostors don't inspire confidence. In the sixteenth century, a man appeared in the French village of Artigat claiming to be Martin Guerre, who had been missing for eight years. He moved in with Guerre's wife and had children with her. Eventually accused of being an impostor, the “new” Guerre was acquitted at the first trial but found guilty at the second. He was on the verge of winning his appeal when another man dramatically appeared and claimed to be the real Guerre. All family members were suddenly unanimous in declaring the new Guerre—the man on trial—an impostor. He was convicted, and confessed to his crime shortly before his execution.

The new Guerre had excelled at imitation, failing only at side-by-side comparison. He might have survived a proper Turing test, conducted without sight or sound, as the real Guerre turned out not to remember his married life that well.

This and other cases of impostors show that friends and family are not perfect judges of personal identity. But if the differences are too subtle to be noticed, perhaps they don't matter. And even if they are noticeable, the simulation might not be considered a complete failure. Victims of brain damage are not the same after their injury, yet they are still accepted by others. If friends and family are the “customers” for uploading, their satisfaction is all that counts.

Then again, maybe the real customer is you, the person who wants to be uploaded. Of course it's important that your friends and family welcome the digitized you. But it's even more important that you be satisfied. This issue leads us onto shaky ground, but we can't avoid confronting it.

Suppose that you are uploaded to a computer. I turn the power switch on for the first time, and the simulation starts to run. I'm sure I would ask you, “How do you feel?” as if you were waking up from a deep sleep, or coming out of a coma. How would you reply?

The Turing test strives for objectivity by appealing to external examiners, but it would be silly to ignore subjective evaluation. Surely I'd want to ask your uploaded self, “Are you satisfied with your simulation?” We would never ask this of an equation that models a chemical reaction or a black hole, but it would be completely appropriate for a brain simulation.

At the same time, it's not clear whether I should believe your response. If your brain simulation malfunctions, you might act like a victim of brain damage. Neurologists know that such victims often deny their problems. Amnesics, for example, sometimes accuse others of deceiving them when they have memory lapses. Stroke victims don't always acknowledge paralysis, and may contrive fantastic explanations as to why they cannot perform certain tasks. Your subjective opinion simply might not be reliable.

Yet one could certainly argue that it's your opinion that should count the most. The satisfaction of your friends and family would depend on how well your simulation conforms to their expectations of your behavior. These expectations would be based on models of you, which they have constructed through years of observing your behavior. But you also have a self-model based on introspection as well as self-observation. Your self-model is based on far more data than someone else's model of you.

Perhaps there have been times when you've thought, "I'm not feeling like myself today." Maybe you've lost your temper over something trivial, or behaved in some other way that you found uncharacteristic. But usually you behave in a way that you expect. Your self-model would presumably be uploaded along with all your other memories. You would be able to check the fidelity of your simulation by continuously comparing your behavior with the predictions of your self-model. The more accurate the simulation, the fewer the inconsistencies.

Now let's suppose that uploading has been judged successful by both objective and subjective criteria. Your friends and family say they are satisfied. You (your simulation, that is) say you are satisfied. Can we now declare the uploading a success? There's one final catch: We do not have direct access to your feelings. Even if you say you feel fine, how do we know that you feel anything at all? Perhaps you're just going through the motions. What if uploading turned you into a zombie?

Some philosophers believe that it's fundamentally impossible to simulate consciousness on a computer. They say that a simulation of water, no matter how accurate, isn't actually wet. Similarly, your simulation might seem accurate to your friends and family, and might even proclaim its satisfaction, while still lacking the subjective experiences that we call consciousness. That may not seem bad, but it certainly doesn't sound like a route to immortality.

There is no way to refute the zombie idea, because there is no objective way to measure subjective feelings. In fact, the idea is so powerful that it can be applied to real brains as well as simulations. For all you know, your dog could be a zombie. It may act hungry, but it doesn't really have the feeling of hunger. (The French philosopher René Descartes argued that animals are zombies because they lack souls.) For all I know, you're a zombie too. There is no proof otherwise, because no person has direct experience of anyone else's feelings. Yet most people, especially pet lovers, believe that animals can feel pain. And virtually everyone believes that other humans feel pain.

I don't see any way to resolve such philosophical debates. It's just your intuition against mine. Personally, I think that a sufficiently accurate brain simulation would be conscious. The real difficulty is not philosophical but practical: Can that level of accuracy really be achieved?

 

Henry Markram has become famous as the creator of the world's most expensive brain simulation, but neuroscientists know him best for his pioneering experiments on synapses. Markram was one of the first to investigate the sequential version of Hebb's rule in a systematic way, by varying the time delay between the spiking of the two neurons when inducing synaptic plasticity. When I first heard Markram speak at a conference, I also encountered the chain-smoking and charming Alex Thomson, another prominent neuroscientist, who lectured about synapses with bubbling enthusiasm. She was in love with them, and wanted us to love them too. Markram, in contrast, came across as the high priest of synapses, summoning our awe and respect for their intricate mysteries.
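The timing dependence Markram probed is now commonly modeled as spike-timing-dependent plasticity: the synapse strengthens when the presynaptic neuron fires just before the postsynaptic one, and weakens when the order is reversed. A standard exponential model of that rule can be sketched in a few lines; the parameter values here are illustrative assumptions, not Markram's measured data:

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Synaptic weight change as a function of spike timing.

    dt_ms = t_post - t_pre. Positive dt_ms (pre fires first) gives
    potentiation; negative dt_ms (post fires first) gives depression.
    The effect decays exponentially with the time gap.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)   # strengthen
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau)  # weaken
    return 0.0

# Pre fires 10 ms before post: the synapse strengthens.
print(stdp_weight_change(10.0) > 0)   # True
# Pre fires 10 ms after post: the synapse weakens.
print(stdp_weight_change(-10.0) < 0)  # True
```

The asymmetry around zero is what makes the rule "sequential": it cares not just whether two neurons fire together, but in which order.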

In a 2009 lecture Markram promised a computer simulation of a human brain within ten years, a sound bite that traveled around the world. If you view the video of the lecture online, you might agree with me that his handsomely sculpted face looks a bit fierce, but his manner of speaking is gentle and inviting, with the quiet conviction of a visionary. He didn't sound so calm later that year. His competitor, the IBM researcher Dharmendra Modha, announced a simulation of a cat brain, after having claimed a mouse brain simulation in 2007. Markram responded with an angry letter to IBM's chief technology officer:

 

Dear Bernie,

You told me you would string this guy up by the toes the last time Mohda [sic] made his stupid statement about simulating the mouse's brain.

I thought that . . . journalists would be able to recognize that what IBM reported is a scam—no where near a cat-scale brain simulation, but somehow they are totally deceived by these incredible statements.
