
Now, any kid who grows up going to Sunday school knows that this is a touchy point of Christian theology. All kids ask uncomfortable questions once their pets start to die, and tend to get relatively awkward or ad hoc answers. It comes up all over the place in mainstream culture too, from the deliberately provocative title of All Dogs Go to Heaven to the wonderful moment in Chocolat when the new priest, tongue-tied and flummoxed by a parishioner’s asking whether it was sinful for his (soulless) dog to enter a sweet shop during Lent, summarily prescribes some Hail Marys and Our Fathers and slams the confessional window. End of discussion.

Where some of the Greeks had imagined animals and even plants as “ensouled”—Empedocles thinking he’d lived as a bush in a past life—Descartes, in contrast, was firm and unapologetic. Even Aristotle’s idea of multiple souls, or Plato’s of partial souls, didn’t satisfy him. Our proprietary, uniquely human soul was the only one. No dogs go to heaven.

The End to End All Ends: Eudaimonia

Where is all this soul talk going, though? To describe our animating force is to describe our nature, and our place in the world, which is to describe how we ought to live.

Aristotle, in the fourth century B.C., tackled the issue in The Nicomachean Ethics. The main argument of The Nicomachean Ethics, one of his most famous works, goes a little something like this. In life there are means and ends: we do x so that y. But most “ends” are just, themselves, means to other ends. We gas up our car to go to the store, go to the store to buy printer paper, buy printer paper to send out our résumé, send out our résumé to get a job, get a job to make money, make money to buy food, buy food to stay alive, stay alive to … well, what, exactly, is the goal of living?

There’s one end, only one, Aristotle says, which doesn’t give way to some other end behind it. The name for this end, εὐδαιμονία in Greek—we write it “eudaimonia”—has various translations: “happiness” is the most common, and “success” and “flourishing” are others. Etymologically, it means something along the lines of “well-being of spirit.” I like “flourishing” best as a translation—it doesn’t allow for the superficially hedonistic or passive pleasures that can sometimes sneak in under the umbrella of “happiness” (eating Fritos often makes me “happy,” but it’s not clear that I “flourish” by doing so), nor the superficially competitive and potentially cutthroat aspects of “success” (I might “succeed” by beating my middle school classmate at paper football, or by getting away with massive investor fraud, or by killing a rival in a duel, but again, none of these seems to have much to do with “flourishing”). Like the botanical metaphor underneath it, “flourishing” suggests transience, ephemerality, a kind of process-over-product emphasis, as well as the sense—which is crucial in Aristotle—of doing what one is meant to do, fulfilling one’s promise and potential.

Another critical strike against “happiness”—and a reason that it’s slightly closer to “success”—is that the Greeks don’t appear to care about what you actually feel. Eudaimonia is eudaimonia, whether you recognize and experience it or not. You can think you have it and be wrong; you can think you don’t have it and be wrong.

Crucial to eudaimonia is ἀρετή—“arete”—translated as “excellence” and “fulfillment of purpose.” Arete applies equally to the organic and the inorganic: a blossoming tree in the spring has arete, and a sharp kitchen knife chopping a carrot has it.

To borrow from a radically different philosopher—Nietzsche—“There is nothing better than what is good! and that is: to have a certain kind of capacity and to use it.” In a gentler, slightly more botanical sense, this is Aristotle’s point too. And so the task he sets out for himself is to figure out the capacity of humans. Flowers are meant to bloom; knives are meant to cut; what are we meant to do?

Aristotle’s Sentence; Aristotle’s Sentence Fails

Aristotle took what I think is a pretty reasonable approach and decided to address the question of humans’ purpose by looking at what capacities they had that animals lacked. Plants could derive nourishment and thrive physically; animals seemed to have wills and desires, and could move and run and hunt and create basic social structures; but only humans, it seemed, could reason.

Thus, says Aristotle, the human arete lies in contemplation—“perfect happiness is a kind of contemplative activity,” he says, adding for good measure that “the activity of the gods … must be a form of contemplation.” We can only imagine how unbelievably convenient a conclusion this is for a professional philosopher to draw—and we may rightly suspect a conflict of interest. Then again, it’s hard to say whether his conclusions derived from his lifestyle or his lifestyle derived from his conclusions, and so we shouldn’t be so quick to judge. Plus, who among us wouldn’t have some self-interest in describing their notion of “the most human human”? Still, despite the grain of salt that “thinkers’ praise of thinking” should have been taken with, the emphasis they placed on reason seemed to stick.

The Cogito

The emphasis on reason has its backers elsewhere in Greek thought, not just in Aristotle. The Stoics, as we saw, also shrank the soul’s domain to that of reason. But Aristotle’s view on reason is tempered by his belief that sensory impressions are the currency, or language, of thought. (The Epicureans, the rivals of the Stoics, believed sensory experience—what contemporary philosophers call qualia—rather than intellectual thought, to be the distinguishing feature of beings with souls.) But Plato seemed to want as little to do with the actual, raw experience of the world as possible, preferring the relative perfection and clarity of abstraction, and, before him, Socrates spoke of how a mind that focused too much on sense experience was “drunk,” “distracted,” and “blinded.”

Descartes, in the seventeenth century, picks up these threads and leverages the mistrust of the senses toward a kind of radical skepticism: How do I know my hands are really in front of me? How do I know the world actually exists? How do I know that I exist?

His answer becomes the most famous sentence in all of philosophy. Cogito ergo sum. I think, therefore I am.

I think, therefore I am—not “I register the world” (as Epicurus might have put it), or “I experience,” or “I feel,” or “I desire,” or “I recognize,” or “I sense.” No. I think. The capacity furthest away from lived reality is that which assures us of lived reality—at least, so says Descartes.

This is one of the most interesting subplots, and ironies, in the story of AI, because it was deductive logic, a field that Aristotle helped invent, that was the very first domino to fall.

Logic Gates

It begins, you might say, in the nineteenth century, when the English mathematician and philosopher George Boole works out and publishes a system for describing logic in terms of conjunctions of three basic operations: AND, OR, and NOT. The idea is that you begin with any number of simple statements, and by passing them through a kind of flowchart of ANDs, ORs, and NOTs, you can build up and break down statements of essentially endless complexity. For the most part, Boole’s system is ignored, read only by academic logicians and considered of little practical use, until in the mid-1930s an undergraduate at the University of Michigan by the name of Claude Shannon runs into Boole’s ideas in a logic course, en route to a dual degree in mathematics and electrical engineering. In 1937, when he is a twenty-one-year-old graduate student at MIT, something clicks in his mind; the two disciplines bridge and merge like a deck of cards. You can implement Boolean logic electrically, he realizes, and in what has been called “the most important master’s thesis of all time,” he explains how. Thus is born the electronic “logic gate”—and soon enough, the processor.
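
To make the bridge concrete, here is a minimal sketch in Python (my own illustration, not Shannon’s actual circuitry, and the function names are invented for the example) of how AND, OR, and NOT compose into a one-bit “half adder,” the simplest circuit in which pure logic starts doing arithmetic:

    # A sketch, not Shannon's design: the three Boolean basics,
    # composed into a circuit that adds two one-bit numbers.
    def AND(a, b): return a and b
    def OR(a, b):  return a or b
    def NOT(a):    return not a

    def half_adder(a, b):
        carry = AND(a, b)                      # 1 only when both inputs are 1
        total = AND(OR(a, b), NOT(AND(a, b)))  # exclusive-or, built from the basics
        return total, carry

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), "->", [int(x) for x in half_adder(a, b)])

Chain the carry of one such circuit into the next and you can add numbers of any width, which is, at bottom, what a processor spends its time doing.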

Shannon notes, also, that you might be able to think of numbers in terms of Boolean logic, namely, by thinking of each number as a series of true-or-false statements about the numbers that it contains—specifically, which powers of 2 (1, 2, 4, 8, 16 …) it contains, because every integer can be made from adding up at most one of each. For instance, 3 contains 1 and 2 but not 4, 8, 16, and so on; 5 contains 4 and 1 but not 2; and 15 contains 1, 2, 4, and 8. Thus a set of Boolean logic gates could treat them as bundles of logic, true and false, yeses and noes. This system of representing numbers is familiar to even those of us who have never heard of Shannon or Boole—it is, of course, binary.
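
The same decomposition in a few lines of Python (an illustrative sketch; the helper name is mine, not a standard library function):

    # Decompose an integer into the powers of 2 it "contains":
    # the series of yes/no answers that is its binary representation.
    def powers_of_two(n):
        powers, bit = [], 1
        while n:
            if n & 1:              # does n "contain" this power of 2?
                powers.append(bit)
            n >>= 1                # shift to the next binary digit
            bit <<= 1              # the next power of 2
        return powers

    print(powers_of_two(3))   # [1, 2]        -> binary 11
    print(powers_of_two(5))   # [1, 4]        -> binary 101
    print(powers_of_two(15))  # [1, 2, 4, 8]  -> binary 1111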

Thus, in one fell swoop, the master’s thesis of twenty-one-year-old Claude Shannon will break the ground for the processor and for digital mathematics. And it will make his future wife’s profession—although he hasn’t met her yet—obsolete.

And it does more than that. It forms a major part of the recent history—from the mechanical logic gates of Charles Babbage through the integrated circuits in our computers today—that ends up amounting to a huge blow to humans’ unique claim to and dominance of the area of “reasoning.” Computers, lacking almost everything else that makes humans humans, have our unique piece in spades. They have more of it than we do. So what do we make of this? How has this affected and been affected by our sense of self? How should it?

First, let’s have a closer look at the philosophy surrounding, and migration of, the self in times a little closer to home: the twentieth century.

Death Goes to the Head

Like our reprimanded ogler, like philosophy between Aristotle and Descartes, the gaze (if you will) of the medical community and the legal community moves upward too, abandoning the cardiopulmonary region as the brain becomes the center not only of life but of death. For most of human history, breath and heartbeat were the factors considered relevant for determining whether a person was “dead” or not. But in the twentieth century, the determination of death became less and less clear, and so did its definition, which seemed to have less and less to do with the heart and lungs. This shift was brought on both by the rapidly increasing medical understanding of the brain, and by the newfound ability to restart and/or sustain the cardiopulmonary system through CPR, defibrillators, respirators, and pacemakers. Along with these changes, the increasing viability of organ donation added an interesting pressure to the debate: to declare certain people with a breath and a pulse “dead,” and thus available for organ donation, could save the lives of others.
The “President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research” presented Ronald Reagan in the summer of 1981 with a 177-page report called “Defining Death,” wherein the American legal definition of death would be expanded, following the decision in 1968 of an ad hoc committee of the Harvard Medical School, to include those with cardiopulmonary function (be it artificial or natural) who had sufficiently irreparable and severe brain damage. The Uniform Determination of Death Act, passed in 1981, specifies “irreversible cessation of all functions of the entire brain, including the brain stem.”

Our legal and medical definitions of death—like our sense of what it means to live—move to the brain. We look for death where we look for life.

The bulk of this definitional shift is by now long over, but certain nuances and more-than-nuances remain. For instance: Will damage to certain specific areas of the brain be enough to count? If so, which areas? The Uniform Determination of Death Act explicitly sidestepped questions of “neocortical death” and “persistent vegetative state”—questions that, remaining unanswered, have left huge medical, legal, and philosophical problems in their wake, as evidenced by the nearly decade-long legal controversy over Terri Schiavo (in a sense, over whether or not Terri Schiavo was legally “alive”).

It’s not my intention here to get into the whole legal and ethical and neurological scrum over death, per se—nor to get into the theological one about where exactly the soul-to-body downlink has been thought to take place. Nor to get into the metaphysical one about Cartesian “dualism”—the question of whether “mental events” and “physical events” are made up of one and the same, or two different, kinds of stuff. Those questions go deep, and they take us too far off our course. The question that interests me is how this anatomical shift affects and is affected by our sense of what it means to be alive and to be human.

That core, that essence, that meaning, seems to have migrated in the past few millennia, from the whole body to the organs in the chest (heart, lungs, liver, stomach) to the one in the head. Where next?

Consider, for instance, the example of the left and right hemispheres.
