Alone Together
By Sherry Turkle
By the end of the film, we are left to wonder whether Deckard himself may be an android but unaware of his identity. Unable to resolve this question, we cheer for Deckard and Rachel as they escape to whatever time they have remaining—in other words, to the human condition. Decades after the film’s release, we are still nowhere near developing its androids. But to me, the message of Blade Runner speaks to our current circumstance: long before we have devices that can pass any version of the Turing test, the test will seem beside the point. We will not care whether our machines are clever but whether they love us.
Indeed, roboticists want us to know that the point of affective machines is that they will take care of us. This narrative—that we are on our way to being tended by “caring” machines—is now cited as conventional wisdom. We have entered a realm in which conventional wisdom, always inadequate, is dangerously inadequate. That it has become so commonplace reveals our willingness to take the performance of emotion as emotion enough.
EMOTION ENOUGH
 
When roboticists argue that robots can develop emotions, they begin by asserting the material basis of all thought and take things from there. For example, Rodney Brooks says that a robot could be given a feeling like “sadness” by setting “a number in its computer code.” This sadness, for Brooks, would be akin to that felt by humans, for “isn’t humans’ level of sadness basically a number, too, just a number of the amounts of various neurochemicals circulating in the brain? Why should a robot’s numbers be any less authentic than a human’s?”17
Given my training as a clinician, I tend to object to the relevance of a robot’s “numbers” for thinking about emotion because of something humans have that robots don’t: a human body and a human life. Living in our bodies sets our human “numbers.” Our emotions are tied to a developmental path—from childhood dependence to greater independence—and we experience the traces of our earlier dependencies in later fantasies, wishes, and fears. Brooks speaks of giving the robot the emotion of “sadness.” In a few months, I will send my daughter off to college. I’m both sad and thrilled. How would a robot “feel” such things? Why would its “numbers” even “want” to?
Cynthia Breazeal, one of Brooks’s former students, takes another tack, arguing that robotic emotions are valid if you take care to consider them as a new category. Cats have cat emotions, and dogs have dog emotions. These differ from each other and from human emotions. We have no problem, says Breazeal, seeing all of these as “genuine” and “authentic.” And now, robots will have robot emotions, also in their own category and likewise “genuine” and “authentic.” For Breazeal, once you give robotic emotions their own category, there is no need to compare. We should respect emotional robots as “different,” just as we respect all diversity.18
But this argument confuses the authentic with the sui generis. That the robotic performance of emotion might exist in its own category implies nothing about the authenticity of the emotions being performed. And robots do not “have” emotions that we must respect. We build robots to do things that make us feel as though they have emotions. Our responses are their design template.
Whether we debate the question of robotic emotions in terms of materialism or of categories, we end up in a quandary. Instead of asking whether a robot has emotions, which in the end boils down to how different constituencies define emotion, we should be asking what kind of relationships we want to have with machines. Why do we want robots to perform emotion? I began my career at MIT arguing with Joseph Weizenbaum about whether a computer program might be a valuable dialogue partner. Thirty years later, I find myself debating those who argue, with David Levy, that my daughter might want to marry one.19
Simulation is often justified as practice for real-life skills—to become a better pilot, sailor, or race-car driver. But when it comes to human relations, simulation gets us into trouble. Online, in virtual places, simulation turns us into its creatures. But when we step out of our online lives, we may feel suddenly as though in too-bright light. Hank, a law professor in his late thirties, is on the Net for at least twelve hours a day. Stepping out of a computer game is disorienting, but so is stepping out of his e-mail. Leaving the bubble, Hank says, “makes the flat time with my family harder. Like it’s taking place in slow motion. I’m short with them.” After dinner with his family, Hank is grateful to return to the cool shade of his online life.
Nothing in real life with real people even vaguely resembles the environment (controlled yet with always-something-new connections) that Hank finds on the Net. Think of what is implied by his phrase “flat time.” Real people have consistency, so if things are going well in our relationships, change is gradual, worked through slowly. In online life, the pace of relationships speeds up. One quickly moves from infatuation to disillusionment and back. And the moment one grows even slightly bored, there is easy access to someone new. One races through e-mail and learns to attend to the “highlights.” Subject lines are exaggerated to get attention. In online games, the action often reduces to a pattern of moving from scary to safe and back again. A frightening encounter presents itself. It is dealt with. You regroup, and then there is another. The adrenaline rush is continual; there is no “flat time.”
Sometimes people try to make life with others resemble simulation. They try to heighten real-life drama or control those around them. It would be fair to say that such efforts do not often end well. Then, in failure, many are tempted to return to what they do well: living their lives on the screen. If there is an addiction here, it is not to a technology. It is to the habits of mind that technology allows us to practice.
Online, we can lose confidence that we are communicating or cared for. Confused, we may seek solace in even more connection. We may become intolerant of our own company: “I never travel without my BlackBerry,” says a fifty-year-old management consultant. She cannot quiet her mind without having things on her mind.
My own study of the networked life has left me thinking about intimacy—about being with people in person, hearing their voices and seeing their faces, trying to know their hearts. And it has left me thinking about solitude—the kind that refreshes and restores. Loneliness is failed solitude.20 To experience solitude you must be able to summon yourself by yourself; otherwise, you will only know how to be lonely. In raising a daughter in the digital age, I have thought of this very often.
In his history of solitude, Anthony Storr writes about the importance of being able to feel at peace in one’s own company.21 But many find that, trained by the Net, they cannot find solitude even at a lake or beach or on a hike. Stillness makes them anxious. I see the beginnings of a backlash as some young people become disillusioned with social media. There is, too, the renewed interest in yoga, Eastern religions, meditating, and “slowness.”
These new practices bear a family resemblance to what I have described as the romantic reaction of the 1980s. Then, people declared that something about their human nature made them unlike any machine (“simulated thinking may be thinking, but simulated feeling is never feeling; simulated love is never love”). These days, under the tutelage of imaging technology and neurochemistry, people seem willing to grant their own machine natures. What they rebel against is how we have responded to the affordances of the networked life. Offered continual connectivity, we have said yes. Offered an opportunity to abandon our privacy, so far we have not resisted. And now comes the challenge of a new “species”—sociable robots—whose “emotions” are designed to make us comfortable with them. What are we going to say?
The romantic reaction of the 1980s made a statement about computation as a model of mind; today we struggle with who we have become in the presence of computers. In the 1980s, it was enough to change the way you saw yourself. These days, it is a question of how you live your life. The first manifestations of today’s “push back” are tentative experiments to do without the Net. But the Net has become intrinsic to getting an education, getting the news, and getting a job. So, today’s second thoughts will require that we actively reshape our lives on the screen. Finding a new balance will be more than a matter of “slowing down.” How can we make room for reflection?
QUANDARIES
 
In arguing for “caring machines,” roboticists often make their case by putting things in terms of quandaries. So, they ask, “Do you want your parents and grandparents cared for by robots, or would you rather they not be cared for at all?” And alternatively, “Do you want seniors lonely and bored, or do you want them engaged with a robotic companion?”22
The forced choice of a quandary, posed over time, threatens to become no quandary at all because we come to accept its framing—in this case, the idea that there is only one choice, between robotic caregivers and loneliness. The widespread use of this particular quandary makes those uncomfortable with robotic companions out to be people who would consign an elderly population to boredom, isolation, and neglect.
There is a rich literature on how to break out of quandary thinking. It suggests that sometimes it helps to turn from the abstract to the concrete.23
This is what the children in Miss Grant’s fifth-grade class did. Caught up in a “for or against” discussion about robot caregivers, they turned away from the dilemma to ask a question (“Don’t we have people for these jobs?”) that could open up a different conversation. While the children only began that conversation, we, as adults, know where it might go. What about bringing in some new people? What must be done to get them where they are needed? How can we revisit social priorities so that funds are made available? We have the unemployed, the retired, and those currently at war—some of these might be available if there were money to pay them. One place to start would be to elevate elder care above the minimum-wage job that it usually is, often without benefits. The “robots-or-no-one” quandary takes social and political choice out of the picture when it belongs at the center of the picture.
I experienced a moment of reframing during a seminar at MIT that took the role of robots in medicine as its focus. My class considered a robot that could help turn weak or paralyzed patients in their beds for bathing. A robot now on the market is designed as a kind of double spatula: one plate slides under the patient; another is placed on top. The head is supported, and the patient is flipped. The class responded to this technology as though it suggested a dilemma: machines for the elderly or not. So some students insisted that it is inevitable for robots to take over nursing roles (they cited cost, efficiency, and the insufficient numbers of people who want to take the job). Others countered that the elderly deserve the human touch and that anything else is demeaning. The conversation argued absolutes: the inevitable versus the unsupportable.
Into this stalled debate came the voice of a woman in her late twenties whose mother had recently died. She did not buy into the terms of the discussion. Why limit our conversation to no robot or a robotic flipper? Why not imagine a machine that is an extension of the body of one human trying to care lovingly for another? Why not build robotic arms, supported by hydraulic power, into which people could slip their own arms, enhancing their strength? The problem as offered presented her with two unacceptable images: an autonomous machine or a neglected patient. She wanted to have a conversation about how she might have used technology as prosthesis. Had her arms been made stronger, she might have been able to lift her mother when she was ill. She would have welcomed such help. It might have made it possible for her to keep her mother at home during her last weeks. A change of frame embraces technology even as it provides a mother with a daughter’s touch.
In the spirit of “break the frame and see something new,” philosopher Kwame Anthony Appiah challenges quandary thinking:

The options are given in the description of the situation. We can call this the “package problem.” In the real world, situations are not bundled together with options. In the real world, the act of framing—the act of describing a situation, and thus of determining that there’s a decision to be made—is itself a moral task. It’s often the moral task. Learning how to recognize what is and isn’t an option is part of our ethical development.... In life, the challenge is not so much to figure out how best to play the game; the challenge is to figure out what game you’re playing.24
 
For Appiah, moral reasoning is best accomplished not by responding to quandaries but by questioning how they are posed, continually reminding ourselves that we are the ones choosing how to frame things.
FORBIDDEN EXPERIMENTS
 
When the fifth graders considered robot companions for their grandparents and wondered, “Don’t we have people for these jobs?” they knew they were asking, “Isn’t ‘taking care’ our parents’ job?” And by extension, “Are there people to take care of us if we become ‘inconvenient’?” When we consider the robots in our futures, we think through our responsibilities to each other.
Why do we want robots to care for us? I understand the virtues of partnership with a robot in war, space, and medicine. I understand that robots are useful in dangerous working conditions. But why are we so keen on “caring”?25
To me, it seems transgressive, a “forbidden experiment.”26
