Places

So, there are guideposts and ways to begin. But distractions too often override conversations. We've seen family dinner tables where children literally beg for the attention of parents who love them. We've seen classes where a teacher is present but students' faces are lowered to phones. And we've created a political culture in which contention rather than conversation is the rule. We show little interest in listening to good ideas if they come from political opponents. Indeed, we see politicians awkwardly rejecting their own good ideas if they are now put forth by members of an opposing party.

In this environment, it makes sense to recall what is hopeful: We can reclaim places for conversation, and we still know where to find each other.

Parents can find children at the dinner table; teachers can find students in class and office hours. Colleagues at work can find each other in hallways, in mini-kitchens, and in meetings. In politics, we have institutions for debate and action. Looking at these, we've seen disruptions in the field: meetings that aren't meetings and classes that are waiting to be digitized. And of course, where this book began: family dinners that are silent because each member is taken away on a device.

But the importance of focusing on the places where conversation can happen, and reclaiming them—as opposed to just saying, “Put down your phone”—is that the places themselves propose a sustained conversation, week after week, year after year. Legislatures in democracies—these have been built over centuries. When they go through rough patches, we count on the idea that their existence means that there will be other days and other chances, because, in a democracy, certain conversations are a responsibility. The family dinner at your house is something created and built over time. As you build it, you teach your children that problems need not be catastrophes; they can be talked through today and again tomorrow. It is a place to develop a sense of proportion. It may seem innocuous when parents are too distracted to discuss the small ups and downs of childhood. But there is a cost. Parental attention helps children learn what is and is not an emergency and what children can handle on their own. Parental inattention can mean that, to a child, everything feels urgent.

A child alone with a problem has an emergency. A child in conversation with a grown-up is facing a moment in life and learning how to cope with it.

When we reclaim conversation and the places to have it, we are led to reconsider the importance of long-term thinking. Life is not a problem looking for a quick fix. Life is a conversation, and you need places to have it. The virtual provides us with more spaces for these conversations, and these are enriching. But what makes the physical so precious is that it supports continuity in a different way; it doesn't come and go, and it binds people to it. You can't just log off or drop out. You learn to live things through.

Students who resist coming to office hours speak in glowing terms of how, when they finally show up, they find mentors who have persisted in asking them to come in and talk. The phrase that sticks with me is a student quoting his teacher, who kept saying, “You're going to come tomorrow, right?”

I've said that our crisis in conversation can also be described as a crisis in mentorship. People step away from mentorship and use technology as an excuse. Employers delegate to email an evaluation that could be a mentoring conversation if done face-to-face. Teachers are encouraged to equate what they can offer their students in class with something that can be captured in a series of six-minute videos. Parents don't ask their children to put down their smartphones at dinner, as if the phones were a generational right; many parents seem prepared to accept robot babysitters if the robots can be proven safe. In all these cases, I see us turn away from what we know about love and work.

Public Conversations

We turn away because we feel helpless. And so many people tell me that they feel alone—that they have to figure things out by themselves, everything from their privacy on Facebook to their sense that their data are being used and they don't quite know how or why. But we can think through these things together.

Public conversations give us a way to reclaim private conversations by modeling them, including how to show tolerance and genuine interest in what other people are saying. They can teach how conversation unfolds, not in proclamations or bullet points but in turn taking, negotiation, and other rhythms of respect.

People have long sensed that this kind of public conversation is crucial to democracy. Historically, there have been markets and town squares and town meetings. There have been clubs and coffeehouses and salons. The sociologist Jürgen Habermas associates the seventeenth-century English coffeehouse with the rise of a “public sphere.” That was a place where people of all classes could talk about politics without fear of arrest. “What a lesson,” the Abbé Prévost said in 1728, “to see a lord, or two, a baronet, a shoemaker, a tailor, a wine merchant, and a few others of the same stamp poring over the same newspapers. Truly the coffeehouses . . . are the seats of English liberty.”

Of course there was never any perfect public sphere. The coffeehouse required leisure and some money. It was not a place for women. Nevertheless, the coffeehouses were a place to talk about politics and learn how to talk about it. Joseph Addison, the essayist and politician, writing in 1714 as the voice of the newspaper The Spectator, makes the point that he enjoys coffeehouse debates because they are a place to learn. “Coffee houses have ever since been my chief Places of Resort, where I have made the greatest Improvements; in order to which I have taken a particular Care never to be of the same Opinion with the Man I conversed with.”

When Addison went to the coffeehouse, he wanted to talk only to people he disagreed with. That is a long way from today's politically committed students who avoid talking about politics with those who disagree with them, even if they live just down the hall. But, long way or not, the image of Addison inspires: He uses a public conversation to keep himself open to changing his mind.

A public conversation can model freedom of thought. It can model courage and compromise. It can help people think things through.

When Thoreau thought about our responsibility to occupy the present, he talked about improving his “nick of time.” To capture this thought, Thoreau takes a moment to reflect, even to put a notch on his walking stick:

In any weather, at any hour of the day or night, I have been anxious to improve the nick of time, and notch it on my stick too; to stand on the meeting of two eternities, the past and the future, which is precisely the present moment; to toe that line.

The “nick” raises the question of legacy. We represent a past that needs to be considered precisely, even as we create a new world. Whatever the weather, Thoreau chooses to improve his moment. He summons us to ours.

A Fourth Chair?
The End of Forgetting

What Do We Forget When We Talk to Machines?

There are some people who have tried to make friends . . . but they've fallen through so badly that they give up. So when they hear this idea about robots being made to be companions, well, it's not going to be like a human and have its own mind to walk away or ever leave you or anything like that.

—A SIXTEEN-YEAR-OLD GIRL, CONSIDERING THE IDEA OF A MORE SOPHISTICATED SIRI

Thoreau talks of three chairs and I think about a fourth. Thoreau says that for the most expansive conversations, the deepest ones, he brought his guests out into nature—he calls it his withdrawing room, his “best room.” For me, the fourth chair defines a philosophical space. Thoreau could go into nature, but now we contemplate both nature and a second nature of our own making, the world of the artificial and virtual. There, we meet machines that present themselves as open for conversation. The fourth chair raises the question: Who do we become when we talk to machines?

Some talking machines have modest ambitions—such as putting you through the paces of a job interview. But others aspire to far more. Most of these are just now coming on the scene: “caring robots” that will tend to our children and elders if we ourselves don't have the time, patience, or resources; automated psychotherapy programs that will substitute for humans in conversation. These present us with something new.

It may not feel new. All day every day, we connect with witty apps, we type our information into dialogue programs, and we get information from personal digital assistants. We are comfortable talking at machines and through machines. Now we are asked to join a new kind of conversation, one that promises “empathic” connections. Machines have none to offer, and yet we persist in the desire for companionship and even communion with the inanimate. Has the simulation of empathy become empathy enough? The simulation of communion, communion enough?

The fourth chair defines a space that Thoreau could not have seen. It is our nick of time.

What do we forget when we talk to machines—and what can we remember?

“A Computer Beautiful Enough That a Soul Would Want to Live in It”

In the early 1980s, I interviewed one of Marvin Minsky's young students who told me that, as he saw it, his hero, Minsky, one of the founders of artificial intelligence (AI), was “trying to create a computer beautiful enough that a soul would want to live in it.”

That image has stayed with me for more than thirty years.

In the AI world, things have gone from mythic to prosaic. Today, children grow up with robotic pets and digital dolls. They think it natural to chat with their phones. We are at what I have called a “robotic moment,” not because of the merits of the machines we've built but because of our eagerness for their company. Even before we make the robots, we remake ourselves as people ready to be their companions.

For a long time, putting hope in robots has expressed an enduring technological optimism, a belief that as things go wrong, science will go right. In a complicated world, what robots promise has always seemed like calling in the cavalry. Robots save lives in war zones; they can function in space and in the sea—indeed, anywhere that humans would be in danger. They perform medical procedures that humans cannot do; they have revolutionized design and manufacturing.

But robots get us to hope for more. Not only for the feats of the cavalry, but for simple salvations. What are the simple salvations? These are the hopes that robots will be our companions. That taking care of us will be their jobs. That we will take comfort in their company and conversation. This is a station on our voyage of forgetting.

What do we forget when we talk to machines? We forget what is special about being human. We forget what it means to have authentic conversation. Machines are programmed to have conversations “as if” they understood what the conversation is about. So when we talk to them, we, too, are reduced and confined to the “as if.”

Simple Salvations

Over the decades, I have heard the hopes for robot companionship grow stronger, even though most people don't have experience with an embodied robot companion at all but rather with something like Siri, Apple's digital assistant, where the conversation is most likely to be “locate a restaurant” or “locate a friend.”

But even telling Siri to “locate a friend” moves quickly to the fantasy of finding a friend in Siri. People tell me that they look forward to the time, not too far down the road, when Siri or one of her near cousins will be something like a best friend, but in some ways better: one you can always talk to, one that will never be angry, one you can never disappoint.

And, indeed, Apple's first television advertising campaign for Siri introduced “her” not as a feature, a convenient way of getting information, but as a companion. It featured a group of movie stars—Zooey Deschanel, Samuel L. Jackson, John Malkovich—who put Siri in the role of confidante. Deschanel, playing the ditzy ingénue, discusses the weather, and how she doesn't want to wear shoes or clean house on a rainy day. She just wants to dance and have tomato soup. Siri plays the role of the best friend who “gets her.” Jackson has a conversation with Siri that is laced with double meanings about a hot date: A lady friend is coming over and Jackson is cooking gazpacho and risotto. It's fun to joke with his sidekick Siri about his plans for seduction. Malkovich, sitting in a deep leather chair in a room with heavy wall moldings and drapes—it might be an apartment in Paris or Barcelona—talks seriously with Siri about the meaning of life. He likes it that Siri has a sense of humor.

In all of this, we are being schooled in how to have conversations with a machine that may approximate banter but doesn't understand our meaning at all; in these conversations, we're doing all the work but we don't mind.

I was on a radio show about Siri with a panel of engineers and social scientists. The topic turned to how much people like to talk to Siri, part of the general phenomenon that people feel uninhibited when they talk to a machine. They like the feeling of no judgment. One of the social scientists on the program suggested that soon a souped-up and somewhat smoothed-out Siri could serve as a psychiatrist.

It didn't seem to bother him that Siri, in the role of psychiatrist, would be counseling people about their lives without having lived one. If Siri could behave like a psychiatrist, he said, it could be a psychiatrist. If no one minded the difference between the as-if and the real thing, let the machine take the place of the person. This is the pragmatism of the robotic moment.

But the suggestions of a robotic friend or therapist—the simple salvations of the robotic moment—are not so simple at all.

Because for all that they are programmed to pretend, machines that talk to us as though they care about us don't know the arc of a human life. When we speak to them of our human problems of love and loss, or the pleasures of tomato soup and dancing barefoot on a rainy day, they can deliver only performances of empathy and connection.

What an artificial intelligence can know is your schedule, the literal content of your email, your preferences in film, TV, and food. If you wear body-sensing technologies, an AI can know what emotionally activates you because it may infer this from physiological markers. But it won't understand what any of these things mean to you.

But the meaning of things is just what we want our machines to understand. And we are willing to fuel the fantasy that they do.

Vulnerability Games

We have been playing vulnerability games with artificial intelligence for a very long time, since before programs were anywhere near as sophisticated as they are now. In the 1960s, a computer program called ELIZA, written by MIT's Joseph Weizenbaum, adopted the “mirroring” style of a Rogerian psychotherapist. So, if you typed, “Why do I hate my mother?” ELIZA might respond, “I hear you saying that you hate your mother.” This program was effective—at least for a short while—in creating the illusion of intelligent listening. And there is this: We want to talk to machines even when we know they do not deserve our confidences. I call this the “ELIZA effect.”
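
The mirroring trick is simple enough to sketch. Below is a minimal illustrative reconstruction in Python, assuming a single toy rule and a small pronoun table; it shows only the pattern-and-reflect mechanism, and it is not Weizenbaum's actual program.

    import re

    # Pronoun swaps so the reflected fragment reads from the listener's side.
    REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

    def reflect(fragment):
        # Swap pronouns word by word: "hate my mother" -> "hate your mother".
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(statement):
        # One toy rule: mirror back whatever follows "Why do I ...?"
        match = re.match(r"why do i (.*?)\??$", statement.strip().lower())
        if match:
            return "I hear you saying that you " + reflect(match.group(1)) + "."
        # Default stalling response when no rule applies.
        return "Please tell me more."

    print(respond("Why do I hate my mother?"))
    # Prints: I hear you saying that you hate your mother.

Nothing in the program knows what a mother is; it rearranges the speaker's own words and hands them back, which is exactly why the illusion of listening comes so cheaply.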

Weizenbaum was shocked that people (for example, his secretary and graduate students) who knew the limits of ELIZA's ability to know and understand nevertheless wanted to be alone with the program in order to confide in it. ELIZA demonstrated that, almost universally, people project human attributes onto programs that present as humanlike, an effect that is magnified when they are with robots called “sociable machines”—machines that do such things as track your motion, make eye contact, and remember your name. Then people feel they are in the presence of a knowing other that cares about them. A young man, twenty-six, talks with a robot named Kismet that makes eye contact, reads facial expressions, and vocalizes with the cadences of human speech. The man finds Kismet so supportive that he speaks with it about the ups and downs of his day.

Machines with voices have particular power to make us feel understood. Children first learn to know their mothers by recognizing their voices, even while still in the womb. During our evolution, the only speech we heard was the speech of other humans. Now, with the development of sophisticated artificial speech, we are the first humans asked to distinguish human from non-human speech. Neurologically, we are not set up to do this job. Since human beings have for so long—say, 200,000 years—heard only human voices, it takes serious mental effort to distinguish human speech from the machine-generated kind. To our brains, speaking is something that people do.

And machines with humanlike faces have particular power as well.

In humans, the shape of a smile or a frown releases chemicals that affect our mental state. Our mirror neurons fire both when we act and when we observe others acting. We feel what we see on the face of another. An expressive robot face can have this impact on us. The philosopher Emmanuel Lévinas writes that the presence of a face initiates the human ethical compact. The face communicates, “Thou shalt not kill me.” We are bound by the face even before we know what stands behind it, even before we might learn it is the face of a machine that cannot be killed. And the robot's face certainly announces, for Lévinas, “Thou shalt not abandon me”—again, an ethical and emotional compact that captures us but has no meaning when we feel it for a machine.

An expressive machine face—on a robot or on a screen-based computer program—puts us on a landscape where we seek recognition and feel we can get it. We are in fact triggered to seek empathy from an object that has none to give.

I worked at the MIT Artificial Intelligence Laboratory as people met the sociable, emotive robot Kismet for the first time. What Kismet actually said had no meaning, but the sound came out warm or inquiring or concerned.

Sometimes Kismet's visitors felt the robot had recognized them and had “heard” their story. When things worked perfectly from a technical standpoint, they experienced what felt like an empathic connection. This convincing imitation of understanding is impressive and can be a lot of fun if you think of these encounters as theater. But I saw children look to Kismet for a friend in the real. I saw children hope for the robot's recognition, and sometimes become bereft when there was nothing nourishing on offer.

Estelle, twelve, comes to Kismet wanting a conversation. She is lonely; her parents are divorced. Her time with Kismet makes her feel special: Here is a robot who will listen just to her. On the day of Estelle's visit, she is engaged by Kismet's changing facial expressions, but Kismet is not at its vocal best. At the end of a disappointing session, Estelle and the small team of researchers who have been working with her go back to the room where we interview children before and after they meet the robots. Estelle starts in on the juice, crackers, and cookies we have left out as snacks. And she does not stop, not until we ask her to please leave some food for the other children. Then she stops, but only briefly. She begins to eat again, hurriedly, as we wait for the car service that will take her back to her after-school program.

Estelle tells us why she is upset: Kismet does not like her. The robot began to talk with her and then turned away. We explain that this is not the case. The problem had been technical. Estelle is not convinced. From her point of view, she has failed on her most important day. As Estelle leaves, she takes four boxes of cookies from the supply closet and stuffs them into her backpack. We do not stop her. Exhausted, my team reconvenes at a nearby coffee shop to ask ourselves a hard question: Can a broken robot break a child?

We would not be concerned with the ethics of having a child play with a buggy copy of Microsoft Word or a torn Raggedy Ann doll. A word-processing program is there to do an instrumental thing. If it does worse than usual on a particular day, well, that leads to frustration but no more. But a program that encourages you to connect with it—this is a different matter.

How is a broken Kismet different from a broken doll? A doll encourages children to project their own stories and their own agendas onto a passive object. But children see sociable robots as “alive enough” to have their own agendas. Children attach to them not with the psychology of projection but with the psychology of relational engagement, more in the way they attach to people.

If a little girl is feeling guilty for breaking her mother's crystal, she may punish a row of Barbie dolls, putting the dolls into detention as a way of working through her own feelings. The dolls are material for what the child needs to accomplish emotionally. That is how the psychology of projection works: It enables the working through of the child's feelings. But the sociable robot presents itself as having a mind of its own. As the child sees it, if this robot turns away, it wanted to. That's why children consider winning the heart of a sociable robot to be a personal achievement. You've gotten something lovable to love you. Again, children interact with sociable robots not with the psychology of projection but with engagement. They react as though they face another person. There is room for new hurt.

