The Design of Future Things


Consider Tom's predicament. He asks his car's navigation system for directions, and it provides them. Sounds simple.
Human-machine interaction: a nice dialogue. But notice Tom's lament: “It doesn't give me any say.” Designers of advanced technology are proud of the “communication capabilities” they have built into their systems. But closer analysis shows this to be a misnomer: there is no communication, none of the back-and-forth discussion that characterizes true dialogue. Instead, we have two monologues. We issue commands to the machine, and it, in turn, commands us. Two monologues do not make a dialogue.

In this particular case, Tom does have a choice. If he turns the navigation system off, the car still functions; because the system doesn't give him enough say over the route, he simply doesn't use it. But other systems do not provide this option: the only way to avoid them is not to use the car. The problem is that these systems can be of great value. Flawed though they may be, they can save lives. The question, then, is how we can change the way we interact with our machines to take better advantage of their strengths and virtues, while at the same time eliminating their annoying and sometimes dangerous actions.

As our technology becomes more powerful, its failures of collaboration and communication become ever more critical. Collaboration means synchronizing one's activities, as well as explaining and giving reasons. It means having trust, which can only be formed through experience and understanding. With automatic, so-called intelligent devices, trust is sometimes conferred undeservedly, or withheld, equally undeservedly. Tom decided not to trust his navigation system's instructions, but in some instances, rejecting technology can cause harm. For example, what if Tom turned off his car's antiskid brakes or the stability control? Many drivers believe
they can control the car better than these automatic controls. But antiskid and stability systems actually perform far better than all but the most expert professional drivers. They have saved many lives. But how does the driver know which systems can be trusted?

Designers tend to focus on the technology, attempting to automate whatever possible for safety and convenience. Their goal is complete automation, except where this is not yet possible because of technical limitations or cost concerns. These limitations, however, mean that the tasks can only be partially automated, so the person must always monitor the action and take over whenever the machine can no longer perform properly. Whenever a task is only partially automated, it is essential that each party, human and machine, know what the other is doing and what is intended.

Two Monologues Do Not Make a Dialogue

SOCRATES: You know, Phaedrus, that's the strange thing about writing. . . . they seem to talk to you as if they were intelligent, but if you ask them anything about what they say, from a desire to be instructed, they go on telling you just the same thing forever.

—Plato: Collected Dialogues, 1961.

More than two thousand years ago, Socrates argued that the book would destroy people's ability to reason. He believed in dialogue, in conversation and debate. But with a book, there is no debate: the written word cannot answer back. Today, the book is such a symbol of learning and knowledge that we laugh at this argument. But take it seriously for a moment. Despite Socrates's claims, writing does instruct, even though we cannot debate its content with the author. Instead, we debate and discuss with one another, in the classroom, with discussion groups, and if the work is important enough, through all the media at our disposal. Nonetheless, Socrates's point is valid: a technology that gives no opportunity for discussion, explanation, or debate is a poor technology.

As a business executive and as a chair of university departments, I learned that the process of making a decision is often more important than the decision itself. When a person makes decisions without explanation or consultation, people neither trust nor like the result, even if it is the identical course of action they would have taken after discussion and debate. Many business leaders ask, “Why waste time with meetings when the end result will be the same?” But the end result is not the same, for although the decision itself is identical, the way it will be carried out and, perhaps most importantly, the way it will be handled if things do not go as planned will be very different with a collaborating, understanding team than with one that is just following orders.

Tom dislikes his navigation system, even though he agrees that at times it would be useful. But he has no way to interact with the system to tailor it to his needs. Even if he can make some high-level choices—“fastest,” “shortest,” “most scenic,” or “avoid toll road”—he can't discuss with the system why a particular route is chosen. He can't know why the system thinks route A is better than route B. Does it take into account the long
traffic signals and the large number of stop signs? And what if two routes barely differ, perhaps by just a minute out of an hour's journey? He isn't given alternatives that he might well prefer despite a slight cost in time. The system's methods remain hidden, so even if Tom were tempted to trust it, the silence and secrecy promote distrust, just as top-down business decisions made without collaboration are distrusted.

What if navigation systems were able to discuss the route with the driver? What if they presented alternative routes, displaying them both as paths on a map and as a table showing the distance, estimated driving time, and cost, allowing the driver to choose? Some navigation systems do this, so that the drive from a city in California's Napa Valley to Palo Alto might be presented like this:

FROM ST. HELENA, CA TO PALO ALTO, CA

This is a clear improvement, but it still isn't a conversation. The system says, “Here are three choices: select one.” I can't ask for details or seek some modification. I am familiar with all these routes, so I happen to know that the fastest, shortest, cheapest route is also the least scenic, and the most scenic route is not even offered. But what about the driver who is not so
knowledgeable? We would never settle for such limited engagement with a human driver. The fact that navigation systems offering drivers even this limited choice of routes are considered a huge improvement over existing systems demonstrates how bad the others are, how far we still have to go.
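The multi-attribute presentation described above, alternatives laid out as a table of distance, time, and cost, with the driver choosing among them, can be sketched in a few lines of code. The route labels and every figure below are illustrative placeholders, not actual navigation data:

```python
# A minimal sketch of offering route alternatives and letting the driver
# choose. All labels and figures are hypothetical placeholders, not real
# navigation data or any vendor's interface.

routes = [
    {"label": "route A", "miles": 75, "minutes": 80,  "toll_usd": 7.0},
    {"label": "route B", "miles": 82, "minutes": 95,  "toll_usd": 7.0},
    {"label": "route C", "miles": 88, "minutes": 110, "toll_usd": 0.0},
]

def route_table(routes):
    """Format the alternatives as a table of distance, time, and cost."""
    lines = [f"{'Route':<10}{'Miles':>7}{'Minutes':>9}{'Toll':>8}"]
    for r in routes:
        lines.append(f"{r['label']:<10}{r['miles']:>7}"
                     f"{r['minutes']:>9}{r['toll_usd']:>8.2f}")
    return "\n".join(lines)

def choose(routes, label):
    """The driver picks one of the offered alternatives by name."""
    for r in routes:
        if r["label"] == label:
            return r
    raise ValueError(f"no such route: {label}")

print(route_table(routes))
```

Even this sketch shows the limitation the chapter describes: the driver can select among the rows, but cannot ask why route A was ranked first or request a variant that isn't on the list.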

If my car decides an accident is imminent and straightens the seat or applies the brakes, I am not asked or consulted; nor am I even told why. Is the car necessarily more accurate because, after all, it is a mechanical, electronic technology that does precise arithmetic without error? No, actually it's not. The arithmetic may be correct, but before doing the computation, it must make assumptions about the road, the other traffic, and the capabilities of the driver. Professional drivers will sometimes turn off automatic equipment because they know the automation will not allow them to deploy their skills. That is, they will turn off whatever they are permitted to turn off: many modern cars are so authoritarian that they do not even allow this choice.

Don't think that these behaviors are restricted to the automobile. The devices of the future will present the same issues in a wide variety of settings. Automatic banking systems already exist that determine whether you are eligible for a loan. Automated medical systems determine whether you should receive a particular treatment or medication. Future systems will monitor your eating, your reading, your music and television preferences. Some systems will watch where you drive, alerting the insurance company, the rental car agency, or even the police if they decide that you have violated their rules. Other systems monitor for copyright violations, making decisions about what should be permitted. In all these cases, actions are apt to be
taken arbitrarily, with the systems making gross assumptions about your intentions from a limited sample of your behavior.

So-called intelligent systems have become too smug. They think they know what is best for us. Their intelligence, however, is limited. And this limitation is fundamental: there is no way a machine can have sufficient knowledge of all the factors that go into human decision making. But this doesn't mean we should reject the assistance of intelligent machines. As machines start to take over more and more, they need to be socialized; they need to improve the way they communicate and interact and to recognize their limitations. Only then can they become truly useful. This is a major theme of this book.

When I started writing this book, I thought that the key to socializing machines was to develop better systems for dialogue. But I was wrong. Successful dialogue requires shared knowledge and experiences. It requires appreciation of the environment and context, of the history leading up to the moment, and of the many differing goals and motives of the people involved. I now believe this to be a fundamental limitation of today's technology, one that prevents machines from full, humanlike interaction. It is hard enough to establish this shared, common understanding with people, so how do we expect to be able to develop it with machines?

In order to cooperate usefully with our machines, we need to regard the interaction somewhat as we do interaction with animals. Although both humans and animals are intelligent, we are different species, with different understandings and different capabilities. Similarly, even the most intelligent machine is a different species, with its own set of strengths and weaknesses,
its own set of understandings and capabilities. Sometimes we need to obey the animals or machines; sometimes they need to obey us.

Where Are We Going? Who Is in Charge?

“My car almost got me into an accident,” Jim told me.

“Your car? How could that be?” I asked.

“I was driving down the highway using the adaptive cruise control. You know, the control that keeps my car at a constant speed unless there is a car in front, and then it slows to keep a safe distance. Well, after a while, the road got crowded, so my car slowed. Eventually, I came to my exit, so I maneuvered into the right lane and then turned off the highway. By then, I had been using the cruise control for so long, but going so slowly, that I had forgotten about it. But not the car. I guess it said to itself, ‘Hurrah! Finally, there's no one in front of me,' and it started to accelerate to full highway speed, even though this was the off-ramp that requires a slow speed. Good thing I was alert and stepped on the brakes in time. Who knows what might have happened.”
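The mode confusion in Jim's story can be made concrete with a toy sketch of adaptive-cruise-control logic. This is a deliberate simplification for illustration, not any manufacturer's actual algorithm, and the speeds and gap threshold are invented:

```python
def acc_target_speed(set_speed, lead_gap_m, lead_speed, safe_gap_m=50):
    """Toy adaptive-cruise-control rule: hold the driver's set speed
    unless a slower car ahead closes within the safe gap.
    A hypothetical simplification, not a real controller."""
    if lead_gap_m is None:                 # no car detected ahead
        return set_speed                   # resume the set speed
    if lead_gap_m < safe_gap_m:
        return min(set_speed, lead_speed)  # hang back behind the lead car
    return set_speed

# In traffic: a car 30 m ahead doing 25 mph holds Jim to 25 mph.
assert acc_target_speed(65, 30, 25) == 25

# On the off-ramp the sensor sees no lead car, so the controller
# commands the full set speed: it has no notion of "off-ramp."
assert acc_target_speed(65, None, 0) == 65
```

The second case is exactly Jim's near-accident: the rule is behaving as designed, yet the design knows nothing of the context that makes its action dangerous.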

We are in the midst of a major change in how we relate to technology. Until recently, people have been in control. We turned the technology on and off, told it which operation to perform, and guided it through its operations. As technology became more powerful and complex, we became less able to understand how it worked, less able to predict its actions. Once computers and microprocessors entered the scene, we often found ourselves lost and confused, annoyed and angered. But
still, we considered ourselves to be in control. No longer. Now, our machines are taking over. They act as if they have intelligence and volition, even though they don't.

Machines monitor us with the best of intentions, of course, in the interest of safety, convenience, or accuracy. When everything works, these smart machines can indeed be helpful, increasing safety, reducing the boredom of tedious tasks, making our lives more convenient, and performing tasks more accurately than we could. It is indeed convenient that the automobile automatically slows when a car darts too closely in front of us, that it shifts gears quietly and smoothly, or, in the home, that our microwave oven knows just when the potatoes are cooked. But what about when the technology fails? What about when it does the wrong thing or fights with us for control? What about when Jim's auto notices that there are no cars in front of it, so it accelerates to highway speed, even though it is no longer on a highway? The same mechanisms that are so helpful when things are normal can decrease safety, decrease comfort, and decrease accuracy when unexpected situations arise. For us, the people involved, it leads to danger and discomfort, frustration and anger.

Today, machines primarily signal their states through alerts and alarms, that is, only when they get into trouble. When a machine fails, a person is required to take over, often with no advance warning and often with insufficient time to react properly. Jim was able to correct his car's behavior in time, but what if he couldn't have? He would have been blamed for causing an accident. Ironically, if the actions of a so-called intelligent device lead to an accident, it will probably be blamed on human error!

The proper way to provide for smooth interaction between people and intelligent devices is to enhance the coordination and cooperation of both parties, people and machines. But those who design these systems often don't understand this. How is a machine to judge what is or is not important, especially when what is important in one situation may not be in another?

