Although monitoring eating habits wasn't really possible until recently, we can now attach tiny, barely visible tags to everything: clothes, products, food, even people and pets, so everything and everybody can be tracked. These are called radio frequency identification (RFID) tags. No batteries are required because these devices cleverly take their power from the very signal sent to them asking them to state their business, their identification number, and any other tidbits about the person or object they feel like sharing. When all the food in the house is tagged, the house knows what you are eating. RFID tags plus TV cameras, microphones, and other sensors equals “Eat your broccoli,” “No more butter,” “Do your exercises.” Cantankerous kitchens? That's the least of it.
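To make the mechanism concrete, here is a minimal sketch in Python of how such an inventory read might work. It is an illustration, not any real RFID protocol, and every tag, name, and field in it is invented. The point it captures is that a passive tag can answer only while a reader's signal reaches it, and that those answers are all the house needs in order to know what is on its shelves.

    from dataclasses import dataclass

    @dataclass
    class PassiveTag:
        """A batteryless tag: it can answer only while a reader's signal powers it."""
        tag_id: str
        item: str

        def respond(self, reader_signal_present: bool):
            # No signal means no power, so no answer at all.
            return {"id": self.tag_id, "item": self.item} if reader_signal_present else None

    # Invented contents of a tagged pantry.
    pantry = [PassiveTag("A1", "broccoli"), PassiveTag("B2", "butter"), PassiveTag("C3", "eggs")]

    # The "house" sweeps the room with a query and learns what it holds.
    inventory = [tag.respond(reader_signal_present=True) for tag in pantry]
    print(inventory)  # [{'id': 'A1', 'item': 'broccoli'}, ...]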

“What if appliances could understand what you need?” asked one group of researchers at the MIT Media Lab. They built a kitchen with sensors everywhere they could put them, television cameras, and pressure gauges on the floor to determine where people were standing. The system, they said, “infers that when a person uses the fridge and then stands in front of the microwave, he/she has a high probability of re-heating food.” “KitchenSense,” they call it. Here is their description:

 

KitchenSense is a sensor-rich networked kitchen research platform that uses CommonSense reasoning to simplify control interfaces and augment interaction. The system's sensor net attempts to interpret people's intentions to create fail-soft support for safe, efficient and aesthetic activity. By considering embedded sensor data together with daily-event knowledge, a centrally-controlled OpenMind system can develop a shared context across various appliances.

If people use the refrigerator and then walk to the microwave oven, they have a “high probability of reheating food.” This is highfalutin scientific jargon for guessing. Oh, to be sure, it is a sophisticated guess, but a guess it is. This example makes the point: the “system,” meaning the computers in the kitchen, doesn't know anything. It simply makes guesses—statistically plausible guesses based on the designer's observations and hunches. But these computer systems can't know what the person really has in mind.

To be fair, even statistical regularity can be useful. In this particular case, the kitchen doesn't take any action. Rather, it gets ready to act, projecting a likely set of alternative actions on the counter so that if by chance one of them is what you are planning to do, you only have to touch and indicate yes. If the system doesn't anticipate what you had in mind, you can just ignore it—if you can ignore a house that constantly flashes suggestions to you on the counters, walls, and floors.
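What that “inference” amounts to can be shown in a few lines. The sketch below is not the Media Lab's code; the sensor events, intentions, and probabilities are all invented for illustration. It simply looks up the last two sensor events in a table the designer wrote ahead of time, exactly the kind of statistically plausible guess described above, and then “projects” its best guesses for the person to confirm or ignore.

    # Designer-supplied table: recent sensor events -> plausible intentions with guessed probabilities.
    GUESSES = {
        ("fridge_opened", "standing_at_microwave"): [("reheat food", 0.7), ("defrost food", 0.2)],
        ("fridge_opened", "standing_at_counter"): [("prepare a meal", 0.5), ("make a snack", 0.3)],
    }

    def suggest(recent_events):
        """Return plausible intentions for the last two sensor events, best guess first."""
        candidates = GUESSES.get(tuple(recent_events[-2:]), [])
        return sorted(candidates, key=lambda pair: pair[1], reverse=True)

    # The kitchen does not act; it displays its guesses and waits for a touch to confirm.
    for intention, probability in suggest(["fridge_opened", "standing_at_microwave"]):
        print(f"Suggest: {intention} (guessed probability {probability:.0%})")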

The system uses CommonSense (any confusion with the English term “common sense” is deliberate). Just as CommonSense is not really a word, the kitchen doesn't actually have any real common sense. It only has as much sense as the designers were able to program into it, which isn't much, given that it can't really know what is going on.

But what if you decide to do something that the house thinks is bad for you, or perhaps simply wrong? “No,” says the house, “that's not the proper way to cook that. If you do it that way, I can't be responsible for the result. Here, look at this cookbook. See? Don't make me say ‘I told you so.'” This scenario has shades of Minority Report, the Steven Spielberg movie based upon the great futurist Philip K. Dick's short story by that name. As the hero, John Anderton, flees from the authorities, he passes through the crowded shopping malls. The advertising signs recognize him, calling him by name, tempting him with offers of clothes and special sale prices just for him. A car advertisement calls out, “It's not just a car, Mr. Anderton. It's an environment, designed to soothe and caress the tired soul.” A travel agency entices him: “Stressed out, John Anderton? Need a vacation? Come to Aruba!” Hey, signs, he's running away from the cops; he isn't going to stop and buy some clothes.

Minority Report was fiction, but the technology depicted in the movie was designed by clever, imaginative experts who were very careful to depict only plausible technologies and activities. Those active advertising signs are already close to becoming a reality. Billboards in multiple cities recognize owners of BMW's Mini Cooper automobile by the RFID tags they carry. The Mini Cooper advertisements are harmless, and each driver has volunteered and selected the phrases that will be displayed. But now that this has started, where will it stop? Today, the billboard requires its audience to carry RFID tags, but this is a temporary expedient. Already, researchers are hard at work, using television cameras to view people and automobiles, then to identify them by their gait and facial features or their model, year, color, and license plate. This is how the City of London keeps track of cars that enter the downtown area. This is how security agencies expect to be able to track suspected terrorists. And this is how advertising agencies will track down potential customers. Will signs in shopping malls offer special bargains for frequent shoppers? Will restaurant menus offer your favorite meals? First in a science fiction story, then in a movie, then on the city streets: look for them at your nearest shops. Actually, you won't have to look: they will be looking for you.

Communicating with Our Machines: We Are Two Different Species

I can imagine it now: it's the middle of the night, but I can't sleep. I quietly get out of bed, careful not to wake up my wife, deciding that as long as I can't sleep, I might as well do some work. But my house detects my movement and cheerfully announces “good morning” as it turns on the lights and starts the radio news station. The noise wakes my wife: “Why are you waking me up so early?” she mumbles.

In this scenario, how could I explain to my house that behavior perfectly appropriate at one time is not so at another? Should I program it according to the time of day? No, sometimes my wife and I need to wake up early, perhaps to catch a morning flight. Or I might have a telephone conference with colleagues in India. For the house to know how to respond appropriately, it would need to understand the context, the reasoning behind the actions. Am I waking up deliberately? Does my wife still want to sleep? Do I really want the radio and the coffeemaker turned on? For the house to understand the reasons behind my awakening, it would have to know my intentions, but that requires effective communication at a level not possible today or in the near future. For now, automatic, intelligent devices must still be controlled by people. In the worst of cases, this can lead to conflict. In the best of cases, the human+machine forms a symbiotic unit, functioning well. Here, we could say that it is humans who make machines smart.

The technologists will try to reassure us that all technologies start off as weak and underpowered, that eventually their deficits are overcome and they become safe and trustworthy. At one level they are correct. Steam engines and steamships used to explode; they seldom do anymore. Early aircraft crashed frequently. Today, they hardly ever do. Remember Jim's problem with the cruise control that regained speed in an inappropriate location? I am certain that this particular situation can be avoided in future designs by coupling the speed control with the navigation system, or perhaps by developing systems in which the roads themselves transmit the allowable speeds to the cars (hence, no more ability to exceed speed limits), or better yet, by having the car itself determine safe speeds given the road, its curvature, slipperiness, and the presence of other traffic or people.
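A back-of-the-envelope version of that last idea, the car computing a safe speed from the road itself, might look like the sketch below. The only physics in it is the lateral-friction limit v = sqrt(mu * g * r) for holding a curve of radius r on a surface with friction coefficient mu; the margins, the pedestrian rule, and all of the inputs are assumptions made for illustration, not the design of any real system.

    import math

    def safe_speed_kmh(posted_limit_kmh, curve_radius_m, friction_coefficient, pedestrians_nearby):
        """Cap the speed using road geometry, grip, and the presence of people."""
        g = 9.81  # gravitational acceleration, m/s^2
        # Fastest speed at which the tires can hold the curve, converted to km/h.
        curve_limit_kmh = math.sqrt(friction_coefficient * g * curve_radius_m) * 3.6
        safe = min(posted_limit_kmh, 0.9 * curve_limit_kmh)  # keep a 10 percent margin
        if pedestrians_nearby:
            safe = min(safe, 30.0)  # slow sharply when people are about
        return safe

    # Example: an 80 km/h road, a 120 m curve, a wet surface (mu about 0.4), nobody nearby.
    print(round(safe_speed_kmh(80, 120, 0.4, pedestrians_nearby=False)))  # about 70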

I am a technologist. I believe in making lives richer and more rewarding through the use of science and technology. But that is not where our present path is taking us. Today we are confronting a new breed of machines with intelligence and autonomy, machines that can indeed take over for us in many situations. In many cases, they will make our lives more effective, more fun, and safer. In others, however, they will frustrate us, get in our way, and even increase danger. For the first time, we have machines that are attempting to interact with us socially.

The problems that we face with technology are fundamental. They cannot be overcome by following old pathways. We need a calmer, more reliable, more humane approach. We need augmentation, not automation.

 

CHAPTER TWO
The Psychology of People & Machines

Three scenarios are possible now:

• “Pull up! Pull up!” cries the airplane to the pilots when it decides that the airplane is too low for safety.

• “Beep, beep,” signals the automobile, trying to get the driver's attention, while tightening the seat belts, straightening the seat backs, and pretensing the brakes. It is watching the driver with its video camera, and because the driver is not paying attention to the road, it applies the brakes.

• “Bing, bing,” goes the dishwasher, signaling that the dishes are clean, even if it is 3 a.m. and the message serves no purpose except to wake you up.

 

Three scenarios likely to be possible in the future:

• “No,” says the refrigerator. “Not eggs again. Not until your weight comes down, and your cholesterol levels are lower. Scale tells me you still have to lose about five pounds, and the clinic keeps pinging me about your cholesterol. This is for your own good, you know.”

• “I just checked your appointments diary in your smart phone,” says the automobile as you get into the car after a day's work. “You have free time, so I've programmed that scenic route with those curves you like so much instead of the highway—I know you'll enjoy driving it. Oh, and I've picked your favorite music to go with it.”

• “Hey,” says your house one morning as you prepare to leave. “What's the rush? I took out the garbage. Won't you even say thank you? And can we talk about that nice new controller I've been showing you pictures of? It would make me much more efficient, and you know, the Joneses' house already has one.”

 

Some machines are obstinate. Others are temperamental. Some are delicate, some rugged. We commonly apply human attributes to our machines, and often these terms are fittingly descriptive, even though we use them as metaphors or similes. The new kinds of intelligent machines, however, are autonomous or semiautonomous: they create their own assessments, make their own decisions. They no longer need people to authorize their actions. As a result, these descriptions no longer are metaphors—they have become legitimate characterizations.

The first three scenarios I've depicted are already real. Airplane warning systems do indeed cry out, “Pull up!” (usually with a female voice). At least one automobile company has announced a system that monitors the driver with its video camera. If the driver does not appear to be watching the road when its forward-looking radar system senses a potential collision, it sounds an alarm—not with a voice (at least, not yet), but with buzzers and vibration. If the driver still does not respond, the system automatically applies the brakes and prepares the car for a crash. And I have already been awakened in the middle of the night by my dishwasher's beeps, anxious to tell me that the dishes have been cleaned.
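The escalation in that system is easy to state as logic, and stating it this way also shows how little the car actually knows about the driver. The sketch below follows the staging in the description above, warn first and brake only if the warning is ignored, but the inputs and names are invented for illustration.

    def collision_response(collision_risk, driver_watching_road, driver_reacted_to_alarm):
        """Staged response: get attention first, brake automatically only as a last resort."""
        actions = []
        if collision_risk and not driver_watching_road:
            actions.append("sound buzzer and vibrate")  # try to get the driver's attention
            if not driver_reacted_to_alarm:
                actions.append("apply the brakes")  # the driver still is not responding
                actions.append("tighten belts and prepare the car for a crash")
        return actions

    print(collision_response(collision_risk=True, driver_watching_road=False, driver_reacted_to_alarm=False))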

Much is known about the design of automated systems. Slightly less is known about the interaction between people and these systems, although this too has been a topic of deep study for the past several decades. But these studies have dealt with industrial and military settings, where people were using the machines as part of their jobs. What about everyday people who might have no training, who might only use any particular machine occasionally? We know almost nothing of this situation, but this is what concerns me: untrained, everyday people, you and me, using our household appliances, our entertainment systems, and our automobiles.

How do everyday people learn how to use the new generation of intelligent devices? Hah! In bits and pieces, by trial and error, with endless feelings of frustration. The designers seem to believe that these devices are so intelligent, so perfect in their operation, that no learning is required. Just tell them what to do and get out of the way. Yes, the devices always come with instruction manuals, often big, thick, heavy ones, but these manuals are neither explanatory nor intelligible. Most do not even attempt to explain how the devices work. Instead, they give magical, mystical names to the mechanisms, oftentimes using nonsensical marketing terms, stringing the words together as in “SmartHomeSensor,” as if naming something explains it.
