However, the power of “convivial” technologies, which was Illich’s name for tools that are under individual control, remains a vitally important design point that is possibly even more relevant today.
Evidence of this was apparent in an interaction between Felsenstein and Illich, when the radical scholar visited Berkeley in 1986.
Upon meeting him, Illich mocked Felsenstein for trying to substitute communication using computers for direct communication.
“Why do you want to go deet-deet-deet to talk to Pearl over there? Why don’t you just go talk to Pearl?” Illich asked.

Felsenstein responded: “What if I didn’t know that it was Pearl that I wanted to talk to?”

Illich stopped, thought, and said, “I see what you mean.”

To which Felsenstein replied: “So you see, maybe a bicycle society needs a computer.”

Felsenstein had convinced Illich that computer-mediated communication could create community even if it was not face-to-face.
Given the rapid progress in robotics, Felsenstein and Illich’s insight about design and control is even more important today.
In Felsenstein’s world, drudgery would be the province of machines and work would be transformed into play.
As he described it in the context of his proposed “Tom Swift Terminal,”14 which was a hobbyist system that foreshadowed the first PCs, “if work is to become play, then tools must become toys.”

Today, Microsoft’s corporate campus is a sprawling set of interlocking walkways, buildings, sports fields, cafeterias, and parking garages dotted with fir trees.
In some distinct ways it feels different from the Googleplex in Silicon Valley.
There are no brightly colored bicycles, but the same cadres of young tech workers who could easily pass for college or even high school students amble around the campus.

When you approach the elevator in the lobby of Building 99, where the firm’s corporate research laboratories are housed, the door senses your presence and opens automatically.
It feels like Star Trek: Captain Kirk never pushed a button either.
The intelligent elevator is the brainchild of Eric Horvitz, a senior Microsoft research scientist and director of Microsoft’s Redmond Research Center.
Horvitz is well known among AI researchers as one of the first generation of computer scientists to use statistical techniques to improve the performance of AI applications.

He, like many others, began with an intense interest in understanding how human minds work.
He obtained a medical degree at Stanford during the 1980s, and soon immersed himself further in graduate-level neurobiology research.
One night in the laboratory he was using a probe to record from a single neuron in the brain of a rat.
Horvitz was thrilled.
It was a dark room and he had an oscilloscope and an audio speaker.
As he listened to the neuron fire, he thought to himself, “I’m finally inside.
I am somewhere in the midst of vertebrate thought.”
At the same moment he realized that he had no idea what the firing actually suggested about the animal’s thought process.
Glancing over toward his laboratory bench he noticed a recently introduced Apple IIe computer with its cover slid off to the side.
His heart sank.
He realized that he was taking a fundamentally wrong approach.
What he was doing was no different from taking the same probe and randomly sticking it inside the computer in search of an understanding of the computer’s software.

He left medicine, shifting his course of study, and started taking cognitive psychology and computer science courses.
He adopted Herbert Simon, the Carnegie Mellon cognitive scientist and AI pioneer, as an across-the-country mentor.
He also became close to Judea Pearl, the UCLA computer science professor who had pioneered an approach to artificial intelligence that broke with the early logic- and rule-based systems, focusing instead on recognizing patterns by building nested webs of probabilities.
This approach is not conceptually far from the neural network ideas so harshly criticized by Minsky and Papert in the 1960s.
As a result, during the 1980s at Stanford, Horvitz was outside the mainstream in computer science research.
Many mainstream AI researchers thought his interest in probability theory was dated, a throwback to an earlier generation of “control theory” methods.
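To give a concrete feel for that probabilistic style, the following sketch shows how a tiny web of probabilities updates belief in a hidden cause when evidence arrives. It is only an illustration; the variables and numbers are hypothetical and are not taken from Pearl’s or Horvitz’s work.

```python
# A minimal, illustrative sketch of probabilistic belief updating:
# infer a hidden cause from noisy evidence using Bayes' rule.
# All variables and numbers here are hypothetical, chosen only to
# show the mechanics, not drawn from any real system.

def posterior(prior, likelihood_true, likelihood_false):
    """P(cause | evidence) from P(cause) and P(evidence | cause)."""
    joint_true = prior * likelihood_true          # P(cause) * P(evidence | cause)
    joint_false = (1 - prior) * likelihood_false  # P(~cause) * P(evidence | ~cause)
    return joint_true / (joint_true + joint_false)

# Hypothetical link: "user is confused" -> "user hovers over the Help menu".
p_confused = 0.10              # prior belief that the user is confused
p_hover_if_confused = 0.70     # evidence is likely when the cause is present
p_hover_if_not_confused = 0.05 # evidence is rare otherwise

print(posterior(p_confused, p_hover_if_confused, p_hover_if_not_confused))
# ~0.61: observing the evidence sharply raises belief in the hidden cause.
```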

After he arrived at Microsoft Research in 1993, Horvitz was given a mandate to build a group to develop AI techniques to improve the company’s commercial products.
Microsoft’s Office Assistant, a.k.a. Clippy, was first introduced in 1997 to help users master hard-to-use software, and it was largely a product of the work of Horvitz’s group at Microsoft Research.
Unfortunately, it became a laughingstock, a notorious failure in human-computer interaction design. It was so widely reviled that Microsoft’s thriller-style promotional video for Office 2010 featured Clippy’s gravestone, dead in 2004 at the age of seven.15

The failure of Clippy offered a unique window into the internal politics at Microsoft.
Horvitz’s research group had pioneered the idea of an intelligent assistant, but Microsoft Research—and hence Horvitz’s group—was at that point almost entirely separate from Microsoft’s product development department.
In 2005, after Microsoft had killed the Office Assistant technology, Steven Sinofsky, the veteran head of Office engineering, described the attitude toward the technology during program development: “The actual feature name used in the product is never what we named it during development—the Office Assistant was famously named TFC during development.
The ‘C’ stood for clown.
I will let your active imagination figure out what the TF stood for.”16
It was clear that the company’s software engineers had no respect for the idea of an intelligent assistant from the outset.
Because Horvitz and his group couldn’t secure enough commitment from the product development group, Clippy fell by the wayside.

The original, more general concept of the intelligent office assistant, which Horvitz’s research group had described in a 1998 paper, was very different from what Microsoft later commercialized.
The final shipping version omitted the software intelligence that would have prevented the assistant from constantly popping up on the screen with friendly advice.
The constant intrusions drove many users to distraction and the feature was irreversibly—perhaps prematurely—rejected by Microsoft’s customers.
However, the company chose not to publicly explain why the features required to make Clippy work well were left out.
A graduate student once asked Horvitz this after a public lecture, and his response was that the features had bloated Office 97 to such an extent that it would no longer fit on its intended distribution disk.17
(Before the Internet offered feature updates, leaving something out was the only practical option.)

Such are the politics of large corporations, but Horvitz would persist.
Today, a helpful personal assistant—who resides inside a computer monitor—greets visitors to his fourth-floor glass-walled corner cubicle.
The monitor is perched on a cart outside his office, and the display shows the cartoon head of someone who looks just like Max Headroom, the star of the British television series about a stuttering artificial intelligence that incorporated the dying memories of Edison Carter, an earnest investigative reporter.
Today Horvitz’s computerized greeter can inform visitors of where he is, set up appointments, or suggest when he’ll next be available.
It tracks almost a dozen aspects of Horvitz’s work life, including his location and how busy he is likely to be at any moment during the day.

Horvitz has remained focused on systems that augment humans.
His researchers design applications that can monitor conversations between a doctor and a patient, or other critical exchanges, offering support to eliminate potentially deadly misunderstandings.
In another application, his research team maintains a book of morbid transcripts from plane crashes to map what can go wrong between pilots and air traffic control towers.
The classic and tragic example of miscommunication between pilots and air traffic control is the Tenerife Airport disaster of 1977, during which two 747 jetliners were navigating a dense fog without ground radar and collided while one was taxiing and the other was taking off, killing 583 people.18
There is a moment in the transcript where two people attempt to speak at the same time, causing interference that renders a portion of the conversation unintelligible.
One goal in the Horvitz lab is to develop ways to avoid these kinds of tragedies.
When developers integrate machine learning and decision-making capabilities into AI systems, Horvitz believes that those systems will be able to reason about human conversations and then make judgments about what part of a problem people are best able to solve and what part should be handled by machines.
The ubiquitous availability of cheap computing and the Internet has made it easier for these systems to show results and gain traction, and there are already several examples of this kind of augmentation on the market today.
As early as 2005, for example, two chess amateurs used a chess-playing software program to win a match against chess experts and individual chess-playing programs.
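The following sketch illustrates, in simplified form, the kind of judgment Horvitz describes: routing each part of a problem to a person or to a machine by comparing the expected cost of an error. The task types, accuracy figures, and costs are invented for illustration; they are not Horvitz’s or Microsoft’s model.

```python
# A hedged sketch of human/machine task allocation: for each subtask,
# pick whichever handler has the lower expected error cost.
# All task types, accuracies, and costs below are hypothetical.

TASKS = {
    # task type: (machine accuracy, human accuracy, cost of an error)
    "transcribe_routine_readback": (0.98, 0.95, 1.0),
    "resolve_garbled_overlap":     (0.60, 0.90, 10.0),
}

def route(task_type):
    machine_acc, human_acc, error_cost = TASKS[task_type]
    machine_risk = (1 - machine_acc) * error_cost  # expected cost if the machine handles it
    human_risk = (1 - human_acc) * error_cost      # expected cost if a person handles it
    return "machine" if machine_risk <= human_risk else "human"

for task in TASKS:
    print(task, "->", route(task))
# transcribe_routine_readback -> machine
# resolve_garbled_overlap -> human
```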

Horvitz is continuing to deepen the human-machine interaction by researching ways to couple machine learning and computerized decision-making with human intelligence.
For example, his researchers have worked closely with the designers of the crowd-sourced citizen science tool called Galaxy Zoo, harnessing armies of human Web surfers to categorize images of galaxies.
Crowd-sourced labor is becoming a significant resource in scientific research: professional scientists can enlist amateurs, who often need to do little more than play elaborate games that exploit human perception, to help map tricky problems like protein folding.19
In a number of documented cases teams of human experts have exceeded the capability of some of the most powerful supercomputers.

By assembling ensembles of humans and machines and designating a specific research task for each group, scientists can create a powerful hybrid research team.
The computers possess staggering image recognition capabilities and they can create tables of the hundreds of visual and analytic features for every galaxy currently observable by the world’s telescopes.
That automated approach was very inexpensive but did not yield perfect results.
In the next version of the program, dubbed Galaxy Zoo 2, computers with machine-learning models would interpret the images of the galaxies in order to present accurate specimens to human classifiers, who could then catalog galaxies with much less effort than they had in the past.
In yet another refinement, the system would add the ability to recognize the particular skills of different human participants and leverage them appropriately.
Galaxy Zoo 2 was able to automatically categorize the problems it faced and knew which people could contribute to solving which problem most effectively.
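The division of labor can be made concrete with a short sketch of such a hybrid pipeline, in which a model’s confident classifications are accepted automatically and uncertain ones are routed to the volunteer with the best track record on that kind of galaxy. The threshold, names, and skill scores are hypothetical; this is an illustration of the idea, not the actual Galaxy Zoo 2 implementation.

```python
# An illustrative sketch of a human-machine hybrid classification pipeline:
# accept confident machine labels automatically, and route uncertain images
# to the volunteer whose past accuracy on that galaxy type is highest.
# Threshold, volunteer names, and skill scores are hypothetical.

CONFIDENCE_THRESHOLD = 0.90

# Hypothetical per-volunteer accuracy by galaxy type, learned from past answers.
volunteer_skill = {
    "alice": {"spiral": 0.97, "elliptical": 0.80, "merger": 0.60},
    "bob":   {"spiral": 0.75, "elliptical": 0.92, "merger": 0.88},
}

def classify(image_id, model_label, model_confidence):
    """Accept the model's label if confident; otherwise pick the best volunteer."""
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return image_id, model_label, "machine"
    best_volunteer = max(volunteer_skill,
                         key=lambda v: volunteer_skill[v].get(model_label, 0.0))
    return image_id, model_label, best_volunteer

print(classify("galaxy-001", "spiral", 0.98))   # confident -> accepted by the machine
print(classify("galaxy-002", "merger", 0.55))   # uncertain -> routed to "bob"
```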

At a TED talk in 2013, Horvitz showed the reaction of a Microsoft intern to her first encounter with his robotic greeter.
He played a clip of the interaction from the point of view of the system, which tracked her face.
The young woman approached the system and, when it told her that Eric was speaking with someone in his office and offered to put her on his calendar, she balked and declined the computer’s offer.
“Wow, this is amazing,” she said under her breath, and then, anxious to end the conversation added, “Nice meeting you!”
This was a good sign, Horvitz concluded, and he suggested that this type of interaction presages a world in which humans and machines are partners.

Conversational systems are gradually slipping into our daily interactions.
Inevitably, the partnerships won’t always develop in the way we have anticipated.
In December of 2013, the movie Her, a love story starring Joaquin Phoenix and the voice of Scarlett Johansson, became a sensation. Her was a science-fiction film set in some unspecified not-far-in-the-future Southern California, and it told the story of a lonely man falling in love with his operating system.
This premise seemed entirely plausible to many people who saw it.
By the end of 2013 millions of people around the globe already had several years of experience with Apple’s Siri, and there was a growing sense that “virtual agents” were making the transition from novelties to the mainstream.

Part of Her is also about the singularity, the idea that machine intelligence is accelerating at such a pace that it will eventually surpass human intelligence and become independent, leaving humans behind.
Both Her and Transcendence, another singularity-obsessed science-fiction movie released the following spring, are most intriguing for the way they portray human-machine relationships.
In Transcendence the human-computer interaction moves from pleasant to dark, and eventually a superintelligent machine destroys human civilization. In Her, ironically, the relationship between the man and his operating system disintegrates as the computer’s intelligence develops so quickly that, not satisfied even with thousands of simultaneous relationships, it transcends humanity and . . . departs.
