By the end of the 1990s Winograd believed that the artificial intelligence and human-computer interaction research communities represented fundamentally different philosophies about how computers and humans should interact.
The easy solution, he argued, would be to agree that both camps were equally “right” and to stipulate that there will obviously be problems in the world that could be solved by either approach.
This answer, however, would obscure the fact that inherent in these differing approaches are design consequences that play out in the nature of the systems.
Adherents of the different philosophies, of course, construct these systems.
Winograd had come to believe that the way computerized systems are designed has consequences both for how we understand humans and for how technologies are designed to benefit them.

The AI approach, which Winograd describes as “rationalistic,” views people as machines.
Humans are modeled with internal mechanisms very much like digital computers.
“The key assumptions of the rationalistic approach are that the essential aspects of thought can be captured in a formal symbolic representation,” he wrote.
“Armed with this logic, we can create intelligent programs and we can design systems that optimize human interaction.”20
In opposition to the rational AI approach was the augmentation method that Winograd describes as “design.”
That approach is more common in the human-computer interaction community, in which developers focus not on modeling a single human intelligence, but rather on using the relationship between the human and the environment as the starting point for their investigations, be it with humans or an ensemble of machines.
Described as “human-centered” design, this school of thought eschews formal planning in favor of an iterative approach to design, encapsulated well in the words of industrial designer and IDEO founder David Kelley: “Enlightened trial and error outperforms the planning of flawless intellect.”21
Pioneered by psychologists and computer scientists like Donald Norman at the University of California at San Diego and Ben Shneiderman at the University of Maryland, human-centered design would become an increasingly popular approach that veered away from the rationalist AI model that was popularized in the 1980s.

In the wake of the AI Winter of the 1980s, the artificial intelligence community also changed dramatically during the 1990s.
It largely abandoned its original formal, rationalist, top-down straitjacket that had been described as GOFAI, or “Good Old-Fashioned Artificial Intelligence,” in favor of statistical and “bottom-up,” or “constructivist,” approaches, such as those pursued by roboticists led by Rod Brooks.
Nevertheless, the two communities have remained distant, preoccupied with their contradictory challenges of either replacing or augmenting human skills.

In breaking with the AI community, Winograd became a member of a group of scientists and engineers who took a step back and rethought the relationship between humans and the smart tools they were building.
In doing so, he also reframed the concept of “machine” intelligence.
By posing the question of whether humans were actually “thinking machines” in the same manner as the computing machines that the AI researchers were trying to create, he argued that the very question makes us engage—wittingly or not—in an act of projection that tells us more about our concept of human intelligence than it does about the machines we are trying to understand.
Winograd came to believe that intelligence is an artifact of our social nature, and that we flatten our humanness when we simplify and distort what it is to be human so that it can be simulated by a machine.

While artificial intelligence researchers rarely spoke to the human-centered design researchers, the two groups would occasionally organize confrontational sessions at technical conferences.
In the 1990s, Ben Shneiderman was a University of Maryland computer scientist who had become a passionate advocate of the idea of human-centered design through what became known as “direct manipulation.”
During the 1980s, with the advent of Apple’s Macintosh and Microsoft’s Windows software systems, direct manipulation had become the dominant style in computer user interfaces.
For example, rather than entering commands on a keyboard, users could change the shape of an image displayed on a computer screen by grabbing its edges or corners with a mouse and dragging them.

Shneiderman was at the top of his game and, during the 1990s, he was a regular consultant at companies like Apple, where he dispensed advice on how to efficiently design computer interfaces.
Shneiderman, who considered himself an opponent of AI, counted Marshall McLuhan among his influences.
During college, after attending a McLuhan lecture at the Ninety-Second Street Y in New York City, he had felt emboldened to pursue his own various interests, which crossed the boundaries between science and the humanities.
He went home and printed a business card describing his job title as “General Eclectic” and subtitled it “Progress is not our most important product.”22

He would come to take pride in the fact that Terry Winograd had moved from the AI camp to the HCI world.
Shneiderman sharply disagreed with Winograd’s thesis when he read it in the 1970s and had written a critical chapter about SHRDLU in his 1980 book Software Psychology.
Some years later, when Winograd and Flores published Understanding Computers and Cognition, which made the point that computers were unable to “understand” human language, he called Winograd up and told him, “You were my enemy, but I see you’ve changed.”
Winograd laughed and told Shneiderman that Software Psychology was required reading in his classes.
The two men became good friends.

In his lectures and writing, Shneiderman didn’t mince words in his attacks on the AI world.
He argued not only that the AI technologies would fail, but also that they were poorly designed and ethically compromised because they were not designed to help humans.
With great enthusiasm, he argued that autonomous systems raised profound moral issues related to who was responsible for the actions of the systems, issues that weren’t being addressed by computer researchers.
This fervor wasn’t new for Shneiderman, who had previously been involved in legendary shouting matches at technical meetings over the wisdom of designing animated human agents like Microsoft’s Clippy, the Office assistant, and Bob, two ill-received attempts Microsoft made to design more “friendly” user interfaces.

In the early 1990s anthropomorphic interfaces had become something of a fad in computer design circles.
Inspired in part by Apple’s widely viewed Knowledge Navigator video, computer interface designers were adding helpful and chatty animated cartoon figures to systems.
Banks were experimenting with animated characters that would interact with customers from the displays of automated teller machines, and car manufacturers started to design cars with speech synthesis that would, for example, warn drivers when their door was ajar.
The initial infatuation would come to an abrupt halt, however, with the embarrassing failure of Microsoft Bob.
Although it had been designed with the aid of Stanford University user interface specialists, the program was widely derided as a goofy idea.

Did the problem with Microsoft Bob lie with the idea of a “social” interface itself, or instead with the way it was implemented?
Microsoft’s bumbling efforts were rooted in the work of Stanford researchers Clifford Nass and Byron Reeves, who had discovered that humans responded well to computer interfaces that offered the illusion of human interaction.
The two researchers arrived at the Stanford Communications Department simultaneously in 1986.
Reeves had been a professor of communications at the University of Wisconsin, and Nass had studied mathematics at Princeton and worked at IBM and Intel before turning his interests toward sociology.

As a social scientist Nass worked with Reeves to conduct a series of experiments that led to a theory of communications they described as “the Media Equation.”
In their book, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, they explored what they saw as the human desire to interact with technological devices—computers, televisions, and other electronic media—in the same “social” fashion with which they interacted with other humans.
After writing The Media Equation, Reeves and Nass were hired as consultants for Microsoft in 1992 and encouraged the design of familiar social and natural interfaces.
This extended the thinking underlying Apple’s graphical interface for the Macintosh, which, like Windows, had been inspired by the original work done on the Alto at Xerox PARC.
Both were designs that attempted to ease the task of using a computer by creating a graphical environment that was evocative of a desk and office environment in the physical world.
However, Microsoft Bob, which attempted to extend the “desktop” metaphor by creating a graphical computer environment that evoked the family home, adopted a cartoonish and dumbed-down approach that the computer digerati found insulting to users, and the customer base overwhelmingly rejected it.

Decades later the success of Apple’s Siri has vindicated Nass and Reeves’s early research, suggesting that the failure of Microsoft Bob lay in how Microsoft built and applied the system rather than in the approach itself.
Siri speeds people up in contexts where keyboard input might be difficult or unsafe, such as while walking or driving.
Both Microsoft Bob and Clippy, on the other hand, slowed down user engagement with the program and came across as overly simplistic and condescending to users: “as if they were being asked to learn to ride a bicycle by starting with a tricycle,” according to Tandy Trower, a veteran Microsoft executive.23
That said, Trower pointed out that Microsoft may have fundamentally misunderstood the insights offered by the Stanford social scientists: “Nass and Reeves’ research suggests that user expectations of human-like behavior are raised as characters become more human,” he wrote.
“This Einstein character sneezed when you asked it to exit.
While no users were ever sprayed upon by the character’s departure, if you study Nass and Reeves, this is considered to be socially inappropriate and rude behavior.
It doesn’t matter that they are just silly little animations on the screen; most people still respond negatively to such behavior.”24

Software agents had originally emerged during the first years of the artificial intelligence era when Oliver Selfridge and his student Marvin Minsky, both participants in the original Dartmouth AI conference, proposed an approach to machine perception called “Pandemonium,” in which collaborative programs called “demons,” described as “intelligent agents,” would work in parallel on a computer vision problem.
The original software agents were merely programs that ran inside a computer.
Over two decades computer scientists, science-fiction authors, and filmmakers embellished the idea.
As it evolved, it became a powerful vision of an interconnected, computerized world in which software programs cooperated in pursuit of a common goal.
These programs would collect information, perform tasks, and interact with users as animated servants.
But was there not a Faustian side to this?
Shneiderman worried that leaving computers to complete human tasks would create more problems than it solved.
This concern was at the core of his attack on the AI designers.

Before their first debate began, Shneiderman tried to defuse the tension by handing Pattie Maes, who had recently become a mother, a teddy bear.
At two technical meetings in 1997 he squared off against Maes over AI and software agents.
Maes was a computer scientist at the MIT Media Lab who, under the guidance of laboratory founder Nicholas Negroponte, had started developing software agents to perform useful tasks on behalf of a computer user.
The idea of agents was just one of many future-of-computing ideas pursued at Negroponte’s laboratory, which started out as ArcMac, the Architecture Machine Group, and groomed multiple generations of researchers who took the lab’s “demo or die” ethos to heart.
His original ArcMac research group and its follow-on, the MIT Media Laboratory, played a significant role in generating many of the ideas that would filter into computing products at both Apple and Microsoft.

In the 1960s and 1970s, Negroponte, who had trained as an architect, traced a path from the concept of a human-machine design partnership to a then far-out vision of “architecture without architects” in his books The Architecture Machine and Soft Architecture Machines.

In his 1995 book Being Digital, Negroponte, a close friend to AI researchers like Minsky and Papert, described his view of the future of human-computer interaction: “What we today call ‘agent-based interfaces’ will emerge as the dominant means by which computers and people talk with one another.”25
In 1995, Maes founded Agents, Inc., a music recommendation service, with a small group of Media Lab partners.
Eventually the company would be sold to Microsoft, which used the privacy technologies her company had developed but did not commercialize its original software agent ideas.
