Read Machines of Loving Grace Online

Authors: John Markoff

Machines of Loving Grace (28 page)

BOOK: Machines of Loving Grace
13.76Mb size Format: txt, pdf, ePub

At first the conference organizers had wanted Shneiderman and Maes to debate the possibility of artificial intelligence.
Shneiderman declined and the topic was changed.
The two researchers agreed to debate the contrasting virtues of software agents that acted on a user’s behalf, on the one hand, and software technologies that directly empowered a computer user, on the other.

The high-profile debate took place in March of 1997 at the Association for Computing Machinery’s Computer-Human Interaction (CHI) Conference in Atlanta.
The event was given top billing along with other questions of pressing concern like, “Why Aren’t We All Flying Personal Helicopters?”
and “The Only Good Computer Is an Invisible Computer?”
In front of an audience of the world’s best computer interface designers, the two computer scientists spent an hour laying out the pros and cons of designs that directly augment humans and those that work more or less independently of them.

“I believe the language of ‘intelligent autonomous agents’ undermines human responsibility,” Shneiderman said.
“I can show you numerous articles in the popular press which suggest the computer is the active and responsible party.
We need to clarify that either programmers or operators are the cause of computer failures.”26

Maes responded pragmatically.
Shneiderman’s research was in the Engelbart tradition of building complex systems to give users immense power, and as a result they required significant training.
“I believe that there are real limits to what we can do with visualization and direct manipulation because our computer environments are becoming more and more complex,” she responded.
“We cannot just add more and more sliders and buttons.
Also, there are limitations because the users are not computer-trained.
So, I believe that we will have to, to some extent, delegate certain tasks or certain parts of tasks to agents that can act on our behalf or that can at least make suggestions to us.”27

Perhaps Maes’s most effective retort was that it might be wrong to believe that humans always wanted to be in control and to be responsible.
“I believe that users sometimes want to be couch-potatoes and wait for an agent to suggest a movie for them to look at, rather than using 4,000 sliders, or however many it is, to come up with a movie that they may want to see,” she argued.
Things politely concluded with no obvious winner, but it was clear to Jonathan Grudin, who was watching from the audience, that Pattie Maes had been brave to debate this at a CHI conference, on Shneiderman’s home turf.
The debate took place a decade and a half before Apple unveiled Siri, which successfully added an entirely artificial human element to human-computer interaction.
Years later Shneiderman would acknowledge that there were some cases in which using speech and voice recognition might be appropriate.
He did, however, remain a staunch critic of the basic idea of software agents, and pointed out that aircraft cockpit designers had for decades tried and failed to use speech recognition to control airplanes.

When Siri was introduced in 2010, the “Internet of Things” was approaching the peak in the hype cycle.
This had originally been Xerox PARC’s next big idea after personal computing.
In the late 1980s PARC computer scientist Mark Weiser had predicted that as microprocessor cost, size, and power collapsed, it would be possible to discreetly integrate computer intelligence into everyday objects.
He called this “UbiComp” or ubiquitous computing.
Computing would disappear into the woodwork, he argued, just as electric motors, pulleys, and belts are now “invisible.”
Outside Weiser’s office was a small sign: UBICOMP IS UPWARDLY COMPATIBLE WITH REALITY.
(A popular definition of “ubiquitous” is “notable only for its absence.”)

It would be Steve Jobs who once again most successfully took advantage of PARC’s research results.
In the 1980s he had borrowed the original desktop computing metaphor from PARC to design the Lisa and then the Macintosh computers.
Then, a little more than a decade later, he would be the first to successfully translate Xerox’s ubiquitous computing concept for a broad consumer audience.
The iPod, first released in October of 2001, was a music player reconceptualized for the ubiquitous computing world, and the iPhone was a digital transformation of the telephone.
Jobs also understood that while Clippy and Bob were tone deaf on the desktop, on a mobile phone, a simulated human assistant made complete sense.

Shneiderman, however, continued to believe that he had won the debate handily and that the issue of software agents had been put to bed.

6 | COLLABORATION

In a restored brick building in Boston, a humanoid figure turned its head.
The robot was no more than an assemblage of plastic, motors, and wires, all topped by a movable flat LCD screen with a cartoon visage of eyes and eyebrows.
Yet the distinctive motion elicited a small shock of recognition and empathy from an approaching human.
Even in a stack of electronic boxes, sensors, and wires, the human mind has an uncanny ability to recognize the human form.

Meet Baxter, a robot designed to work alongside human workers that was unveiled with some fanfare in the fall of 2012.
Baxter is relatively ponderous and not particularly dexterous.
Instead of moving around on wheels or legs, it sits in one place on an inflexible fixed stand.
Its hands are pincers capable of delicately picking up objects and putting them down.
It is capable of little else.
Despite its limitations, however, Baxter represents a new chapter in robotics.
It is one of the first examples of Andy Rubin’s credo that personal computers are sprouting legs and beginning to move around in the environment.
Baxter is the progeny of Rodney Brooks, whose path to building helper robots traces directly from the founders of artificial intelligence.

McCarthy and Minsky went their separate ways in 1962, but the Stanford AI Laboratory where McCarthy settled attracted a ragtag crowd of hackers, a mirror image of the original MIT AI Lab remaining under Minsky’s guidance.
In 1969 the two labs were electronically linked via the ARPAnet, a precursor of the modern Internet, thus making it simple for researchers to share information.
It was the height of the Vietnam War and artificial intelligence and robotics were heavily funded by the military, but the SAIL ethos was closer to the countercultural style of San Francisco’s Fillmore Auditorium than it was to the Pentagon on the Potomac.

Hans Moravec, an eccentric young graduate student, was camping in the attic of SAIL, while working on the Stanford Cart, an early four-wheeled mobile robot.
A sauna had been installed in the basement, and psychodrama groups shared the lab space in the evenings.
Available computer terminals displayed the message “Take me, I’m yours.”
“The Prancing Pony”—a fictional wayfarer’s inn in Tolkien’s Lord of the Rings—was a mainframe-connected vending machine selling food suitable for discerning hackers.
Visitors were greeted in a small lobby decorated with an ungainly “You Are Here” mural echoing the famous Saul Steinberg New Yorker cover depicting a relativistic view of the most important place in the United States.
The SAIL map was based on a simple view of the laboratory and the Stanford campus, but lots of people had added their own perspectives to the map, ranging from placing the visitor at the center of the human brain to placing the laboratory near an obscure star somewhere out on the arm of an average-sized spiral galaxy.

It provided a captivating welcome for Rodney Brooks, another new Stanford graduate student.
A math prodigy from Adelaide, Australia, raised by working-class parents, Brooks had grown up far from the can-do hacker culture in the United States.
However, in 1969—along with millions of others around the world—he saw Kubrick’s 2001: A Space Odyssey.
Like Jerry Kaplan, Brooks was not inspired to train to be an astronaut.
He was instead seduced by HAL, the paranoid (or perhaps justifiably suspicious) AI.

Brooks puzzled over how he might create his own AI, and on arriving at college he had his first opportunity.
On Sundays he had solo access to the school’s mainframe for the entire day.
There, he created his own AI-oriented programming language and designed an interactive interface on the mainframe display.1
Brooks then went on to write theorem proofs, thus unwittingly working in the formal, McCarthy-inspired artificial intelligence tradition.
Building an artificial intelligence was what he wanted to do with his life.

Looking at a map of the United States, he concluded that Stanford was the closest university to Australia with an artificial intelligence graduate program and promptly applied.
To his surprise, he was admitted.
By the time of his arrival in the fall of 1977, the pulsating world of antiwar politics and the counterculture was beginning to wane in the Bay Area.
Engelbart’s group at SRI had been spun off, with his NLS system augmentation technology going to a corporate time-sharing company.
Personal computing, however, was just beginning to turn heads—and souls—on the Midpeninsula.
This was the heyday of the Homebrew Computer Club, which held its first meeting in March of 1975, the very same week the new Xerox PARC building opened.
In his usual inclusive spirit McCarthy had invited the club to meet at his Stanford laboratory, but he remained skeptical about the idea of “personal computing.”
McCarthy had been instrumental in pioneering the use of mainframe computers as shared resources, and in his mental calculus it was wasteful to own an underpowered computer that would sit idle most of the time.
Indeed, McCarthy’s time-sharing ideas had developed from this desire to use computing systems more efficiently while conducting AI research.
Perhaps in a display of wry humor, he placed a small note in the second Homebrew newsletter suggesting the formation of the “Bay Area Home Terminal Club,” chartered to provide shared access on a Digital Equipment Corp. VAX mainframe computer.
He thought that seventy-five dollars a month, not including terminal hardware and communications connectivity costs, might be a reasonable fee.
He later described PARC’s Alto/Dynabook design prototype—the template for all future personal computers—as “Xerox Heresies.”

Alan Kay, who would become one of the main heretics, passed through SAIL briefly during his time teaching at Stanford.
He was already carrying his “interim” Dynabook around and happily showing it off: a wooden facsimile preceding laptops by more than a decade.
Kay hated his time in McCarthy’s lab.
He had a very different view of the role of computing, and his tenure at SAIL felt like working in the enemy’s camp.

Alan Kay had first envisioned the idea of personal computing while he was a graduate student under Ivan Sutherland at the University of Utah.
Kay had seen Engelbart speak when the SRI researcher toured the country giving demonstrations of NLS, the software environment that presaged the modern desktop PC windows-and-mouse environment.
Kay was deeply influenced by Engelbart and NLS, and the latter’s emphasis on boosting the productivity of small groups of collaborators—be they scientists, researchers, engineers, or hackers.
He would take Engelbart’s ideas a step further.
Kay would reinvent the book for an interactive age.
He wrote about the possibilities of “Personal Dynamic Media,” inspiring the look and feel of the portable computers and tablets we use today.
Kay believed personal computers would become a new universal medium, as ubiquitous as the printed page was in the 1960s and 1970s.

Like Engelbart’s, Kay’s views were radically different from those held by McCarthy’s researchers at SAIL.
The labs were not antithetical to each other, but there was a significant difference in emphasis.
Kay, like Engelbart, put the human user at the center of his design.
He wanted to build technologies to extend the intellectual reach of humans.
He did, however, differ from Engelbart in his conception of cyberspace.
Engelbart thought the intellectual relation between humans and information could be compared to driving a car; computer users would sail along an information highway.
In contrast, Kay had internalized McLuhan’s insight that “the medium is the message.”
Computing, he foresaw, would become a universal, overarching medium that would subsume speech, music, text, video, and communications.

Neither of those visions found traction at SAIL.
Les Earnest, brought to SAIL by ARPA officials in 1965 to provide management skills that McCarthy lacked, has written that many of the computing technologies celebrated as coming out of SRI and PARC were simultaneously designed at SAIL.
The difference was one of philosophy.
SAIL’s mission statement had originally been to build a working artificial intelligence in the span of a decade—perhaps a robot that could match wits with a human while physically exceeding their strength, speed, and dexterity.
Generations of SAIL researchers would work toward systems supplanting rather than supplementing humans.

When Rod Brooks arrived at Stanford in the fall of 1977, McCarthy was already three years overdue on his ten-year goal for creating a working AI.
It had also been two years since Hans Moravec fired his first broadside at McCarthy, arguing that exponentially growing computing power was the baseline ingredient to consider in artificial intelligence systems development.
Brooks, whose Australian outsider’s sensibility offered him a different perspective into the goings-on at Stanford, would become Moravec’s night-shift assistant.
Both had their quirks.
Moravec was living at SAIL around the clock and counted on friends to bring him groceries.
Brooks, too, quickly adopted the countercultural style of the era.
He had shoulder-length hair and experimented with a hacker lifestyle: he worked a “28-hour day,” which meant that he kept a 20-hour work-cycle, followed by 8 hours of sleep.
The core thrust of Brooks’s Ph.D. thesis, on symbolic reasoning about visual objects, followed in the footsteps of McCarthy.
Beyond that, however, the Australian was able to pioneer the use of geometric reasoning in extracting a third dimension using only a single-lens camera.
In the end, Brooks’s long nights with Moravec seeded his disaffection and break with the GOFAI tradition.

As Moravec’s sidekick, Brooks would also spend a good deal of time working on the Stanford Cart.
In the mid-1970s, the mobile robot’s image recognition system took far too long to process its surroundings for anything deserving the name “real-time.”
The Cart took anywhere from a quarter of an hour to four hours to compute the next stage of its assigned journey, depending on the mainframe computer load.
After it processed one image, it would lurch forward a short distance and resume scanning.2
When the robot operated outdoors, it had even greater difficulty moving by itself.
It turned out that moving shadows confused the vision recognition software of the robot.
The complexity in moving shadows was an entrancing discovery for Brooks.
He was aware of early experiments by W. Grey Walter, a British-American neurophysiologist credited with the design of the first simple electronic autonomous robots in 1948 and 1949, intended to demonstrate how the interconnections in small collections of brain cells might cause autonomous behavior.
Grey Walter had built several robotic “tortoises” that used a scanning phototube “eye” and a simple circuit controlling motors and wheels to exhibit “lifelike” movement.

While Moravec considered simple robots the baseline for his model of the evolution of artificial intelligence, Brooks wasn’t convinced.
In Britain in the early fifties, Grey Walter had built surprisingly intelligent robots—a species zoologically named Machina speculatrix—costing a mere handful of British pounds.
Now more than two decades later, “A robot relying on millions of dollars of equipment did not operate nearly as well,” Brooks observed.
He noticed that many U.S. developers used Moravec’s sophisticated algorithms, but he wondered what they were using them for.
“Were the internal models truly useless, or were they a down payment on better performance in future generations of the Cart?”3

After receiving his Ph.D. in 1981, Brooks left McCarthy’s “logic palace” for MIT.
Here, in effect, he would turn the telescope around and peer through it from the other end.
Brooks fleshed out his “bottom-up” approach to robotics in 1986.
If the computing requirements for modeling human intelligence dwarfed the limits of human-engineered computers, he reasoned, why not build intelligent behavior as ensembles of simple behaviors that would eventually scale into more powerful symphonies of computing in robots as well as other AI applications?
He argued that if AI researchers ever wanted to realize their goal of mimicking biological intelligence, they should start at the lowest level by building artificial insects.
The approach precipitated a break with McCarthy and fomented a new wave in AI: Brooks argued in favor of a design that mimicked the simplest biological systems, rather than attempting to match the capability of humans.
Since that time the bottom-up view has gradually come to dominate the world of artificial intelligence, ranging from Minsky’s The Society of Mind to the more recent work of electrical engineers such as Jeff Hawkins and Ray Kurzweil, who both have declared that the path to human-level AI is to be found by aggregating the simple algorithms they see underlying cognition in the human brain.

