Instead he spent his time trying to elaborate and expand on the research he pursued at MIT, research that would bear fruit almost four decades later.
During the 1970s, however, it seemed to present an impossible challenge, and many started to wonder how, or even if, science could come to understand how humans process language.
After spending a half decade on language-related computing, Winograd found himself growing more and more skeptical that real progress in AI would be possible.
Beyond the lack of headway, he rejected artificial intelligence in part because of the influence of a new friendship with a Chilean political refugee named Fernando Flores, and in part because of his recent engagement with a group of Berkeley philosophers, led by Dreyfus, who were intent on stripping away the hype surrounding the newly emerging AI industry.
Flores, a bona fide technocrat who had been finance minister during the Allende government, barely escaped his office in the palace when it was bombed during the coup.
He spent three years in prison before arriving in the United States, his release coming in response to political pressure by Amnesty International.
Stanford had appointed Flores as a visiting scholar in computer science, but he left Palo Alto instead to pursue a Ph.D. at Berkeley under the guidance of a quartet of anti-AI philosophers: Hubert and Stuart Dreyfus, John Searle, and Ann Markussen.

Winograd thought Flores was one of the most impressive intellectuals he had ever met.
“We started talking in a casual way, then he handed me a book on philosophy of science and said, ‘You should read this.’
I read it, and we started talking about it, and we decided to write a paper about it, that turned into a monograph, and that turned into a book.
It was a gradual process of finding him interesting, and finding the stuff we were talking about intellectually stimulating,” Winograd recalled.[15]
The conversations with Flores put the young computer scientist “in touch” with the ways in which he was unhappy with what he thought of as the “ideology” of AI.
Flores aligned himself with the charismatic Werner Erhard, whose cultlike organization EST (Erhard Seminars Training) had a large following in the Bay Area during the 1970s.
(At Stanford Research Institute, Engelbart sent the entire staff of his lab through EST training and joined the board of the organization.)

Although the computing world was tiny at the time, the tensions between McCarthy and Minsky’s AI design approach and Engelbart’s IA approach were palpable around Stanford.
PARC was inventing the personal computer; the Stanford AI Lab was doing research on everything from robot arms to mobile robots to chess-playing AI systems.
At the recently renamed SRI (which dropped “Stanford Research Institute” in response to student antiwar protests), researchers were working on projects that ranged from Engelbart’s NLS system and Shakey the robot to early speech recognition research and “smart” weapons.
Winograd would visit Berkeley for informal lunchtime discussions with Searle and Dreyfus, the Berkeley philosophers, their grad students, and Fernando Flores.
While Hubert Dreyfus objected to the early optimistic predictions by AI researchers, it was John Searle who raised the stakes and asked one of the defining philosophical questions of the twentieth century: Is it possible to build an intelligent machine?

Searle, a dramatic lecturer with a flair for showmanship, was never one to avoid an argument.
Before teaching philosophy he had been a political activist.
While at the University of Wisconsin in the 1950s he had been a member of Students Against Joseph McCarthy, and in 1964 he would become the first tenured Berkeley faculty member to join the Free Speech Movement.
As a young philosopher Searle had been drawn to the interdisciplinary field of cognitive science.
At the time, the core assumption of the field was that the biological mind was analogous to the software that animated machines.
If this was the case, then understanding the processes of human thought would merely be a matter of teasing out the program inside the intertwined billions of neurons making up the human brain.

The Sloan Foundation had sent Searle to Yale to discuss the subject of artificial intelligence.
While on the plane to the meeting he began reading a book about artificial intelligence written by Roger Schank and Robert Abelson, the leading Yale AI researchers during the second half of the 1970s.
Scripts, Plans, Goals, and Understanding[16] asserted that artificial intelligence programs could “understand” stories that had been designed by their developers.
For example, developers could present the computer with a simple story, such as a description of a man going into a restaurant, ordering a hamburger, and then storming out without paying for it.
In response to a query, the program was able to infer that the man had not eaten the hamburger.
“That can’t be right,” Searle thought to himself, “because you could give me a story in Chinese with a whole lot of rules for shuffling the Chinese symbols, and I don’t understand a word of Chinese but all the same I could give the right answer.”[17]
He decided that it simply didn’t follow that the computer understood anything just because it could follow a set of rules.
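Searle’s intuition is easier to see with a concrete example. The sketch below is a deliberately minimal illustration of script-based inference in the spirit of the restaurant script; the event names and the single deviation rule are invented here for illustration, not taken from Schank and Abelson’s actual system:

```python
# Toy script-based "story understanding" in the spirit of Schank and
# Abelson's restaurant script. The event names and the deviation rule
# are invented for illustration; this is not their actual system.

# The canonical restaurant script: expected events, in order.
RESTAURANT_SCRIPT = ["enter", "order", "be_served", "eat", "pay", "leave"]

def answer(story_events, query_event):
    """Did query_event happen, given the events the story mentions?"""
    if query_event in story_events:
        return "yes"
    # How far did the story actually get through the script?
    furthest = max(RESTAURANT_SCRIPT.index(e) for e in story_events)
    # A step the story jumped over is inferred not to have happened.
    if RESTAURANT_SCRIPT.index(query_event) < furthest:
        return "no"
    return "unknown"

# The story Searle read about: the man orders a hamburger, then storms
# out without paying. Did he eat the hamburger?
story = ["enter", "order", "leave"]
print(answer(story, "eat"))  # -> "no", by rule-following alone
```

The program returns the “right answer” by comparing positions in a list; nothing in it knows what a hamburger is.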

On that flight, he came up with what has been called the “Chinese Room” argument against sentient machines.
Searle’s critique was that there could be no simulated “brains in a box.”
His argument was different from the original Dreyfus critique, which asserted that obtaining human-level performance from AI software was impossible.
Searle simply argued that a computing machine is little more than a very fast symbol shuffler that uses a set of syntactical rules.
What it lacks is what the biological mind has—the ability to interpret semantics.
The biological origin of semantics, of meaning itself, remains a great mystery.
Searle’s critique infuriated the AI community, in part because he implied that its claims implicitly aligned it with a theological view that the mind exists outside the physical, biological world.
His argument was that mental processes are caused entirely by biological processes in the brain, where they are realized, and that if you want to make a machine that can think, you must duplicate, rather than simulate, those processes. At the time Searle thought the AI researchers had probably already considered his objection and that the discussion wouldn’t last a week, let alone decades.
But it has.
Searle’s original article generated thirty published refutations.
Three decades later, the debate is anything but settled.
To date, there are several hundred published attacks on his idea.
And Searle is still alive and busy defending his position.
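The argument itself can be rendered in a few lines of code. In the toy “room” below (a crude sketch; the rule book and the Chinese strings are supplied purely for illustration), correct answers come out even though nothing in the program, or in the person executing it, interprets the symbols:

```python
# A toy "Chinese Room": questions are answered by matching symbol strings
# against a rule book. The rules here are invented for illustration.

RULE_BOOK = {
    # question symbols -> answer symbols, opaque to the rule follower
    "他吃了汉堡吗？": "没有。",   # "Did he eat the hamburger?" -> "No."
    "他点了什么？": "汉堡。",     # "What did he order?" -> "A hamburger."
}

def room(question: str) -> str:
    # Pure syntax: compare the shapes of strings, emit the paired string.
    # No step anywhere in this program represents what the symbols mean.
    return RULE_BOOK.get(question, "？")

print(room("他吃了汉堡吗？"))  # prints "没有。" with zero understanding
```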

It is also notable that the lunchtime discussions about the possibility of intelligent and conceivably self-aware machines took place against a backdrop of the Reagan military buildup.
The Vietnam War had ended, but there were still active pockets of political dissent around the country.
The philosophers would meet at the Y across the street from the Berkeley campus.
Winograd and Danny Bobrow from Xerox PARC had become regular visitors at these lunches, and Winograd found that the discussions challenged his intellectual biases about the philosophical underpinnings of AI.

He would eventually give up the AI “faith.”
Winograd concluded that there was nothing mystical about human intelligence.
In principle, if you could discover the way the brain worked, you could build a functional artificially intelligent machine, but you couldn’t build that same machine with symbolic logic and computing, which was the dominant approach in the 1970s and 1980s.
Winograd’s interest in artificial intelligence had been twofold: AI served both as a model for understanding language and the human brain and as a system that could perform useful tasks.
At that point, however, he took an “Engelbartian” turn.
Philosophically and politically, human-centered computing was a better fit with his view of the world.
His intellectual engagement with Flores led to a book, Understanding Computers and Cognition: A New Foundation for Design, a critique of artificial intelligence. Understanding Computers, though, was philosophy, not science, and Winograd still had to figure out what to do with his career.
Eventually, he set aside his effort to build smarter machines and focused instead on the question of how to use computers to make people smarter.
Winograd crossed the chasm.
From designing systems intended to supplant humans, he turned his focus to technologies that enhanced the way people interact with computers.

Though Winograd would argue years later that politics had not directly played a role in his turn away from artificial intelligence, the political climate of the time certainly influenced many other scientists’ decisions to abandon the artificial intelligence camp.
During a crucial period from 1975 to 1985, artificial intelligence research was overwhelmingly funded by the Defense Department.
Some of the nation’s most notable computer scientists—including Winograd—had started to worry about the increasing involvement of the military in computing technology R & D.
For a generation that had grown up watching the movie Dr. Strangelove, the Reagan administration’s Star Wars antimissile program seemed like dangerous brinkmanship.
It was at least a part of Winograd’s moral background and was clearly part of the intellectual backdrop during the time when he decided to leave the field he had helped to create.
Winograd was a self-described “child of the ’60s,”[18] and during the crucial years when he turned away from AI, he simultaneously played a key role in building a national organization of computer scientists, led by researchers at Xerox PARC and Stanford, who had become alarmed at the Star Wars weapons buildup.
The group shared a deep fear that the U.S. military command would push the country into a nuclear confrontation with the Soviet Union.
As a graduate student in Boston, Winograd had been active against the war in Vietnam as part of a group called “Computer People for Peace.”
In 1981 he became active again as a leader in helping create a national organization of computer scientists who opposed nuclear weapons.

In response to the highly technical Strategic Defense Initiative, the disaffected computer scientists believed they could use the weight of their expertise to create a more effective anti–nuclear weapons group.
They evolved from being “people” and became “professionals.”
In 1981, they founded a new organization called Computer Professionals for Social Responsibility.
Winograd ran the first planning meeting, held in a large classroom at Stanford.
Those who attended recalled that, unlike many political meetings of the antiwar era, which were marked by acrimony and debate, the evening was characterized by an unusual sense of unity and common purpose.
Winograd proved an effective political organizer.

In a 1984 essay on the question of whether computer scientists should accept military funding, Winograd pointed out that he had avoided applying for military funding in the past, but by keeping his decision private, he had ducked what he would come to view as a broader responsibility.
He had, of course, received his training in a military-funded laboratory at MIT.
Helping establish Computer Professionals for Social Responsibility was the first of a set of events that would eventually lead Winograd to “desert” the AI community and turn his attention from building intelligent machines to augmenting humans.

Indirectly it was a move that would have a vast impact on the world.
Winograd was recognized enough in the artificial intelligence community that, had he decided to pursue a more typical academic career, he could have built an empire based on his research interests.
Personally, however, he had no interest in building a large research lab or even supporting postdoctoral researchers.
He was passionate about one-to-one interaction with his students.

One of these was Larry Page, a brash young man with a wide range of ideas for possible dissertation topics.
Under Winograd’s guidance Page settled on the idea of downloading the entire Web and improving the way information was organized and discovered.
He set about doing this by mining human knowledge, which was embodied in existing Web hyperlinks.
In 1998, Winograd and Page joined with Sergey Brin, another Stanford graduate student and a close friend of Page’s, and Brin’s faculty advisor, Rajeev Motwani, an expert in data mining, to coauthor a journal article titled “What Can You Do with a Web in Your Pocket?”[19]
In the paper, they described the prototype version of the Google search engine.

Page had been thinking about other, more conventional AI research ideas, like self-driving cars.
Instead, with Winograd’s encouragement, he would find an ingenious way of mining human behavior and intelligence by exploiting the links created by millions of Web users.
He used this information to significantly improve the quality of the results returned by a search engine.
This work would be responsible for the most significant “augmentation” tool in human history.
In September of that year, Page and Brin left Stanford and founded Google, Inc., with the modest goal of “organizing the world’s information and making it universally accessible and useful.”
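The link-mining idea at the heart of that work became the PageRank algorithm: treat each hyperlink as a vote, and score a page by the chance that a “random surfer” following links lands on it. Below is a minimal power-iteration sketch of that idea on a tiny link graph invented for illustration; the published algorithm adds refinements such as handling pages with no outgoing links, and runs at web scale:

```python
# Minimal sketch of the PageRank idea: a page's score is the probability
# that a random surfer, following links and occasionally jumping to a
# random page, ends up there. The three-page graph is invented here.

links = {          # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}          # start uniform
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share                     # pass votes along
        rank = new
    return rank

print(pagerank(links))
# "c", with the most incoming votes, scores highest; that ordering, not
# any understanding of page content, is what improved search results.
```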
