Machines of Loving Grace
John Markoff

The earlier work on Dendral had led to a cascade of similar systems.
Mycin, also produced at Stanford, was based on an “inference engine” that did if/then–style logic and a “knowledge base” of roughly six hundred rules to reason about blood infections.
At the University of Pittsburgh during the 1970s a program called Internist-I was another early effort to tackle the challenge of disease diagnosis and therapy.
In 1977 at SRI, Peter Hart, who began his career in artificial intelligence working on Shakey the robot, and Richard Duda, another pioneering artificial intelligence researcher, built Prospector to aid in the discovery of mineral deposits.
That work would eventually get CBS’s overheated attention.
In the midst of all of this, in 1982, Japan announced its Fifth Generation Computer program.
Heavily focused on artificial intelligence, it added an air of competition and inevitability to the AI boom that would lead to a market in which newly minted Ph.D.s could command unheard-of $30,000 annual salaries right out of school.
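
To make the architecture concrete: an “inference engine” of the Mycin sort repeatedly fires if/then rules against a growing pool of facts until nothing new can be concluded. What follows is a minimal sketch of that rule-firing loop in Python; Mycin itself was written in Lisp, worked backward from diagnostic goals, and weighed its conclusions with certainty factors, and the two toy rules here are invented for the example, not drawn from its six hundred.

    # A toy rule engine in the spirit of Mycin's "inference engine" plus
    # "knowledge base" split. Illustrative only: the rules and facts
    # below are invented, not Mycin's.

    # Knowledge base: each rule pairs a set of "if" conditions
    # with a single "then" conclusion.
    RULES = [
        ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides"),
        ({"gram_positive", "grows_in_chains"}, "streptococcus"),
    ]

    def infer(facts):
        """Fire every rule whose conditions are all satisfied, adding its
        conclusion to the fact pool, until a full pass adds nothing new."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return known - set(facts)  # only the newly derived conclusions

    print(infer({"gram_negative", "rod_shaped", "anaerobic"}))
    # prints: {'bacteroides'}

The loop above chains forward from facts to conclusions because that is the simplest way to show the mechanics; the real systems of the era layered on question-asking dialogue, uncertainty handling, and explanations of their own reasoning.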

The genie was definitely out of the bottle.
Developing expert systems was becoming a discipline called “knowledge engineering”—the idea was that you could package the expertise of a scientist, an engineer, or a manager and apply it to the data of an enterprise.
The computer would effectively become an oracle.
In principle that technology could be used to augment a human, but software enterprises in the 1980s would sell it into corporations based on the promise of cost savings.
As a productivity tool its purpose was as often as not to displace workers.

Breiner looked around for industries where it might be easy to package the knowledge of human experts and quickly settled on commercial lending and insurance underwriting.
At the time there was no widespread alarm about automation and he didn’t see the problem framed in those terms.
The computing world was broken down into increasingly inexpensive personal computers and more costly “workstations,” generally souped-up machines for computer-aided design applications.
Two companies, Symbolics and Lisp Machines, Inc., spun directly out of the MIT AI Lab to focus on specialized computers running the Lisp programming language, designed for building AI applications.

Breiner founded his own start-up, Syntelligence.
Along with Teknowledge and Intellicorp, it would become one of the three high-profile artificial intelligence companies in Silicon Valley in the 1980s.
He went shopping for artificial intelligence talent and hired Hart and Duda from SRI.
The company created its own programming language, Syntel, which ran on an advanced workstation used by the company’s software engineers.
It also built two programs, Underwriting Advisor and Lending Advisor, which were intended for use on IBM PCs.
He positioned the company as an information utility rather than as an artificial intelligence software publisher.
“In every organization there is usually one person who is really good, who everybody calls for advice,” he told a New York Times reporter writing about the emergence of commercial expert systems.
“He is usually promoted, so that he does not use his expertise anymore.
We are trying to protect that expertise if that person quits, dies or retires and to disseminate it to a lot of other people.”
The article, about the ability to codify human reasoning, ran on the paper’s front page in 1984.[37]

When marketing his loan expert and insurance expert software packages, Breiner demonstrated dramatic, continuing cost savings for customers.
The idea of automating human expertise was compelling enough that he was able to secure preorders from banks and insurance companies and investments from venture capital firms.
AIG, St. Paul, and Fireman’s Fund as well as Wells Fargo and Wachovia advanced $6 million for the software.
Breiner stuck with the project for almost a half decade, ultimately growing the company to more than a hundred employees and pushing revenues to $10 million annually.
The problem was that this wasn’t fast enough for his investors. In 1983 the five-year projection had been $50 million in annual revenue. When the commercial market for artificial intelligence software failed to materialize quickly enough, he struggled inside the company, most bitterly with board member Pierre Lamond, a venture capitalist and semiconductor-industry veteran with no software experience. Ultimately Breiner lost his battle, and Lamond brought in an outside corporate manager who moved the company headquarters to Texas, where the manager lived.

Syntelligence itself would confront directly what would become known as the “AI Winter.”
One by one the artificial intelligence firms of the early 1980s were eclipsed either because they failed financially or because they returned to their roots as experimental efforts or consulting companies.
The market failure became an enduring narrative that came to define artificial intelligence: a repeated cycle of hype fueled by overly ambitious scientific claims, inevitably followed by performance and market disappointments.
A generation of true believers, steeped in the technocratic and optimistic artificial intelligence literature of the 1960s, clearly played an early part in the collapse.
Since then the same boom-and-bust cycle has continued for decades, even as AI has advanced.[38]
Today the cycle is likely to repeat itself again as a new wave of artificial intelligence technologies is being heralded by some as being on the cusp of offering “thinking machines.”

The first AI Winter had actually come a decade earlier in Europe.
Sir Michael James Lighthill, a British applied mathematician, led a study in 1973 that excoriated the field for not delivering on its promises and predictions, such as the early SAIL prediction of a working artificial intelligence within a decade.
Although it had little impact in the United States, the Lighthill report, “Artificial Intelligence: A General Survey,” led to the curtailment of funding in England and a dispersal of British researchers from the field.
As a footnote to the report, the BBC arranged a televised debate on the future of AI in which the targets of Lighthill’s criticism were given a forum to respond.
John McCarthy was flown in for the event but was unable to offer a convincing defense of his field.

A decade later a second AI Winter would descend in the United States, beginning in 1984, when Breiner managed to push Syntelligence sales to $10 million before departing.
There had been warnings of “irrational exuberance” for several years; Roger Schank and Marvin Minsky raised the issue early on at a technical conference, claiming that emerging commercial expert systems contained no significant technical advances over work that had begun two decades earlier.[39]
The year 1984 was also when Doug Engelbart’s and Alan Kay’s augmentation ideas dramatically came within the reach of every office worker.
Needing a marketing analogy to frame the value of the personal computer with the launch of the Macintosh, Steve Jobs hit on the perfect metaphor for the PC.
It was a “bicycle for our minds.”

Pushed out of the company he had founded, Breiner went on to his next venture, a start-up company designing software for Apple’s Macintosh.
From the 1970s through the 1980s it was a path followed by many of Silicon Valley’s best and brightest.

Beginning in the 1960s, the work that had been conducted quietly at the MIT and Stanford artificial intelligence laboratories and at the Stanford Research Institute began to trickle out into the rest of the world.
The popular worldview of robotics and artificial intelligence had originally been given form by literary works—the mythology of the Prague Golem, Mary Shelley’s Frankenstein, and Karel Čapek’s pathbreaking R.U.R. (Rossum’s Universal Robots)—all posing fundamental questions about the impact of robotics on human life.
However, as America prepared to send humans to the moon, a wave of technology-rich and generally optimistic science fiction appeared from writers like Isaac Asimov, Robert Heinlein, and Arthur C. Clarke.
HAL, the run-amok sentient computer in Clarke’s 2001: A Space Odyssey, not only had a deep impact on popular culture, it changed people’s lives.
Even before he began as a graduate student in computer science at the University of Pennsylvania, Jerry Kaplan knew what he planned to do.
The film version of 2001 was released in the spring of 1968, and over the summer Kaplan watched it six times.
With two of his friends he went back again and again and again.
One of his friends said, “I’m going to make movies.”
And he did—he became a Hollywood director.
The other friend became a dentist, and Kaplan went into AI.

“I’m going to build that,” he told his friends, referring to HAL.
Like Breiner, he would become instrumental as part of the first generation to attempt to commercialize AI, and also like Breiner, when that effort ran aground in the AI Winter, he would turn to technologies that augmented humans instead.

As a graduate student Kaplan had read Terry Winograd’s SHRDLU tour de force on interacting with computers via natural language.
It gave him a hint about what was possible in the world of AI as well as a path toward making it happen.
Like many aspiring computer scientists at the time, he would focus on understanding natural language.
A math whiz, he was one of a new breed of computer nerds who weren’t just pocket-protector-clad geeks, but who had a much broader sense of the world.

After he graduated with a degree in the philosophy of science from the University of Chicago, he followed a girlfriend to Philadelphia.
An uncle hired him to work in the warehouse of his wholesale pharmaceuticals business while grooming him to one day take over the enterprise.
Dismayed by the claustrophobic family business, he soon desperately needed to do something different, and he remembered both a programming class he had taken at Chicago and his obsession with 2001: A Space Odyssey.
He enrolled as a graduate student in computer science at the University of Pennsylvania.
Once there he studied with Aravind Krishna Joshi, an early specialist in computational linguistics.
Even though he had come in with a liberal arts background he quickly became a star.
He went through the program in five years, getting perfect scores in all of his classes and writing his graduate thesis on the subject of building natural language front ends to databases.

As a newly minted Ph.D., Kaplan gave job audition lectures at Stanford and MIT, visited SRI, and spent an entire week being interviewed at Bell Labs.
Both the telecommunications and computer industries were hungry for computer science Ph.D.s, and on his first visit to Bell Labs he was informed that the prestigious lab had a target of hiring 250 Ph.D.s and had no intention of hiring below average.
Kaplan couldn’t help pointing out that 250 was more than the entire number of Ph.D.s that the United States would produce that year.
He picked Stanford, after Ed Feigenbaum had recruited him as a research associate in the Knowledge Engineering Laboratory.
Stanford was not as intellectually rigorous as Penn, but it was a technological paradise.
Silicon Valley had already been named, the semiconductor industry was under assault from Japan, and Apple Computer was the nation’s fastest-growing company.

There was free food at corporate and academic events every evening and no shortage of “womanizing” opportunities.
He bought a home in Los Trancos Woods several miles from Stanford, near SAIL, which was just in the process of moving from the foothills down to a new home on the central Stanford campus.

When he arrived at Stanford in 1979 the first golden age of AI was in full swing—graduate students like Douglas Hofstadter, the author of Gödel, Escher, Bach: An Eternal Golden Braid; Rodney Brooks; and David Shaw, who would later take AI techniques and transform them into a multibillion-dollar hedge fund on Wall Street, were all still around.
The commercial forces that would lead to the first wave of AI companies like Intellicorp, Syntelligence, and Teknowledge were now taking shape.
While Penn had been like an ivory tower, the walls between academia and the commercial world were coming down at Stanford.
There was wheeling and dealing and start-up fever everywhere.
Kaplan’s officemate, Curt Widdoes, would soon take the software used to build the S1 supercomputer with him to cofound Valid Logic Systems, an early electronic design automation company.
They used newly developed Stanford University Network (SUN) workstations.
Graduate student Andy Bechtolsheim—sitting in the next room—had designed the original SUN hardware and would soon cofound Sun Microsystems, commercializing the machine he had developed at Stanford.
