For Gruber, AI turned out to be the fun part of computer science.
It was philosophically rich and scientifically interesting,
offering ideas about psychology and the function of the human mind.
In his view, the rest of computer science was really just engineering.
When he arrived in Massachusetts in 1981, he worked with Paul Cohen, a young computer scientist who had been a student of Feigenbaum’s at Stanford and shared Gruber’s interest in AI and psychology.
Paul Cohen’s father, Harold Cohen, was a well-known artist who had worked at the intersection of art and artificial intelligence.
He had designed the computer program Aaron and used it to paint and sell artistic images.
The program didn’t create an artistic style, but it was capable of generating an infinite series of complex images based on parameters set by Cohen.
Aaron proved to be a powerful environment for pondering philosophical questions about autonomy and creativity.

Gruber had mentioned to the computer science department chairman that he wanted his career to have a social impact, so he was directed to a project designing systems that would allow people with severely disabling conditions like cerebral palsy to communicate.
Many of those with the worst cases couldn’t speak and, at the time, used a writing system called Bliss Boards that allowed them to spell words by pointing at letters.
This was a painstaking and limiting process.
The system that Gruber helped develop was an early version of what researchers now call “semantic autocomplete.”
The researchers worked with children who could understand language clearly but had difficulty speaking.
They organized the interaction scheme so the system anticipated what a participant might say next.
The challenge was to create a system to communicate things like “I want a hamburger for lunch.”

It was a microcosm of the entire AI world at the time.
There was no big data; researchers could do little more than build a small model of the child’s world.
After working on this project for a while, Gruber built a software program to simulate that world.
He made it possible for the caregivers and parents to add sentences to the program that personalized the system for a particular child.
Gruber’s program was an example of what the AI community would come to call “knowledge-based systems,” programs that would reason about complex problems using rules and a database of information.
The idea was to create a program that would be able to act like a human expert such as a doctor, lawyer, or engineer.
Gruber, however, quickly realized that acquiring this complex human knowledge would be difficult and made this problem the subject of his doctoral dissertation.
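A minimal sketch can make the idea concrete. The code below is a hypothetical illustration, not Gruber's program: caregivers add whole sentences that describe a child's world, and the system suggests what might come next as the child composes a message. The class name, data, and completion scheme are all invented for illustration.

```python
# Hypothetical sketch of a "semantic autocomplete" aid in the spirit of the
# system described above. Caregivers add sentences that model a child's world;
# the program then anticipates what the child might want to say next.

from collections import defaultdict

class SentenceModel:
    def __init__(self):
        # Maps a partial phrase (tuple of words) to the words that have
        # followed it in the caregiver-supplied sentences.
        self.continuations = defaultdict(set)

    def add_sentence(self, sentence):
        """Caregivers or parents personalize the model by adding sentences."""
        words = sentence.lower().split()
        for i in range(len(words)):
            prefix = tuple(words[:i])
            self.continuations[prefix].add(words[i])

    def suggest(self, partial):
        """Anticipate what the child might say next, given what is typed so far."""
        prefix = tuple(partial.lower().split())
        return sorted(self.continuations.get(prefix, set()))

model = SentenceModel()
model.add_sentence("I want a hamburger for lunch")
model.add_sentence("I want to go outside")
model.add_sentence("I feel tired")

print(model.suggest("I"))       # ['feel', 'want']
print(model.suggest("I want"))  # ['a', 'to']
```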

Gruber was a skilled computer hacker, and many faculty members wanted to employ him to do their grunt work.
Instead, he moonlighted at Digital Equipment Corporation, the minicomputer manufacturer.
He was involved in a number of projects for DEC, including the development of an early windowing system that was written in McCarthy’s AI programming language Lisp.
The fact that the program ran well surprised many software developers, because Lisp was not intended for graphical applications that demanded blinding speed.
It was far more common to write this kind of software in assembly language or C for the sake of performance, but Lisp turned out to be efficient enough for Gruber, and it took him only a month during the summer to write the program.
To show off the power of the Lisp programming language, he built a demo of an automated “clipping service” for visitors from the NSA.
The program featured an interactive interface that allowed a computer user to tailor a search, then save it in a permanent alert system that would allow the filtering of that information.
The idea stuck with him, and he would reuse it years later when he founded his first company.

Focused on getting a Ph.D. and still intrigued by science-of-mind questions, he avoided going to work for the then-booming DEC.
Graduate school was nirvana.
He rode his bike frequently in Western Massachusetts and was able to telecommute, making more than thirty dollars an hour from his home terminal.
He spent his summers in Cambridge because it was a lively place to be, working in Digital's laboratory.
He also became part of a small community of AI researchers who were struggling to build software systems that approximated human expertise.
The group met annually in Banff.
AI researchers quickly realized that some models of human reasoning defied conventional logic.
For example, engineering design is made up of a set of widely divergent activities.
An HVAC—heating, ventilation, and air-conditioning—system designer might closely follow a set of rules and constraints with few exceptions.
In optics, precise requirements make it possible to write a program that would design the perfect glass.
Then there is messy design, such as product design, where there are no obvious right answers and a million questions about what is required and what is optional.
In this case the possible set of answers is immense and there is no easy way to capture the talent of a skilled designer in software.

Gruber discovered early in his research why conventional expert systems models failed: human expertise isn’t reducible to discrete ideas or practices.
He had begun by building small models, like a tool for minimizing the application of pesticides on a tree farm.
Separately, he worked with cardiologists to build a diagnostic system that modeled how they used their expertise.
Both were efforts to capture human expertise in software.
Very simple models might work, but the complexity of real-world expertise was not easily reducible to a set of rules.
The doctors had spent decades practicing medicine, and Gruber soon realized that attempting to reduce what they did to “symptoms and signs” was impossible.
A physician might ask patients about what kind of pain they were experiencing, order a test, and then prescribe nitroglycerin and send them home.
Medicine could be both diagnostic and therapeutic.
What Gruber was seeing was a higher-level strategy being played out by the human experts, far above the rote actions of what was then possible with relatively inflexible expert system programs.

He soon realized that he wasn’t interested in building better expert systems.
He wanted to build better tools to make it easier for people to design better expert systems.
This was to become known as the “knowledge acquisition problem.”
In his dissertation he made the case that researchers did not need to model knowledge itself but rather strategy—that is, knowledge about what to do next—in order to build a useful expert system.
At the time, expert systems broke easily, were built manually, and required experts to compile the knowledge.
His goal was to design a way to automate the acquisition of this elusive “strategic knowledge.”

As a graduate student, he worked within the existing AI community's framework: at the outset he defined artificial intelligence conventionally, as the effort to understand intelligence and perform human-level tasks.
Over time, his perspective changed.
Not only should AI imitate human intelligence; he came to believe it should aim to amplify that intelligence as well.
He hadn’t met Engelbart and he wasn’t familiar with his ideas, but using computing to extend, rather than simulate or replace, humans would become a motivating concept in his research.

While he was still working on his dissertation he decided to make the leap to the West Coast.
Stanford was the established center for artificial intelligence research, and Ed Feigenbaum, by then one of the leading figures in the AI world, was working there.
He had launched a project to build the world’s largest expert system on “engineering knowledge,” or how things like rocket ships and jet engines were designed and manufactured.
Gruber’s advisor Paul Cohen introduced him to Feigenbaum, who politely told him that his laboratory was on soft money and he just didn’t have any slots for new employees.

“What if I raise my own money?”
Gruber responded.

“Bring your own money?!”

Feigenbaum agreed, and Gruber obtained support from some of the companies he had consulted for.
Before long, he was managing Feigenbaum's knowledge engineering project.
In 1989, Gruber thus found himself at Stanford University during the personal computing boom and the simultaneous precipitous decline of the AI field in the second AI Winter.
At Stanford, Gruber was insulated from the commercial turmoil.
Once he started on Feigenbaum’s project, however, he realized that he was still faced with the problem of how to acquire the knowledge necessary to simulate a human expert.
It was the same stumbling block he had tried to solve in his dissertation.
That realization quickly led to a second: to transition from “building” to “manufacturing” knowledge systems, developers needed standard parts.
He became part of an effort to standardize languages and categories used in the development of artificial intelligence.
Language must be used precisely if developers want to build systems in which many people and programs communicate.
The modules would fail if they didn’t have standardized definitions.
The AI researchers borrowed the term “ontology,” which was the philosophical term for the study of being, using it in a restricted fashion to refer to the set of concepts—events, items, or relations—that constituted knowledge in some specific area.
He made the case that an ontology was a “treaty,” a social agreement among people interested in sharing information or conducting commerce.
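A toy example suggests what such a "treaty" might look like in practice. The sketch below is hypothetical and its concept names are invented; it simply shows a shared vocabulary that independent programs can check records against before exchanging them.

```python
# A minimal, hypothetical illustration of an ontology as a shared "treaty":
# a set of concept definitions that separate programs agree to use when
# exchanging information. Concept and field names are invented for the example.

PURCHASE_ORDER_ONTOLOGY = {
    "PurchaseOrder": {"order_id": str, "buyer": str, "items": list},
    "LineItem": {"sku": str, "quantity": int, "unit_price": float},
}

def conforms(record, concept, ontology=PURCHASE_ORDER_ONTOLOGY):
    """Return True if `record` follows the shared definition of `concept`."""
    schema = ontology[concept]
    return all(
        name in record and isinstance(record[name], expected_type)
        for name, expected_type in schema.items()
    )

# A record produced by one program can be validated by another program that
# knows only the shared ontology, not the producer's internal code.
item = {"sku": "A-100", "quantity": 2, "unit_price": 4.5}
order = {"order_id": "PO-1", "buyer": "ACME", "items": [item]}

print(conforms(item, "LineItem"))        # True
print(conforms(order, "PurchaseOrder"))  # True
```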

It was a technology that resonated perfectly with the then-new Internet.
All of a sudden, a confused world of multiple languages and computer protocols was connected in an electronic Tower of Babel.
When the World Wide Web first emerged, it offered a universal mechanism for easily retrieving documents via the Internet.
The Web was loosely based on the earlier work of Doug Engelbart and Ted Nelson in the 1960s, who had independently pioneered the idea of hypertext linking, making it possible to easily access information stored in computer networks.
The Web rapidly became a medium for connecting anyone to anything in the 1990s, offering a Lego-like way to link information, computers, and people.

Ontologies offered a more powerful way to exchange any kind of information by combining the power of a global digital library with the ability to label information “objects.”
This made it possible to add semantics, or meaning, to the exchange of electronic information, effectively a step in the direction of artificial intelligence.
Initially, however, ontologies were the province of a small subset of the AI community.
Gruber was one of the first developers to apply engineering principles to building ontologies.
Focusing on that engineering effort drew him into collaborative work with a range of other programmers, some of whom worked across campus and others a world away.
He met Jay “Marty” Tenenbaum, a computer scientist who had previously led research efforts in artificial intelligence at SRI International and who at the time directed an early Silicon Valley AI lab set up by the French oil exploration giant Schlumberger.
Tenenbaum had an early and broad vision about the future of electronic commerce, preceding the World Wide Web.
In 1992 he founded Enterprise Integration Technologies (EIT), a pioneer in commercial Internet commerce transactions, at a time when the idea of “electronic commerce” was still largely unknown.

From an office near the site where the Valley’s first chipmaker, Fairchild Semiconductor, once stood, Tenenbaum sketched out a model of “friction free” electronic commerce.
He foresaw a Lego-style automated economy in which entire industries would be woven together by computer networks and software systems that automated the interchange of goods and services.
Gruber's ontology work was an obvious match for Tenenbaum's commerce system, which required a common language to connect disparate parts.
Partly as a result of their collaboration, Gruber was one of the first Silicon Valley technologists to immerse himself in the World Wide Web.
Developed by Tim Berners-Lee in the heart of the particle physics community in Switzerland, the Web was rapidly adopted by computer scientists.
It became known to a much wider audience when it was described in the New York Times in December of 1993.[1]

The Internet allowed Gruber to create a small group that blossomed into a living cyber-community expressed in the exchange of electronic mail.
Even though few of the participants had face-to-face contact, they were in fact a “virtual” organization.
The shortcoming was that all of their communications were point-to-point and there was no single shared copy of the group electronic conversation.
“Why don’t I try to build a living memory of all of our exchanges?”
Gruber thought.
His idea was to create a public, retrievable, permanent group memory.
Today, with online conferences, support systems, and Google, the idea seems trivial, but at the time it was a breakthrough.
It had been at the heart of Doug Engelbart's original NLS system, but as the personal computer emerged, much of Engelbart's vision had been sidelined as first Xerox PARC and then Apple and Microsoft cherry-picked his ideas, like the mouse and hypertext, while ignoring his broader mission of an intelligence augmentation system that would facilitate small groups of knowledge workers.
Gruber created a software program that automatically generated a living document of the work done by a group of people.
Over a couple of weeks he sat down and built a program named Hypermail that would “live” on the same computer that was running a mail server and would generate a threaded copy of an email conversation that could be retrieved from the Web.
What emerged was a digital snapshot of the email conversation complete with permanent links that could be bookmarked and archived.
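A rough sketch conveys the core of what such an archiver does, assuming a simple message format with "id" and "in-reply-to" fields; this is an illustration of the idea, not Gruber's original Hypermail code.

```python
# Hypothetical sketch of a Hypermail-style archiver: group replies under the
# message they answer and emit a single HTML page in which every message has
# a permanent anchor that can be bookmarked and linked.

import html

def build_archive(messages):
    """messages: list of dicts with 'id', 'in_reply_to', 'subject', 'body'."""
    children = {m["id"]: [] for m in messages}
    roots = []
    for m in messages:
        parent = m.get("in_reply_to")
        if parent in children:
            children[parent].append(m)   # a reply, threaded under its parent
        else:
            roots.append(m)              # starts a new thread

    def render(msg, depth=0):
        indent = "&nbsp;" * 4 * depth
        # The id attribute gives each message a permanent, linkable location.
        out = (f'<p id="{msg["id"]}">{indent}'
               f'<a href="#{msg["id"]}">{html.escape(msg["subject"])}</a><br>'
               f'{indent}{html.escape(msg["body"])}</p>\n')
        for child in children[msg["id"]]:
            out += render(child, depth + 1)
        return out

    return "<html><body>\n" + "".join(render(m) for m in roots) + "</body></html>"

msgs = [
    {"id": "msg1", "in_reply_to": None, "subject": "Ontology draft",
     "body": "First pass attached."},
    {"id": "msg2", "in_reply_to": "msg1", "subject": "Re: Ontology draft",
     "body": "Looks good to me."},
]
print(build_archive(msgs))
```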
