Machines of Loving Grace

Author: John Markoff

Since then Engelbart’s adherents have transformed the world.
They have extended human capabilities everywhere in modern life.
Today, shrunk into smartphones, personal computers will soon be carried by all but the allergic or iconoclastic adult and teenager.
Smartphones are almost by definition assembled into a vast distributed computing fabric woven together by the wireless Internet.
They are also relied on as artificial memories.
Today many people are literally unable to hold a conversation or find their way around town without querying them.

While Engelbart’s original research led directly to the PC and the Internet, McCarthy’s lab was most closely associated with two other technologies—robotics and artificial intelligence.
There had been no single dramatic breakthrough.
Rather, the falling cost of computing (both in processing and storage), the gradual shift from the symbolic logic-based approach of the first generation of AI research to more pragmatic statistics and machine-learning algorithms of the second generation of AI, and the declining price of sensors now offer engineers and programmers the canvas to create computerized systems that see, speak, listen, and move around in the world.

The balance has shifted.
Computing technologies are emerging that can be used to replace and even outpace humans.
At the same time, in the ensuing half century there has been little movement toward unification in the two fields, IA and AI, the offshoots of Engelbart’s and McCarthy’s original work.
Rather, as computing and robotics systems have grown from laboratory curiosities into the fabric that weaves together modern life, the two communities, holding opposing viewpoints, have for the most part continued to speak past each other.

The human-computer interaction community keeps debating metaphors ranging from windows and mice to autonomous agents, but has largely operated within the philosophical framework originally set down by Engelbart—that computers should be used to augment humans.
In contrast, the artificial intelligence community has for the most part pursued performance and economic goals elaborated in equations and algorithms, largely unconcerned with defining or in any way preserving a role for individual humans.
In some cases the impact is easily visible, such as manufacturing robots that directly replace human labor.
In other cases it is more difficult to discern the direct effect on employment caused by deployment of new technologies.
Winston Churchill said: “We shape our buildings, and afterwards our buildings shape us.”
Today our systems have become immense computational edifices that define the way we interact with our society, from how our physical buildings function to the very structure of our organizations, whether they are governments, corporations, or churches.

As the technologies marshaled by the AI and IA communities continue to reshape the world, alternative visions of the future play out: In one world humans coexist and prosper with the machines they’ve created—robots care for the elderly, cars drive themselves, and repetitive labor and drudgery vanish, creating a new Athens where people do science, make art, and enjoy life.
It will be wonderful if the Information Age unfolds in that fashion, but it is hardly a foregone conclusion.
It is equally possible to make the case that these powerful and productive technologies, rather than freeing humanity, will instead facilitate a further concentration of wealth, fomenting vast new waves of technological unemployment, casting an inescapable surveillance net around the globe, while unleashing a new generation of autonomous superweapons.

When Ed Feigenbaum finished speaking, the room was silent.
No polite applause, no chorus of boos.
Just a hush.
Then the conference attendees filed out of the room and left the artificial intelligence pioneer alone at the podium.

Shortly after Barack Obama was elected president in 2008, it seemed possible that the Bush administration plan for space exploration, which focused on placing a manned base on the moon, might be replaced with an even more audacious program that would involve missions to asteroids and possibly even manned flights to Mars with human landings on the Martian moons Phobos and Deimos.
4
Shorter-term goals included the possibility of sending astronauts to Lagrangian points one million miles from Earth, where the combined gravitational pulls of the Earth and the Sun allow a spacecraft to hold its position, creating convenient long-term parking for ambitious devices like a next-generation Hubble Space Telescope.

Human exploration of the solar system was the pet project of G. Scott Hubbard, a former head of NASA’s Ames Research Center in Mountain View, California, who was heavily backed by the Planetary Society, a nonprofit that advocates for space exploration and science.
As a result, NASA organized a conference to discuss the possible resurrection of human exploration of the solar system.
A star-studded cast of space luminaries, including astronaut Buzz Aldrin, the second human to set foot on the moon, and celebrity astrophysicist Neil deGrasse Tyson, showed up for the day.
One of the panels focused on the role of robots, which were envisioned by the conference organizers as providing intelligent systems that would assist humans on long flights to other worlds.

Feigenbaum had been a student of one of the founders of the field of AI, Herbert Simon, and he had led the development of the first expert systems as a young professor at Stanford.
A believer in the potential of artificial intelligence and robotics, he had been irritated by a past run-in with a Mars geologist who had insisted that sending a human to Mars would provide more scientific information in just a few minutes than a complete robot mission might return.
Feigenbaum also had a deep familiarity with the design of space systems.
Moreover, having once served as chief scientist of the U.S. Air Force, he was a veteran of the human-in-the-loop debates stretching back to the space program.

He showed up to speak at the panel with a chip on his shoulder.
Speaking from a simple set of slides, he sketched out an alternative to the vision of manned flight to Mars.
He rarely used capital letters in his slides, but he did this time:

ALMOST EVERYTHING THAT HAS BEEN LEARNED ABOUT THE SOLAR SYSTEM AND SPACE BEYOND HAS BEEN LEARNED BY PEOPLE
ON EARTH
ASSISTED BY THEIR NHA (NON-HUMAN AGENTS) IN SPACE OR IN ORBIT
5

The whole notion of sending humans to another planet when robots could perform just as well—and maybe even better—for a fraction of the cost and with no risk of human life seemed like a fool’s errand to Feigenbaum.
His point was that AI systems and robots in the broader sense of the term were becoming so capable so quickly that the old human-in-the-loop idea had lost its mystique as well as its value.
All the coefficients on the nonhuman side of the equation had changed.
He wanted to persuade the audience to start thinking in terms of agents, to shift gears and think about humans exploring the solar system with augmented senses.
It was not a message that the audience wanted to hear.
As the room emptied, a scientist who worked at NASA’s Goddard Space Flight Center came to the table and quietly said that she was glad that Feigenbaum had said what he did.
In her job, she whispered, she could not say that.

Feigenbaum’s encounter underscores the reality that there isn’t a single “right” answer in the dichotomy between AI and IA.
Sending humans into space is a passionate ideal for some.
For others like Feigenbaum, however, the goal is a waste of vast resources.
Intelligent machines are perfectly suited for the hostile environment beyond Earth, and in designing them we can perfect technologies that can be used to good effect on Earth.
His quarrel also suggests that there will be no easy synthesis of the two camps.

While the separate fields of artificial intelligence and human-computer interaction have largely remained isolated domains, there are people who have lived in both worlds and researchers who have famously crossed from one camp to the other.
Microsoft cognitive psychologist Jonathan Grudin first noted that the two fields have risen and fallen in popularity, largely in opposition to each other.
When the field of artificial intelligence was more prominent, human-computer interaction generally took a backseat, and vice versa.

Grudin thinks of himself as an optimist.
He has written that he believes a grand convergence of the two fields is possible in the future.
Yet the relationship between the two fields remains contentious, and the human-computer interaction perspective, pioneered by Engelbart and championed by people like Grudin and his mentor Donald Norman, is perhaps the most significant counterweight to artificial intelligence–oriented technologies that have the twin potential for either liberating or enslaving humanity.

While Grudin has oscillated back and forth between the AI and IA worlds throughout his career, Terry Winograd became the first high-profile deserter from the world of AI.
He chose to walk away from the field after having created one of the defining software programs of the early artificial intelligence era and has devoted the rest of his career to human-centered computing, or IA.
He crossed over.

Winograd’s interest in computing was sparked while he was a junior studying math at Colorado College, when a professor of medicine asked his department for help doing radiation therapy calculations.
6
The computer available at the medical center was a piano-sized Control Data minicomputer, the CDC 160A, one of Seymour Cray’s first designs.
One person at a time used it, feeding in programs written in Fortran by way of a telex-like punched paper tape.
On one of Winograd’s first days using the machine, it was rather hot, so a fan sat behind the desk that housed the computer terminal.
He managed to feed his paper tape into the computer and then, by mistake, right into the fan.
7

Terry Winograd was a brilliant young graduate student at MIT who developed an early program capable of processing natural language.
Years later he rejected artificial intelligence research in favor of human-centered software design.
(Photo courtesy of Terry Winograd)

In addition to his fascination with computing, Winograd had become intrigued by some of the early papers about artificial intelligence.
As a math whiz with an interest in linguistics, the obvious place for graduate studies was MIT.
When he arrived, at the height of the Vietnam War, Winograd discovered there was a deep gulf between the rival fiefdoms of Marvin Minsky and Noam Chomsky, leaders in the respective fields of artificial intelligence and linguistics.
The schism was so deep that when Winograd would bump into Chomsky’s students at parties and mention that he was in the AI Lab, they would turn and walk away.

Winograd tried to bridge the gap by taking a course from Chomsky, but he received a C on a paper in which he argued for the AI perspective.
Despite the conflict, it was a heady time for AI research.
The Vietnam War had opened the Pentagon’s research coffers and ARPA was essentially writing blank checks to researchers at the major research laboratories.
As at Stanford, at MIT there was a clear sense of what “serious” research in computer science was about.
Doug Engelbart came around on a tour and showed a film demonstration of his NLS system.
The researchers at the MIT AI Lab belittled his accomplishments.
After all, they were building systems that would soon have capabilities matching those of humans, and Engelbart was showing off a computer editing system that seemed to do little more than sort grocery lists.

At the time Winograd was very much within the mainstream of computing, and as the zeitgeist pointed toward artificial intelligence, he followed.
Most believed that it wouldn’t be long before machines would see, hear, speak, move, and otherwise perform humanlike tasks.
Winograd was soon encouraged to pursue linguistic research by Minsky, who was eager to prove that his students could do as well or better at “language” than Chomsky’s.
That challenge was fine with Winograd, who was interested in studying how language worked by using computing as a simulation tool.
