Whether or not Google is on the trail of a genuine artificial “brain” has become increasingly controversial.
There is certainly no question that deep learning techniques are paying off in a wealth of increasingly powerful AI achievements in vision and speech.
And there remains in Silicon Valley a growing
group of engineers and scientists who believe they are once again closing in on “Strong AI”—the creation of a self-aware machine with human or greater intelligence.

Ray Kurzweil, the artificial intelligence researcher and barnstorming advocate for technologically induced immortality, joined Google at the end of 2012 to take over the brain work from Ng, shortly after publishing
How to Create a Mind,
a book that purported to offer a recipe for creating a working AI.
Kurzweil, of course, has all along been one of the most eloquent backers of the idea of a singularity.
Like Moravec, he posits a great acceleration of computing power that would lead to the emergence of autonomous superhuman machine intelligence, in Kurzweil’s case pegging the date to sometime around 2045.
The idea became codified in Silicon Valley in the form of the Singularity University and the Singularity Institute, organizations that focused on dealing with the consequences of that exponential acceleration.

Joining Kurzweil is a diverse group of scientists and engineers who believe that once they have discovered the mechanism underlying the biological human neuron, creating an AI will simply be a matter of scaling that mechanism up.
Jeff Hawkins, a successful Silicon Valley engineer who had founded Palm Computing with Donna Dubinsky, coauthored
On Intelligence
in 2004, which argued that the path to human-level intelligence lay in emulating and scaling up neocortex-like circuits capable of pattern recognition.
In 2005, Hawkins formed Numenta, one of a growing list of AI companies pursuing pattern recognition technologies.
Hawkins’s theory has parallels with the claims that Kurzweil makes in
How to Create a Mind,
his 2012 effort to lay out a recipe for intelligence.
Similar paths have been pursued by Dileep George, a Stanford-educated artificial intelligence researcher who originally worked with Hawkins at Numenta and then left to form his own company, Vicarious,
with the goal of developing “the next generation of AI algorithms,” and Henry Markram, the Swiss researcher who has enticed the European Union into supporting his effort to build a detailed replica of the human brain with one billion euros in funding.

In 2013 a technology talent gold rush that was already under way reached startling levels.
Hinton left for Google because the resources available in Mountain View dwarfed what he had access to at the University of Toronto.
There is now vastly more computing power available than when Sejnowski and Hinton first developed the Boltzmann Machine approach to neural networks, and there is vastly more data to train the networks on.
The challenge now is managing a neural network that might have one billion parameters.
To a conventional statistician that’s a nightmare, but it has spawned a sprawling “big data” industry that does not shy away from monitoring and collecting virtually every aspect of human behavior, interaction, and thought.

After his arrival at Google, Hinton promptly published a significant breakthrough in building more powerful and efficient learning networks by discovering how to keep the parameters from effectively stepping on each other’s toes.
Rather than having the entire network process each image, in the new model a random subset of the network is chosen, the image is processed through it, and the weights of those connections are updated.
Then another random subset is picked and the image is processed again.
Randomness thus reinforces the independent contribution of each subset.
The insight might be biologically inspired, but it’s not a slavish copy.
By Sejnowski’s account, Hinton is an example of an artificial intelligence researcher who pays attention to the biology but is not constrained by it.
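
The mechanism described here closely resembles the technique neural network researchers now call “dropout.” A minimal sketch in Python, using a toy two-layer network with made-up sizes (none of the numbers below come from the text), illustrates the idea of silencing a random subset of units on each training pass:

```python
# Dropout-style training sketch (illustrative only; sizes and names are assumptions).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(784, 256))   # input-to-hidden weights
W2 = rng.normal(scale=0.01, size=(256, 10))    # hidden-to-output weights
keep_prob = 0.5                                # fraction of hidden units kept each pass

def forward_train(x):
    """Training pass: a random subset of hidden units is silenced."""
    h = np.maximum(0, x @ W1)                  # hidden activations (ReLU)
    mask = rng.random(h.shape) < keep_prob     # pick this pass's random subset
    h = h * mask / keep_prob                   # zero the rest, rescale the survivors
    return h @ W2                              # output scores

def forward_test(x):
    """Test pass: the full network is used, with no units dropped."""
    h = np.maximum(0, x @ W1)
    return h @ W2

scores = forward_train(rng.random((32, 784)))  # e.g., a batch of 32 flattened images
```

Because a different random subset must do useful work on every pass, no one group of weights can come to rely on another, which is one way of keeping the parameters from stepping on each other’s toes.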

In 2012 Hinton’s networks, trained on a huge farm of computers at Google, did remarkably well at recognizing individual objects, but they weren’t capable of “scene understanding.”
For example, the networks could not describe a scene with a sentence like: “There is a cat sitting on the mat and there is a person dangling a toy at the cat.”
The holy grail of computer vision requires what AI researchers call “semantic understanding”—the ability to interpret the scene in terms of human language.
In the 1970s the challenge of scene understanding was strongly influenced by Noam Chomsky’s ideas about generative grammar as a context for objects and a structure for understanding their relation within a scene.
But for decades the research went nowhere.

However, late in 2014, the neural network community began to make transformative progress in this domain as well.
Around the country research groups reported progress in combining the learning properties of two different types of neural networks, one to recognize patterns in human language and the other to recognize patterns in digital images.
Strikingly, they produced programs that could generate English-language sentences that described images at a high level of abstraction.
44
The advance will help improve the results generated by Internet image search.
The new approach also holds out the potential for creating a class of programs that can interact with humans with a more sophisticated level of understanding.
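
A schematic sketch, assuming a small invented encoder and decoder (the module sizes and names below are illustrative, not drawn from the systems cited here), shows one common way such a vision network and a language network are coupled for caption generation:

```python
# Encoder-decoder captioning sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

class Captioner(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in image pattern recognizer
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img = self.encoder(images).unsqueeze(1)   # image summary as one "token"
        words = self.embed(captions)              # caption words so far
        seq = torch.cat([img, words], dim=1)      # image first, then the words
        hidden, _ = self.decoder(seq)
        return self.to_vocab(hidden)              # scores for each next word

model = Captioner()
scores = model(torch.rand(2, 3, 64, 64),          # two 64x64 color images
               torch.randint(0, 10000, (2, 12)))  # two 12-word captions (as word IDs)
```

The image summary is fed to the language network as if it were the first word of the sentence, and the recurrent decoder then predicts each following word, which is roughly the arrangement behind systems of the kind the passage describes.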

Deep learning nets have made significant advances, but for Hinton, the journey is only now beginning.
He said recently that he sees himself as an explorer who has landed on a new continent: he has pushed only a hundred yards inland, and it all still looks very interesting, except for the mosquitoes.
In the end, however, it’s a new continent and the researchers still have no idea what is really possible.

In late 2013, LeCun followed Hinton’s move from academia to industry.
He agreed to set up and lead Facebook’s AI research laboratory in New York City.
The move underscored the renewed corporate enthusiasm for artificial intelligence.
The AI Winter was only the dimmest of memories.
It was now clearly AI Spring.

Facebook’s move to join the AI gold rush was an odd affair.
It began with a visit by Mark Zuckerberg, Facebook cofounder and chief executive, to an out-of-the-way technical conference called Neural Information Processing Systems, or NIPS, held in a Lake Tahoe hotel at the end of 2013.
The meeting had long been a bone-dry academic event, but Zuckerberg’s appearance to answer questions was a clear bellwether.
Not only were the researchers unused to appearances by high-visibility corporate tycoons, but Zuckerberg was accompanied by uniformed guards, lending the event a surreal quality.
The celebrity CEO packed the room he spoke in, and several other workshops were postponed while a video feed of his appearance was piped into an overflow room.
“The tone changed rapidly: accomplished professors became little more than lowly researchers shuffling into the Deep Learning workshop to see a Very Important Person speak,”
45
blogged Alex Rubinsteyn, a machine-learning researcher who attended the meeting.

In the aftermath of the event there was an alarmed back-and-forth within the tiny community of researchers about the impact of AI’s commercialization on the culture of academic research.
It was, however, too late to turn back.
The field has moved on from the intellectual quarrels in the 1950s and 1960s over the feasibility of AI and the question of the correct path.
Today, a series of probabilistic mathematical techniques have reinvented the field and transformed it from an academic curiosity into a force that is altering many aspects of the modern world.

It has also created an increasingly clear choice for designers.
It is now possible to design humans into or out of the computerized systems that are being created to grow our food, transport us, manufacture our goods and services, and entertain us.
It has become a philosophical and ethical choice, rather than simply a technical one.
Indeed, the explosion of computing power and its accessibility everywhere via wireless networks has reframed with new urgency the question addressed
so differently by McCarthy and Engelbart at the dawn of the computing age.

In the future will important decisions be made by humans or by deep learning–style algorithms?
Today, the computing world is demarcated between those who focus on creating intelligent machines and those who focus on how human capabilities can be extended by the same machines.
Will it surprise anyone that the futures emerging from those opposing stances must be very different worlds?

5   |   WALKING AWAY

As a young navy technician in the 1950s, Robert Taylor had a lot of experience flying, even without a pilot’s license.
He had become a favorite copilot of the real pilots, who needed both lots of flight hours and time to study for exams.
So they would take Taylor along in their training jets, and after they took off he would fly the plane—gently—while the real pilots studied in the backseat.
He even practiced instrument landing approaches, in which the plane is guided to a landing by radio communications, while the pilot wears a hood blocking the view of the outside terrain.

As a young NASA program administrator in the early 1960s, Taylor was confident when he received an invitation to take part in a test flight at a Cornell University aerospace laboratory.
On arrival they put him in an uncomfortable anti-g suit and plunked him down in the front seat of a Lockheed T-33 jet trainer while the real pilot sat behind him.
They took off and the pilot flew up to the jet’s maximum altitude, almost
fifty thousand feet, then Taylor was offered the controls to fly around a bit.
After a while the pilot said, “Let’s try something a little more interesting.
Why don’t you put the plane in a dive?”
So Taylor pushed the joystick forward until he thought he was descending steeply enough, then he began to ease the stick backward.
Suddenly he froze in panic.
As he pulled back, the plane entered a steeper dive.
It felt like going over the top of a roller-coaster ride.
He pulled the stick back farther but the plane was still descending almost vertically.

Finally he said to the pilot behind him, “Okay, you’ve got it, you better take over!”
The pilot laughed, leveled the plane out, and said, “Let’s try this again.”
They tried again and this time when he pushed the stick forward the plane went unexpectedly upward.
As he pushed a bit harder, the plane tilted up farther.
This time he panicked, about to stall the plane, and again the pilot leveled the plane out.

Taylor should have guessed.
He had had such an odd piloting experience because he was unwittingly flying a laboratory plane that the air force researchers were using to experiment with flight control systems.
The air force invited Taylor to Cornell because as a NASA program manager he had granted, unsolicited, $100,000 to a flight research group at Wright-Patterson Air Force Base.

Taylor, first at NASA and then at DARPA, would pave the way for systems used both to augment humans and to replace them.
NASA was three years old when, in 1961, President Kennedy announced the goal of getting an American to the moon and back safely during that decade.
Taylor found himself at an agency with a unique charter, to fundamentally shape how humans and machines interact, not just in flight, but ultimately in all computer-based systems from the desktop PC to today’s mobile robots.

The term “cyborg,” for “cybernetic organism,” had been coined originally in 1960 by medical researchers thinking about intentionally enhancing humans to prepare them for the
exploration of space.
1
They foresaw a new kind of creature—half human, half mechanism—capable of surviving in harsh environments.

In contrast, Taylor’s organization was funding the design of electronic systems that closely collaborated with humans while retaining a bright line distinguishing what was human and what was machine.

In the early 1960s NASA was a brand-new government bureaucracy deeply divided by the question of the role of humans in spaceflight.
For the first time it was possible to conceive of entirely automated flight in space.
The deeply unsettling idea was an obvious future direction—one in which machines would steer and humans would be passengers, then already the default approach pursued by the Soviet space program.
The division within the U.S. program, in contrast, was highlighted by a series of incidents in which American astronauts intervened, proving the survival value of what in NASA parlance came to be called “human in the loop.”
On Gemini VI, for example, Wally Schirra was hailed as a hero after he held off pushing the abort button during a launch sequence, even though he was violating a NASA mission rule.
2

The human-in-the-loop debates became a series of intensely fought battles inside NASA during the 1950s and 1960s.
When Taylor arrived at the agency in 1961 he found an engineering culture in love with a body of mathematics known as control theory, Norbert Wiener’s cybernetic legacy.
These NASA engineers were designing the nation’s aeronautic as well as astronautic flight systems.
These were systems of such complexity that the engineers found them abstractly, some might say inherently, beautiful.
Taylor could see early on that the aerospace designers were wedded as much to the aesthetics of control as to the fact that the systems needed to be increasingly automated because humans weren’t fast or reliable enough to control them.

He had stumbled into an almost intractable challenge, and
hence a deeply divided technical culture.
NASA was split on the question of the role of humans in spaceflight.
Taylor saw that the dispute pervaded even the highest echelons of the agency, and that it was easy to predict which side of the debate each particular manager would take.
Former jet pilots would be in favor of keeping a human in the system, while experts in control theory would choose full automation.

As a program manager in 1961, Taylor was responsible for several areas of research funding, one of them called “manned flight control systems.”
Another colleague in the same funding office was responsible for “automatic control systems.”
The two got along well enough, but they were locked in a bitter budgetary zero-sum game.
Taylor began to understand the arguments his colleagues made in support of automated control, though he was responsible for mastering arguments for manned control.
His best card in the debate was that he had the astronauts on his side and they had tremendous clout.
NASA’s corps of astronauts had mostly been test pilots.
They were the pride of the space agency and proved Taylor’s invaluable allies.
Taylor had funded the design and construction of simulator technology used extensively in astronaut training—systems for practicing a series of spacecraft maneuvers, like docking—since the early days of the Mercury program, and had spent hours talking with astronauts about the strengths and weaknesses of the different virtual training environments.
He found that the astronauts were keenly aware of the debate over the proper role of humans in the space programs.
They had a huge stake in whether they would have a role in future space systems or be little more than another batch of dogs and smart monkeys coming along for the ride.

The political battle over the human in the loop was waged over two divergent narratives: that of the heroic astronauts landing on the surface of the moon and that of the specter of a catastrophic accident culminating in the deaths of the astronauts—and potentially, as a consequence, the death
of the agency.
The issue, however, was at least temporarily settled when during the first human moon landing Neil Armstrong heroically took command after a computer malfunction and piloted the Apollo 11 spacecraft safely to the lunar surface.
The moon landing and other similar feats of courage, such as Wally Schirra’s decision not to abort the earlier Gemini flight, have firmly established a view of human-machine interaction that elevates human decision-making beyond the fallible machines of our mythology.
Indeed, the macho view of astronauts as modern-day Lewises and Clarks was from the beginning deeply woven into the NASA ethos, as well as being a striking contrast with the early Soviet decision to train women cosmonauts.
3
The American view of human-controlled systems was long partially governed by perceived distinctions between U.S.
and Soviet approaches to aeronautics as well as astronautics.
The Vostok spacecraft were more automated, and so Soviet cosmonauts were basically passengers rather than pilots.
Yet the original American commitment to human-controlled spaceflight was made when aeronautical technology was in its infancy.
In the ensuing half century, computers and automated systems have become vastly more reliable.

For Taylor, the NASA human-in-the-loop wars were a formative experience that governed his judgment at both NASA and DARPA, where he projected and sponsored technological breakthroughs in computing, robotics, and artificial intelligence.
While at NASA, Taylor fell into the orbit of J. C. R. Licklider, whose interests in psychology and information technology led him to anticipate the full potential of interactive computing.
In his seminal 1960 paper “Man-Computer Symbiosis,” Licklider foresaw an era when computerized systems would entirely displace humans.
However, he also predicted an interim period that might span from fifteen to five hundred years in which humans and computers would cooperate.
He believed that period would be “intellectually the most creative and exciting [time] in the history of mankind.”

Taylor moved to ARPA in 1965 as Licklider’s protégé.
He set about funding the ARPAnet, the first nationwide research-oriented computer network.
In 1968 the two men coauthored a follow-up to Licklider’s symbiosis paper titled “The Computer as a Communication Device.”
In it, Licklider and Taylor were possibly the first to delineate the coming impact of computer networks on society.

Today, even after decades of research in human-machine and human-computer interaction in the airplane cockpit, the argument remains unsettled—and has emerged again with the rise of autonomous navigation in trains and automobiles.
While Google leads in driverless-car research, the legacy automobile industry has started to deploy intelligent systems that can offer autonomous driving in some well-defined cases, such as stop-and-go traffic jams, but then return the car to human control in situations recognized as too complex or risky for the autopilot.
It may take seconds for a human sitting in the driver’s seat, possibly distracted by an email or worse, to return to “situational awareness” and safely resume control of the car.
Indeed the Google researchers may have already come up against the limits to autonomous driving.
There is currently a growing consensus that the “handoff” problem—returning manual control of an autonomous car to a human in the event of an emergency—may not actually be a solvable one.
If that proves true, the development of the safer cars of the future will tend toward augmentation technology rather than automation technology.
Completely autonomous driving might ultimately be limited to special cases like low-speed urban services and freeway driving.

Nevertheless, the NASA disputes were a harbinger of the emerging world of autonomous machines.
During the first fifty years of interactive computing, beginning in the mid-sixties, computers largely augmented humans instead of replacing them.
The technologies that became the hallmark of Silicon Valley—personal computing and the Internet—largely amplified
human intellect, although it was undeniably the case that an “augmented” human could do the work of several (former) coworkers.
Today, in contrast, system designers have a choice.
As AI technologies including vision, speech, and reasoning have begun to mature, it is increasingly possible to design humans either in or out of “the loop.”

Funded first by J. C. R. Licklider and then, beginning in 1965, by Bob Taylor, John McCarthy and Doug Engelbart worked in laboratories just miles apart from each other at the outset of the modern computing era.
They might as well have been in different universes.
Both were funded by ARPA, but they had little if any contact.
McCarthy was a brilliant, if somewhat cranky, mathematician and Engelbart was an Oregon farm boy and a dreamer.

The outcome of their competing pioneering research was unexpected.
When McCarthy came to Stanford to create the Stanford Artificial Intelligence Laboratory in the mid-1960s, his work was at the very heart of computer science, focusing on big concepts like artificial intelligence and proof of software program correctness using formal logic.
Engelbart, on the other hand, set out to build a “framework” for augmenting the human intellect.
It was initially a more nebulous concept viewed as far outside the mainstream of academic computer science, and yet for the first three decades of the interactive computing era Engelbart’s ideas had more worldly impact.
Within a decade the first modern personal computers emerged, followed later by information-sharing technologies like the World Wide Web, both of which can be traced in part to Engelbart’s research.
