Despite the growing debate over the consequences of the next generation of automation, there has been very little discussion about the designers and their values. When pressed, computer scientists, roboticists, and technologists offer conflicting views. Some want to replace humans with machines; some are resigned to the inevitability—“I, for one, welcome our new insect overlords” (later “robot overlords”) was a meme popularized by The Simpsons—and some just as passionately want to build machines that extend the reach of humans. The question of whether true artificial intelligence—the concept known as “Strong AI,” or Artificial General Intelligence—will emerge, and whether machines can do more than mimic humans, has been debated for decades. Today a growing chorus of scientists and technologists is raising new alarms about the possible emergence of self-aware machines and its consequences. Discussions about the state of AI technology today often veer into the realm of science fiction, or perhaps religion. The reality of machine autonomy, however, is no longer merely a philosophical or hypothetical question. We have reached the point where machines are capable of performing many human tasks that require intelligence as well as muscle: they can do factory work, drive vehicles, diagnose illnesses, and understand documents, and they can certainly control weapons and kill with deadly accuracy.

The AI-versus-IA dichotomy is nowhere clearer than in a new generation of weapons systems now on the horizon. Developers at DARPA are about to cross a new technological threshold with a replacement for today’s cruise missiles, the Long Range Anti-Ship Missile, or LRASM. Developed for the navy, it is scheduled to join the U.S. fleet in 2018. Unlike its predecessors, this new addition to the U.S. arsenal has the ability to make targeting decisions autonomously. The LRASM is designed to fly to an enemy fleet while out of contact with human controllers and then use artificial intelligence technologies to decide which target to kill.

The new ethical dilemma is this: Will humans allow their weapons to pull the trigger on their own, without human oversight? Variations of that same challenge are inherent in the rapid computerization of the automobile, and transportation in general is emblematic of the consequences of the new wave of smart machines. Artificial intelligence is poised to have a greater impact on society than personal computing and the Internet have had since the 1990s. Significantly, the transformation is being shepherded by a group of elite technologists.

Several years ago Jerry Kaplan, a Silicon Valley veteran who began his career as a Stanford artificial intelligence researcher and then walked away from the field during the 1980s, warned a group of Stanford computer scientists and graduate student researchers: “Your actions today, right here in the Artificial Intelligence Lab, as embodied in the systems you create, may determine how society deals with this issue.” The imminent arrival of the next generation of AI poses a crucial ethical challenge, he contended: “We’re in danger of incubating robotic life at the expense of our own life.”[1] The dichotomy he sketched out for the researchers is the gap between intelligent machines that displace humans and human-centered computing systems that extend human capabilities.

Like many technologists in Silicon Valley, Kaplan believes we are on the brink of creating an entire economy that runs largely without human intervention. That may sound apocalyptic, but the future Kaplan described will almost certainly arrive. His deeper point was that today’s accelerating technology isn’t arriving blindly: the engineers who are designing our future are each—individually—making choices.

On an abandoned military base in the California desert during the fall of 2007, a short, heavyset man holding a checkered flag stepped out onto a dusty makeshift racing track and waved it energetically as a Chevrolet Tahoe SUV glided past at a leisurely pace.
The flag waver was Tony Tether, the director of DARPA.

There was no driver behind the wheel of the vehicle, which sported a large GM decal. Closer examination revealed no passengers in the car, and none of the other cars in the “race” had drivers or passengers either. To anyone viewing the event, in which the cars glided seemingly endlessly through a makeshift town previously used to train troops in urban combat, it didn’t seem to be a race at all. It felt more like an afternoon of stop-and-go Sunday traffic in a science-fiction movie like Blade Runner.

Indeed, by almost any standard it was an odd event. The DARPA Urban Challenge pitted teams of roboticists, artificial intelligence researchers, students, automotive engineers, and software hackers against one another in an effort to design and build robot vehicles capable of driving autonomously in urban traffic. The event was the third in a series of contests that Tether had organized. At the time, military technology largely amplified a soldier’s killing power rather than replacing the soldier. Robotic military planes were flown by humans and, in some cases, by extraordinarily large groups of soldiers. A report by the Defense Science Board in 2012 noted that for many military operations it might take a team of several hundred personnel to fly a single drone mission.[2]

Unmanned ground vehicles were a more complicated challenge. The problem, as one DARPA manager would put it, was that “the ground was hard”—“hard” as in “hard to drive on,” rather than as in “rock.” Following a road is challenging enough, but robot car designers are confronted with an endless array of special cases: driving at night, driving into the sun, driving in rain or on ice—the list goes on indefinitely.

Consider the problem of designing a machine that knows how to react to something as simple as a plastic bag in a lane on the highway. Is the bag hard, or is it soft? Will it damage the vehicle? In a war zone, it might be an improvised explosive device. Humans can see and react to such challenges seemingly without effort, at least when driving at low speed with good visibility. For AI researchers, however, solving that problem is a holy grail of computer vision. It became one of a myriad of similar challenges that DARPA set out to solve in creating the autonomous vehicle Grand Challenge events. In the 1980s roboticists in both Germany and the United States had made scattered progress toward autonomous driving, but the reality was that it was easier to build a robot that could go to the moon than one that could drive itself through rush-hour traffic. And so Tony Tether took up the challenge. The endeavor was risky: if the contests failed to produce results, the Grand Challenge series would become known as Tether’s Folly. Thus the checkered flag at the final race proved to be as much a victory lap for Tether as for the cars.

There had been darker times. Under Tether’s directorship the agency hired Admiral John Poindexter to build the system known as Total Information Awareness, a vast data-mining project intended to hunt terrorists online by collecting and connecting the dots in oceans of credit card, email, and phone records. The project ignited a privacy firestorm and was soon canceled by Congress in May of 2003. Although Total Information Awareness vanished from public view, it in fact moved into the nation’s intelligence bureaucracy, only to become visible again in 2013, when Edward Snowden leaked hundreds of thousands of documents revealing a deep and broad range of systems for surveilling almost any activity that might be of interest. In the pantheon of DARPA directors, Tether was also something of an odd duck. He survived the Total Information Awareness scandal and pushed the agency ahead in other areas, maintaining a deep and controlling involvement in all of the agency’s research projects. (Indeed, Tether’s decision to wave the checkered flag himself was emblematic of his tenure at DARPA—Tony Tether was a micromanager.)

DARPA was founded in response to the Soviet launch of Sputnik, which was like a thunderbolt to an America that believed in its technological supremacy. With the explicit mission of ensuring that the United States was never again technologically superseded by another power, the directors of DARPA—at birth more simply named the Advanced Research Projects Agency—had been scientists and engineers willing to place huge bets on blue-sky technologies, and they maintained close relationships with, and a real sense of affection for, the nation’s best university researchers.

Not so with Tony Tether, who represented the George W. Bush era. He had worked for decades as a program manager for secretive military contractors and, like many surrounding George W. Bush, was wary of the nation’s academic institutions, which he thought too independent to be trusted with the new mission. Small wonder. Tether’s worldview had been formed while he was an electrical engineering graduate student at Stanford University during the 1960s, when there was a sharp division between the antiwar students and the scientists and engineers helping the Vietnam War effort by designing advanced weapons.

After arriving as director, he went to work changing the culture of an agency that had gained a legendary reputation for helping invent everything from the Internet to stealth fighter technology. He rapidly moved money away from the universities and toward classified work done by military contractors supporting the twin wars in Iraq and Afghanistan. The agency moved away from “blue sky” research toward “deliverables.” Publicly, Tether made the case that it was still possible to innovate in secret, as long as you fostered the competitive culture of Silicon Valley, with its turmoil of new ideas and its rewards for good tries even when they failed.

And Tether certainly took DARPA in new technological directions. His concern for the thousands of maimed veterans coming back without limbs, and his interest in increasing the power and effectiveness of military decision-makers, inspired him to push agency dollars into human augmentation projects as well as artificial intelligence. That meant robotic arms and legs for wounded soldiers, and an “admiral’s advisor,” a military version of what Doug Engelbart had set out to build in the 1960s with his vision of intelligence augmentation, or IA. The project was referred to as PAL, for Personalized Assistant that Learns, and much of the research would be done at SRI International, which dubbed its project CALO, or Cognitive Assistant that Learns and Organizes.

It was ironic that Tether returned to the research agenda originally promoted during the mid-1960s by two visionary DARPA program managers, Robert Taylor and J. C. R. Licklider. It was also bittersweet, although few mentioned it, that despite Doug Engelbart’s tremendous success in the early 1970s, his project had faltered and fallen out of favor at SRI. He ended up being shuffled off to a time-sharing company for commercialization, where his project sat relatively unnoticed and underfunded for more than a decade. The renewed DARPA investment would touch off a wave of commercial innovation—CALO would lead, most significantly, to Apple’s Siri personal assistant, a direct descendant of the augmentation approach Engelbart had pioneered.

Tether’s automotive Grand Challenge drew garage innovators and eager volunteers out of the woodwork. In military terms it was a “force multiplier,” allowing the agency to reap many times the innovation it would get from traditional contracting efforts. At its heart, however, the specific challenge Tether chose to pursue had been cooked up more than a decade earlier inside the same university research community he now disfavored. The guiding force behind the GM robot SUV that would win the Urban Challenge in 2007 was a Carnegie Mellon roboticist who had been itching to win this prize for more than a decade.

In the fall of 2005, Tether’s second robot race through the California desert had just ended at the Nevada border, and Stanford University’s roboticists were celebrating. Stanley, the once crash-prone computerized Volkswagen Touareg, had just pulled off a come-from-behind victory and rolled under a large banner before a cheering audience of several thousand.

Just a few feet away in another tent, however, the atmosphere had the grim quality of a losing football team’s locker room. The Carnegie Mellon team had been the favorite, with two robot vehicle entries and a no-nonsense leader, William L. “Red” Whittaker, a former marine and rock climber. His team had lost the race to a damnable spell of bad luck. Whittaker had barnstormed into the first DARPA race eighteen months earlier with another heavily funded GM Humvee, only to fail when the car placed a wheel just slightly off the road on a steep climb. Trapped in the sand, it was out of the competition. Up to then, Whittaker’s robot had been head and shoulders above the others. So when he returned the second time with a two-car fleet and a squad of photo analysts to pore over the course ahead of the competition, he had easily been cast as the odds-on favorite.
