The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies

In the summer of 2012, we went for a drive in a car that had no driver.

During a research visit to Google’s Silicon Valley headquarters, we got to ride in one of the company’s autonomous vehicles, developed as part of its Chauffeur project. Initially we had visions of cruising in the back seat of a car that had no one in the front seat, but Google is understandably skittish about putting obviously autonomous autos on the road. Doing so might freak out pedestrians and other drivers, or attract the attention of the police. So we sat in the back while two members of the Chauffeur team rode up front.

When one of the Googlers hit the button that switched the car into fully automatic driving mode while we were headed down Highway 101, our curiosities—and self-preservation instincts—engaged. The 101 is not always a predictable or calm environment. It’s nice and straight, but it’s also crowded most of the time, and its traffic flows have little obvious rhyme or reason. At highway speeds the consequences of driving mistakes can be serious ones. Since we were now part of the ongoing Chauffeur experiment, these consequences were suddenly of more than just intellectual interest to us.

The car performed flawlessly. In fact, it actually provided a boring ride. It didn’t speed or slalom among the other cars; it drove exactly the way we’re all taught to in driver’s ed. A laptop in the car provided a real-time visual representation of what the Google car ‘saw’ as it proceeded along the highway—all the nearby objects of which its sensors were aware. The car recognized all the surrounding vehicles, not just the nearest ones, and it remained aware of them no matter where they moved. It was a car without blind spots. But the software doing the driving was aware that cars and trucks driven by humans do have blind spots. The laptop screen displayed the software’s best guess about where all these blind spots were and worked to stay out of them.

We were staring at the screen, paying no attention to the actual road, when traffic ahead of us came to a complete stop. The autonomous car braked smoothly in response, coming to a stop a safe distance behind the car in front, and started moving again once the rest of the traffic did. All the while the Googlers in the front seat never stopped their conversation or showed any nervousness, or indeed much interest at all in current highway conditions. Their hundreds of hours in the car had convinced them that it could handle a little stop-and-go traffic. By the time we pulled back into the parking lot, we shared their confidence.

The New New Division of Labor

Our ride that day on the 101 was especially weird for us because, only a few years earlier, we were sure that computers would not be able to drive cars. Excellent research and analysis, conducted by colleagues whom we respect a great deal, concluded that driving would remain a human task for the foreseeable future. How they reached this conclusion, and how technologies like Chauffeur started to overturn it in just a few years, offers important lessons about digital progress.

In 2004 Frank Levy and Richard Murnane published their book The New Division of Labor.1 The division they focused on was between human and digital labor—in other words, between people and computers. In any sensible economic system, people should focus on the tasks and jobs where they have a comparative advantage over computers, leaving computers the work for which they are better suited. In their book Levy and Murnane offered a way to think about which tasks fell into each category.

One hundred years ago the previous paragraph wouldn’t have made any sense. Back then, computers were humans. The word was originally a job title, not a label for a type of machine. Computers in the early twentieth century were people, usually women, who spent all day doing arithmetic and tabulating the results. Over the course of decades, innovators designed machines that could take over more and more of this work; they were first mechanical, then electro-mechanical, and eventually digital. Today, few people if any are employed simply to do arithmetic and record the results. Even in the lowest-wage countries there are no human computers, because the nonhuman ones are far cheaper, faster, and more accurate.

If you examine their inner workings, you realize that computers aren’t just number crunchers; they’re symbol processors. Their circuitry can be interpreted in the language of ones and zeroes, but equally validly as true or false, yes or no, or any other symbolic system. In principle, they can do all manner of symbolic work, from math to logic to language. But digital novelists are not yet available, so people still write all the books that appear on fiction bestseller lists. We also haven’t yet computerized the work of entrepreneurs, CEOs, scientists, nurses, restaurant busboys, or many other types of workers. Why not? What is it about their work that makes it harder to digitize than what human computers used to do?
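
As a small illustration of this point (ours, not the book’s), the snippet below reads one stored byte three different ways: as a number, as a truth value, and as a character. The interpretations are choices we impose, not anything inherent in the bits.

```python
# A minimal sketch (our illustration): the same stored bit pattern can be
# read as a number, a logical value, or a character, depending on the
# symbolic system we choose to interpret it in.
bits = 0b01000001            # one byte of stored "circuitry state"

as_number = bits             # arithmetic interpretation: 65
as_logic = bool(bits)        # logical interpretation: True
as_character = chr(bits)     # linguistic interpretation: 'A'

print(as_number, as_logic, as_character)   # 65 True A
```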

Computers Are Good at Following Rules . . .

These are the questions Levy and Murnane tackled in The New Division of Labor, and the answers they came up with made a great deal of sense. The authors put information processing tasks—the foundation of all knowledge work—on a spectrum. At one end are tasks like arithmetic that require only the application of well-understood rules. Since computers are really good at following rules, it follows that they should do arithmetic and similar tasks.

Levy and Murnane go on to highlight other types of knowledge work that can also be expressed as rules. For example, a person’s credit score is a good general predictor of whether they’ll pay back their mortgage as promised, as is the amount of the mortgage relative to the person’s wealth, income, and other debts. So the decision about whether or not to give someone a mortgage can be effectively boiled down to a rule.

Expressed in words, a mortgage rule might say, “If a person is requesting a mortgage of amount M and they have a credit score of V or higher, annual income greater than I or total wealth greater than W, and total debt no greater than D, then approve the request.” When expressed in computer code, we call a mortgage rule like this an algorithm. Algorithms are simplifications; they can’t and don’t take everything into account (like a billionaire uncle who has included the applicant in his will and likes to rock-climb without ropes). Algorithms do, however, include the most common and important things, and they generally work quite well at tasks like predicting payback rates. Computers, therefore, can and should be used for mortgage approval.*
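
To make the idea concrete, here is a minimal sketch (our illustration, not Levy and Murnane’s) of how such a rule might look once expressed in code. The threshold values are hypothetical stand-ins for V, I, W, and D; a real lender’s algorithm would be far more involved.

```python
# A hedged sketch of the mortgage rule described above, with made-up thresholds.
def approve_mortgage(amount, credit_score, income, wealth, debt,
                     min_score=700, min_income=60_000,
                     min_wealth=250_000, max_debt=50_000):
    """Return True if the request satisfies the simple rule in the text:
    credit score of at least V, income above I or wealth above W,
    and total debt no greater than D. `amount` (M) identifies the request;
    this simple rule does not condition on it."""
    if credit_score < min_score:
        return False
    if not (income > min_income or wealth > min_wealth):
        return False
    if debt > max_debt:
        return False
    return True

# Example: a hypothetical request for a $300,000 mortgage
print(approve_mortgage(amount=300_000, credit_score=720,
                       income=85_000, wealth=40_000, debt=20_000))  # True
```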

. . . But Lousy at Pattern Recognition

At the other end of Levy and Murnane’s spectrum, however, lie information processing tasks that cannot be boiled down to rules or algorithms. According to the authors, these are tasks that draw on the human capacity for pattern recognition. Our brains are extraordinarily good at taking in information via our senses and examining it for patterns, but we’re quite bad at describing or figuring out how we’re doing it, especially when a large volume of fast-changing information arrives at a rapid pace. As the philosopher Michael Polanyi famously observed, “We know more than we can tell.”2 When this is the case, according to Levy and Murnane, tasks can’t be computerized and will remain in the domain of human workers. The authors cite driving a vehicle in traffic as an example of such a task. As they write,

As the driver makes his left turn against traffic, he confronts a wall of images and sounds generated by oncoming cars, traffic lights, storefronts, billboards, trees, and a traffic policeman. Using his knowledge, he must estimate the size and position of each of these objects and the likelihood that they pose a hazard. . . . The truck driver [has] the schema to recognize what [he is] confronting. But articulating this knowledge and embedding it in software for all but highly structured situations are at present enormously difficult tasks. . . . Computers cannot easily substitute for humans in [jobs like driving].

So Much for That Distinction

We were convinced by Levy and Murnane’s arguments when we read The New Division of Labor in 2004. We were further convinced that year by the initial results of the DARPA Grand Challenge for driverless cars.

DARPA, the Defense Advanced Research Projects Agency, was founded in 1958 (in response to the Soviet Union’s launch of the Sputnik satellite) and tasked with spurring technological progress that might have military applications. In 2002 the agency announced its first Grand Challenge, which was to build a completely autonomous vehicle that could complete a 150-mile course through California’s Mojave Desert. Fifteen entrants performed well enough in a qualifying run to compete in the main event, which was held on March 13, 2004.

The results were less than encouraging. Two vehicles didn’t make it to the starting area, one flipped over in the starting area, and three hours into the race only four cars were still operational. The “winning” Sandstorm car from Carnegie Mellon University covered 7.4 miles (less than 5 percent of the total) before veering off the course during a hairpin turn and getting stuck on an embankment. The contest’s $1 million prize went unclaimed, and Popular Science called the event “DARPA’s Debacle in the Desert.”3

Within a few years, however, the debacle in the desert became the ‘fun on the 101’ that we experienced. Google announced in an October 2010 blog post that its completely autonomous cars had for some time been driving successfully, in traffic, on American roads and highways. By the time we took our ride in the summer of 2012 the Chauffeur project had grown into a small fleet of vehicles that had collectively logged hundreds of thousands of miles with no human involvement and with only two accidents. One occurred when a person was driving the Chauffeur car; the other happened when a Google car was rear-ended (by a human driver) while stopped at a red light.4 To be sure, there are still many situations that Google’s cars can’t handle, particularly complicated city traffic or off-road driving or, for that matter, any location that has not already been meticulously mapped in advance by Google. But our experience on the highway convinced us that it’s a viable approach for the large and growing set of everyday driving situations.

Self-driving cars went from being the stuff of science fiction to on-the-road reality in a few short years. Cutting-edge research explaining why they were not coming anytime soon was outpaced by cutting-edge science and engineering that brought them into existence, again in the space of a few short years. This science and engineering accelerated rapidly, going from a debacle to a triumph in a little more than half a decade.

Improvement in autonomous vehicles reminds us of Hemingway’s quote about how a man goes broke: “Gradually and then suddenly.”5 And self-driving cars are not an anomaly; they’re part of a broad, fascinating pattern. Progress on some of the oldest and toughest challenges associated with computers, robots, and other digital gear was gradual for a long time. Then in the past few years it became sudden; digital gear started racing ahead, accomplishing tasks it had always been lousy at and displaying skills it was not supposed to acquire anytime soon. Let’s look at a few more examples of surprising recent technological progress.

Good Listeners and Smooth Talkers

In addition to pattern recognition, Levy and Murnane highlight complex communication as a domain that would stay on the human side in the new division of labor. They write that “Conversations critical to effective teaching, managing, selling, and many other occupations require the transfer and interpretation of a broad range of information. In these cases, the possibility of exchanging information with a computer, rather than another human, is a long way off.”6

In the fall of 2011, Apple introduced the iPhone 4S featuring “Siri,” an intelligent personal assistant that worked via a natural-language user interface. In other words, people talked to it just as they would talk to another human being. The software underlying Siri, which originated at the California research institute SRI International and was purchased by Apple in 2010, listened to what iPhone users were saying to it, tried to identify what they wanted, then took action and reported back to them in a synthetic voice.
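
At a high level, the description above is a pipeline: transcribe the user’s speech to text, map the text to an intent, act on that intent, and speak the result back. The sketch below is our simplification, with made-up placeholder functions and data, not Apple’s or SRI International’s actual implementation.

```python
# A hedged, schematic sketch of a voice-assistant pipeline: transcribe,
# interpret, act, respond. All names and values here are illustrative
# placeholders, not Siri's real components.
def handle_utterance(audio):
    text = transcribe(audio)        # speech recognition: audio -> words
    intent = interpret(text)        # language understanding: words -> structured intent
    result = act(intent)            # e.g., look up a score, set a reminder
    speak(result)                   # report back in a (here, printed) voice

def transcribe(audio):
    # Placeholder: a real system would run a speech-recognition model here.
    return "what's the score of the Giants game"

def interpret(text):
    # Placeholder: map the words to a structured request.
    if "score" in text:
        return {"type": "sports_score", "team": "Giants"}
    return {"type": "unknown"}

def act(intent):
    # Placeholder: fetch an answer from some service (value is made up).
    if intent["type"] == "sports_score":
        return f"The {intent['team']} lead 3-2."
    return "I'm not sure how to help with that."

def speak(reply):
    # Placeholder: a real assistant would synthesize speech here.
    print(reply)

handle_utterance(audio=None)  # prints: The Giants lead 3-2.
```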

After Siri had been out for about eight months, Kyle Wagner of technology blog Gizmodo listed some of its most useful capabilities: “You can ask about the scores of live games—‘What’s the score of the Giants game?’—or about individual player stats. You can also make OpenTable reservations, get Yelp scores, ask about what movies are playing at a local theater and then see a trailer. If you’re busy and can’t take a call, you can ask Siri to remind you to call the person back later. This is the kind of everyday task for which voice commands can actually be incredibly useful.”7

The Gizmodo post ended with a caution: “That actually sounds pretty cool. Just with the obvious Siri criterion: If it actually works.”8 Upon its release, a lot of people found that Apple’s intelligent personal assistant didn’t work well. It didn’t understand what they were saying, asked for repeated clarifications, gave strange or inaccurate answers, and put them off with responses like “I’m really sorry about this, but I can’t take any requests right now. Please try again in a little while.” Analyst Gene Munster catalogued questions with which Siri had trouble:


Where is Elvis buried? Responded, “I can’t answer that for you.” It thought the person’s name was Elvis Buried.

When did the movie Cinderella come out? Responded with a movie theater search on Yelp.
