The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies

Innovators reasoned that there is nothing stopping printers from depositing layers one on top of the other. And instead of ink, printers can also deposit materials like liquid plastic that gets cured into a solid by ultraviolet light. Each layer is very thin—somewhere around one-tenth of a millimeter—but over time a three-dimensional object takes shape. And because of the way it is built up, this shape can be quite complicated—it can have voids and tunnels in it, and even parts that move independently of one another. At the San Francisco headquarters of Autodesk, a leading design software company, we handled a working adjustable wrench that was printed as a single part, no assembly required.[40]

This wrench was a demonstration product made out of plastic, but 3D printing has expanded into metals as well. Autodesk CEO Carl Bass is part of the large and growing community of additive manufacturing hobbyists and tinkerers. During our tour of his company’s gallery, a showcase of all the products and projects enabled by Autodesk software, he showed us a beautiful metal bowl he designed on a computer and had printed out. The bowl had an elaborate lattice pattern on its sides. Bass said that he’d asked friends of his who were experienced in working with metal—sculptors, ironworkers, welders, and so on—how the bowl was made. None of them could figure out how the lattice was produced. The answer was that a laser had built up each layer by fusing powdered metal.

3D printing today is not just for art projects like Bass’s bowl. It’s used by countless companies every day to make prototypes and model parts. It’s also being used for final parts ranging from plastic vents and housings on NASA’s next-generation Moon rover to a metal prosthetic jawbone for an eighty-three-year-old woman. In the near future, it might be used to print out replacement parts for faulty engines on the spot instead of maintaining stockpiles of them in inventory. Demonstration projects have even shown that the technique could be used to build concrete houses.[41]

Most of the innovations described in this chapter have occurred in just the past few years. They’ve taken place in areas where improvement had been frustratingly slow for a long time, and where the best thinking often led to the conclusion that it wouldn’t speed up. But then digital progress became sudden after being gradual for so long. This happened in multiple areas, from artificial intelligence to self-driving cars to robotics.

How did this happen? Was it a fluke—a confluence of a number of lucky one-time advances? No, it was not. The digital progress we’ve seen recently is certainly impressive, but it’s just a small indication of what’s to come. It’s the dawn of the second machine age. To understand why it’s unfolding now, we need to understand the nature of technological progress in the era of digital hardware, software, and networks. In particular, we need to understand its three key characteristics: that it is exponential, digital, and combinatorial. The next three chapters will discuss each of these in turn.

* In the years leading up to the Great Recession that began in 2007, companies were giving mortgages to people with lower and lower credit scores, income, and wealth, and higher and higher debt levels. In other words, they either rewrote or ignored their previous mortgage approval algorithms. It wasn’t that the old mortgage algorithms stopped working; it was that they stopped being used.

* To be precise, Trebek reads answers and the contestants have to state the question that would give rise to that answer.

* Sensorimotor skills are those that involve sensing the physical world and controlling the body to move through it.

“The greatest shortcoming of the human race is our inability to understand the exponential function.”

—Albert A. Bartlett

Although he’s cofounder of Intel, a major philanthropist, and recipient of the Presidential Medal of Freedom, Gordon Moore is best known for a prediction he made, almost as an aside, in a 1965 article. Moore, then working at Fairchild Semiconductor, wrote an article for Electronics magazine with the admirably direct title “Cramming More Components onto Integrated Circuits.” At the time, circuits of this type—which combined many different kinds of electrical components onto a single chip made primarily of silicon—were less than a decade old, but Moore saw their potential. He wrote that “Integrated circuits will lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.”[1]

The article’s most famous forecast, however, and the one that has made Moore a household name, concerned the component cramming of the title:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. . . . Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years.[2]

This is the original statement of Moore’s Law, and it’s worth dwelling for a moment on its implications. “Complexity for minimum component costs” here essentially means the amount of integrated circuit computing power you could buy for one dollar. Moore observed that over the relatively brief history of his industry this amount had doubled each year: you could buy twice as much power per dollar in 1963 as you could in 1962, twice as much again in 1964, and twice as much again in 1965.

Moore predicted this state of affairs would continue, perhaps with some change to timing, for at least another ten years. This bold statement forecast circuits that would be more than five hundred times as powerful in 1975 as they were in 1965.*
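The compounding behind that forecast is easy to check. The short Python sketch below is our own illustration, not anything from Moore’s article; it computes the cumulative improvement factor for a given span of years and doubling period. Ten annual doublings from 1965 to 1975 give a factor of 2^10 = 1,024, comfortably more than five hundred times.

    # Cumulative improvement from steady doubling (illustrative sketch only).
    def growth_factor(years, doubling_period_years=1.0):
        """Improvement factor after `years` of doubling every `doubling_period_years` years."""
        return 2 ** (years / doubling_period_years)

    print(growth_factor(10))         # 1024.0 -- ten annual doublings, 1965 to 1975
    print(growth_factor(40, 1.5))    # roughly 1e8 -- four decades of doubling
                                     # every eighteen months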

As it turned out, however, Moore’s biggest mistake was in being too conservative. His “law” has held up astonishingly well for over four decades, not just one, and has been true for digital progress in areas other than integrated circuits. It’s worth noting that the time required for digital doubling remains a matter of dispute. In 1975 Moore revised his estimate upward from one year to two, and today it’s common to use eighteen months as the doubling period for general computing power. Still, there’s no dispute that Moore’s Law has proved remarkably prescient for almost half a century.[3]

It’s Not a Law: It’s a Bunch of Good Ideas

Moore’s Law is very different from the laws of physics that govern thermodynamics or Newtonian classical mechanics. Those laws describe how the universe works; they’re true no matter what we do. Moore’s Law, in contrast, is a statement about the work of the computer industry’s engineers and scientists; it’s an observation about how constant and successful their efforts have been. We simply don’t see this kind of sustained success in other domains.

There was no period of time when cars got twice as fast or twice as fuel efficient every year or two for fifty years. Airplanes don’t consistently have the ability to fly twice as far, or trains the ability to haul twice as much. Olympic runners and swimmers don’t cut their times in half over a generation, let alone a couple of years.

So how has the computer industry kept up this amazing pace of improvement?

There are two main reasons. First, while transistors and the other elements of computing are constrained by the laws of physics just like cars, airplanes, and swimmers, the constraints in the digital world are much looser. They have to do with how many electrons per second can be put through a channel etched in an integrated circuit, or how fast beams of light can travel through fiber-optic cable. At some point digital progress bumps up against its constraints and Moore’s Law must slow down, but it takes a while. Henry Samueli, chief technology officer of chipmaker Broadcom Corporation, predicted in 2013 that “Moore’s Law is coming to an end—in the next decade it will pretty much come to an end so we have 15 years or so.”[4]

But smart people have been predicting the end of Moore’s Law for a while now, and they’ve been proved wrong over and over again.[5] This is not because they misunderstood the physics involved, but because they underestimated the people working in the computer industry. The second reason that Moore’s Law has held up so well for so long is what we might call ‘brilliant tinkering’—finding engineering detours around the roadblocks thrown up by physics. When it became difficult to cram integrated circuits more tightly together, for example, chip makers instead layered them on top of one another, opening up a great deal of new real estate. When communications traffic threatened to outstrip the capacity even of fiber-optic cable, engineers developed wavelength division multiplexing (WDM), a technique for transmitting many beams of light of different wavelengths down the same single glass fiber at the same time. Over and over again, brilliant tinkering has found ways to skirt the limitations imposed by physics. As Intel executive Mike Marberry puts it, “If you’re only using the same technology, then in principle you run into limits. The truth is we’ve been modifying the technology every five or seven years for 40 years, and there’s no end in sight for being able to do that.”[6] This constant modification has made Moore’s Law the central phenomenon of the computer age. Think of it as a steady drumbeat in the background of the economy.

Charting the Power of Constant Doubling

Once this doubling has been going on for some time, the later numbers overwhelm the earlier ones, making them appear irrelevant. To see this, let’s look at a hypothetical example. Imagine that Erik gives Andy a tribble, the fuzzy creature with a high reproductive rate made famous in an episode of Star Trek. Every day each tribble gives birth to another tribble, so Andy’s menagerie doubles in size each day. A geek would say in this case that the tribble family is experiencing exponential growth. That’s because the mathematical expression for how many tribbles there are on day x is 2^(x – 1), where the x – 1 is referred to as an exponent. Exponential growth like this is fast growth; after two weeks Andy has more than sixteen thousand of the creatures. Here’s a graph of how his tribble family grows over time:

FIGURE 3.1
Tribbles over Time: The Power of Constant Doubling
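For readers who want to check the arithmetic, a few lines of Python (a sketch of our own that uses nothing beyond the 2^(x – 1) formula above) reproduce the head count:

    # Tribbles on day x under daily doubling: 2 ** (x - 1), starting from 1 on day 1.
    def tribbles(day):
        return 2 ** (day - 1)

    for day in (1, 2, 7, 15):
        print(day, tribbles(day))
    # day 1 -> 1, day 2 -> 2, day 7 -> 64,
    # day 15 -> 16384: just past two weeks, "more than sixteen thousand" of the creatures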

This graph is accurate, but misleading in an important sense. It seems to show that all the action occurs in the last couple of days, with nothing much happening in the first week. But the same phenomenon—the daily doubling of tribbles—has been going on the whole time with no accelerations or disruptions. This steady exponential growth is what’s really interesting about Erik’s ‘gift’ to Andy. To make it more obvious, we have to change the spacing of the numbers on the graph.

The graph we’ve already drawn has standard linear spacing; each segment of the vertical axis indicates two thousand more tribbles. This is fine for many purposes but, as we’ve seen, it’s not great for showing exponential growth. To highlight it better, we’ll change to logarithmic spacing, where each segment of the vertical axis represents a tenfold increase in tribbles: an increase first from 1 to 10, then from 10 to 100, then from 100 to 1,000, and so on. In other words, we scale the axis by powers of 10 or orders of magnitude.

Logarithmic graphs have a wonderful property: they show exponential growth as a perfectly straight line. Here’s what the growth of Andy’s tribble family looks like on a logarithmic scale:

FIGURE 3.2
Tribbles over Time: The Power of Constant Doubling
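The contrast between the two figures is easy to reproduce. Here is a minimal sketch, assuming Python with the matplotlib library installed, that plots the same fifteen days of tribble data twice and switches the second panel’s vertical axis to a logarithmic scale; the steep curve of Figure 3.1 becomes the straight line of Figure 3.2.

    # Same data, two views: linear spacing versus logarithmic spacing.
    import matplotlib.pyplot as plt

    days = list(range(1, 16))
    counts = [2 ** (d - 1) for d in days]

    fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(8, 3))
    linear_ax.plot(days, counts)
    linear_ax.set_title("Linear scale (as in Figure 3.1)")
    log_ax.plot(days, counts)
    log_ax.set_yscale("log")   # vertical axis now rises by powers of ten
    log_ax.set_title("Logarithmic scale (as in Figure 3.2)")
    plt.show()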
