You Are Not a Gadget: A Manifesto


This is an extremely ambitious vision, because, among other things, it involves the representation of ideas that are usually expressed in natural language (in contracts), and because, at the cloud level, it must reconcile multiple contracts that may often be underspecified and reveal ambiguities and/or contradictions in an emerging system of expressions.

But while these problems will be a headache for software developers, they might also ultimately force financiers to become better at describing what they do. They aren’t artists who should be allowed to make ambiguous, impossible-to-parse creations. The need to interoperate more tightly with the “dumbness” of software could help them undertake their work more clearly and safely.

Furthermore, this sort of transaction representation has already been done internally within some of the more sophisticated hedge funds. Computer science is mature enough to take this problem on.

*
Some of my collaborators in this research include Paul Borrill, Jim Herriot, Stuart Kauffman, Bruce Sawhill, Lee Smolin, and Eric Weinstein.

PART THREE
The Unbearable Thinness of Flatness

 

THREE WARNINGS have been presented in the previous chapters, conveying my belief that cybernetic totalism will ultimately be bad for spirituality, morality, and business. In my view, people have often respected bits too much, resulting in a creeping degradation of their own qualities as human beings.

This section addresses another kind of danger that can arise from believing in bits too much. Recall that in Chapter 1 I made a distinction between ideal and real computers. Ideal computers can be experienced when you write a small program. They seem to offer infinite possibilities and an extraordinary sense of freedom. Real computers are experienced when we deal with large programs. They can trap us in tangles of code and make us slaves to legacy—and not just in matters of obscure technological decisions. Real computers reify our philosophies through the process of lock-in before we are ready.

People who use metaphors drawn from computation when they think about reality naturally prefer to think about ideal computers instead of real ones. Thus, the cultural software engineers usually present us with a world in which each cultural expression is like a brand-new tiny program, free to be anything at all.

That’s a sweet thought, but it brings about an unfortunate side effect. If each cultural expression is a brand-new tiny program, then they are all aligned on the same starting line. Each one is created using the same resources as every other one.

This is what I call a “flat” global structure. It suggests a happy world to software technologists, because every little program in a flat global structure is born fresh, offering a renewing whiff of the freedom of tiny code.

Software people know that it’s useless to continue to write tiny programs forever. To do anything useful, you have to take the painful plunge into large code. But they seem to imagine that the domain of tiny, virginal expression is still going to be valid in the spheres of culture and, as I’ll explain, science.

That’s one reason the web 2.0 designs strongly favor flatness in cultural expression. But I believe that flatness, as applied to human affairs, leads to blandness and meaninglessness. And there are analogous problems related to the increasing popularity of flatness in scientific thought. When applied to science, flatness can cause confusion between methodology and expression.

CHAPTER 9
Retropolis

AN ANOMALY IN popular music trends is examined.

Second-Order Culture

What’s gone so stale with internet culture that a batch of tired rhetoric from my old circle of friends has become sacrosanct? Why can’t anyone younger dump our old ideas for something original? I long to be shocked and made obsolete by new generations of digital culture, but instead I am being tortured by repetition and boredom.

For example: the pinnacle of achievement of the open software movement has been the creation of Linux, a derivative of UNIX, an old operating system from the 1970s. Similarly, the less techie side of the open culture movement celebrates the creation of Wikipedia, which is a copy of something that already existed: an encyclopedia.

There’s a rule of thumb you can count on in each succeeding version of the web 2.0 movement: the more radical an online social experiment is claimed to be, the more conservative, nostalgic, and familiar the result will actually be.

What I’m saying here is independent of whether the typical claims made by web 2.0 and wiki enthusiasts are true. Let’s just stipulate for the sake of argument that Linux is as stable and secure as any historical derivative of UNIX and that Wikipedia is as reliable as other encyclopedias. It’s still strange that generations of young, energetic, idealistic people would perceive such intense value in creating them.

Let’s suppose that back in the 1980s I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!” It would have sounded utterly pathetic.

The distinction between first-order expression and derivative expression is lost on true believers in the hive. First-order expression is when someone presents a whole, a work that integrates its own worldview and aesthetic. It is something genuinely new in the world.

Second-order expression is made of fragmentary reactions to first-order expression. A movie like Blade Runner is first-order expression, as was the novel that inspired it, but a mashup in which a scene from the movie is accompanied by the anonymous masher’s favorite song is not in the same league.

I don’t claim I can build a meter to detect precisely where the boundary between first- and second-order expression lies. I am claiming, however, that the web 2.0 designs spin out gobs of the latter and choke off the former.

It is astonishing how much of the chatter online is driven by fan responses to expression that was originally created within the sphere of old media and that is now being destroyed by the net. Comments about TV shows, major movies, commercial music releases, and video games must be responsible for almost as much bit traffic as porn. There is certainly nothing wrong with that, but since the web is killing the old media, we face a situation in which culture is effectively eating its own seed stock.

Schlock Defended

The more original material that does exist on the open net is all too often like the lowest-production-cost material from the besieged, old-fashioned, copyrighted world. It’s an endless parade of “News of the Weird,” “Stupid Pet Tricks,” and America’s Funniest Home Videos.

This is the sort of stuff you’ll be directed to by aggregation services like YouTube or Digg. (That, and endless propaganda about the merits of open culture. Some stupefying, dull release of a version of Linux will usually be a top world headline.)

I am not being a snob about this material. I like it myself once in a while. Only people can make schlock, after all. A bird can’t be schlocky when it sings, but a person can. So we can take existential pride in schlock. All I am saying is that we already had, in the predigital world, all the kinds of schlock you now find on the net. Making echoes of this material in the radical, new, “open” world accomplishes nothing. The cumulative result is that online culture is fixated on the world as it was before the web was born.

By most estimates, about half the bits coursing through the internet originated as television, movie, or other traditional commercial content, though it is difficult to come up with a precise accounting.

BitTorrent, a company that maintains only one of the many protocols for delivering such content, has at times claimed that its users alone are taking up more than half of the bandwidth of the internet. (BitTorrent is used for a variety of content, but a primary motivation to use it is that it is suitable for distributing large files, such as television shows and feature-length movies.)

The internet was, of course, originally conceived during the Cold War to be capable of surviving a nuclear attack. Parts of it can be destroyed without destroying the whole, but that also means that parts can be known without knowing the whole. The core idea is called “packet switching.”

A packet is a tiny portion of a file that is passed between nodes on the internet in the way a baton is passed between runners in a relay race. The packet has a destination address. If a particular node fails to acknowledge receipt of a packet, the node trying to pass the packet to it can try again elsewhere. The route is not specified, only the destination. This is how the internet can hypothetically survive an attack. The nodes keep trying to find neighbors until each packet is eventually routed to its destination.

In practice, the internet as it has evolved is a little less robust than that scenario implies. But the packet architecture is still the core of the design.
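The relay-race picture above can be made concrete in a few lines. This is only a minimal sketch of the idea, not real routing: the tiny ring topology, the node names, and the "up/down" flags are invented for illustration. The point it demonstrates is the one in the text: no route is specified in advance, only a destination, and the packet can still arrive after part of the net is destroyed.

```python
from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.up = True

def link(a, b):
    """Connect two nodes bidirectionally."""
    a.neighbors.append(b)
    b.neighbors.append(a)

def deliver(start, dest_name):
    """Forward a packet that carries only a destination address.
    The net keeps trying live neighbors until the packet reaches
    its destination, or until no path exists at all."""
    seen = {start.name}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node.name == dest_name:
            return True
        for nb in node.neighbors:
            if nb.up and nb.name not in seen:
                seen.add(nb.name)
                frontier.append(nb)
    return False

# A small ring: a - b - c - d - a.
a, b, c, d = Node("a"), Node("b"), Node("c"), Node("d")
link(a, b); link(b, c); link(c, d); link(d, a)

assert deliver(a, "c")        # arrives via b or d
b.up = False                  # destroy part of the net...
assert deliver(a, "c")        # ...the packet reroutes via d
d.up = False                  # sever every remaining path
assert not deliver(a, "c")
```

The design choice the sketch highlights is that resilience comes from the nodes, not from any central map of the network.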

The decentralized nature of the architecture makes it almost impossible to track the nature of the information that is flowing through it. Each packet is just a tiny piece of a file, so even if you look at the contents of packets going by, it can sometimes be hard to figure out what the whole file will be when it is reassembled at the destination.

In more recent eras, ideologies related to privacy and anonymity joined a fascination with emerging systems similar to some conceptions of biological evolution to influence engineers to reinforce the opacity of the design of the internet. Each new layer of code has furthered the cause of deliberate obscurity.

Because of the current popularity of cloud architectures, for instance, it has become difficult to know which server you are logging into from time to time when you use particular software. That can be an annoyance in certain circumstances in which latency—the time it takes for bits to travel between computers—matters a great deal.

The appeal of deliberate obscurity is an interesting anthropological question. There are a number of explanations for it that I find to have merit. One is a desire to see the internet come alive as a metaorganism: many engineers hope for this eventuality, and mystifying the workings of the net makes it easier to imagine it is happening. There is also a revolutionary fantasy: engineers sometimes pretend they are assailing a corrupt existing media order and demand both the covering of tracks and anonymity from all involved in order to enhance this fantasy.

At any rate, the result is that we must now measure the internet as if it were a part of nature, instead of from the inside, as if we were examining the books of a financial enterprise. We must explore it as if it were unknown territory, even though we laid it out.

The means of conducting explorations are not comprehensive. Leaving aside ethical and legal concerns, it is possible to “sniff” packets traversing a piece of hardware comprising one node in the net, for instance. But the information available to any one observer is limited to the nodes being observed.

Rage

I well recall the birth of the free software movement, which preceded and inspired the open culture variant. It started out as an act of rage more than a quarter of a century ago.

Visualize, if you will, the most transcendently messy, hirsute, and otherwise eccentric pair of young nerds on the planet. They were in their early twenties. The scene was an uproariously messy hippie apartment in Cambridge, Massachusetts, in the vicinity of MIT. I was one of these men; the other was Richard Stallman.

Why are so many of the more sophisticated examples of code in the online world—like the page-rank algorithms in the top search engines or like Adobe’s Flash—the results of proprietary development? Why did the adored iPhone come out of what many regard as the most closed, tyrannically managed software-development shop on Earth? An honest empiricist must conclude that while the open approach has been able to create lovely, polished copies, it hasn’t been so good at creating notable originals. Even though the open-source movement has a stinging countercultural rhetoric, it has in practice been a conservative force.

Stallman was distraught to the point of tears. He had poured his energies into a celebrated project to build a radically new kind of computer called the LISP machine. But it wasn’t just a regular computer running LISP, a programming language beloved by artificial intelligence researchers.* Instead, it was a machine patterned on LISP from the bottom up, making a radical statement about what computing could be like at every level, from the underlying architecture to the user interface. For a brief period, every hot computer science department had to own some of these refrigerator-size gadgets.

Eventually a company called Symbolics became the primary seller of LISP machines. Stallman realized that a whole experimental subculture of computer science risked being dragged into the toilet if anything bad happened to a little company like Symbolics—and of course everything bad happened to it in short order.

So Stallman hatched a plan. Never again would computer code, and the culture that grew up with it, be trapped inside a wall of commerce and legality. He would develop a free version of an ascendant, if rather dull, software tool: the UNIX operating system. That simple act would blast apart the idea that lawyers and companies could control software culture.

Eventually a young programmer of the next generation named Linus Torvalds followed in Stallman’s footsteps and did something similar, but using the popular Intel chips. In 1991 that effort yielded Linux, the basis for a vastly expanded free software movement.

But back to that dingy bachelor pad near MIT. When Stallman told me his plan, I was intrigued but sad. I thought that code was important in more ways than politics can ever be. If politically motivated code was going to amount to endless replays of relatively dull stuff like UNIX instead of bold projects like the LISP machine, what was the point? Would mere humans have enough energy to sustain both kinds of idealism?

