Authors: Tom Vanderbilt
But there is an opposing selection force as well: when people begin to not do something (choose a name, retweet a tweet) because they sense too many other people are doing it. Economists call this “nonfunctional demand,” or everything driving (or reducing) demand that has nothing to do with “the qualities inherent in the commodity.”
While neutral drift says one choice is not somehow better than another, names often do have some intrinsic value.
As one study showed, certain so-called racial names (for example, Latonya or Tremayne) were less likely to get callbacks for job interviews;
another analysis found that having a German name after World War I made it harder to get a seat on the New York Stock Exchange (and fewer kids were named Wilhelm and Otto). Or they have perceived intrinsic value, like social cachet.
Names that appear to be neutrally distributed throughout the culture could be under some kind of “weak” selection pressure. Perhaps one parent, having read the novel We Need to Talk About Kevin, a mother's tale of a violent son, decides not to give her child that name (thus reducing the chance someone else will copy her) because of a negative connotation that only a few may be aware of.
When I raised this subject with Bentley, he insisted that this was precisely the value of the neutral model: If culture change viewed at the big, population-wide level looks as if random copying were driving everything, then that noisy statistical wallpaper makes an easier backdrop against which to see when selection pressures really are at work. When one looks at a crowded, rush-hour highway from above, it seems as if every driver were essentially copying the other; the highway seems to drift along neutrally. But look more closely, and one driver may be following another too closely, applying “selective pressure” that then influences the driver ahead. Taste is like traffic, actually: a large complex system with basic parameters and rules, a noisy feedback chamber where one does what others do and vice versa, in a way that is almost impossible to predict beyond the fact that at the end of the day a certain number of cars will travel down a stretch of road, just as a certain number of new songs will be in the Hot 100.
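The random-copying idea behind the neutral model can be sketched in a few lines of Python (a toy illustration, not Bentley's actual model code; the population size, innovation rate, and generation count here are arbitrary choices):

```python
import random
from collections import Counter

def neutral_copy(pop_size=1000, generations=200, innovation=0.01, seed=1):
    """Random-copying (neutral) model: each member of a new generation
    copies the choice of a randomly picked member of the previous one,
    or, with small probability, invents a brand-new variant."""
    rng = random.Random(seed)
    population = list(range(pop_size))  # everyone starts with a unique variant
    next_new = pop_size
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            if rng.random() < innovation:
                new_pop.append(next_new)  # rare innovation
                next_new += 1
            else:
                new_pop.append(rng.choice(population))  # copy someone at random
        population = new_pop
    return Counter(population)

counts = neutral_copy()
# No variant is "better," yet a few end up far more common than the rest.
```

Run it and a handful of variants end up dominating the counts even though none has any intrinsic advantage; concentration emerges from copying alone.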
All this leads to one last question. If taste moves along via imitative social learning, whether random or not, whether “biased” or not, what happens when people, thanks to the Internet, have ever more opportunity to see, in ever finer detail, what other people are doing?
When I was a teenager in the 1980s, I tuned one day, by accident, to a station on the far left of the dial and discovered a show playing punk rock and other eclectic forms of music. I felt as if I had walked into a private conversation being spoken in another language: Here were songs I had never heard before (my tastes were admittedly quite conventional) that sounded little like anything I had heard before.
As I quickly became a fan of this strange cacophony, I realized how time-consuming the pursuit was: long hours spent tracking down obscure albums in obscure record stores in obscure parts of town, driving to sweaty all-ages shows in not-quite-up-to-code social halls, talking to the few other kids in my school who knew what I was even talking about, never having a sense of how many people in other towns might like this same music. The whole time, I nursed a conviction that if only more people knew about this music, it would become more popular (leaving aside the awkward question, per optimal distinctiveness, of whether my own liking for it would decline because more people liked it).
Things are now incredibly different. The Internet means that one click can access most of the world's music; via chat rooms and other forums, fans of the most rarefied genres can find each other; technology has blown open distribution bottlenecks, making it cheaper and easier for anyone to put a recording out into the world. As the Echo Nest showed, entire genres could spring up virtually overnight and find fans. In theory, my teenage hope had come alive: There was little, physically, preventing anyone from listening to anything. Music was horizontal: It took no more effort to listen to something obscure than to something popular. Perhaps, as I had imagined, the formerly less popular would become more popular, at the expense of the already popular, which would decline in importance as more people found more things on the “long tail” to listen to. At the very least, the hits on the radio, the ones you quickly grew tired of hearing so often, would turn over faster because of the sheer increase in new material.
This is not necessarily how it turned out, as I learned in speaking to Chris Molanphy, a music critic and obsessive analyst of the pop charts. “There was this big theory that all this sort of democracy in action, this capturing of people's taste, was going to lead to more turnover, not less,” he said. “In fact, if you watch the chart, it's totally the opposite. The big have gotten bigger.” It is true that music sales as a whole declined in the new digital environment, but it was the albums further down the charts (from 200 to 800) that fared worst.
Hit songs, meanwhile, gobbled up even more of the overall music market than they did before the Internet.
The curving long tail chart, as he put it, looks more like a right angle. “It's kind of like once the nation has decided that we're all interested in ‘Fancy’ by Iggy Azalea or ‘Happy’ by Pharrell” (to name two pop hits of 2014) “we're all listening to it.”
He calls these “snowball smashes”: They gather momentum and pick up everything in their wake. With more momentum comes more staying power. The song “Radioactive” by Imagine Dragons lingered on the Hot 100, Billboard's flagship pop chart, for two years. By contrast, a song like the Beatles' “Yesterday” (as Molanphy notes, “the most covered song of all time”) lasted a mere eleven weeks on the charts.
It is not just that popularity can be self-fulfilling; it is that not being popular is even more so. In his classic 1963 book, Formal Theories of Mass Behavior, the social scientist William McPhee introduced a theory he called “double jeopardy.” He was struck, looking at things like polls of movie star appeal and the popularity of radio shows, that when some cultural product was less popular, it was not only less well known (and thus less likely to be chosen) but less chosen by those who actually knew it; hence the double jeopardy. Did this mean the pop charts worked, that the best rose to the top? Not necessarily. McPhee speculated that the “lesser known alternative is known to people who know too many competitive alternatives.” The favorites, by contrast, “become known to the kind of people who, in making choices, know little else to choose from.” In other words, the sorts of people who listen to more obscure music probably like a lot of music a little, whereas the most devoted listeners of the Top 10 tend to concentrate their love.
Through sheer statistical distribution, McPhee suggested, a “natural” monopoly emerged.
If this was already the case decades ago, why have things gotten so much more top-heavy, so much more sticky? It could be, as I discussed in chapter 3, that having the world's music in your pocket is too overwhelming, the blank search box of what to play next too terrifying, and so people take refuge in the exceedingly familiar. Or it could be that the more we know about what people are listening to (via new routes of social media) the more we are also listening.
This was what the network scientist Duncan Watts and colleagues found in a famous 2006 experiment. Groups of people were given the chance to download songs for free from a Web site after they had listened to and ranked the songs. When the participants could see what previous downloaders had chosen, they were more likely to follow that behavior, so “popular” songs became more popular, less popular songs became less so. These socially influenced choices were also more unpredictable; it became harder to tell how a song would fare in popularity from its reported quality. When people made choices on their own, the choices were less unequal and more predictable; people were more likely to simply choose the songs they said were best.
Knowing what other listeners did was not enough to completely reorder people's musical taste. As Watts and his co-author Matthew Salganik wrote, “The ‘best’ songs never do very badly, and the ‘worst’ songs never do extremely well.” But when others' choices were visible, there was a greater chance for the less good to do better, and vice versa. “When individual decisions are subject to social influence,” they write, “markets do not simply aggregate pre-existing individual preference.” The pop chart, in other words, just like taste itself, does not operate in a vacuum.
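The dynamics of that experiment can be mimicked with a toy market simulation (a loose sketch, not Salganik and Watts's actual MusicLab design; the song counts, quality values, and feedback rule below are invented for illustration):

```python
import random

def simulate(n_songs=20, n_agents=2000, social=False, rng=None):
    """One artificial music market: agents pick songs one at a time.
    Independent condition: pick probability is proportional to a song's
    fixed 'quality' alone. Social condition: proportional to quality
    times (1 + downloads so far), a stand-in for seeing the charts."""
    rng = rng or random.Random()
    qualities = [rng.uniform(0.1, 1.0) for _ in range(n_songs)]
    downloads = [0] * n_songs
    for _ in range(n_agents):
        if social:
            weights = [q * (1 + d) for q, d in zip(qualities, downloads)]
        else:
            weights = qualities
        pick = rng.choices(range(n_songs), weights=weights)[0]
        downloads[pick] += 1
    return max(downloads) / n_agents  # share taken by the market's biggest hit

rng = random.Random(7)
indep = sum(simulate(social=False, rng=rng) for _ in range(10)) / 10
soc = sum(simulate(social=True, rng=rng) for _ in range(10)) / 10
# On average, the social condition is far more top-heavy.
```

Averaged over ten runs, the social condition hands the top song a much larger share of all downloads than the independent condition does, and which song wins varies more from run to run: the snowball, not the quality, decides.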
The route to the top of the charts has in theory gotten more democratic, less top-down, more unpredictable: It took a viral video to help make Pharrell's “Happy” a hit a year after the fact. But the hierarchy of popularity at the top, once established, is steeper than ever.
In 2013, it was estimated that the top 1 percent of music acts took home 77 percent of all music income.
While record companies still try to engineer popularity, Molanphy argues it is “the general public infecting each other who now decide if something is a hit.” The inescapable viral sensation “Gangnam Style,” he notes, was virtually forced onto radio, where it became the number 12 song in the United States (without even factoring in YouTube, where it was mostly played). “Nobody manipulated that into being; that was clearly the general public being charmed by this goofy video and telling each other, ‘You've got to watch this video.’” The snowball effect, he suggests, is reflected in radio. “Blurred Lines,” the most played song of 2013 in the United States, was played twice as much as the most played song of 2003.
This is in sharp contrast to the 1970s, the period in which I did my most obsessive Top 40 listening, when it was an industry truism that, as the veteran radio consultant Sean Ross put it to me, after what could seem an unendurably long wait, “you heard your favorite song and you turned off the radio; your mission was accomplished.”
Molanphy suggests that if radio then had had the access to sales and listening data that it does now, it would have played those favorite songs much more than it actually did, and a song like “Yesterday” would have spent more time on the charts. What ever-sharper, real-time data about people's actual listening behavior do is reinforce the feedback loop even more strongly. “We always knew that people liked the familiar,” he says. “Now we know exactly when they flip the station and, wow, if they don't already know a song, they really flip the station.” There is an almost desperate attempt to convert, as fast as possible, the new into the familiar.
Pop songs have always been fleeting affairs. What about baby names, which are presumably more organic and enduring? Here, popularity has become more evenly distributed. As the researchers Todd Gureckis and Robert Goldstone point out, the name Robert was the “snowball smash” of 1880: Nearly one in ten baby boys was named Robert. By contrast, Jacob, 2007's top name, reached only 1.1 percent of boys. The most popular names, they note, have lost “market share.” But something else changed over those years. At the turn of the twentieth century, the names at the top fluctuated rather randomly, because, one might imagine, more families with fathers named Robert had boys that year.
In the last few decades, however, a statistical pattern emerged in which the direction a name was headed in one year tended to predict, at a level greater than chance, where it was going the next year. If Tom was falling this year, Tom was likely to keep falling next year. Names acquired momentum. As naming lost the weight of cultural tradition, where did people look when making their choice? To each other. In 1880, even if names were freely chosen, it would have taken a while for name popularity to spread. But now, as parents-to-be visit data-heavy baby name Web sites or try out suggestive names on Facebook, they seem able to divine, almost mystically, where a name is headed, latching on to a rising name (as long as it is not rising too quickly, for that is taken as a negative signal of faddishness) and straying from one that is falling. It is like trying to buy long-term stocks amid the noise of short-term volatility.
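The momentum pattern can be made concrete with a simple autocorrelation check (a simplified, hypothetical illustration: the numbers below are invented, and the actual analysis by Gureckis and Goldstone pools many names across many years rather than testing one series):

```python
def momentum(shares):
    """Correlation between successive year-over-year changes in a name's
    share of births: strongly positive means this year's direction tends
    to predict next year's, the 'momentum' pattern."""
    changes = [b - a for a, b in zip(shares, shares[1:])]
    pairs = list(zip(changes, changes[1:]))  # (this year's change, next year's)
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in pairs) / n
    var_y = sum((y - mean_y) ** 2 for _, y in pairs) / n
    return cov / (var_x * var_y) ** 0.5

# Invented shares (percent of boys) for a name falling faster each year:
falling = [3.0, 2.9, 2.7, 2.4, 2.0, 1.5, 0.9]
score = momentum(falling)  # close to 1: each drop predicts the next
```

A name drifting at random would score near zero on this measure; the researchers' finding is that, in recent decades, real names score well above it.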
Something similar is happening in both pop music and naming. Things have at once become more horizontal (there are ever more songs to hear, ever more possible names to choose from) and more “spiky,” as if, in the face of all that choice, people gravitate toward what others seem to be doing. Social learning has become hyper-social learning. In his famous 1930 tract, The Revolt of the Masses, the Spanish philosopher José Ortega y Gasset described how “the world had suddenly grown larger.” Thanks to modern media, he noted, “each individual habitually lives the life of the whole world.” People in Seville could follow, as he described, “what was happening to a few men near the North Pole.” We also had vastly increased access to things: “The range of possibilities opened out before the present-day purchaser has become practically limitless.” There was a “leveling” among social classes, which opened up “vital possibilities,” but also a “strange combination of power and insecurity which [had] taken up its abode in the soul of modern man.” He feels, he wrote, “lost in his own abundance.”
Ortega's vision seems quaint now. Simply to live in a large city like New York is to dwell amid a maelstrom of options: There are said to be, by many orders of magnitude, more choices of things to buy in New York than there are recorded species on the planet. As Bentley put it to me, “By my recent count there were 3,500 different laptops on the market. How does anyone make a ‘utility-maximizing’ choice among all those?” The cost of learning which one is truly best is almost beyond the individual; there may, in fact, be little that separates them in terms of quality, so any one purchase over another might simply reflect random copying (here is the “neutral drift” at work again, he argues). It is better to say (here he borrows the line from When Harry Met Sally), “I'll have what she's having.”