
Signal and background

Particle physics, since it is powered by quantum mechanics, is a lot like coin flipping: The best we can do is predict probabilities. At the LHC, we smash protons together and predict the probability of different interactions occurring. For the particular case of the Higgs search, we consider different “channels,” each of which is specified by the particles that are captured by the detector: There’s the two-photon channel, the two-lepton channel, the four-lepton channel, the two-jets-plus-two-leptons channel, and so on. In each case, we add up the total energy of the outgoing particles, and the machinery of quantum field theory (aided by actual measurements) allows us to predict how many events we expect to see at every energy, typically forming a smooth curve.

That’s the null hypothesis—what we expect without any Higgs boson. If there is a Higgs at some specific mass, its main effect is to give a boost to the number of events we expect at the corresponding energy: A 125 GeV mass Higgs creates some extra particles with a total energy of 125 GeV, and so on. Creating a Higgs and letting it decay provides a mechanism (in addition to all the non-Higgs processes) to produce particles that typically have the same total energy as the Higgs mass, leading to a few additional events over the background. So we go “bump hunting”—is there a noticeable deviation from the smooth curve we would see if the Higgs wasn’t there?

Predicting what the expected background is supposed to be is by no means an easy task. We know the Standard Model, of course, but just because we know what the theory is doesn't mean it's easy to make a prediction. (The Standard Model also describes the earth's atmosphere, but it's not easy to predict the weather.) Powerful computer programs do their best to simulate the most likely outcomes of the proton collisions, and those results are run through a simulation of the detectors themselves. Even so, we readily admit that some rates are easier to measure than to predict. So it is often best to do a "blind" analysis: disguise the actual data of interest, by adding fake data to it or simply not looking at certain events; make every effort to understand the boring data in other regions; and only once the best possible understanding is achieved, "open the box" and look at the data where our particle might be lurking. A procedure like this helps to ensure that we don't see things just because we want to see them; we only see them when they're really there.

It wasn’t always so. In his book Nobel Dreams, journalist Gary Taubes tells the story of Carlo Rubbia’s work in the early 1980s that discovered the W and Z bosons and won him a Nobel Prize, as well as his less successful attempts to win a second Nobel by finding physics beyond the Standard Model. One of the tools that Rubbia’s team used in their analysis was the Megatek, a computer system that could display the data from particle collisions and let the user rotate the view in three dimensions by operating a joystick. Rubbia’s lieutenants, American James Rohlf and Englishman Steve Geer, became masters of the Megatek. They were able to glance at an event, twirl it a bit, pick out the important particle tracks, and declare with confidence that they were seeing a W or a Z or a tau. “You have all this computing,” in Rubbia’s words, “but the purpose of all this tremendous data analysis, the one fundamental bottom line, is to be able to let the human being give the final answer. It’s James Rohlf looking at the f***ing event who will decide whether this is a Z or not.” No longer—we have a lot more data now, but the only way to really understand what you’re seeing is to hand it over to the computer.

Whenever there is some excitement about a purported experimental result, your first instinct should be to ask, “How many sigma?” Within particle physics, an informal standard has arisen over the years, according to which a three-sigma deviation is considered “evidence for” something going on, while a five-sigma deviation is needed to claim “discovery of” that something. That might seem unduly demanding, since a three-sigma result is already something that only happens 0.3 percent of the time. But the right way to think about it is, if you look at three hundred different measurements, one of them is likely to be a three-sigma anomaly just by chance. So sticking to five sigma is a good idea.
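
For the curious, translating "how many sigma" into a probability takes nothing more than the normal distribution. Here is a minimal sketch (my own, purely illustrative) that computes the two-sided chance of a fluctuation at least that large:

```python
from math import erfc, sqrt

def two_sided_p(sigma):
    """Probability of a fluctuation at least `sigma` standard deviations
    from the mean, in either direction, for a normal distribution."""
    return erfc(sigma / sqrt(2))

for sigma in (3, 5):
    p = two_sided_p(sigma)
    print(f"{sigma} sigma: p = {p:.2g}  (about 1 in {1 / p:,.0f})")

# 3 sigma: p ~ 0.0027 (roughly 1 in 370), the informal "evidence" threshold
# 5 sigma: p ~ 5.7e-07 (roughly 1 in 1.7 million), the "discovery" threshold
```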

At the December 2011 seminars, the peak near 125 GeV had a significance of 3.6 sigma in the ATLAS data, and 2.6 sigma in the CMS data (which are completely independent). Suggestive, but not enough to claim a discovery. Speaking against the significance of the result was the “look-elsewhere effect”: the simple fact that, as we just alluded to, large deviations are likely if you look at many different possible measurements, which the two LHC experiments were certainly doing. But at the same time, the fact that the two experiments saw bumps in the same place was extremely suggestive. Taken all together, the sense of the community was that the experiments probably were on the right track, and we probably were seeing the first glimpses of the Higgs—but only more data would tell for sure.

When the predictions you are testing involve probabilities, the importance of collecting more data cannot be overemphasized. Think back to our coin-flipping example. If we had only flipped the coin five times instead of a hundred, the biggest possible deviation from the expected value would have been to get all heads (or all tails). But the chance of that happening is more than 6 percent. So even for a completely unfair coin, we wouldn’t be able to claim as much as a two-sigma deviation from fairness. On Cosmic Variance, a group blog I contribute to that is hosted by Discover magazine, I put up a post on the day before the CERN seminars, entitled “Not Being Announced Tomorrow: Discovery of the Higgs Boson.” It’s not that I had any inside information; it’s just that we all knew how much data the LHC had produced up to that time, and it simply wasn’t enough to claim a five-sigma discovery of the Higgs. That would have to wait for more data.
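
To put numbers on the five-flip example above, here is the same back-of-the-envelope arithmetic in code (again purely illustrative):

```python
from math import erfc, sqrt

# Probability that five fair-coin flips come up all heads or all tails:
p_extreme = 2 * 0.5 ** 5          # = 0.0625, a bit over 6 percent

# Two-sided probability of a two-sigma fluctuation, for comparison:
p_two_sigma = erfc(2 / sqrt(2))   # about 0.046

print(f"all heads or all tails in 5 flips: {p_extreme:.1%}")
print(f"two-sigma fluctuation:             {p_two_sigma:.1%}")

# Since 6.25% is larger than the 4.6% two-sigma probability, even the most
# lopsided five-flip outcome falls short of two sigma; only more flips
# (more data) can do better.
```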

The bear is caught

The general feeling among physicists was that if the 2011 hints were signs of something real, the data collected in 2012 would be enough to reach the magical five-sigma threshold necessary to declare a discovery. We knew how many collisions were happening at the LHC, and the feeling worldwide was that we would be able to declare discovery (or crushing disappointment) a year later, in December 2012.

After its yearly winter shutdown, the LHC resumed collecting data in February. The International Conference on High Energy Physics (ICHEP) in Melbourne was planned for early July, and both experiments anticipated giving updates of their progress at that meeting. Conditions in 2012 were somewhat different from those in 2011, so it wasn’t immediately obvious how quickly progress could be made. They were running at a higher energy—8 TeV rather than 7 TeV—and also at a higher luminosity, so they were getting more events per second. Both of those sound like improvements, which they are, but they are also challenges. Higher energy means slightly different interaction rates, which means slightly different numbers of background events, which means you have to calibrate the new data separately from the old data. Higher luminosity means more collisions, but many of those collisions are happening simultaneously in the detector. This leads to “pileup”—you see a bunch of particle tracks but have to work hard to separate which ones came from which collisions. It’s a nice problem to have; but it’s still a problem you have to solve, and that takes time.

The ICHEP is a major international event, and a logical venue at which to provide an update on the progress of the Higgs search after the new data had started coming in at higher energies. What people expected to hear was that the machine was doing great, and ideally that the statistical significance of the December hints was growing rather than shrinking. The LHC was scheduled to pause in its data collection in early June for routine maintenance purposes, and that was chosen as a natural point at which to look at the data carefully and see what they had.

Both experiments were analyzing their data blind. The “box” containing the true data in the region of interest was opened on June 15, leaving about three weeks for the experimentalists to figure out what they had and how to present it in Melbourne.

Almost immediately the rumors started flying. They were a little bit more vague than they had been in December, which is understandable; the experimenters themselves were scrambling to figure out what it was that they had. In the end, I don’t know of any rumors that got the final result precisely correct. But the general tenor was unmistakable: They were seeing something big.

What they were seeing, of course, was a new particle—the Higgs, or something near enough. Even a glance at the data was enough to see that. The stakes were immediately raised; a simple update wasn’t going to be an appropriate tack to take when the results were presented to the public. You either have a discovery, or you don’t; and if you do, you don’t bury the lede, you trumpet it to the world.

As subgroups within the experiments frantically analyzed the data in the various different channels, higher-ups debated how best to deploy the trumpets. On the one hand, both experiments were scheduled to give updates in Melbourne, and it would seem petty to pull out. On the other, there were hundreds of physicists at CERN who weren’t going to fly around the world, and this day belonged to them as much as to anyone. In the end, a compromise was reached: Each experiment would give a seminar on the day the conference opened, but the seminars would be located in Geneva and simulcast in Australia.

If that weren’t enough to convince people on the outside that important news was coming, word quickly spread that CERN was inviting big names to be present at the seminars. Peter Higgs, now age eighty-three, was at a summer school in Sicily at the time; he was scheduled to fly back to Edinburgh, his travel insurance had run out, and he didn’t have any Swiss francs with him. But he changed his plans after John Ellis, the eminent theorist at CERN and longtime Higgs boson aficionado, left him a phone message: “Tell Peter that if he doesn’t come to CERN on Wednesday, he will very probably regret it.” He came, as did François Englert, Gerald Guralnik, and Carl Hagen, other theorists who had helped pioneer the Higgs mechanism.

In December 2011, I was back in California and slept right through the seminars, which started at five a.m. Pacific time. But in July 2012, I managed to book a flight to Geneva and was there at CERN for the big day. I and many others were running from building to building at the lab, scrambling to get the proper credentials. At one point I had to sweet-talk my way past a security guard to get back into a building from which I had just exited, and explained that I was kind of short on time. “Why is everybody in a hurry today?” he asked.

As in December, hundreds of people (mostly younger folks) had camped out overnight to get good seats in the auditorium. Gianotti once again gave the talk reporting results from ATLAS, but Tonelli’s term as CMS spokesperson had run out and the CMS talk was given by his successor, Joe Incandela, from the University of California, Santa Barbara. Incandela and Gianotti had both cut their teeth working together on UA2, one of the detectors at CERN’s previous hadron collider, and they had searched for Higgs bosons in the data from that experiment. Now they were about to see their long-standing quest come to fruition.

Everyone in the room knew that all this fuss wouldn’t be happening if the signal had gone away. The primary question was, how many sigma? Between rumors and back-of-the-envelope estimations, the prevailing opinion seemed to favor the idea that each experiment would reach four-sigma significance, but not quite five. Combining the two, however, might bump us over the five-sigma threshold. But combining data from two different experiments is much trickier than it sounds, and it didn’t seem feasible that it could have been done over just the past three weeks. There was more than a little worry that we were going to be tantalized once more, but not quite able to claim a discovery.

We needn’t have worried. Incandela, who spoke first, went through the different channels that had been analyzed by CMS one by one. Two-photon events came first, and they displayed a noticeable peak just where we were hoping, at 125 GeV. The significance was 4.1 sigma—more than in the previous year, but not a discovery. Then came events with four charged leptons, which result from the Higgs decaying into two Z bosons. Another peak, in the same place, this time with 3.2 sigma significance. On his sixty-fourth PowerPoint slide, Incandela revealed what you get when you combine these two channels together: 5.0 sigma. The wait was over. We found it.

Gianotti, like Incandela, went out of her way to praise the hard work of everyone who helped keep the LHC running, and she emphasized the care the ATLAS collaboration had taken in analyzing their data. When she turned to the two-photon results, there was once again an evident peak at 125 GeV. This time the significance was 4.5 sigma. The four-lepton results also fell into line: a tiny peak, but discernible, with a significance of 3.4 sigma. Combining them gave an overall significance of exactly 5.0 sigma. At the end of her talk, Gianotti thanked nature for putting the Higgs where the LHC could find it.
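
The official combinations are careful joint likelihood fits, which is part of why they took real work; but a crude back-of-the-envelope version, treating the two channels in each experiment as independent measurements and combining their z-scores (Stouffer's method), gives a feel for why two roughly-four-sigma channels can add up to a discovery. The numbers below are just the ones quoted above; the method is my own rough illustration, not what ATLAS or CMS actually did.

```python
from math import sqrt

def stouffer(z_scores):
    """Naive combination of independent z-scores (Stouffer's method).
    Real experiments combine full likelihoods, which is why the official
    combined numbers differ somewhat from this estimate."""
    return sum(z_scores) / sqrt(len(z_scores))

print(f"CMS   (4.1, 3.2) -> roughly {stouffer([4.1, 3.2]):.1f} sigma")
print(f"ATLAS (4.5, 3.4) -> roughly {stouffer([4.5, 3.4]):.1f} sigma")
# Both land above 5, in the same ballpark as the 5.0 sigma each
# experiment reported from its proper combination.
```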

ATLAS found a Higgs mass of 126.5 GeV, while CMS got 125.3 GeV, but the measurements are within the expected uncertainty of each other. CMS analyzed more channels in addition to two photons and four leptons, and as a result their final significance ended up dropping just a tiny amount, to 4.9 sigma. But again, that’s consistent with the overall picture. The agreement between the two experiments was amazing, and crucially important. If the LHC had only one detector looking for the Higgs, the physics community would be much more hesitant to take the results at face value. As it was, hesitancy was thrown to the wind. This was a discovery.

After the seminars were over, Peter Higgs became emotional. He later explained, “During the talks I was still distancing myself from it all, but when the seminar ended, it was like being at a football match when the home team had won. There was a standing ovation for the people who gave the presentation, cheers and stamping. It was like being knocked over by a wave.” In the pressroom afterward, reporters tried to get more comments from him, but he demurred, saying that the focus on a day like this should be on the experimenters.
