There has been a lot of recent buzz surrounding a result indicating that neutrinos may travel faster than light, rewriting Einstein's relativity. Well, as promised, I’ve tracked down the original paper and written up a layperson translation/summary/commentary on Google Docs, which can be found here.

1. bsm117532 says:

Just a couple of comments (I am a theoretical physicist, and have worked at CERN): You quote the statistical definition of six sigma as 99.999999903179%. This is only true if the errors are exactly Gaussian (= normal = bell curve), which they never are. The “six sigma” quoted by the experiment is the size of the one-sigma error bar (68%) multiplied by six. What happens is that in the vicinity of the best fit, the errors are indistinguishable from a Gaussian (this is provable mathematically). Outside that region, however, the errors are always flatter than a Gaussian. Away from the best-fit point a Gaussian falls exponentially, while all experimental errors fall more slowly than that. Most importantly, we do not know the shape of the error function way out there.

To claim the probability P you quote, one would need to have performed more than 1/(1−P) ≈ 1 billion experiments, which is totally impossible (they only have ~16,000 data points). The “five sigma” definition of a discovery in particle physics is not a statistical statement but an ad hoc rule of thumb, because it is impossible to evaluate the statistics correctly. I think any reasonable person would agree that three sigma (99.7%) is a good definition for discovery, but usually even that probability is impossible to evaluate quantitatively. If one is sufficiently perverse, it’s possible to construct an experiment with a sufficiently flat error function that it will give a false “5-sigma” discovery. We see “3-sigma” and “4-sigma” signals come and go quite frequently. Most physicists (and certainly all of the media) get this fact wrong too, sadly.
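To make the sigma-to-probability relationship concrete, here is a minimal sketch of the idealized Gaussian coverage the commenter is referring to. This computes the two-sided probability that a normally distributed variable lands within n sigma of its mean; the point of the comment is precisely that real experimental errors do *not* follow this curve far from the best fit, so these numbers are an upper bound on confidence, not a measured one.

```python
from math import erf, sqrt

def gaussian_coverage(n_sigma):
    """Two-sided probability that a Gaussian variable lies within n sigma,
    assuming the errors are exactly Gaussian (they never are, far out)."""
    return erf(n_sigma / sqrt(2))

for n in (3, 5, 6):
    p = gaussian_coverage(n)
    # For n = 6 the fluke odds come out around 1 in half a billion (two-sided);
    # the one-sided tail gives the ~99.9999999% figure quoted in the write-up.
    print(f"{n} sigma: P = {p:.12f}, fluke odds ~ 1 in {1 / (1 - p):,.0f}")
```

This also shows why 1/(1−P) for six sigma is on the order of a billion, far beyond the ~16,000 events the experiment recorded.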

Second, I do not think your proposal 3 under your commentary is reasonable. It takes only 2.4 milliseconds for the neutrinos to cross the distance from CERN to Gran Sasso. The window of time in which the neutrinos are sent is only 10 microseconds long. So even if some mechanism could delay them (which would probably take the form of a long-lived particle getting stuck in the rock for 6 s at CERN), the neutrinos would then have to be emitted not only in the correct direction but also within the tiny 10-microsecond window of the next proton bunch. Not so likely. This is one of the reasons for choosing such short windows as 10 microseconds — it reduces any background such as cosmic rays or non-beam neutrinos by a factor of roughly 6 s / 10 µs ≈ 600,000.
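The numbers in this paragraph are easy to check with back-of-envelope arithmetic; a quick sketch (values taken from the comment itself):

```python
C = 299_792_458.0        # speed of light, m/s
baseline_m = 730e3       # CERN -> Gran Sasso baseline, ~730 km

tof_s = baseline_m / C
print(f"time of flight: {tof_s * 1e3:.2f} ms")          # ~2.44 ms

window_s = 10e-6         # 10 microsecond extraction window
cycle_s = 6.0            # ~6 s between proton extractions
print(f"background suppression: ~{cycle_s / window_s:,.0f}x")  # ~600,000x
```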

Finally, regarding your comment on the helium chamber where the pions/kaons decay: they have to decay somewhere in the 1095 m chamber, but the pions/kaons are also moving as a beam at basically the speed of light (their energy is ~50 GeV or so, while their mass is roughly 350 times smaller than that). So this does introduce some uncertainty, but it’s minuscule. The key point is that they decay in flight (without stopping). The difference between a pion/kaon that decays at the beginning of the chamber vs. one that decays at the end results in a time-of-flight difference of 0.01 ns, by my calculation. So, much smaller than the nanosecond resolution of the measurement.
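The 0.01 ns figure can be reproduced with a one-line relativistic estimate. This is a sketch, not the experiment's own calculation: I assume ~50 GeV pions (mass ~0.1396 GeV, so Lorentz factor γ ≈ 358), and use the standard approximation that a particle with β just under 1 lags a light signal by about (L/c)/(2γ²) over a path of length L.

```python
C = 299_792_458.0      # speed of light, m/s
L = 1095.0             # decay tunnel length, m
E, m = 50.0, 0.1396    # illustrative pion energy and mass, GeV
gamma = E / m          # ~358

# A particle at beta < 1 lags light by (L/c)(1/beta - 1) ~ (L/c)/(2 gamma^2):
# the worst-case spread between decaying at the start vs. the end of the tunnel.
dt_s = (L / C) / (2 * gamma**2)
print(f"worst-case decay-point timing spread: {dt_s * 1e9:.3f} ns")  # ~0.014 ns
```

This lands at ~0.014 ns, consistent with the commenter's "0.01 ns" estimate and far below the nanosecond-scale resolution of the measurement.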

Nice write up! It’s good to see people reading the primary source! ;)

• Yeah, I did oversimplify the statistics in the probabilities. The main point of the comment is that this isn’t like cold fusion, or the 21 gram human soul, but something that can’t be written off by ignoring uncertainties. Miscalculating uncertainties is possible (and, in my opinion, the most likely explanation) but it isn’t a case of crackpots screwing up.

As for proposal 3, that was just an example that’s easy to conceive; anything that produces a six-second delay in neutrino transit as they move through 730 km of any random material would be drastically new physics.

For the helium chamber, I agree the distance effect is insignificant, but wanted to point out that the uncertainty in travel distance is more than 20 cm. That’s nowhere near enough to account for the effect they see, though.

At the end of the day, (potentially multiple) systematic uncertainty miscalculations and/or underestimations are the most likely cause of the issues. If the systematics hold, this is dramatically new physics of one form or another.

• “Dramatically new physics” would be very cool, but your “likely” explanation will almost certainly be the correct one.

2. Kiersten says:

I always knew we’d find a way to travel faster than the speed of light…

3. rickyjames says:

Excellent writeup, Blaine. I read the original article too as soon as it was on the arXiv.

This is a complicated measurement but basically they’ve got to do two things: accurately measure an interval of time and accurately measure a distance.

The time is the tricky one because it involves triggering on the start instant, triggering on the stop instant, and accounting for all the inevitable delays in the measurement process. The majority of the paper talks about this intricate process. Still, to me the key is that both sites have an antenna on the roof simultaneously looking at the same GPS satellite and thus the same atomic clock. They’re smart people with state-of-the-art electronics; they’re not gonna lose 60 ns in a setup like that. That’s a huge amount of time for electronics — enough for 150 clock cycles in a 2.5 GHz microprocessor. Intel doesn’t lose that time on a chip and the best scientists in the world aren’t going to lose it in a lab.
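The scale comparison here checks out; a quick sketch of the arithmetic:

```python
delay_s = 60e-9        # the reported early-arrival anomaly, 60 ns
clock_hz = 2.5e9       # a 2.5 GHz processor, as in the comment

print(f"{delay_s * clock_hz:.0f} clock cycles")          # 150

C = 299_792_458.0      # speed of light, m/s
dist_m = delay_s * C
print(f"light covers {dist_m:.1f} m (~{dist_m / 0.3048:.0f} ft) in 60 ns")
```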

In contrast, measuring the distance was covered in three short paragraphs at the beginning of section 4, and those few sentences had a lot of trust-us handwaving if you ask me — nowhere near the detail that was given to the time measurement. Everything after the first three paragraphs deals with distance measurement CONSISTENCY and not distance measurement INITIAL ACCURACY.

I did some research into the ETRF2000 geodetic reference frame that is at the core of the distance measurement, and my gut is telling me that the problem with the experiment lies in there. Basically it assumes you know the positions of the GPS satellites exactly based on the equations of their orbits, then you map reception points on Earth into distances to those satellites, then you come up with a model that minimizes consistency error considering all the ground estimate data points you’ve got, then you modify that model every few years based on additional new measurement points. The original GPS survey was done in 1989 and there have been eight updates since then, as detailed in reference 25 in the arXiv paper. The transformation coordinates to go from one year’s distance model parameters to another have between one and three significant digits to the right of the decimal place. The difference between 730 km and 730 km plus another 60 feet (the distance light travels in their 60 ns discrepancy time) is a factor of about 1.000025, with the fifth digit to the right of the decimal place being the first significant one.
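The fractional-error figure above is worth checking directly; a minimal sketch using the numbers from the comment (60 ns of light travel over a 730 km baseline):

```python
C = 299_792_458.0          # speed of light, m/s
baseline_m = 730e3         # CERN -> Gran Sasso baseline, ~730 km

error_m = 60e-9 * C        # distance light covers in 60 ns, ~18 m (~59 ft)
ratio = (baseline_m + error_m) / baseline_m
print(f"error: {error_m:.1f} m, ratio: {ratio:.7f}")  # ~1.0000246
```

So the geodetic model would have to be wrong at roughly the 2.5 × 10⁻⁵ level over the baseline to produce the full effect — small in absolute terms, but enormous compared to the sub-20 cm survey accuracy the paper claims.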

Now, obviously you can’t just directly compare the number of significant digits between geodesy model input parameters and required output distance accuracy and draw any conclusion at all. But if I were on a red team examining this experiment, I would dig all the way down into the guts of the math in the ETRF2000 standard and see if making second-, third-, and fourth-significant-digit changes in the model coefficients in Ref. 25 could yield a 60-foot difference in distance between two points 730 km apart. If the answer is yes, then you’ve got a problem.

The only accuracy goal (until now) for all of these global models based on GPS was to make sure the CEP (circular error probable — a real term, Wikipedia it) of a nuclear warhead did not exceed its fireball. If you are aiming for the Borovitskaya gate at the Kremlin and you hit the Vodovzvodnaya gate instead with a 20 megaton H-bomb, you’re not going to lose too much sleep over inaccurate GPS models. Having a 60-foot error in the current GPS geodetic model is fine for fighting a nuclear war and may very well have been lost in the noise down in the guts of the math over the past twenty years. For finding superluminal neutrinos, not so much.

My money is on GPS geodetic error.