There may be no significant deviation from the Standard Model after all.
The evidence that there is a big CP violation to explain, however, has faded as more data are collected, although I would be more guarded than Woit in dismissing the apparent irregularity as nothing. The D0 finding of a three standard deviation departure from the Standard Model expectation was notable because it appeared to be corroborated by a two standard deviation result in the same direction from a separate group of experimenters. But with more data, the apparent deviation in the second experiment is now consistent with the Standard Model. The B meson decay anomaly in the first experiment is now contradicted by the second group's data rather than supported by it, so the anomaly in the D0 findings seems much less likely to be anything more than a statistical fluke, or at most to call for a slight tweak to the relevant physical constants.
Keep in mind that quantum mechanics gives statistical predictions about how many particles of what kind will end up where, rather than specific predictions of where a particular particle will go under particular circumstances. The Standard Model might predict, for example, that on average 60 muons end up at detector one, 20 muons end up at detector two, and 20 muons end up at detector three, in the first one hundred trials that produce muons.
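To make the statistical character of those predictions concrete, here is a minimal sketch in Python using the made-up detector counts from the example above (none of these numbers come from a real experiment): the theory fixes only the probabilities, and any single run fluctuates around the expected counts.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Standard Model-style prediction: the probability that a
# muon produced in a trial ends up at each of three detectors.
probabilities = [0.6, 0.2, 0.2]
n_trials = 100  # trials that each produce one muon

# The theory predicts only the average counts...
expected = [p * n_trials for p in probabilities]
print("expected counts:", expected)  # [60.0, 20.0, 20.0]

# ...while any single experiment produces fluctuating counts.
observed = rng.multinomial(n_trials, probabilities)
print("one simulated run:", observed)
```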
Experiments have to figure out what kind of random process is most likely to have generated the data they observed. In this case, the Standard Model prediction was based on several different observations of B meson decay, along with another constant whose value is much better known, and the physical constants were set at a point within the margin of error of all of the observation sets.
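As a hedged illustration of that kind of combination (with invented numbers, not the actual fit), one common way to land on a point within the error bars of several independent measurements is an inverse-variance weighted average:

```python
import numpy as np

# Hypothetical measurements of one physical constant from different
# B meson decay observations: (central value, 1-sigma uncertainty).
measurements = [(0.682, 0.030), (0.665, 0.025), (0.700, 0.040)]

values = np.array([v for v, _ in measurements])
errors = np.array([e for _, e in measurements])

# Inverse-variance weighting: more precise measurements count for more.
weights = 1.0 / errors**2
combined = np.sum(weights * values) / np.sum(weights)
combined_error = np.sqrt(1.0 / np.sum(weights))

print(f"combined value: {combined:.3f} +/- {combined_error:.3f}")
```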
All of the question marks left in the Standard Model involve rare phenomena (in this case, the decay of short-lived, rarely created B mesons), and the new experiments involve an especially rare subset of that data (B mesons that include, as one component, the fairly rare s quark), so very large numbers of particle collisions are needed to gather statistically significant data. Experimenters also have to balance the desire to stabilize their results by including as much old data as possible against the need to make sure that the experiments themselves are strictly consistent with one another.
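A rough counting-statistics sketch (with an assumed rarity for the decay; actual rates differ) shows why so many collisions are needed: a count of N events has a Poisson uncertainty of about sqrt(N), so relative precision improves only as 1/sqrt(N).

```python
import math

# Assumed fraction of collisions that yield the rare decay of interest.
rate = 1e-6

# To measure a counted quantity to ~10% relative precision, Poisson
# statistics (sigma = sqrt(N), so sigma/N = 1/sqrt(N)) require N = 100.
target_precision = 0.10
events_needed = math.ceil(1.0 / target_precision**2)
collisions_needed = events_needed / rate

print(f"events needed:     {events_needed}")          # 100
print(f"collisions needed: {collisions_needed:.0e}")  # 1e+08
```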
One reason that a three standard deviation difference from the expected Standard Model prediction in the current run draws yawns from the big names in physics, particularly if it is not replicated, is that new experiments are cumulative with the large number of experiments that were done to create the model in the first place. What looks very statistically significant when viewed in isolation may not look very notable when pooled with all the data ever collected in the past about the same thing (in this case, B meson decay properties).
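Here is a toy numerical illustration of that dilution (with invented counts rather than real B meson data): a run that is three standard deviations high on its own can fall well under one standard deviation once pooled with a larger body of older data that matched the Standard Model.

```python
import math

def z_score(observed, expected):
    """Approximate Poisson significance: (observed - expected) / sqrt(expected)."""
    return (observed - expected) / math.sqrt(expected)

# Hypothetical older data: a large sample consistent with the Standard Model.
old_expected, old_observed = 10_000.0, 10_010.0

# Hypothetical new run: a smaller sample, 3 sigma above expectation.
new_expected = 400.0
new_observed = new_expected + 3.0 * math.sqrt(new_expected)  # 460 events

print(f"new run alone:  {z_score(new_observed, new_expected):.1f} sigma")  # 3.0
pooled = z_score(old_observed + new_observed, old_expected + new_expected)
print(f"pooled dataset: {pooled:.1f} sigma")  # about 0.7
```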
If there is new physics, a Higgs doublet is unlikely to be its cause
Even if there really is a significant deviation from the Standard Model, the notion that the CP violation is due to a family of new Higgs bosons would be far down the list of likely causes for the observed effect. A supersymmetric extension of the Standard Model is an exceedingly unparsimonious way to deal with the issue: it calls for all manner of new particles and interactions for which we have no evidence in order to fine-tune the model.
The Standard Model already predicts a CP violation in B meson decay, which is what the new experiments have also observed, and the physical constants used to determine the Standard Model prediction (they are part of the CKM matrix) come from experimental estimates of two properties of B meson decay observed in prior experiments.
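For reference, in the Standard Model the CP-violating phase lives in the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix. In the standard Wolfenstein parameterization (to order λ³), the parameter η is the one that generates CP violation, and λ, A, ρ, and η are all fit from experiment, including B meson decays:

```latex
V_{\mathrm{CKM}} \approx
\begin{pmatrix}
  1 - \lambda^2/2 & \lambda & A\lambda^3(\rho - i\eta) \\
  -\lambda & 1 - \lambda^2/2 & A\lambda^2 \\
  A\lambda^3(1 - \rho - i\eta) & -A\lambda^2 & 1
\end{pmatrix}
```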
The logical place to start trying to reconcile the Standard Model with the new B meson decay data would be to refit the relevant physical constants in the Standard Model to reflect the new data, rather than to hypothesize five new particles.
I'm not the only one to doubt supersymmetry is the cause of anything odd going on in the B meson decay data. An article by Maxime Imbeault, Seungwon Baek, and David London in the issue of Physics Letters B that came out earlier this month explains that:
At present, there are discrepancies between the measurements of several observables in B→πK decays and the predictions of the Standard Model (the “B→πK puzzle”). Although the effect is not yet statistically significant—it is at the level of 3σ—it does hint at the presence of new physics. In this Letter, we explore whether supersymmetry (SUSY) can explain the B→πK puzzle. In particular, we consider the SUSY model of Grossman, Neubert and Kagan (GNK). We find that it is extremely unlikely that GNK explains the B→πK data. We also find a similar conclusion in many other models of SUSY. And there are serious criticisms of the two SUSY models that do reproduce the B→πK data. If the B→πK puzzle remains, it could pose a problem for SUSY models.
Physics publication moves fast. Six articles have already cited this one, which was first available in pre-print form back in April. One of those citing articles suggests that a tweak to how quark interactions impact B meson decay is the key to explaining the experimental data. Calculating quark interactions directly from the theory is very difficult, so this is an obvious place where the theoretical Standard Model prediction, from which the deviation is measured, could be wrong.
Of course, there is nothing wrong with hypothesizing all sorts of wacky or unlikely theories in theoretical physics. Dozens, if not hundreds, of papers that do just that are posted every month, and people have been regularly publishing SUSY theories for several decades; a lack of evidence to contradict them has kept the flow coming. But regular users of the pre-print database recognize how speculative they are and how easy it is to come up with one. The fact that reputable physicists come up with a theory and publish it does not mean that it is likely to be true.
It is widely known that some tweaks to the Standard Model are necessary anyway for a variety of reasons: the failure to make a definitive Higgs boson observation, the confirmation that neutrinos have mass, and the lack of a clear dark matter candidate, for example. So the physics vultures have been swarming in to update it. Professionally, putting a theory in play is a bit like putting a chip on a roulette table. Nobody knows what the final result will be, but you can't win a Nobel Prize if you don't have a theory in play, and if you are right, you look like a genius. But the less clear it is how things will turn out, and the closer one gets to finding out who wins, or at least who loses (which the Large Hadron Collider may do), the more pressure there is to put theories on the table.
1 comment:
The Five Higgs pre-print has been revised to weaken the claim that it is supported by the new data.