We're doomed. According to Paul Krugman, we are heading headlong into the Third Depression, which will be deep and last for years, because world leaders, foreign and domestic alike, have responded with austerity measures instead of stimulus.
I hope that he's wrong, and I'm sure that he hopes so too. There are certainly plenty of economic indicators that seem to show that we have already begun a slow rebound from a deep and long recession. The recovery may be fragile, and it is certainly not irreversible. But, apart from the little blips that have come from the end of temporary Census jobs and from the housing market rush that preceded the expiration of the homebuyer tax credit, followed by an inevitable slump, there are few signs that we are headed for a double-dip recession.
Stimulus may make a recession less painful, but ultimately what matters is that the structural problems that existed before a recession have been addressed so that they will not recur. The consensus reached on financial reform that is likely to become law this week will be one major step towards resolving that problem. A bigger worry is that there may still be shoes left to drop in the housing crisis as shadow inventories return to the market, real estate investors give up trying to avoid taking losses, and delayed foreclosures start to run their course.
Weakened stimulus spending is still a concern, one that is driving a billion-dollar-plus hole in the Colorado general fund budget that Governor Ritter and Joint Budget Committee Chairman Mark Ferrandino have the unenviable job of trying to fix. But, if the economy catches its stride again, some of the cuts that they will be forced to propose can be withdrawn as revenue estimates return to normal levels.
29 June 2010
Motl Melts Down
A PhD and some physics publications are no proof against becoming a crank. Lubos Motl is a case in point. He is melting down. In an effort to criticize non-string theories of quantum gravity, he attacks a seminar with both string theory and non-string theory representatives for being held in Mexico. Why?
The Americans and Europeans who are working on "non-stringy quantum gravity" are just practitioners of this Mexican-level science. They're the Mexicans' true peers, so it shouldn't be surprising that they're third-rate scientists in the U.S. context. There's no string theory group in Mexico - another fact that shouldn't be shocking given Mexico's average IQ around 85. The IQ increment needed to go from non-stringy quantum gravity to string theory is around 20.
It reminds me of the people who claim that general relativity is a Jewish conspiracy that must be wrong for that reason.
Earlier in the same week, he was ranting against efforts to give elementary school students positive female role models in physics, because "the dominant association of science with maleness is . . . an objective fact and a consequence of some patterns in biology."
One would think that a physicist, of all people, could focus on the ideas involved and the merits of the scientific questions, but one would be wrong.
28 June 2010
Is the PCAOB A Sleeper Case?
Free Enterprise Fund v. PCAOB, decided today, held that a President cannot constitutionally be separated from PCAOB board members by two layers of tenure, and ruled that board members are employees at will of the SEC, rather than having tenure protections vis-à-vis the SEC.
The ruling also provided that it does not apply to civil servants, and in traditional executive branch departments it wouldn't, because the line leadership in those departments is removable at will by the President.
But, what about civil servants in independent agencies like the SEC, FBI, NLRB and so on?
The logic of the Court's ruling applies to them, even if its terms do not, and the justification for not applying the two-layers-of-tenure rule to them (that they do not make quasi-legislative rulings) could become a principle of administrative law: that officials of the United States government who are two layers of tenure removed from the President may not make quasi-legislative rulings without the involvement of a superior.
It could alternately be used to abolish tenure protections for rank and file workers in independent agencies, although this seems a less likely outcome.
Either way, the case could have consequences for the structure of the federal government bureaucracy that reach well beyond its very modest actual holding.
SCOTUS Redux
The U.S. Supreme Court ended its term with four decisions today. It is Justice Stevens's last day on the bench, at least publicly. The theme for today, as the Court decided the hard questions reserved for the end of the term, was minimalism: deciding as little as possible to resolve the cases before it, without setting clear guidelines to resolve the follow-up questions that inevitably flow from its decisions.
The decisions:
1. In McDonald v. Chicago, the Court holds that the Second Amendment applies to the states to the same extent that it applies to the federal government; the standard of review is not determined. The decision is 5-4 with the usual conservative majority suspects; the minority votes not to incorporate the Second Amendment. A privileges and immunities clause expansion is rejected. The scope of Heller, which revived the right to possess a handgun for self-defense as an individual Second Amendment right subject to reasonable restriction of some sort, has not been fully established. Dicta in Heller, which have been seized upon by the federal courts, suggest that almost all current federal gun control laws are reasonable restrictions on the Second Amendment right to bear arms, and presumably many state law restrictions would pass muster as well. But, there are many gray areas.
By clearly loosening the standard for incorporation of constitutional rights under the 14th Amendment Due Process Clause, the case also leaves open a slightly greater chance that some of the remaining few rights under the Bill of Rights that apply only to the Federal Government might be incorporated or might be incorporated more fully to apply to the states. The unincorporated rights include the right to indictment by a grand jury, the right to a civil jury trial, and the right to a unanimous criminal jury.
2. In Christian Legal Society v. Martinez, a requirement that all organizations be open to all students in order to receive funding through student fees at a public college is upheld. Some of the harder issues in the case were avoided because they were not preserved below, the parties having stipulated to the relevant facts.
3. In Bilski v. Kappos, the availability of business method patents is narrowed, but they are not completely eliminated as they would have been under the ruling of the en banc U.S. Court of Appeals for the Federal Circuit. The patented method of hedging risk in the energy industry was invalidated on the basis that it was an abstract idea under existing precedents, in a way that could be applied to many business method patents. The precise scope of business method patents in future cases was not defined.
There were four votes, which did not prevail, to end business method patents entirely. But, those votes signal hostility on the Court to a broad reading of business method patent scope.
4. In Free Enterprise Fund v. PCAOB, the Court holds that the Public Company Accounting Oversight Board, which "was created as part of a series of accounting reforms in the Sarbanes-Oxley Act of 2002 . . . composed of five members appointed by the Securities and Exchange Commission," is unconstitutional as constituted, because "the SEC . . . cannot remove Board members at will, but only 'for good cause shown,' 'in accordance with' specified procedures . . . [and] the Commissioners, in turn, cannot themselves be removed by the President except for 'inefficiency, neglect of duty, or malfeasance in office.'" Two layers of tenure remove the PCAOB too far from Presidential control. The remedy, the Court determines, is to allow PCAOB members to be removed at will by the SEC. The PCAOB otherwise continues undisturbed. The four liberal Justices, in dissent, would have upheld the tenure protections for PCAOB members from the SEC as well.
Solicitor General Kagan has been nominated to replace Justice Stevens on the Court and is expected to be confirmed sometime this summer.
Earliest Writing Education Started Late
The first written language, Sumerian, was used in Mesopotamia by scribes about 3,500 B.C.E. When did scribes start their training?
Some curious new evidence suggests that it started at around age 12-13, around the time that kids are in the 7th or 8th grade today.
25 June 2010
Smart and Dumb Criminals
A former federal prosecutor comments on the people that he prosecuted:
So many of the criminals I prosecuted were, let’s say, not very bright. They were so not bright that catching and convicting them was relatively easy. (These were mostly the ones that resulted in guilty pleas.) One thing I always found troubling as a prosecutor was that, in a way, we were fooling the public. We let the public think that law enforcement and prosecutors were doing a brilliant job of keeping the streets safe, or maintaining the integrity of the financial markets, or ferreting out public corruption. Sure, this was true some of the time. But a lot of the time, we were arresting low-hanging fruit, the bad guys who were so naïve they didn’t realize they were leaving a trail behind them that practically glowed in the dark. In short, in a world where there are smart criminals and not-so-smart criminals, we were disproportionately getting the latter.
It is true, beyond a shadow of a doubt, that lots of not very bright people commit lots of serious and not so serious crimes. High school dropouts are vastly over represented in prison populations, and high school dropouts (particularly male high school dropouts) overwhelmingly performed very poorly academically for many years before dropping out. In contrast, people who have graduated from high school and had even some college education, without actually getting a degree, are vastly underrepresented in prisons.
The hard question is how much this represents differences in actual crime rates, and how much this represents different levels of involvement in the criminal justice system.
We know that race and socio-economic class have an immense impact on the likelihood that someone will face criminal sanctions for drug use. Is that true of other crimes?
Fiction frequently makes the assumption that there are indeed two classes of criminals, stupid criminals who routinely cycle through the criminal justice system, and smart criminals who very rarely get caught. But, is there really a class of smart criminals who routinely evade capture out there?
My intuition is that the reality varies a great deal by type of crime. There probably is a fairly large class of people who routinely commit white collar crimes like willful tax evasion, securities fraud, identity theft, document forgery, and consumer fraud who routinely escape criminal sanctions. White collar crime can pay well, serious penalties have been rare until recently, enforcement is usually a low priority, and acquittals are relatively common in gray areas where criminal intent may be factually present but hard to prove. Often victims of these crimes are blamed for being gullible or failing to take proper care.
But, there is probably not a very large class of people who routinely steal or extort tangible property and routinely escape criminal sanctions. Theft, robbery and burglary typically involve monetary values that are quite low. Even if the average offender commits a couple of hundred offenses for each time that he or she is caught, this isn't a very lucrative way to make a living. Bank robbery, for example, nets an average of about $5,000 per offense, and individuals rarely get away with more than half a dozen offenses before they are caught. And the facts that offenders who are convicted of these offenses are extremely likely to be caught reoffending, and that a large share of those convicted of these offenses have prior criminal records, support the inference that a large share of the pool of people who commit these crimes are in the stupid-criminals-who-get-caught class. Perhaps there are some stolen property brokers (a.k.a. fences) who manage to make a living from it, and a few people making a living stealing scrap materials from abandoned houses in places like Detroit, but there seem to be few professional thieves who consistently manage to escape punishment.
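To make the "not very lucrative" point concrete, here is a minimal back-of-the-envelope sketch using only the rough bank robbery figures quoted in the paragraph above; the numbers are illustrative, not a dataset.

```python
# Back-of-the-envelope check, using only the rough figures cited above,
# of why routine theft pays poorly relative to the risk involved.
average_take_per_robbery = 5_000   # dollars per bank robbery (figure from the text)
offenses_before_capture = 6        # "rarely ... more than half a dozen offenses"

expected_career_haul = average_take_per_robbery * offenses_before_capture
print(f"Expected total haul before likely capture: ${expected_career_haul:,}")
# -> Expected total haul before likely capture: $30,000
```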
The only places where theft seems to be a profitable way of life are places like Somalia, where law enforcement has broken down entirely, allowing piracy and banditry to prevail.
Violent crime seems to come in several subtypes. A lot of violent crime is committed between non-family members who are known to each other, more or less impulsively, often when the offender is drunk. This is hard to keep out of the criminal justice net consistently, although there are probably some communities that favor self-help over a resort to the law. Some violent crime is committed between family members and this is often chronic (e.g. domestic violence, incest). This may evade detection for a long period of time, particularly if the economic costs of punishing the offender or leaving the family are high and if it is possible to avoid the scrutiny of neighbors (for example, for single family homes on large lots as opposed to crowded apartment buildings).
A lot of violent crime seems to involve "enforcers" for criminal gangs that are engaged in vice, mostly against other gangs or people believed to be connected with other gangs. Clearance rates for these crimes by police are much lower than for ordinary violent crime and the connection to the gang can make this kind of activity profitable. While the criminal justice system may not catch many of these enforcers, however, long profitable careers doing this seem to be rare. It is dangerous work, and gangs rarely have many middle aged members. Gang members in places where there is a lot of gang related violent crime tend to live lives that are nasty, brutish and short. Anecdotally, there appears to be considerable evidence that gangs who hire outside "professional" hit men to kill people that cross the gang often kill their own hit men when they come to collect their payment or shortly before or after that point, in order to cover up evidence of their connection to the killing. The large proportion of prison inmates with a gang history suggests that gangs aren't very good at keeping their members from ending up in prison eventually for something. If there is a class of hit men or enforcers out there who lead long profitable careers without getting caught, they seem to be doing a very good job of staying invisible.
On the other hand, some of the most frightening crime is carried out by psychopaths for their own enjoyment. Many serial killers and serial rapists are not captured for decades and are only captured after they have committed many serious crimes. Psychopaths make up a minority of murderers, but a very large share of people who commit premeditated murders. Psychopaths are unlikely to feel an urge to confess to their crimes. Recidivism is relatively rare, because once caught they are usually sentenced to very long terms in prison. The upper limit on how many of these predators can be out there is fairly low, because there aren't all that many unsolved serious violent crimes, with no indication of who the perpetrator is, that don't seem likely to involve gang violence. Psychopathic violent criminals tend to have day jobs and engage in crime as a hobby rather than as a vocation, so it doesn't need to be profitable for them. If many of these crimes are committed by repeat offenders, then the number of possible crimes attributable to them has to be divided by the number of offenses per psychopath to determine how many violent evil criminals are out there committing crimes and not getting caught. At an order of magnitude level, we are talking about a few violent psychopaths per million people, tops, in a world where hundreds of thousands of men per million in some social circumstances will ultimately end up with felony criminal records. In all likelihood, far more people are caught in the crossfire or killed due to mistaken identity in gang warfare than are killed by depraved psychopaths. Gangs of individuals who together engage in serial killings for pleasure, as opposed to gang warfare, however, appear to be almost unprecedented.
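The order-of-magnitude argument in that paragraph can be made explicit with a tiny sketch. Every number below is a hypothetical placeholder chosen only to show the division being described, not an estimate of actual crime rates.

```python
# Hypothetical illustration of the order-of-magnitude argument above: divide a
# (made-up) count of unsolved, apparently non-gang serious violent crimes by a
# (made-up) rate of offenses per active repeat offender. None of these numbers
# are real data; they only show the structure of the estimate.
reference_population = 1_000_000
unsolved_stranger_crimes_per_year = 50       # hypothetical placeholder
offenses_per_active_offender_per_year = 10   # hypothetical placeholder

active_offenders = unsolved_stranger_crimes_per_year / offenses_per_active_offender_per_year
print(f"~{active_offenders:.0f} such offenders per {reference_population:,} people")
# -> ~5 such offenders per 1,000,000 people
```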
In contrast, individuals who suddenly snap and kill large numbers of people rarely seem to have much of a criminal record, if any, usually live to commit only one incident before dying in the process or facing a long prison term, and often appear to have thinking so disorganized that they cannot rightly be grouped with the "smart" criminals.
Vice may look more like white collar crime in terms of the possibility of sustaining a criminal class. Known drug cartels manage to persist for decades. A large share of all offenses go unpunished. Prosecutors not infrequently are lenient to mid-level offenders who turn state's evidence. Prostitution arrest rates are very low, and a few arrests rarely prevent someone from continuing to be a prostitute, because punishments are typically mild. And, vice generally pays much better than theft or violent crime. Whores make better money than hit men do.
Taken together, there is a decent case that there is a class of smart criminals out there who largely avoid punishment for their actions. But, this probably consists overwhelmingly of economically motivated white collar criminals and vice participants, rather than violent criminals and thieves of tangible personal property. There may also be hundreds, or even a few thousand, psychopaths out there committing crimes, most of whom appear to operate independently of each other. And, there are probably hundreds of thousands, if not millions, of families out there with chronic histories of domestic crimes that totally escape the criminal justice system.
Disordered Locality
One of the most interesting ideas of loop quantum gravity is that the structure of space-time may be disordered and lack the same kind of locality found in macroscopic classical physics. A September 2009 paper by Chanda Prescod-Weinstein and Lee Smolin exploring the concept is found here.
The basic idea is that, fundamentally, space-time may be made up of a bunch of points, each connected to some other points. Points may be two, three, four, or, for example, five hundred hops away from each other by the shortest route. But it might also be possible that two points which, by every other route, are no less than several hundred hops apart are nonetheless connected to each other directly.
If macroscopic distance is defined to be the average number of hops that it takes to get from point A to point B over the possible routes, then non-local links are those that involve many fewer hops than average. Prescod-Weinstein and Smolin's paper defines the concept in terms of volume rather than length, but the two formulations express the same basic idea.
In this kind of world, macroscopic distance is only an emergent property of fundamental quantum distance.
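A toy model (my own illustration, not the formalism of the Prescod-Weinstein and Smolin paper) makes the idea concrete: on a network where distance is just a count of hops, a single extra edge between "far apart" points behaves like the non-local link described above. For simplicity, shortest-path hop count stands in for the averaged distance.

```python
# Toy illustration of "distance as hop count" on a network of points.
# Nodes on a ring are connected only to their neighbors (purely local links);
# one extra edge then acts as a non-local link, far "shorter" than the
# macroscopic distance it spans.
from collections import deque

def hop_distance(graph, start, goal):
    """Breadth-first search: number of hops along the shortest route."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return None  # unreachable

n = 1000
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
print(hop_distance(ring, 0, 500))   # 500 hops: macroscopically "far apart"

ring[0].add(500)                    # add one direct, non-local link
ring[500].add(0)
print(hop_distance(ring, 0, 500))   # now a single hop
```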
A non-local connection in this theory is a bit like the science fiction concept of a wormhole, a direct connection between two macroscopically non-adjacent points. But unlike the science fiction idea of a wormhole, each one is only big enough to fit a single quantum particle. There can be (and indeed should be) zillions of them, but if they become more rare with macroscopic distance, relative to local connections, then they become almost invisible in classical physics.
By analogy, in classical thermodynamics there is some probability that all of the oxygen molecules in a room will end up entirely on the north side of the room, turning the south side of the room into a vacuum for a moment. But, the longer the period of time involved, and the more oxygen molecules there are involved, the less likely this is to happen. And when the probability of this happening is less than, say, 50% in 15 billion years (roughly the age of the Universe), it becomes effectively impossible.
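For a sense of how fast that probability collapses, here is a minimal sketch of the standard calculation: if each of N molecules is independently equally likely to sit on either side of the room, the chance that all of them are on one particular side at a given instant is (1/2)^N.

```python
# How the "all the oxygen on one side of the room" probability collapses with N.
# Each molecule is assumed equally likely to be on either half of the room, so
# the chance that all N are on one particular half at a given instant is (1/2)**N.
from math import log10

for n_molecules in (10, 100, 1_000, 10**25):  # 10**25 ~ a macroscopic amount of gas
    log_probability = n_molecules * log10(0.5)
    print(f"N = {n_molecules:.0e}: probability ~ 10^({log_probability:.3g})")
```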
One way to interpret the probability that a photon will travel above or below the speed of light, which goes as 1/I (where I is the space-time distance between the two points), is that it is a measure of the extent to which space-time is non-local over a distance I (we would expect non-locality to average out over greater distances in any disordered space-time model that appears local at the macroscopic level).
This could also help to explain a fundamental cause of the uncertainty principle. Perhaps it is impossible to know location and momentum with more than a certain degree of specificity because the concept of distance becomes increasingly ill defined as one looks closer and closer. Indeed, folks like Stephen Hawking have used this notion of disorder to explain why one could have a "Big Bang" without having any particular moment that would clearly be the beginning of everything.
This sounds crazy and weird. And it is weird, sort of. But, not that weird.
Have you ever heard of the concept of "six degrees of separation"? It is a social networking idea. You might know somebody personally, and that is one degree of separation. If you know somebody who knows somebody, you are separated from that other person by two degrees of separation. The theory goes that almost no two people are separated by more than six degrees of separation.
People who are close to each other physically, for example, in the same small town or neighborhood, or who go to the same school or work at the same business, are likely to have few degrees of separation between them. People who live in worlds that are very different geographically and culturally, for example, someone on a Wisconsin farm and someone in a Papua New Guinea village with few ties to the outside world, are likely to have many degrees of separation between them.
You could probably draw a map ordered by degrees of social separation that would look very much like a map of degrees of physical separation. The linkage between social separation and physical separation would have been even stronger in pre-modern times, when travel and long distance communication were much more rare.
By analogy, fundamental distance might be analogous to our degrees of social separation based map, while apparent distance might be analogous to our physical separation map.
So, if locality is disordered, then the fundamental nature of space-time is that if you look really closely, at a point by point level, you can't line the points up neatly on a number line in any dimension of space or in time.
It gets worse. In this kind of disordered locality scenario, it may be that the number of dimensions in space-time itself is not well defined. There may be several different paths you can take to get from point A to point B, but the choices in those paths may not be clearly identified with up and down, left and right, forward and back, future and past. Four directions may eventually provide a pretty good approximation of the network, but you may need a fairly sizeable network of points for those directions to become well defined.
Thus, rather than the idea from string theory that we may have four macroscopic dimensions and six really tiny curled up dimensions, in loop quantum gravity, at small scales, the concept of the number of space-time dimensions may simply become ill defined. Both approaches are pretty weird, but the loop quantum gravity approach is "natural" by comparison; one goes from a very simple notion of space-time as points connected by lines, to a very familiar one (the four dimensional universe), without having to find a way to make six clearly predicted dimensions of space-time become invisible.
This also sounds weird, but the idea of emergent, non-integer dimensions actually has solid mathematical precedent. It is the fundamental idea behind the concept of a "fractal dimension." The definition of a fractal dimension takes a relationship between two quantities that happens to be "1" on a line, "2" in a flat plane, "3" in a flat three-dimensional space, and so on, and finds a way to generalize it so that a form that is squiggly in a certain kind of way at all scales (e.g., a shoreline) can be assigned a dimension that is not an integer. It also happens that fractal dimensions are quite important in explaining a non-obviously related concept called "chaos," a kind of ordered near-randomness that can arise when a deterministic equation is very sensitive to initial conditions. Chaotic functions tend to produce fractal shapes when plotted.
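As a concrete illustration (my own sketch, not drawn from the paper), the box-counting recipe behind fractal dimension can be run numerically on the Koch curve, whose exact dimension is log(4)/log(3), about 1.26, strictly between a one-dimensional line and a two-dimensional plane.

```python
# Numerical sketch of a "box-counting" fractal dimension, using the Koch curve,
# a shape that is squiggly at every scale, as the example. Counting how many
# boxes of size eps the curve touches, N(eps), and comparing log N(eps) to
# log(1/eps) gives a dimension between 1 (a line) and 2 (a plane).
import math

def koch_points(depth):
    """Vertices of a Koch curve built by 'depth' rounds of the usual subdivision."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(depth):
        new_pts = []
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
            a = (x1 + dx, y1 + dy)
            c = (x1 + 2 * dx, y1 + 2 * dy)
            b = (x1 + 1.5 * dx - math.sqrt(3) / 2 * dy,   # peak of the bump
                 y1 + 1.5 * dy + math.sqrt(3) / 2 * dx)
            new_pts += [(x1, y1), a, b, c]
        new_pts.append(pts[-1])
        pts = new_pts
    return pts

def box_count(points, eps):
    """Number of eps-sized grid boxes that contain at least one point of the curve."""
    return len({(int(x / eps), int(y / eps)) for x, y in points})

curve = koch_points(7)  # segments much shorter than the smallest box below
for eps in (0.1, 0.03, 0.01, 0.003):
    n_boxes = box_count(curve, eps)
    estimate = math.log(n_boxes) / math.log(1 / eps)
    print(f"box size {eps}: N = {n_boxes}, rough dimension estimate ~ {estimate:.2f}")
print("exact value: log(4)/log(3) =", round(math.log(4) / math.log(3), 2))
```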
The definitions of distance and volume in a disordered locality model are quite similar in concept to the definition of dimension used in the mathematics of fractals. So, it turns out that there is already a mathematics well crafted to describing how a network could become emergently four dimensional in a mathematically consistent way.
A system that has disordered locality at a small scale is mathematically interesting even if it isn't an accurate description of the universe. In the advanced calculus classes called "real analysis" and "complex analysis," one learns that the "continuity" of a number line made up of points is a central assumption upon which most of advanced mathematics relies. Even discrete mathematicians, who deal with number lines where there are gaps, normally deal with far simpler systems than the quantum space-time, approximately continuous at large scales, that loop quantum gravity is considering.
And, there are important macroscale differences between the two approaches. For example, one of the major conclusions of the theory of social networks is that a very small number of non-local links dramatically reduces the number of degrees of separation between very large groups of people. Similarly, it doesn't take many wormholes to dramatically reduce the shortest distance between two points in space. Even a small number of non-local links can have a great effect on the behavior of the entire system.
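A short, self-contained experiment (an illustration of the network claim above, not a calculation from the paper) shows the effect: on a ring of a thousand locally connected nodes, a handful of random long-range links collapses the average hop count between nodes.

```python
# Effect of a few non-local links on a locally connected network: average hop
# count between randomly sampled pairs of nodes, before and after adding a
# handful of random long-range "shortcuts".
import random
from collections import deque

def average_hops(graph, samples=200, seed=1):
    """Average shortest-path hop count over randomly sampled node pairs."""
    rng = random.Random(seed)
    nodes = list(graph)
    total = 0
    for _ in range(samples):
        start, goal = rng.sample(nodes, 2)
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, dist = frontier.popleft()
            if node == goal:
                total += dist
                break
            for neighbor in graph[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, dist + 1))
    return total / samples

n = 1000
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
print("local links only:  ", round(average_hops(ring), 1))   # ~250 hops on average

rng = random.Random(0)
for _ in range(10):                  # add ten random non-local links
    a, b = rng.sample(range(n), 2)
    ring[a].add(b)
    ring[b].add(a)
print("with ten shortcuts:", round(average_hops(ring), 1))   # dramatically fewer
```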
In the paper linked above, for example, it is demonstrated that it would take only one non-local link in every box of 100km on each side to produce the dark energy effect observed in the universe, even though the effect is equivalent to about 70% of the mass-energy in the universe. A few wormholes can have immense effects.
24 June 2010
Nacchio Sentence Tweaked
Joe Nacchio's sentence on his securities law conviction was reduced by just two months, to 70 months (of which 18 months have been served so far), although the restitution award in his case was reduced by $7 million.
The ruling basically sides with the government to the greatest extent possible under the 10th Circuit decision reversing the original sentences.
Big Air Force Cuts Considered
The U.S. military is seriously considering retiring all 66 of its Reagan-era B-1B bombers. The B-1B has a far larger payload than the B-2 stealth bomber or the B-52, can travel at supersonic speeds, and is highly maneuverable. The trouble is that the U.S. military no longer needs those capabilities.
For missions where any aircraft or anti-aircraft fire are possible, the F-22 stealth fighter, B-2 stealth bomber and armed drone aircraft are better suited to the task.
For missions where opposition force threats are unlikely, the B-1B's big payload doesn't matter when the amount of bombs that you need to drop has declined dramatically.
In World War II it could take 9,000 bombs to hit a target the size of an aircraft shelter. In Vietnam, 300. Today we can do it with one laser-guided munition from an F-117.
- USAF, Reaching Globally, Reaching Powerfully: The United States Air Force in the Gulf War (Sept. 1991), p. 55.
[D]uring the peak six years of the Vietnam war, 6.7 million tons of bombs were dropped. That was the same rate they were dropped during the major bombing campaigns of World War II. But in eight years of fighting in Afghanistan and Iraq, only 42,000 tons have been dropped. Thus, while in the past, a million tons were dropped a year, for the war on terror, less than 6,000 tons a year were dropped. That means a reduction of over 99 percent. Even when you adjust for the different number of U.S. troops involved, that's still over 97 percent fewer bombs dropped. . . .
[D]uring Vietnam the average bomb size was close to 1,000 pounds, now it's less than half that. Weapons like the hundred pound Hellfire missile are more popular with the ground troops, than the 2,000 pound bomb that was so often used in Vietnam.
Most of the bombing is now being done in Afghanistan. In Iraq, less than a ton of bombs a month are being dropped. In Afghanistan, it's over 100 tons a month. In Afghanistan, this tonnage has declined nearly 40 percent in the last year. Partly due to the greater use of smaller bombs and missiles, and partly due to the greater use of civilians as human shields by the Taliban. . . . About a hundred civilians are killed each month in Afghanistan. Most are killed by the Taliban, but 10-20 percent are killed by American smart bombs, missiles and shells.
From here.
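A quick arithmetic check of the percentages in that excerpt, using only the figures it quotes:

```python
# Sanity check of the quoted reduction in bombing tonnage, using only the
# figures given in the excerpt above.
vietnam_tons, vietnam_years = 6_700_000, 6   # peak six years of the Vietnam war
recent_tons, recent_years = 42_000, 8        # eight years in Afghanistan and Iraq

vietnam_rate = vietnam_tons / vietnam_years  # ~1.1 million tons per year
recent_rate = recent_tons / recent_years     # 5,250 tons per year
reduction = 1 - recent_rate / vietnam_rate
print(f"{reduction:.1%} fewer tons dropped per year")  # -> 99.5% fewer
```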
The twenty-four ton payload of a B-52 (vs. 60 tons for a B-1B) is plenty to drop smart bombs from high altitudes in an environment where no opposition forces are capable of shooting back and the bombs dropped are almost certain to hit their target without spending much money to keep the planes going.
Despite being an average of 47 years old, the average B-52 has flown just 16,000 hours of its 28,000 flight hour useful life. Its simple mission doesn't take much training (unlike air to air combat training for fighter pilots) and unlike transport planes it doesn't need to fly in peacetime.
By the time that the B-52 fleet has enough hours to be retired, it may make sense to replace a plane with such a simple mission (now that its more difficult missions are handled by other aircraft in the fleet) with a drone aircraft rather than a new manned bomber.
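A rough projection of how long that could take, using only the fleet averages quoted above (a sketch, not an official service-life estimate):

```python
# Rough projection of remaining B-52 service life at the historical usage rate,
# based only on the fleet averages quoted above.
average_age_years = 47
hours_flown = 16_000
useful_life_hours = 28_000

hours_per_year = hours_flown / average_age_years      # ~340 flight hours per year
remaining_hours = useful_life_hours - hours_flown     # 12,000 flight hours
years_remaining = remaining_hours / hours_per_year
print(f"~{years_remaining:.0f} more years at the historical usage rate")  # ~35 years
```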
Fighter Aircraft Cuts
Force structure cuts might also extend to the air arm’s much cherished but currently under-utilized fighter force. The service already plans to early retire 250 fighters this year, Air Force Secretary Michael Donley said last month; gone are 112 F-15s, 134 F-16s, and 3 A-10s.
From here.
23 June 2010
Bikes Belong On Sidewalks
Denver law requires bicycles to ride on streets like cars, rather than allowing them to ride on sidewalks like pedestrians, in most circumstances ("it's OK to ride your bike on a sidewalk in Denver if you are within a block of parking your bike and are traveling 6 mph or less. It's also OK if you are on a street that is part of a designated bike route, such as 20th Street between Coors Field and the Lower Highland neighborhood.") Denver police are enforcing this law at the moment.
Last month, police began a monthly "focused enforcement" day, where four officers on bikes, and a supervisor, hit the streets to cite cyclists for breaking laws. More than 40 bicyclists have received $60 tickets for such offenses as riding on the sidewalk.
The law in this case is an ass. In much of Denver, in places without wide bike lanes, it is far more dangerous for a bicyclist to ride on the street than on a sidewalk. Both parked cars opening their doors and passing traffic are constant hazards. In collisions between cars and bicycles, the bicyclist almost always loses.
Many of the people I've known who bike to work regularly, Sam Van Why, who works at the College for Financial Planning in the Denver Tech Center, for example, have been hit and seriously injured while biking on the streets. Almost every regular urban bicyclist has had some sort of mishap trying to bike on city streets.
Even when a bicyclist isn't hit, having bicycles and cars share the road is a traffic hazard. A bicycle almost always goes much slower than the speed limit and holds up traffic if it is not passed. But normal traffic lanes aren't wide enough to allow a car to pass a bicycle at a safe distance when there is oncoming traffic, which is perpetual on busy city streets.
Often the driver either makes a dangerous passing maneuver out of impatience, risking a head-on two-car collision, or drives closer to the bicyclist than the driver should, causing the bicyclist to collide with something else while avoiding the near miss with the car.
In places that have designated bike paths (like most of Colorado's resort towns), they look precisely like sidewalks. Ideal bicycle lanes look like sidewalks, not roads. Bicyclists and pedestrians likewise manage to co-exist just fine on the very sidewalk-like Cherry Creek and Platte River bike paths through Denver.
Injuries from bicycle-pedestrian collisions on sidewalks are almost always less serious, and in much of the city, sidewalks are empty most of the time.
The better rule is to let bicyclists choose. They can follow the rules that apply to pedestrians and keep to sidewalks and crosswalks, or they can follow the rules that apply to cars and ride on the street, and they should be allowed to switch regimes at any time when it can be done without causing a collision or confusion.
If Denver is really committed to multi-modal transportation it needs to focus on building safe places to bike, not on ticketing bicyclists who use sidewalks.
22 June 2010
Rezoned
Denver completely overhauled its zoning code Monday night. Some places are more easily developed now, while others face more restrictions on development. Many neighborhoods that are basically similar in zoning now require more nuanced adherence to established design standards. The movement to legalize granny flats made considerable progress, although not as much as supporters had hoped.
The change was the biggest overhaul since 1956 and leaves Mayor Hickenlooper with that major task accomplished.
Gluon Mass, QCD Developments and More
About Gluons and QCD
A gluon is one of a set of eight particles that transmit the strong nuclear force between quarks in the Standard Model of Particle Physics, in a manner analogous to the way that photons transmit the electromagnetic force; each gluon carries a pair of color charges. Gluons differ from each other only in color, usually expressed in terms of R, G, B, anti-R, anti-G, and anti-B. One would think that three by three possibilities would create nine gluons, but the math of the theory rules one combination out.
Gluons are always confined to color neutral composite particles made up of three quarks (baryons, such as the proton and neutron) or of a quark and an antiquark (mesons), although it is theoretically possible to have larger color neutral composite particles made of quarks, and the search is on for them.
Generally, gluons are described as having zero mass and no electric charge, a conclusion theoretically limited by the reality of confinement.
A set of equations called Yang-Mills theory is strongly believed to exactly describe the behavior of quarks and gluons via the strong nuclear force. This area of physics is called quantum chromodynamics (QCD), because it involves color charges, but it is very hard to apply mathematically, and experimental results are more precise than any of its predictions. Gluons do not appear to interact with leptons (i.e. the various flavors of electrons and neutrinos) or with the other bosons (i.e. photons and the particles that transmit the weak nuclear force: the W+, the W- (which is the antiparticle of the W+), and the Z).
The Case For Gluon Mass
Despite the conventional description of gluons as massless, research on Yang-Mills theory in the 1980s suggested that gluons do have an effective mass of 600 MeV/c^2 +/- 25%, and that this influences the way that unstable baryons and mesons decay, because the high mass of a gluon disfavors decay paths that include "gluon intermediate states."
Two recent investigations, independent both of each other and of the original researcher in the 1980s (although, of course, they read each other's publications), have proposed a gluon effective mass of 500 MeV/c^2 (which is consistent with the original estimate) by applying Yang-Mills theory in different ways.
Experimental evidence also suggests that "in the low energy limit, glue carries no spin. Rather, true excitations of the Yang-Mills field are some kind of colorless states that makes the spectrum and having the lower state with a massive glueball that can also be seen in labs."
Thus, gluons are ordinarily thought of as zero mass, spin-1 particles, which accurately summarizes their apparent behavior in the high energy limit; but in the low energy limit, gluons decouple from quarks and form glueballs that act like colorless, spin-0 particles with a mass of about 500 MeV/c^2.
For comparison's sake, the currently measured values for the other particles in the Standard Model are approximately as follows (in MeV/c^2 units):
Electron Neutrino: Less than 0.000007
Electron: 0.511
Up Quark: 2-8
Down Quark: 5-15
Muon: 105.7
Muon Neutrino: Less than 0.27
Charm Quark: 1,000-1,600
Strange Quark: 100-300
Tau: 1777
Tau Neutrino: Less than 31
Top Quark: 168,000-192,000
Bottom Quark: 4,100-4,500
W Particles: 80,398
Z Particle: 91,188
Photon: Zero
The predicted mass value for the Higgs boson tends to hover around 129,000 (i.e. about 129 GeV/c^2), with roughly 114,000 to 154,000 being the range not strongly excluded by experimental evidence.
Thus, a low energy state gluon would appear to have more mass than any kind of neutrino, a photon, an electron, a muon, an up quark, a down quark, or a strange quark. But a low energy state gluon has less mass than a charm quark, a bottom quark, a top quark, a W particle, a Z particle, or a Higgs boson.
Indeed, a low energy state gluon with spin zero and a 500 MeV/c^2 mass looks quite a bit like one of the electrically neutral, colorless Higgs bosons found in supersymmetry theories.
Free gluons appear in the low energy limit, decoupled from their quarks. Thus, QCD, and gluons in particular, look quite different in the low energy limit than in the high energy limit.
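To see where such a glueball would fall in the mass ladder above, here is a minimal sketch in Python. It simply reuses rough midpoints of the ranges quoted above; the numbers are the same approximations, not new measurements, and the neutrinos are omitted because only upper limits on their masses are known.

# Rough Standard Model mass ladder in MeV/c^2, using midpoints of the
# approximate ranges quoted above, to see where a 500 MeV/c^2 glueball falls.
masses_mev = {
    "photon": 0.0,
    "electron": 0.511,
    "up quark": 5,            # midpoint of 2-8
    "down quark": 10,         # midpoint of 5-15
    "muon": 105.7,
    "strange quark": 200,     # midpoint of 100-300
    "low energy glueball": 500,
    "charm quark": 1300,      # midpoint of 1,000-1,600
    "tau": 1777,
    "bottom quark": 4300,     # midpoint of 4,100-4,500
    "W": 80398,
    "Z": 91188,
    "Higgs (if about 129 GeV)": 129000,
    "top quark": 180000,      # midpoint of 168,000-192,000
}

for name, mass in sorted(masses_mev.items(), key=lambda item: item[1]):
    print(f"{name:<26} {mass:>12,.1f} MeV/c^2")

Sorting the table puts the hypothetical glueball squarely between the strange and charm quarks.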
For those of you who haven't been paying attention, this is quite remarkable.
It is a rare day when a Standard Model particle mass, let alone eight of them, is determined purely from theory to be different from the pre-existing conventional wisdom. There are, after all, only 37 particles in the Standard Model (including anti-particles, but ignoring chirality and polarization): 12 quark-type particles, 6 electron-type particles, 6 neutrino-type particles, 1 photon, 3 weak force particles, 8 gluons, and the Higgs boson, and all but the Higgs boson have been experimentally observed for more than a decade.
Note that the six neutrinos were confirmed to have mass less than a year ago.
Both of these results precede the arrival of any Large Hadron Collider data.
This now leaves the photon as the only massless particle in the Standard Model, which relies on the undiscovered Higgs boson to create mass in the first place.
We now have good reason to believe that there are fourteen more particles that interact with the Higgs field to obtain mass than we did when the Higgs process was originally contemplated, and this is without having discovered any new particles.
The massless photon result is likely to stand, however, as there are deep reasons related to the calculations that determine how photons move in quantum mechanics, and deep reasons in general relativity, for the photon to be massless, and the photon's behavior is understood with more theoretical precision than any other Standard Model particle.
Beyond Gluons
Research, both theoretical and experimental, also favors a model in which particles made of quarks exchange not just gluons but also pions, which are mesons made of up and down flavor quarks and antiquarks that come in three varieties, have spin zero, and, in the low energy limit of the theory, act in a manner similar to force-carrying bosons like the photon and the gluon. These are called "pseudo-Goldstone bosons."
The experiments involved measure proton spin statistics and attempt to reconcile them with the theory, an effort that has been underway without resolution since Feynman discussed it in his QED lectures in the 1980s, although results are now tantalizingly close.
Millennium Problem in QCD Solved?
One of the grand prizes in mathematics is the Millennium Prize, which awards $1,000,000 to the first person to solve one of seven very hard unsolved problems in mathematics proposed in the year 2000. The first prize was awarded in March of this year, for the Poincaré conjecture (a problem in topology concerning what is necessary to prove that a shape is topologically equivalent to a three dimensional sphere). Six problems remain unsolved after a decade.
The problem now claimed to have been solved relates to quantum chromodynamics, but the proposed solution, from a Russian physicist at The Ohio State University, has not yet been evaluated to determine whether it is correct.
The problem requires proof that:
the quantum field theory underlying the Standard Model of particle physics, called Yang-Mills theory, satisfies the standard of rigor that characterizes contemporary mathematical physics, i.e. constructive quantum field theory. The winner must also prove that the mass of the smallest particle predicted by the theory be strictly positive, i.e., the theory must have a mass gap. . . .
beyond a certain scale, known as the QCD scale (more properly, the confinement scale, as this theory is devoid of quarks), the color charges are connected by chromodynamic flux tubes leading to a linear potential between the charges. (In string theory, this potential is the product of a string's tension with its length.) Hence free color charge and free gluons cannot exist. In the absence of confinement, we would expect to see massless gluons, but since they are confined, all we see are color-neutral bound states of gluons, called glueballs. If glueballs exist, they are massive, which is why we expect a mass gap.
The proposed proof, if true, would establish that the Standard Model is mathematically "well behaved," rather than a mere "dirty trick" of physicists that is conceptually flawed.
This doesn't require proof, however, that Yang-Mills theory actually describes nature, only proof that if it did, it would be a mathematically consistent way of doing so. In the case of Quantum Electrodynamics (QED), the evidence that the theory describes reality is overwhelming. But scientific understanding of the strong and weak nuclear forces is still being refined.
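As an aside, the linear potential described in the quoted problem statement is easy to illustrate. Below is a minimal sketch of a Cornell-type quark-antiquark potential (a Coulomb-like short-distance term plus a linear confining term); the parameter values are illustrative textbook-scale numbers, not fitted values, and nothing here is specific to the proposed proof.

# Illustrative Cornell-type potential between a static quark and antiquark:
#   V(r) = -(4/3) * alpha_s / r  +  sigma * r
# The linear sigma*r piece is the "flux tube" term: the energy grows without
# bound as the color charges separate, which is the intuition behind confinement.
# Parameter values below are illustrative only.
ALPHA_S = 0.3      # rough strong coupling at hadronic scales
SIGMA = 0.18       # rough string tension, in GeV^2
HBARC = 0.1973     # GeV*fm conversion factor

def cornell_potential_gev(r_fm):
    """Potential energy in GeV at a quark-antiquark separation of r_fm femtometers."""
    r_natural = r_fm / HBARC                          # separation in GeV^-1
    coulomb_term = -(4.0 / 3.0) * ALPHA_S / r_natural
    linear_term = SIGMA * r_natural
    return coulomb_term + linear_term

for r in (0.1, 0.5, 1.0, 2.0):                        # femtometers
    print(f"r = {r:.1f} fm: V = {cornell_potential_gev(r):+.2f} GeV")

Pulling the pair apart by a couple of femtometers already costs on the order of a proton mass of energy, which is why the vacuum prefers to produce new hadrons rather than free color charges.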
Implications
For most practical purposes, QCD is little more than a tool for classifying and predicting the properties of vast numbers of odd, unstable composite particles made up of quarks that are exceedingly rarely observed outside of particle accelerators. One can support an exceedingly technologically sophisticated society with a quite simple proton and neutron model of atomic nuclei and empirical data on how those nuclei behave, systematized in the periodic table, tables of isotopes, and tables of the binding energy of each kind of atomic nucleus, with everything else derived from classical physics. The interesting parts of nuclear physics usually relate to the weak force.
But, if you are a scientist, rather than an engineer, these developments are highly relevant.
From an experimenter's perspective, the results permit high energy physicists to do more accurate experiments without new equipment. Typically, high energy physicists looking at phenomena like CP violations must first separate the interesting part of their data from well understood background effects explained by the strong force. The more precisely the strong force is understood, the more accurately the other results of a high energy physics experiment can be discerned.
And both experimenters and theoretical physicists now need to consider the question of whether massive particle effects observed at the Large Hadron Collider are Higgs bosons or glueballs, or whether those are in fact different names for the same things.
From a theoretical physicist's perspective, glueballs raise all sorts of questions about how mass arises. Also, many theoretical models require gluons that are always massless, something that has not been a problem until now. Because gluons interact at very short distances from each other, massive gluons could also be influenced in a non-negligible way by gravity, which grows in strength relative to the strong nuclear force as quarks get closer to each other.
The Mystery of Mass in Physics
Mass is a really mysterious and confounding problem in modern physics. Almost everything that is ugly or odd about modern physics is deeply connected to mass. Somehow or other, we are all missing some deeply important piece of the puzzle necessary to understand it.
Most of the unexplained fundamental constants in the Standard Model involve particle masses, or appear to be intimately related to mass like the gravitational constant or the propensity of a particle to undergo beta decay by particular paths.
The deepest divide in quantum mechanics scholarship, between string theorists and proponents of loop quantum gravity, involves the question of whether quantum gravity is a force mediated by a graviton or is a product of the geometry of space-time. The only undiscovered particle in the Standard Model exists to impart mass to particles in the theory.
The separation of the mass-imparting Higgs boson from the gravity-producing graviton in supergravity theories (including String Theory) oddly unwinds the equivalence between inertial mass and gravitational mass that is so central to general relativity, making the equivalence seem merely coincidental.
The weak nuclear force transforms fermions (i.e. quarks, neutrinos, and electron-like particles) into otherwise identical particles of higher or lower mass. These flavor (a.k.a. generation) transformations were totally unexpected when they were first discovered. Higher mass versions of quarks, neutrinos, and electrons are exceedingly unstable. Generally speaking, the heavier a fundamental particle is, the less stable it is. Sixteen of the twenty-four fermions in the Standard Model are unstable higher generation versions of their stable cousins.
All observed charge parity symmetry violations (CP violations) involve decays of at least one higher generation particle. Why does the weak interaction, so intertwined with mass (e.g. in decay rates), maximally violate parity, of all things, despite the fact that parity symmetry appears to hold for all reactions involving electromagnetism and strong interactions (despite a theoretical possibility of CP violation in strong interactions under the existing equations)? Why are the probabilities of electrons and neutrinos behaving in certain ways in weak interactions so deeply intertwined with the probability that the corresponding quarks will do the same, and with the mass ratios of the respective particles?
We know that mass and energy are related in a strictly observed conservation relationship, but conservation of mass itself is apparently grossly disregarded in both strong and weak nuclear interactions. Beta decay, and now apparently the low energy limit of QCD, routinely produce particles that weigh much more than the particles from which they arose. As the wide ranges of mass estimates for the different flavors of quarks show, even the mass of a quark of a given flavor relative to another quark of the same flavor is not a stable quantity.
There is nothing inconsistent or theoretically improper about systems having changing masses. The E=mc^2 relationship neatly converts energy into matter and vice versa. But that doesn't make these systems any less maddening. Conservation of matter is deeply ingrained in human intuition, and the fact that it isn't observed in certain circumstances outside our normal perception and experience is, as a result, jarring.
Our intuition yearns to find some sort of intermediate structure, preons or string excitations, for example, between raw mass-energy and the particles of the Standard Model. There could very well be some. But modern quantum mechanics and general relativity present the facts to us without sugar coating or explanation. The theory tells us precisely the likelihood that a muon will all of a sudden become an electron, but it does very little to illuminate why this happens.
Electromagnetism doesn't generally change a particle's rest mass by itself. An electron remains an electron. Photons have differing energies, but remain massless at all times. Rarely, an electron and positron condense out of a high energy photon, but these instances are brief and the conditions that produce them are exceptional.
The strong nuclear force is at least discreet about it. It hides its massive glueballs in low energy states within confined systems, where they can only be observed indirectly.
The weak nuclear force, in contrast, flaunts itself, spitting out impossibly large products from impossibly small packages. The weak force is also a tease. One can be forgiven for thinking that a down quark is simply a composite particle made up of the up quark, electron, and electron neutrino that provide its most common channel of beta decay; the combined masses on both sides of the reaction are very similar. Yet nothing in quantum mechanics comes out and says that particles are composite, and the intuition that down type quarks are composite is far less solid at the next generation, where a strange quark decays into a charm quark, a muon, and a muon neutrino, with combined masses far in excess of that of the strange quark.
While weak force branching ratio grids strongly suggest that there are precisely three generations of fermions, the reason that the universe comes in precisely three flavors of four kinds of particles, one of which, the neutrino, appears to come in only left-handed varieties, is obscure.
All hypothetical particles with spins other than 1/2 or 1 are deeply connected with gravity and mass in their respective theories.
We know that a particle's speed relative to the speed of light affects its relativistic linear momentum, which looks very similar to its mass, because in Newtonian circumstances and in the low speed limit of special relativity, linear momentum is simply mass times velocity. The analog to Newtonian mass in general relativity that induces gravity, called the stress-energy tensor, includes, in addition to rest mass-energy density, linear momentum, linear momentum flux, energy flux, shear stress, and pressure.
Missing mass, called dark matter, whose apparent effects are observed but whose nature is not known, appears to make up about 30% of the stuff in the universe and may act differently than ordinary matter. At galactic scales, dark matter effects and "dark energy" effects (which also appear in the mass-driven equations of gravity as the "cosmological constant" and in quantum theory in connection with the vacuum expectation value of the Higgs field, estimated to be about 246 GeV) totally overwhelm the Newtonian gravitational simplification of general relativity that we expect to see in most circumstances.
The key parameter of loop quantum gravity is deduced primarily from the physics of supermassive black holes. While we normally experience gravity as the simplest of fundamental forces, behaving in its Newtonian form GMm/r^2 in almost all circumstances we encounter, the experimentally well established General Relativity expressions of gravity are more complex than those of any of the other fundamental forces of physics.
21 June 2010
The Case For Supersymmetry
Luboš Motl is a character in the science blogosphere. He is strident, utterly self-confident, and not one to suffer fools gladly. His political stances and Internet manner can make a stir even when he is not talking about science. But today, he is (mostly) talking about science.
He is an ardent supporter of string theory and supersymmetry and makes a particularly bold show of support in the wake of the possibility that new physics including a scenario with five Higgs bosons could have experimental support as he explains in a post today.
A key point regarding his support of supersymmetry is that it goes hand in hand with string theory. If string theory is right, supersymmetry is the most plausible version of string theory that could fit the real world data.
He has acknowledged that "Fermilab's D0 Collaboration claimed the evidence for a new source of CP-violation in their comparison of muon-muon and antimuon-antimuon final states . . . [is] a huge claim." And, he acknowledges that it may not end up being true. Indeed, he's proud that physicists have such high standards that a finding that is statistically 95% likely to be true doesn't mean much, unlike other disciplines with lower standards. Likewise he notes that supersymmetry "may or may not be seen by the LHC." This is all extremely ordinary and boring. No reputable physicist on the planet is saying anything different.
But, the frame into which Motl puts the finding is remarkable, not so much for its content (many physicists have invested immense amounts of time on the same theories, essentially betting their careers on them) as for its confident style.
He first hits the basics. He explains how a single Higgs boson would fill out the Standard Model by giving particles masses, and how a supersymmetric model would require five Higgs bosons.
Essentially, the mathematical structure of the Standard Model would need four Higgs particles, except for the fact that the three particles that give rise to the weak nuclear force serve the same role, leaving the Standard Model just one particle short of what it needs for all of its mathematics to work. The Higgs is also needed in the Standard Model to give quarks mass.
He then explains why a five Higgs boson model is necessary for a supersymmetric theory: to make the more elaborate mathematics behind a theory with far more particles work, you need two Higgs doublets (i.e. eight Higgs degrees of freedom), less the three weak force particles that serve the same role in the model.
Thus, the Standard Model with one Higgs boson, and a supersymmetric extension of it with five, both make the math work and produce a theoretical structure that could explain what we observe.
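The bookkeeping behind that counting can be written out in a few lines. This is a minimal sketch of the standard degree-of-freedom arithmetic, not anything specific to Motl's post.

# Counting physical Higgs bosons by degrees of freedom.
# Each complex Higgs doublet contributes 4 real degrees of freedom, and in
# either model 3 of them are "eaten" to give mass to the W+, W- and Z.
EATEN_BY_W_AND_Z = 3

def physical_higgs_count(num_doublets):
    degrees_of_freedom = 4 * num_doublets
    return degrees_of_freedom - EATEN_BY_W_AND_Z

print("Standard Model (1 doublet):       ", physical_higgs_count(1))  # 1 Higgs boson
print("Supersymmetric model (2 doublets):", physical_higgs_count(2))  # 5: h, H, A, H+, H-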
Supersymmetry is a highly constrained property of the Universe that . . . predicts some new phenomena but the tightly organized system controlling all the new particles and interactions actually makes supersymmetric extensions the most conservative models of new physics that you may add to the Standard Model.
That's not enough to feel confident that the LHC should observe the supersymmetry.
But if you're competent, if you understand how supersymmetry (at some scale) follows from (i.e. is predicted by) all realistic vacua of string theory, if you appreciate how it helps to produce the dark matter candidate particles and to preserve the gauge coupling unification, and how it helps to solve the hierarchy problem (the puzzle why the Higgs doesn't want to become as heavy as the Planck scale), you may start to understand why it seems more likely than not that the LHC should actually observe SUSY.
In other words, supersymmetry flows from string theory, could explain dark matter, and could motivate the properties of the basic laws of nature that the Standard Model doesn't even try to explain.
This is particularly bold because the LHC may discover nothing but one Higgs boson and one graviton, and there is so far only the faintest hint outside of the mathematics that it will find anything more. Further, the make or break moment when some of the predicted particles will either be discovered or ruled out is just a few years away.
If the LHC doesn't find anything new (and even if it finds only one Higgs particle) then supersymmetry, and by implication string theory, becomes much less plausible as a theory. The showdown is approaching and Motl has accurately shown that we may soon have some strong pointers towards whether the main thrust of theoretical physics for the last few decades is right or is likely to be wrong.
Word of the Day: "Flowersnake"
One of the boasts of Anglophiles is that the English language has reduced more concepts to single words than any other language.
An exception to that claim is the Korean expression that translates literally as "flower snake" (꽃뱀), possibly from the 1938 poetry collection and poem of that name by Midang So Chong-Ju, one of Korea's leading poets (though it may be a pre-existing expression that the poet simply used most famously). It doesn't have a simple English equivalent with the same meaning, despite expressing a concept that is not particular to Korean or Asian culture. The Asian snake of that name, Elaphe moellendorffi, is a large, colorful rat snake (and while rather aggressive, it is not actually venomous).
This source defines a flowersnake as a woman who tempts a man who looks rich into adultery and then asks for money in exchange for keeping the secret, in a country where adultery remains a crime and divorce was considered a major blow to a woman's reputation, and dates the term to cabarets frequented by married women during a modern boom for Korean construction companies in the Middle East (leaving men away from home for prolonged periods). The male equivalent is a "swallow."
It is seriously insulting to describe a woman as a flowersnake (the insult has more bite and a more specific meaning than "whore" or "temptress," for example). It is sometimes loosely translated as "playgirl," but is more accurately rendered "venomous seductress." It refers to the kind of woman who would, for example, put a man in a situation that appears compromising and then blackmail him with it. The sense is almost psychopathic. The Sarah Michelle Gellar character in the film Cruel Intentions (1999), for example, might be the kind of person who would become a flowersnake.
I came across it watching "Country Princess" (2003) in translation (Episode 4 features an accusation that a woman is a flowersnake, leading to a run-in with the police in Seoul), and looked around the internet for some corroboration regarding its meaning.
17 June 2010
Autoimmune Diseases Linked Genetically
A new study has found that defects in a gene encoding an enzyme called sialic acid acetylesterase, or SIAE, which regulates the activity of the immune system's antibody-producing B cells, are linked to 2 to 3 percent of autoimmune disorder cases.
What is remarkable is that so many different autoimmune disorders are involved, including type 1 diabetes, multiple sclerosis, lupus, Crohn disease and rheumatoid arthritis.
The implication is that it may be possible to develop a generalized autoimmune disease drug. The finding also supports the general idea that autoimmune diseases have similar causes.
Ignore The Last Two Years Of The NLRB
This morning’s [U.S. Supreme Court] decision in New Process Steel v. National Labor Relations Board, No. 08-1457 . . . held that for a two-year period that ended only recently, the Board had been acting without statutory authority in adjudicating nearly six hundred cases because vacancies had left the Board without a quorum.
From SCOTUS blog.
The Court summed up its ruling as follows:
[The law] as it currently exists, does not authorize the Board to create a tail that would not only wag the dog, but would continue to wag after the dog died.
The NLRB is the national five-person commission that issues regulations governing private sector union-management law, supervises union recognition elections, and adjudicates union-management disputes as a quasi-judicial body.
It went almost two years with three vacancies on the board, in part due to a pitched Senate fight over the nominations driven by Republican concerns that the nominees would adopt the substance of the Employee Free Choice Act via regulations. The standoff was temporarily ended with recess appointments from President Obama in March.
A lack of subject-matter jurisdiction (i.e. authority to make a judicial ruling) normally invalidates a judicial decision, even if no objection is made at the time, so the ruling appears to invalidate every judgment in every private sector union-management dispute in the United States for the last two years. This may also invalidate two years of labor law precedents from those cases.
About three-quarters of the NLRB cases were unfair labor practices disputes and the rest were representation cases (e.g. determinations of whether a union should be recognized).
Presumably, the ruling will not reach cases that were settled in the process prior to an NLRB decision, and it isn't clear what the status quo will be in cases that were decided but not appealed because the parties accepted the results. This is important, because most NLRB cases are resolved without going to the Board itself. According to the NLRB:
Following an investigation, approximately 65 percent of all unfair labor practice charges are dismissed or voluntarily withdrawn for lack of merit. Of the remaining charges, every effort is made to resolve the case through an appropriate settlement. We have been successful in achieving settlements in such cases 86 percent of the time.
No-merit determinations are made by a regional director and may be appealed to the General Counsel's Office of Appeals in Washington, DC. Unfair labor practice cases with merit are decided by an NLRB Administrative Law Judge and only reach the Board if they are appealed. There is a good argument that the status quo left in place when a case has been appealed to the full NLRB Board is the Administrative Law Judge's ruling, rather than the status quo before the NLRB took any action.
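To put the NLRB's own percentages together, a quick back-of-the-envelope calculation (the 1,000-charge starting figure is purely hypothetical) shows how few charges ever reach a litigated decision, let alone the Board.

# Rough funnel for unfair labor practice charges, using the percentages the
# NLRB quotes above. The starting number of charges is hypothetical.
charges_filed = 1000
dismissed_or_withdrawn = charges_filed * 0.65          # about 65% lack merit
with_merit = charges_filed - dismissed_or_withdrawn    # 350
settled = with_merit * 0.86                            # about 86% of merit cases settle
litigated_before_alj = with_merit - settled            # roughly 49

print(f"Dismissed or withdrawn:  {dismissed_or_withdrawn:.0f}")
print(f"Settled:                 {settled:.0f}")
print(f"Litigated before an ALJ: {litigated_before_alj:.0f} (only a subset are appealed to the Board)")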
Even if there is a legal right to ignore a decision, the parties may choose not to avail themselves of that right.
It is also unclear whether appellate court precedents reviewing NLRB decisions made without jurisdiction will themselves lose force as precedents. If the appellate court precedents retain legal force, and the NLRB decisions simply affirmed administrative law judge rulings, and the appellate court determined that the ALJ rulings were correct (or erroneous), then the parties may choose not to relitigate cases whose outcomes are foregone conclusions.
When, if ever, does a deadline to appeal a decision made in the interim period start to run, now that the board is properly constituted?
Were deadlines to file cases with the NLRB equitably tolled while there was no NLRB with jurisdiction in existence? Normally, the statute of limitations to file an unfair labor practices case is just six months.
An affirmation of old case resolutions by the NLRB now that it has jurisdiction might, at least, start the clock ticking again, but only if the NLRB is legally allowed to rely on the record of the proceedings created when the NLRB lacked jurisdiction. Presumably, Administrative Law Judge hearing records would not be impaired by the lack of a Board quorum.
The Chair of the NLRB is disappointed and promises that board members will "do our best to rectify the situation."
The end result may not be what GOP opponents to the nominations intended.
[T]he term of the board’s lone Republican member, Peter Carey Schaumber, is set to expire in August. So, by summer’s end, the NLRB will be controlled by an all-Democratic majority with no Republican to file dissents or voice objections during debates on board action.
And things could grow still worse for the corporate world. By fall, Obama will have an opportunity to replace the board’s general counsel, currently a George W. Bush appointee.
From here.
Republicans are also unhappy about changes to the Department of Labor website that make it easier to get the full history of a company's safety violations and infractions, which helps those suing companies to show a pattern and practice of safety violations.
I'd be very curious to see what my more strongly labor law oriented peers in the blogosphere think about this result. Administratively, it is obviously a mess, but without knowing how well the two remaining board members did their jobs it is hard to know what to think about the substance of their rulings in the past two years.
One of the two people on the Board during the lack of quorum period was a Democrat and the other was a Republican. The rump NLRB apparently decided only easy cases:
"The only cases they are getting out are the pure vanilla cases, where it's abundantly clear the case should go one way," said former board Chairman Robert Battista, a Bush appointee who now is an attorney in private practice.
Presumably, almost all of the plain vanilla cases affirmed ALJ rulings, so the status quo may be left unchanged in these cases.
More than fifty contentious cases upon which the two Board members could not agree were left in limbo, and all regulatory action stopped, during the interim period.
Hello Algeria!
The most recent visitor to this blog as I write this post was in Algeria. Greetings from Denver. I can't find a translation into Arabic, so I'll have to settle for French.
La salutation de Denver, où nos montagnes sont plus grandes et nos dunes de sable sont plus petites qu'elles sont en Algérie. Ayez un beau jour. (Greetings from Denver, where our mountains are bigger and our sand dunes are smaller than they are in Algeria. Have a nice day.)
16 June 2010
Washington Tragedy Revisited
A little more than a year ago I wrote about the unfortunate case of Carissa Marie Daniels, a teen mother in Washington State who was sentenced to 195 months in prison in the wake of the following events:
When petitioner Carissa Marie Daniels was seventeen years old, she gave birth to a son. At the time, she was a high school student in Washington with no criminal record. She lived with her twenty-two-year-old boyfriend, Clarence Weatherspoon, who was not the father of the child. Weatherspoon did not have a job, so he often stayed at home to watch the baby while petitioner went to school or work.
Hardly a week after birth, the baby began to have health problems. Petitioner promptly took him to the emergency room. The doctor, however, found nothing wrong with the baby. Over the following eight weeks, as her baby continued to ail, petitioner took him to his regular pediatrician or the emergency room on seven additional occasions seeking treatment. But he did not get better. Nine weeks after the baby’s birth, he died. On the morning of his death, petitioner had left him in the care of her boyfriend. After receiving a call from her boyfriend that afternoon saying that her baby looked ill, petitioner came home, found her child limp, and called 911. When paramedics arrived, they determined that the child was dead.
As is common when a baby dies unexpectedly, an autopsy was performed. The autopsy suggested that shaken baby syndrome or blunt head trauma could have caused the child’s death. Those inspecting the baby could not be sure of the cause of death, though, because it typically is “really hard to tell” whether a baby’s ill health is caused by traumas or something congenital, or even “the flu.”
The case was appealed to the U.S. Supreme Court, on procedural grounds. The ultimate resolution of the case came this week:
Superior Court Judge Vicki Hogan on Thursday sentenced Daniels to 12 years, one month in prison at the recommendation of deputy prosecutor John Neeb and defense attorney Clayton Dickinson.
Hogan also gave Daniels credit for the nearly eight years she’s already served.
Dickinson told Hogan that his client was 17 at the time of Damon’s death and had been suffering from extreme depression and stress brought on by being a teenage mother with financial troubles.
The result is still the wrong one, but it is worth following stories through to see how they end.
Terrorism in Glendale, Colorado
The Animal Liberation Front has claimed responsibility over the Internet for the April 30 arson of the Sheepskin Factory in Glendale, Colorado (less than a block from my office on Colorado Boulevard).
The website for the activists' magazine, Bite Back, posted a comment from "ALF Lone Wolf," who also claimed responsibility for a May 31 attack on a leather store in Salt Lake City.
"The arson at the Sheepskin Factory in Denver was done in defense and retaliation for all the innocent animals that have died cruelly at the hands of human oppressors," the post stated. "Be warned that making a living from the use and abuse of animals will not be tolerated. Also be warned that leather is every bit as evil as fur. As demonstrated in my recent arson against the Leather Factory in Salt Lake City. Go vegan!"
A security camera captured a late night arsonist at the Sheepskin Factory, but the hoodie that the person was wearing and the distance from the camera made the person basically unidentifiable.
Given the track record of the Animal Liberation Front, I'm inclined to believe it.
Life is stranger than fiction.
Five Higgs Bosons Found? Probably Not.
A May 23 pre-print explaining how a family of five Higgs bosons could produce a significantly larger than expected CP violation, publicized by a Fermilab-affiliated newsletter, has triggered a flood of articles on new physics. But this is unlikely to be the case.
There may be no significant deviation from the Standard Model after all.
The evidence that there is a big CP violation to explain, however, has faded as more data are collected, although I would be more guarded than Woit in dismissing the apparent irregularity as nothing. The D0 finding of a three standard deviation departure from the Standard Model expectation was notable because it appeared to be corroborated by a two standard deviation finding in the same direction by a separate group of experimenters. But, with more data, the apparent deviation in the second experiment is now consistent with the Standard Model. The anomaly in B meson decay seen by the first experiment is now contradicted by the second group's data rather than supported by it, so the anomaly in the D0 findings seems much less likely to be anything more than a statistical fluke or a slight tweak to the relevant physical constants.
Keep in mind that quantum mechanics gives statistical predictions about how many particles of what kind will end up where, rather than specific predictions of where a particle will go under certain circumstances. The Standard Model might predict, for example, that of the first one hundred trials that produce muons, on average 60 muons end up at detector one, 20 end up at detector two, and 20 end up at detector three.
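To make that concrete, here is a minimal sketch in Python, using made-up detector counts that are not from any real experiment, of how experimenters judge whether observed counts are consistent with a Standard-Model-style prediction: simulate the predicted random process many times and ask how often chance alone produces a discrepancy as large as the one observed.

```python
import numpy as np

# Hypothetical Standard-Model-style prediction: of 100 muons, on average
# 60 reach detector one, 20 reach detector two, and 20 reach detector three.
probs = np.array([0.6, 0.2, 0.2])
n_muons = 100
observed = np.array([48, 27, 25])   # made-up "anomalous looking" counts
expected = n_muons * probs

# Pearson chi-square statistic for the observed counts.
chi2_obs = np.sum((observed - expected) ** 2 / expected)

# Monte Carlo: how often does the predicted random process alone produce
# counts that look at least this discrepant?
rng = np.random.default_rng(0)
sims = rng.multinomial(n_muons, probs, size=200_000)
chi2_sim = np.sum((sims - expected) ** 2 / expected, axis=1)
p_value = np.mean(chi2_sim >= chi2_obs)

print(f"chi-square = {chi2_obs:.1f}, p-value ≈ {p_value:.3f}")
```

On these invented numbers the p-value comes out a bit under five percent, roughly a two standard deviation effect: enough to raise eyebrows, not enough to settle anything.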
Experimenters have to figure out what kind of random process is most likely to have generated the data they observed. In this case, the Standard Model prediction was based on several different observations of B meson decay, along with another constant whose value is much better known, and the physical constants were set at a point within the margin of error of all of the observation sets.
All of the question marks left in the Standard Model involve rare phenomena (in this case, the decay of short lived, rarely created B mesons), and the new experiments involve an especially rare subset of that data (B mesons which include, as one component, the fairly rare s quark), so very large numbers of particle collisions have to be done to get enough data to be statistically significant. Experimenters have to balance the desire to get more stable results by including as much old data as possible with the need to make sure that the experiments themselves are exactly consistent with each other.
One reason that a three standard deviation difference from the expected Standard Model prediction in the current run draws yawns from the big names in physics, particularly if it is not replicated, is that new experiments are cumulative with a large number of experiments that were done to create the model in the first place. What looks very statistically significant when viewed in isolation, may not look very notable when pooled with all the data ever collected in the past about the same thing (in this case, B meson decay properties).
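A toy illustration of that pooling effect, with invented numbers rather than real B meson data: combine a new measurement that sits three standard deviations from the Standard Model value with an older, more precise body of data that agrees with the model, using a standard inverse-variance weighted average.

```python
from math import sqrt

# Invented numbers for illustration only (not actual B meson results).
# Measurements of some asymmetry whose Standard Model value is taken to be 0.
old_value, old_sigma = 0.00, 0.02   # large body of earlier data, SM-consistent
new_value, new_sigma = 0.15, 0.05   # new result: 3 sigma away on its own

# Inverse-variance weighted combination of the two measurements.
w_old, w_new = 1 / old_sigma**2, 1 / new_sigma**2
combined = (w_old * old_value + w_new * new_value) / (w_old + w_new)
combined_sigma = 1 / sqrt(w_old + w_new)

print(f"new result alone: {new_value / new_sigma:.1f} sigma from the SM value")
print(f"pooled with old data: {combined / combined_sigma:.1f} sigma "
      f"({combined:.3f} ± {combined_sigma:.3f})")
```

On these assumed numbers, a headline-grabbing three sigma result shrinks to about 1.1 sigma once the earlier data are folded in.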
If there is new physics, a Higgs doublet is unlikely to be its cause
Even if there really is a significant deviation from the Standard Model, a new family of Higgs bosons would be far down the list of likely causes for the observed effect. A supersymmetric extension of the Standard Model is an exceedingly unparsimonious way to deal with the issue. It calls for all manner of new particles and interactions for which we have no evidence in order to fine tune the model.
The Standard Model already predicts a CP violation in B meson decay, which is what the new experiments have also observed, and the physical constants used to determine the Standard Model prediction (they are part of the CKM matrix) come from experimental estimates of two properties of B meson decay observed in prior experiments.
The logical place to start trying to reconcile the Standard Model with the new B meson decay data would be to refit the physical constants in the Standard Model that are based on B meson decay experiments to reflect the new data, rather than to hypothesize five new particles.
I'm not the only one to doubt that supersymmetry is the cause of anything odd going on in the B meson decay data. An article by Maxime Imbeault, Seungwon Baek, and David London, in the issue of Physics Letters B that came out earlier this month, explains that:
At present, there are discrepancies between the measurements of several observables in B→πK decays and the predictions of the Standard Model (the “B→πK puzzle”). Although the effect is not yet statistically significant—it is at the level of 3σ—it does hint at the presence of new physics. In this Letter, we explore whether supersymmetry (SUSY) can explain the B→πK puzzle. In particular, we consider the SUSY model of Grossman, Neubert and Kagan (GNK). We find that it is extremely unlikely that GNK explains the B→πK data. We also find a similar conclusion in many other models of SUSY. And there are serious criticisms of the two SUSY models that do reproduce the B→πK data. If the B→πK puzzle remains, it could pose a problem for SUSY models.
Physics publication moves fast. Six articles have already cited this one, which was first available in pre-print form back in April. One of those citing articles suggests that a tweak to how the interactions of quarks impact B meson decay is the key to explaining the experimental data. Calculating quark interactions directly from the theory is very difficult, so this is an obvious point where the theoretical Standard Model prediction, from which the deviation is measured, could be wrong.
Of course, there is nothing wrong with hypothesizing all sorts of wacky or unlikely theories in theoretical physics. Dozens, if not hundreds, of papers that do just that are posted every month, and people have been regularly publishing SUSY theories for several decades; a lack of evidence to contradict them has kept the flow coming. But, regular users of the pre-print database recognize how speculative they are and how easy it is to come up with one. The fact that reputable physicists come up with a theory and publish it does not mean that it is likely to be true.
It is widely known that some tweaks to the Standard Model are necessary anyway for a variety of reasons: the failure to make a definitive Higgs boson observation, the confirmation that neutrinos have mass, and the lack of a clear dark matter candidate, for example. So the physics vultures have been swarming in to update it. Professionally, putting a theory in play is a bit like putting a chip on a roulette table. Nobody knows what the final result will be, but you can't win a Nobel Prize if you don't have a theory in play, and if you are right, you look like a genius. But the less clear it is how things will turn out, and the closer one gets to finding out who wins, or at least who loses (which the Large Hadron Collider may do), the more pressure there is to put theories on the table.
15 June 2010
Neutrino v. Anti-Neutrino Discrepancies?
Fermilab investigators working on the MINOS experiment have found a 40% difference between neutrino and anti-neutrino oscillations, with a two standard deviation significance, in a result announced yesterday (in a press release).
There are reasons for skepticism. A difference would seem to imply that neutrinos and anti-neutrinos of the same flavor have different masses. Yet, every other known particle has precisely the same mass as its anti-particle. In short, it is probably a fluke. While a 5% chance of a statistical fluke wouldn't bother a social scientist, physicists expect about four standard deviations of certainty, particularly when a result is at odds with empirically well established prior theory.
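For readers who want the conversion behind that comparison, the usual two-sided Gaussian tail probabilities can be computed in a couple of lines. This is a minimal sketch; the five percent figure social scientists tolerate corresponds to roughly two standard deviations.

```python
from math import erfc, sqrt

# Two-sided probability of a Gaussian fluctuation at least n sigma in size.
def p_value(n_sigma: float) -> float:
    return erfc(n_sigma / sqrt(2))

for n in (2, 3, 4, 5):
    print(f"{n} sigma: p ≈ {p_value(n):.2e}")
# Roughly: 2 sigma ≈ 4.6e-02, 3 sigma ≈ 2.7e-03,
#          4 sigma ≈ 6.3e-05, 5 sigma ≈ 5.7e-07.
```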
But, since every anomalous result starts out being statistically insignificant, it is worth noting. If true, it will be one of those "who ordered that?" moments in the history of science.
Most plausibly it would involve some sort of CP violation. This would allow for a scenario in which neutrinos and anti-neutrinos have the same mass, but their oscillation frequencies are not the same because anti-matter would be disfavored relative to matter in some way.
Fish Oil Pills (i.e. Omega-3s) Are Useless
The story, and the story of how reporters covering the study came up with headlines that said exactly the opposite, can be found here.
The idea that fish oil improves the concentration of school kids was actually debunked by the study, although the fault lies mostly with deceptive language in the abstract, rather than with the reporters.
Arbitration Bad For Employees
A study of a complete data set of California employment arbitrations by a Cornell professor reveals that employees do worse in arbitration of employment disputes than they do in litigation, particularly vis-a-vis employers that arbitrate multiple employment cases:
The study analyzes 3,945 arbitration cases, of which 1,213 were decided by an award after a hearing, filed and reaching disposition between January 1, 2003 and December 31, 2007. This includes all the employment arbitration cases administered nationally by the AAA during this time period that derived from employer-promulgated arbitration procedures. Key findings include:
(1) the employee win rate amongst the cases was 21.4%, which is lower than employee win rates reported in employment litigation trials;
(2) in cases won by employees, the median award amount was $36,500 and the mean was $109,858, both of which are substantially lower than award amounts reported in employment litigation;
(3) mean time to disposition in arbitration was 284.4 days for cases that settled and 361.5 days for cases decided after a hearing, which is substantially shorter than times to disposition in litigation;
(4) mean arbitration fees were $6,340 per case overall, $11,070 for cases disposed of by an award following a hearing, and in 97 percent of these cases the employer paid 100 percent of the arbitration fees beyond a small filing fee, pursuant to AAA procedures;
(5) in 82.4 percent of the cases, the employees involved made less than $100,000 per year; and
(6) the mean amount claimed was $844,814 and 75 percent of all claims were greater than $36,000. . . .
The results provide strong evidence of a repeat employer effect in which employee win rates and award amounts are significantly lower where the employer is involved in multiple arbitration cases. . . . The results also indicate the existence of a significant repeat employer-arbitrator pairing effect in which employees on average have lower win rates and receive smaller damage awards where the same arbitrator is involved in more than one case with the same employer[.]
From here.
The Arbitration Fairness Act, pending in Congress, would prohibit pre-dispute arbitration agreements in non-union employment contracts.
14 June 2010
Structural v. Cyclical Unemployment
Some unemployment is a result of the fact that certain industries slow down in recessions and speed up in economic booms. This is called "cyclical unemployment." An example is air transportation.
At other times, unemployment is due to jobs disappearing in one industry, e.g., print journalism, while they are being created in another industry, e.g., health care. This is called "structural unemployment." Recessions with a lot of structural unemployment can be thought of by their other name, "corrections," because they involve the economy rapidly redesigning itself to fit its new needs. Structural unemployment can be a deeper social problem, because those who lose jobs and those who get jobs have different skill sets, even though structural unemployment can also change economies in ways that further long term economic growth, instead of simply representing the inherently bumpy nature of growth in capitalist economies.
Put another way, structural unemployment indicates a problem with the structure of the "real economy," while cyclical unemployment indicates a problem with an isolated key factor of production like investment capital or global oil prices.
For better or for worse, unemployment in the financial crisis has been more structural than cyclical, but not exceptionally so, according to researchers at the Federal Reserve Bank of Atlanta, based on the industry-specific mix of job losses and gains.
By their measure, the percentage of unemployment that was structural "in the 1974–75 recession and the recessions of the early 1980s . . . was around 50 percent. That share increased to 57 percent for the 1990–91 recession and rose sharply to 79 percent for the 2001 recession." Their replication of results from prior recessions (based on prior research) found the 2001 recession to be 81% structural. Unemployment in the current Financial Crisis has been 65% structural.
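The Atlanta Fed's exact methodology is more involved than I can reproduce here, but a rough sketch of the underlying idea, using invented industry-level job changes, is to compare the total churn across industries with the aggregate net swing; the churn that the aggregate swing cannot account for is a crude proxy for the structural share.

```python
# Invented industry-level employment changes (thousands of jobs), for
# illustration only; this is a crude proxy, not the Atlanta Fed's measure.
changes = {
    "construction": -900,
    "manufacturing": -1200,
    "finance": -400,
    "health care": +500,
    "education": +200,
    "energy": +100,
}

net = sum(changes.values())                    # aggregate (cyclical-looking) swing
gross = sum(abs(c) for c in changes.values())  # total churn across industries

# The churn not explained by the aggregate swing reflects reallocation between
# industries, a rough stand-in for the structural component.
structural_share = (gross - abs(net)) / gross
print(f"net: {net}k, gross: {gross}k, structural share ≈ {structural_share:.0%}")
```

With these made-up figures, about half of the churn is reallocation rather than a uniform downturn, which is the flavor of calculation behind the percentages quoted above.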
SATs and grades in math and physics
There is clearly something different about the physics and math GPA vs SAT distributions compared to all of the other majors we looked at . . . . In the other majors (history, sociology, etc.) it appears that hard work can compensate for low SAT score. But that is not the case in math and physics.
From here.
An SAT-M score below 600 almost insures a cumulative GPA of below 3.5 in math and physics. An SAT-M score above 670 almost insures a cumulative GPA of 3.5 or more in math and physics.
The investigators suggest that this supports the possibility that there are cognitive thresholds in physics and mathematics. But, their finding that "the probability of doing well in any particular quarter of introductory physics may be linear with SAT-M, but the probability of having a high cumulative GPA in physics or math is very non-linear in SAT-M," suggests a simpler alternative: physics and mathematics are cumulative, while most other college disciplines are less cumulative.
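A toy calculation (hypothetical numbers, not the study's data) shows why a cumulative major can turn a merely linear per-course relationship into the sharply non-linear cumulative GPA pattern the investigators report: even if the chance of doing well in any single course rises smoothly with SAT-M, the chance of doing well across an entire sequence of courses that each build on the last rises much more steeply.

```python
# Hypothetical illustration: per-course success probability rises linearly
# with SAT-M, but success across a cumulative sequence is the product.
def per_course_p(sat_m: int, lo: int = 500, hi: int = 800) -> float:
    """Assumed linear ramp from 0.30 at SAT-M 500 to 0.95 at SAT-M 800."""
    frac = min(max((sat_m - lo) / (hi - lo), 0.0), 1.0)
    return 0.30 + frac * (0.95 - 0.30)

N_COURSES = 8  # an assumed length for a cumulative physics/math sequence
for sat_m in (550, 600, 650, 700, 750):
    p = per_course_p(sat_m)
    print(f"SAT-M {sat_m}: per course {p:.2f}, whole sequence {p ** N_COURSES:.2f}")
```

The per-course column climbs steadily, while the whole-sequence column stays near zero until SAT-M gets fairly high, which has the qualitative shape of a threshold even though no threshold was built in.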
The study is based on five years of University of Oregon data. Other conclusions by the same researchers with the same data include:
1. SATs predict upper GPA with correlations in the 0.35 to 0.50 range.
2. Overachievers exist in most majors, with low SAT scores but very high GPAs. These overachievers are disproportionately female.
3. Underachievers exist in all majors, with high SAT scores but very low GPAs. These underachievers are disproportionately male.
The overachiever/underachiever disconnect is suggestive of a possible ADHD effect, as it is sex linked and involves many of the factors that figure into grades but are not captured in an SAT score.
11 June 2010
Homelessness Down In Denver
[A]fter five years of concerted effort under the leadership of the Denver mayor and Democratic gubernatorial hopeful [John Hickenlooper], Denver has seen a 60 percent decrease in chronic homelessness.
From here.
Black-White Wealth Gap Remains
In 1984, the median white family had a net worth (excluding home equity) of a little over $20,000. As of 2007, it was closer to $100,000. But, for the entire time period, the median black family has had a net worth of under $5,000, remaining essentially flat.
The non-home equity wealth of the highest income third of African-American families has actually fallen from 1984 levels to $18,000 in 2007. Whites in the middle third of the income distribution have seen their wealth increase to $74,000 in 2007, and the wealth of white families in the top third of the income distribution has increased even more, to about $240,000.
About 40% of African-American families have no net worth or negative net worth. Less than 10% of white families have no net worth or negative net worth (the 10th percentile net worth was $100 for white families).
The wealth gap by race is large even controlling for income.
Clear Creek Sheriff Arrests Hero
The Clear Creek Sheriff is deeply mistaken if he thinks that a jury in Idaho Springs will convict an experienced river guide of a crime for jumping into a river and saving the life of a thirteen year old girl who fell out of a raft he was guiding.
Ryan Daniel Snodgrass, a 28-year-old guide with Arkansas Valley Adventures rafting company, was charged with "obstructing government operations," said Clear Creek Sheriff Don Krueger.
"He was told not to go in the water, and he jumped in and swam over to the victim and jeopardized the rescue operation," said Krueger, noting that his office was deciding whether to file similar charges against another guide who was at the scene just downstream of Kermitts Roadhouse on U.S. 6.
Duke Bradford, owner of Arkansas Valley Adventures, said Snodgrass did the right thing by contacting the 13-year-old Texas girl immediately and not waiting for the county's search and rescue team to assemble ropes, rafts and rescuers. . . .
Snodgrass' raft flipped on the runoff-swelled Clear Creek around noon Thursday and the girl swam from the raft. Krueger said the girl was missing for 30 to 45 minutes while Snodgrass searched for her. He said she swam a half mile from the spot where the raft capsized.
Since it had been so long, Krueger said, it was no longer the rafting company's rescue.
"They should involve themselves up to a point. They lost contact. Whether they want to say they were trying to rescue their customer, when they had lost visual contact and had no idea where their customer has been for 30 to 45 minutes, then it becomes our issue."
From here.
The Sheriff may have thought that jumping in to rescue the girl was a bad idea at the time, but when everything ends well in a matter of life and death and no one had bad intentions, you let hindsight inform your earlier judgment rather than trying to puff up your authority.
Friday Physics
* Physicists are still valiantly trying to figure out a unified set of rules governing the universe. A recent example is this one:
"Unification of gravity, gauge fields, and Higgs bosons", A. Garrett Lisi, Lee Smolin, and Simone Speziale (April 28, 2010)
In the theory, "g" is a number related in a simple way to the the Yang-Mills coupling constant, which is the constant that governs the strength of interactions in the strong force. There is one problem. The predictions of the equations "are clearly far from observed values[.]"
The result is all too typical in its rough elegance but ultimate failure. But, nevertheless, it is worth noting that physicists are getting better and better at creating equations that have all of the mathematical features necessary to replicate all of the laws of nature in a quantum mechanical way (including dark energy), even if they are struggling to get some of the details right.
* The nuclear strong force that holds atomic nuclei together is in theory perfectly explained by quantum chromodynamics (QCD). But, the math is too hard to make predictions that are as accurate as the experimental results that we observe. Two particular problems in calculating with QCD: "Chiral symmetry breaking and confinement." Neuberger from Rutgers briefly sums up the state of QCD at a technical level.
Confinement refers to the fact that the the strong interaction forces go from being weaker than electro-magnetic forces at short distances to being confining at long distances. As a result of confinement, given time to act, quarks always come in twos or threes, bound by the strong force (the top quark, which is the heaviest of the quarks, decays into a lighter quark, almost always the bottom quark, before it has time to bond with other quarks).
In addition to making it hard to understand the strong force, our lack of understanding of them adds uncertainty to calculations at particle accelerators in experiments designed to discover new particles and figure out the nature of the time symmetry violations that we observe in certain weak force (i.e. beta decay) events. Understanding chirality in this context is important, because we need to know how much of the left handed particle, right handed particle asymmetries that we observe coming out of these giant atom smashers is due to the strong force, which we thing we fully understand in theory, and how much is coming from the weak force, where there are still some mysteries.
* The noose continues to narrow around the possible characteristics of the Higgs boson, which is the highest priority for researchers at our atom smashers to identify. It is the only fundamental particle in standard model of particle physics that has not yet been discovered, and it is critical, because it imparts mass to everything else.
Experiments conducted to date have been narrowing in on how much a standard model Higgs boson could weight (almost all of its other characteristics are already determined by the theory). If it exists, these experiments show that it is very likely that its mass is in the region 114-157 GeV. Experiements have largely ruled out lighter Higgs bosons and heavier ones if the standard model is correct. It should be possible to determine if there is a Higgs boson of this mass by about 2013 if experiments in the works proceed as planned.
If it isn't found in this range, than theories dependent upon a standard model Higgs particle or something similar need to be refurbished to find some other way to give particles mass.
* In the less flashy, but vaguely reassuring department, Dutch physicist Th. M. Nieuwenhuize is making the case from careful observation of dark matter profiles in one of the best understood galaxies that observed dark matter (which is inferred from astronomical observations that show that gravity according to general relativity does not accurately describe the motion of celestial objects if only visible matter exists), could simply be a bunch of fast moving neutrinos of a mass of about 1.5 eV which is close to the theoretical prediction from other sources. This is called a "hot dark matter" theory (because the particles are moving at fast, although not relativistic speeds), which its proponent recognizes is controversial:
A neutrino explanation of dark matter would also have the effect of throwing cold water on various extensions of the standard model that call for new stable particles, and on modified gravity regimes that would explain dark matter phenomena, because they would no longer be needed to explain physical phenomena.
"Unification of gravity, gauge fields, and Higgs bosons", A. Garrett Lisi, Lee Smolin, and Simone Speziale (April 28, 2010)
We consider a diffeomorphism invariant theory of a gauge field valued in a Lie algebra that breaks spontaneously to the direct sum of the spacetime Lorentz algebra, a Yang-Mills algebra, and their complement. Beginning with a fully gauge invariant action – an extension of the Plebanski action for general relativity – we recover the action for gravity, Yang-Mills, and Higgs fields. The low-energy coupling constants, obtained after symmetry breaking, are all functions of the single parameter present in the initial action and the vacuum expectation value of the Higgs. . . .
All physical constants – including Newton’s constant, the cosmological constant, the Yang-Mills coupling and Higgs parameters – derive solely from g and the Higgs vev [vacuum expectation value].
In the theory, "g" is a number related in a simple way to the the Yang-Mills coupling constant, which is the constant that governs the strength of interactions in the strong force. There is one problem. The predictions of the equations "are clearly far from observed values[.]"
The result is all too typical in its rough elegance but ultimate failure. But, nevertheless, it is worth noting that physicists are getting better and better at creating equations that have all of the mathematical features necessary to replicate all of the laws of nature in a quantum mechanical way (including dark energy), even if they are struggling to get some of the details right.
* The nuclear strong force that holds atomic nuclei together is in theory perfectly explained by quantum chromodynamics (QCD). But, the math is too hard to make predictions that are as accurate as the experimental results that we observe. Two particular problems in calculating with QCD: "Chiral symmetry breaking and confinement." Neuberger from Rutgers briefly sums up the state of QCD at a technical level.
Confinement refers to the fact that the strong interaction forces go from being weaker than electro-magnetic forces at short distances to being confining at long distances. As a result of confinement, given time to act, quarks always come in twos or threes, bound by the strong force (the top quark, which is the heaviest of the quarks, decays into a lighter quark, almost always the bottom quark, before it has time to bond with other quarks).
In addition to making it hard to understand the strong force, our lack of understanding of these effects adds uncertainty to calculations at particle accelerators in experiments designed to discover new particles and to figure out the nature of the time symmetry violations that we observe in certain weak force (i.e. beta decay) events. Understanding chirality in this context is important, because we need to know how much of the left handed particle, right handed particle asymmetries that we observe coming out of these giant atom smashers is due to the strong force, which we think we fully understand in theory, and how much is coming from the weak force, where there are still some mysteries.
* The noose continues to narrow around the possible characteristics of the Higgs boson, which is the highest priority for researchers at our atom smashers to identify. It is the only fundamental particle in the standard model of particle physics that has not yet been discovered, and it is critical, because it imparts mass to everything else.
Experiments conducted to date have been narrowing in on how much a standard model Higgs boson could weigh (almost all of its other characteristics are already determined by the theory). If it exists, these experiments show that it is very likely that its mass is in the region of 114-157 GeV. Experiments have largely ruled out lighter Higgs bosons, and heavier ones if the standard model is correct. It should be possible to determine whether there is a Higgs boson of this mass by about 2013 if experiments in the works proceed as planned.
If it isn't found in this range, then theories dependent upon a standard model Higgs particle or something similar will need to be refurbished to find some other way to give particles mass.
* In the less flashy, but vaguely reassuring department, Dutch physicist Th. M. Nieuwenhuizen is making the case, from careful observation of the dark matter profile of one of the best understood galaxies, that observed dark matter (which is inferred from astronomical observations showing that gravity according to general relativity does not accurately describe the motion of celestial objects if only visible matter exists) could simply be a bunch of fast moving neutrinos with a mass of about 1.5 eV, which is close to the theoretical prediction from other sources. This is called a "hot dark matter" theory (because the particles are moving at high, though not relativistic, speeds), which its proponent recognizes is controversial:
[These] predictions pose a firm confrontation to conclusions based on more intricate cosmological theories, such as the cold-dark-matter model with cosmological constant (ΛCDM model, concordance model). Indeed, our findings are in sharp contradiction with present cosmological understanding, where neutrinos are believed to be ruled out as major dark-matter source. Studies like WMAP5 arrive at bounds of the type mνe + mνμ + mντ ≲ 0.5 eV. They start from the CDM paradigm, or from a mixture of CDM and neutrinos, the reason for this being indirect, namely that without CDM the CMB peaks have found no explanation. But the CDM particle has not been detected, so other paradigms, such as neutrino dark matter, cannot be dismissed at forehand. The CDM assumption has already been questioned and it is also concluded that WIMP dark matter has an eV mass.
A neutrino explanation of dark matter would also have the effect of throwing cold water on various extensions of the standard model that call for new stable particles, and on modified gravity regimes that would explain dark matter phenomena, because they would no longer be needed to explain physical phenomena.
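As a rough back-of-the-envelope cross-check (my own arithmetic, not a calculation from the paper), the standard cosmology-textbook relation between the summed neutrino mass and the neutrino contribution to the critical density, Ω_ν h² ≈ Σm_ν / 93 eV, implies that three neutrino species at roughly 1.5 eV each would carry about a tenth of the critical density. The per-species mass is the figure discussed above; the number of species at that mass and the Hubble parameter are assumptions.

```python
# Back-of-the-envelope check using the textbook relation
#   Omega_nu * h^2 ≈ (sum of neutrino masses) / 93.14 eV.
# The 1.5 eV per-species figure comes from the fit discussed above; the
# number of species at that mass and the Hubble parameter are assumptions.
m_nu_each = 1.5     # eV
n_species = 3       # assume all three active species sit near this mass
h = 0.7             # dimensionless Hubble parameter (assumed)

omega_nu = (n_species * m_nu_each) / 93.14 / h**2
print(f"Omega_nu ≈ {omega_nu:.2f}")   # ≈ 0.10
```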