31 August 2009

Ancient Gypsy and Jewish History

Wikipedia has a nice article on the history of the Romani people, sometimes called the Gypsies (a name based on an incorrect belief that they were a people who came to Europe from Egypt) or Roma (some use the term "Roma" to refer only to one group within the Romani people; I use it in this post, perhaps incorrectly but in line with some mass media conventions on the subject, to refer to all Romani people). The article marshals linguistic, genetic and historical sources documenting the argument, which sits just on the cusp between history and pre-history, that this ethnic group migrated from India to what is now Turkey (Anatolia), stayed there for a prolonged period of perhaps hundreds of years, and then migrated to Southeastern Europe.

A group of people known as the Dom, now numbering about 2.2 million in Iran, migrated from South Asia in the 500s, and may represent an earlier wave of migrants than the Roma. Isolated reports of the "Atsingani" (a name sometimes attached to the Roma in the 1300s) appear in the Byzantine Empire from the 800s and earlier, but it isn't clear whether they refer to the same group of people (they may describe a separate Manichean religious sect from Persia, i.e. Iran, or the Dom people).

The primary departure from India toward the Byzantine Empire may have been a movement of defeated Northern Indian soldiers and their families in the wake of raids into India by the Ghaznavid Sultan Mahmud of Ghazni around 1000 A.D. Mahmud's empire was an Islamic one; it was followed by the empire of the Seljuk Turks, which gradually expanded west into Orthodox Christian Byzantine territory. Historical documents place the arrival of the Roma in Greece and Southeastern Europe in the 1300s, around the time that the Byzantine empire was collapsing and being conquered by the Islamic Ottoman Empire (a successor to the empire of Mahmud of Ghazni and of the Seljuk Turks that preceded it).

Linguistic evidence suggests that the Romani language is most closely related to a branch of Hindustani languages that did not arise until about 1000 A.D. Their language is a descendant of Sanskrit rather than Latin. Roma cultural practices and beliefs also suggest the South Asian roots that they themselves claim.

Genetic evidence also shows a strong link to South Asia. At least 47% of Roma men and about 30% of Roma women have genetic markers otherwise found only in India. There is also genetic evidence of "founder's effects" associated with a small population bottleneck between the current population and the source population, which is consistent with the overall historical picture. There are also many genetic subgroups within Roma populations in Europe, due either to "(i) a genetically substructured ancestral population, where the old social traditions of strict endogamy have been retained and subsequent splits of the comprising groups have enhanced the original genetic differences; (ii) a small homogeneous ancestral population spawning numerous subgroups where strong drift effects have resulted in substantial genetic divergence."

The precise origins of the Roma within India are unclear. The general language family into which the Romani language fits is found in Northern and Central India. The cluster of genetic markers that is most common among the Romani "is more prevalent in central India than it is in northern India." At least one genetic disease that is unusually common among the Romani, however, suggests a genetic link to the Jat people near Punjab, in Northwestern South Asia. The historical hypothesis that Sultan Mahmud's invasions led to the Roma migration would also point to a Northern Indian origin. One effort to synthesize the linguistic information suggests that the Roma's Indian ancestors may "have lived in the central Indian area, from where they emigrated to the north-west of India (about 250 BC) to reside there for a longer period of time. Experts still disagree on the point of time of the 'gypsies'' emigration from the northwest of India."

The founding population of Roma in Europe was small. According to one of the scientific journal articles cited by Wikipedia, the overall number of Roma in the Balkan provinces of the Ottoman Empire in the 15th century was estimated at only 17,000. There are now millions of Roma in Europe, where they are one of the continent's most ill-treated minority populations.

In the case of the Roma, this represents a full circle migration. Linguistic evidence puts the origins of the Indo-European languages in Anatolia (present day Turkey) among some of the earliest farmers in human history around 8,000-9,500 years ago, not long after the neolithic revolution. One branch of the language family produced the Indo-European languages of India (e.g. Sanskrit and its descendants), another produced the Indo-European languages of Europe (e.g. Latin and its descendants). The Roma represent a group from the easternmost wing of that language family (India) that migrated back into the area where the western wing of that language family took hold (Europe), through the area where the Indo-European languages probably originated (Turkey).

UPDATE July 1, 2024: The claim in the previous paragraph that the Indo-European languages originated in Anatolia has since been pretty definitively proven to be inaccurate. The Indo-European languages originated in the Pontic-Caspian Steppe, not in Anatolia, and not in the early Neolithic with the first farmers.

Some Ancient Jewish and Proto-Jewish History

In the case of the Jewish people, their historic language, Hebrew (which was revived from the status of a "dead language," used only by scholars and for religious purposes, to become the living language of Israel), is a Semitic language rather than an Indo-European one.

The Semitic languages have their roots in the Middle East, North Africa and the Horn of Africa. The most widely spoken languages in that family today are "Arabic (322 million native speakers, approx 422 million total speakers). . . followed by Amharic (27 million) [in North Central Ethiopia], Tigrinya (about 6.7 million) [mostly in Eritrea], and Hebrew (about 5 million)." The Maltese language spoken in Malta, off the coast of Italy, is also Semitic and is the only Semitic language written in the Latin alphabet.

The ancestors of Proto-Semitic speakers are now widely believed to have first arrived in the Middle East from Africa "around the late Neolithic," although an alternative theory puts the source of Proto-Semitic in the Middle East itself, largely because some Semitic languages in Africa have Sumerian loan words (and Sumerian clearly did originate in the Middle East). Semitic languages spread out from "the Arabian Peninsula by approximately the 4th millennium BC."

A (now dead) Semitic language, Akkadian, adopted the cuneiform script of Sumerian (the first known written language of humanity, used in southern Iraq and spoken from at least the 4th millennium BCE) and replaced Sumerian "as a spoken language somewhere around the turn of the 3rd and the 2nd millennium BCE (the exact dating being a matter of debate), but continued to be used as a sacred, ceremonial, literary and scientific language in Mesopotamia until the first century CE (AD)." Semitic cuneiform was later abandoned in favor of the Aramaic script. The Sumerian language is now dead and is not related to any other known language (i.e. it is a language isolate), except through some loan words in some Semitic languages. (Today, there are only six language isolates on the Eurasian continent, the best known being Basque and Korean.)

Other notable members of this language family are the (now dead) Phoenician (once spoken in the Eastern Mediterranean) and Aramaic, whose script (itself derived from the Phoenician alphabet) was adapted for Hebrew, Arabic and other Middle Eastern languages, including non-Semitic ones. "It was the day-to-day language of Israel in the Second Temple period (539 BCE – 70 CE), the original language of large sections of the biblical books of Daniel and Ezra, likely to have been the mother tongue of Jesus of Nazareth and is the main language of the Talmud." "Modern Aramaic is spoken today as a first language by many scattered, predominantly small, and largely isolated communities of differing Christian, Jewish and Muslim groups of the Middle East —most numerously by the Assyrians in the form of Assyrian Neo-Aramaic," numbering in all about 2.2 million people; it is considered an "endangered language." Aramaic is the closest living language relative of Hebrew. The two languages have a common origin.

The Roma and the Jews

The Roma and the Jews are among the only examples of mass migration of non-Roman ethnic groups into Europe from the East until the mostly post-colonial wave of immigration into Europe in the last few decades (the Jewish diaspora began about 1,900 to 2,000 years ago). One reason that their histories are notable is that they provide a concrete interface between the modern world and the part of ancient history that pre-dates what many college students are taught in Western Civilization courses.

History from late Greek civilization and the Roman Empire onward is quite well documented and widely known. The roughly 3,500 years before that era are much fuzzier, and I certainly don't know them that well. Like most people educated when I was, I also don't know much about the pre-colonial eras for much of the rest of the world. One of my long term interests these days is to get a sense of this era, and to better pin down what we know about our ancient history, from the pre-Greco-Roman era back to the Neolithic revolution and the pre-Neolithic migrations of the human race.

Our lack of knowledge isn't entirely an accident. Before the Sumerians, nothing was written down. Many of the modern world's other links to pre-Western, pre-Arabic civilization were deliberately wiped out in the late Roman Empire and the early Islamic empire. Also, it is surely no accident that the age of the world as measured by young Earth creationists closely coincides with the age of the world's written history.

Both the Roma and the Jews were subject to persecution on and off for most of their time in Europe, culminating for each group in the genocide of the Holocaust. Their experience over the last two thousand years can also help us trace questions about ancient history by providing empirical examples of answers to key questions faced by those trying to piece together the ancient past.

How much of a cultural and genetic impact can a small group of people that migrates into a larger established culture have on the larger culture? How isolated can a population of people remain while co-existing with a larger population?

In the case of the Roma, one can observe this quite directly. In the case of the Jewish people, the influences through the common origins of Christianity and rabbinic Judaism have to be parsed from the subsequent impacts on European culture.

Whither Men?

Will Wilkinson thinks that men should consider becoming metrosexuals if they want to respond successfully to the changing role of women in our society.

A Million Texans Want Out

At present, the Texas Nationalist Movement has a petition with 1 million signatures directly calling for a vote of secession.


From Think Progress.

The Civil War established the precedent that a U.S. state may not unilaterally leave the United States. But, as the Canadian Supreme Court recently opined when considering the case of Quebec, the international law norm is that a state or province may lawfully leave a country if suitable political approval for the separation is secured both from the seceding state or province and from the country it is part of.

If the Congress passed a law, approved by the President, allowing Texas to leave if it had the approval of its voters, and the voters in Texas approved a referendum calling for secession, the split would be legitimate.

Would this be so bad?

In 1861, secession would have meant a continuation of slavery in Texas, which was an agricultural economy with significant reliance on slave labor. Oil would not come to change everything for another four decades.

But, in 2010, would a Republic of Texas be so unconscionably worse than the status quo that a vote of the people of Texas on that issue should be disregarded?

It also wouldn't necessarily lead to the rest of the United States unraveling. The fact that Texas was an independent republic for nearly a decade, and was part of the Confederate States of America for another four years or so, has colored how Texans view their own political identity. Texas is an exceptional case. While other states were part of the CSA, had periods of independence (Hawaii, California), or were previously ruled by Spain (Florida, New Mexico, Arizona, California and parts of other states), France (Louisiana) or Russia (Alaska), none have held on to their prior political identity so proudly.

Ironically, this is a politically opportune moment to allow a vote on secession in Texas.

The departure of Texas from the United States would cripple the Republican party, which is already well on the way to becoming a Southern regional party. And, even if the vote was to remain in the United States, Southern conservatives would have their patriotic credentials and credibility tarnished for generations, harming the party in less restive states.

Democrats, in contrast, have everything to gain politically in the rest of the country from the departure of Texas, which is a Republican stronghold and a major barrier to national consensus on environmental issues, criminal justice issues, and more.

Also, if a bill to allow the vote were signed by a black President, it would give the crypto-racist supporters of the measure the cover they would need to win a popular vote, making the vote one of patriotic loyalty as much as anything else.

I don't think that Texas should leave the United States, although it would be a tremendously interesting development to watch, but I am not deeply opposed to the idea in principle either.

28 August 2009

Our Kids Will Be Hot

Our kids will be hot, and not just because they have such good looking parents.

Because if our emissions aren't reined in quick, and I mean darned quick, this is what the U.S. could be facing by 2050. Yes, Kansas and Oklahoma could fare worse than Colorado. But I'm telling you, I don't think many Coloradans will think too highly of 7.5°F higher (or more!) temperatures in the summer.


We now endure long weeks of 95 degree Fahrenheit weather most summers. When my children are my age, those weeks in Colorado may bring highs in the low triple digits.
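Using the figures above, the arithmetic behind "low triple digits" is simple:

$$ 95^\circ\mathrm{F} + 7.5^\circ\mathrm{F} \approx 102.5^\circ\mathrm{F} \;(\approx 39^\circ\mathrm{C}). $$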

Places like Florida and Louisiana will fare much better in terms of raw temperature changes, but will have to face another threat: land lost as sea levels rise, particularly in coastal wetlands and in cities like Miami and New Orleans. Dutch style dikes and Venetian style flood management may become the next trend of necessity there. The Great Plains could see another dust bowl. Glaciers will recede further, and the tourism-driving snow in Colorado's mountains is also a likely casualty.

Those engaged in long term landscape planning should look to what they do in places like Phoenix and Las Vegas. Cycads in, maple trees and willows out?

It isn't too late to make a difference that would mitigate these predictions. But we don't have a great track record of responding to slow moving disasters any sooner than we must, and some of the damage is already irreversible.

Interest Rates and Savings

A naive assumption that policy makers, reporters and opinion makers frequently rely upon as if it were true is that low interest rates discourage savings and encourage debt, while high interest rates encourage savings and discourage debt. This is simple microeconomic thinking in practice.

The trouble is that reality isn't always so convenient. Savings rates in the U.S. are the highest that they have been in a decade, despite the fact that a savings account at my local bank produces a meager 0.3% per annum rate of return, before taxes and without any adjustment for inflation. Aggregate consumer debt levels, meanwhile, are falling, something that we haven't seen for a long time, despite the fact that interest rates on car loans, mortgages and credit cards are below historical averages and not too far from record lows.

The lack of an interest rate spike is particularly odd given that default rates on consumer loans are at record highs. Financial Economics 101 tells us that interest rates can be decomposed into a "risk free rate of return" in the credit markets (operationally determined by the market for Treasuries) and a "risk premium." Yet, despite the fact that consumer lending is empirically riskier than it has ever been, the risk premium hasn't shot up by anything close to the levels that current high default rates would imply lenders need to charge to make money.
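To make the mismatch concrete, here is a back-of-the-envelope version of that decomposition, with purely hypothetical numbers (neither figure is taken from any actual data). A lender that expects a fraction d of its loans to default, with nothing recovered, must charge a rate r satisfying

$$ (1 - d)(1 + r) \ge 1 + r_f \quad\Longrightarrow\quad r \ge \frac{r_f + d}{1 - d}, $$

so with a 3% risk free rate and a 10% default rate the break-even rate is roughly 14%. The point is only that, all else equal, sharply higher default rates should translate into sharply higher risk premiums, which is not what the consumer credit markets are showing.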

Part of the issue is that lenders have responded to the financial crisis by tightening their underwriting standards (i.e. how easy it is to get a loan in the first place) rather than by increasing interest rates.

Like most weird phenomena in real world economics, there is also an element of "price discrimination," i.e. charging different people different prices for the same thing. Banks have targeted their riskier, credit-dependent customers for higher effective interest rates in a way that doesn't have to be disclosed as a higher interest rate when people apply for credit cards: by piling on fees for the kind of conduct that high risk customers tend to engage in (like exceeding credit limits and making late payments), and by quickly pulling the trigger on default interest rates that are much higher than the interest rates paid by those who make every payment as agreed.

The last big explanation of what we are seeing is the often useful lifetime income hypothesis (more commonly called the permanent income or life-cycle hypothesis). Empirically, a lot of the consumer debt binge we saw over the last decade was a function of people spending increases in the value of their homes and investments, even though they hadn't sold those assets. They felt wealthier and borrowed (implicitly or actually) against their investment gains in these assets.

When the asset price bubble (both in real estate and in financial assets) collapsed, people who used to feel that their asset gains had afforded them the ability to spend and relieved them of the need to save found themselves with much more debt relative to their assets (a huge percentage of homeowners with mortgages are now under water or very nearly so) and a dire need to save money for their expected future wants. Their expected lifetime income has fallen, and they are readjusting their lives accordingly.

So, is this some shockingly new phenomenon? Not really. Baby Doe Tabor, a Colorado history heroine, experienced almost the same thing back when the United States currency was partially tied to the price of silver. It wasn't called the lifetime income hypothesis back then, but this not particularly new economic theory does explain how people acted, even then.

Instinctual Human Fear

There are all sorts of dangers out there in the world. Most of them, we have to learn about from experience, either by being taught about them or "the hard way." We aren't born knowing that dark blue liquids are usually poisons, that gasoline is flammable, or that requests to deliver money to Nigeria are usually scams.

A few of our fears, however, are hardwired into our brains before we are even born.

Girls as young as eleven months old associate snakes and spiders with fear (boys don't have that instinctual association at that age).

[T]hese findings support the idea that people have evolved a brain mechanism that primes them for learning to pair fear expressions with threats that would have repeatedly confronted prehistoric populations. . . . In [the study author's] view, bites from poisonous snakes and spiders presented a special danger to prehistoric women, whose children would have died or incurred great hardship without their mothers.

Surveys of adults and children find that 5.5 percent report snake phobias and 3.5 percent report spider phobias. These particular phobias affect roughly four times more women than men.


The study (a small one, with just twenty subjects) examined how infants respond to pictures of different facial expressions in the presence of spiders and snakes.

"Males also develop fears and phobias about snakes and spiders" but the fears aren't manifest as early on and the phobias are less common.

Others reviewing the experimental results offer an alternative explanation.

Another researcher notes that the gender differences seen in the study could also make sense, even if an inborn fear of spiders and snakes were identical in boys and girls, "if 11-month-old girls generally recognize facial expressions better than their male peers." The infant study measured fear associations with spiders and snakes using pictures of faces with different expressions on them. This researcher found in her own work that 5-year-old girls recognize both threatening and non-threatening facial expressions more quickly than boys.

One of her recent studies also notes that some of the fear response may be to "snakes’ slithering motion" in particular. The study showed that "7- to 18-month-olds of both sexes look longer at movies of snakes while listening to frightened, versus happy, voices. But the same infants did not look longer at still images of snakes paired with a frightened voice, compared with snake images accompanied by a happy voice." In her view:

From an evolutionary perspective, it makes the most sense for both boys and girls to learn quickly to fear threatening stimuli such as snakes and spiders, but gender differences in fear learning warrant further investigation.


Either way, the most notable point to me is that the study confirms the widely known fact that we are born with, and have access to at a very early age, a small set of instinctual fears that were evolutionarily relevant to our ancestors. While most of the knowledge we acquire during life is a matter of nurture, a small part of our memory is ancestral.

In everyday life these days, some of those instinctual fears don't have a lot of evolutionary value. Dangerous snakes and spiders aren't a major threat to modern man. I suspect that some of the things we instinctually fear don't even exist anymore, in the wake of the massive megafauna extinctions coincident with the rise of human and human-ancestor populations on Earth. Instead, their main impacts these days seem to be phobias and the perennial popularity of certain horror movie cliches.

But, it is worth recalling that we have our instinct driven "animal" side, one which we share with all mammals. I've noted before that a disproportionate share of mental health issues seem to be associated with the part of our brain that all mammals share.

Even more intriguingly, much of what we commonly think of as especially "human" about us is also associated with this part of our brain, rather than the part that is actually unique to humans. For example, many people think of a capacity to love family members and experience fear, traits shared by most mammal species, as things that make us uniquely human. But few people would consider abilities unique to our species, like playing chess and translating never-before-seen text into another language, to be quintessentially human. These days, that makes a certain amount of sense.

While in the "Age of Reason" the main question we faced in asking what it means to be human is the distinction between humans and animals, in the modern era, we ask the question mostly to distinguish between humans and computers.

The story of our struggle to understand what it means to be human these days is told more in classics like Arthur C. Clarke's 2001: A Space Odyssey, the 1983 movie War Games and Isaac Asimov's I, Robot than it is by stories like Robert Louis Stevenson's Strange Case of Dr. Jekyll and Mr. Hyde that were relevant to an earlier era.

Truth = Financial Crisis 2.0?

A federal court has ordered the Federal Reserve to reveal the terms of its $1.5 trillion in loans to big banks during the financial crisis, in response to a newspaper's Freedom of Information Act request.

A declaration filed by the banks as part of an end run around that order claims that revealing this information would produce a second financial crisis.

I simply do not buy it. These institutions are overwhelmingly publicly held and probably have an independent obligation to disclose the transactions to their shareholders anyway. The information requested is purely factual in nature. As such, any market reaction to it should be better grounded than the speculation currently in its place. If the information shows that the banks really are in much worse shape than market participants currently believe, we will pay for it sooner or later anyway.

When you borrow money from the nation's central bank, which is acting pursuant to its emergency powers to carry out a President's policy response to a financial crisis, you don't have a reasonable expectation of privacy. Private contracts are not normally immune to subpoena power, and contracts made with the government are not normally immune to FOIA requests. Even litigation settlements are routinely a matter which the public may insist be disclosed. In many markets (e.g. real estate), public disclosure of transactions is required as a matter of law, and post-financial crisis legislation proposes to expand the scope of transactions that must be publicly disclosed.

The point of emergency lending is to postpone a collapse during a widespread panic, until reason can prevail in realizing that the world is not coming to an end. It is not to permanently bury the truth.

27 August 2009

A New Fundamental Particle?

The standard model of particle physics is based on a core set of fundamental particles. But there are hints that we may have found a new one that no one was really looking for, in experiments involving high speed collisions that produce short-lived exotic particles which have already exhibited strange properties when they decay.

Standard Model Particle Background

Quarks, which are best known for making protons and neutrons, come in two basic types; each basic type in turn comes in a "standard weight" version and two heavier, short-lived versions. Electrons and neutrinos likewise exist in a standard version and two heavier, short-lived versions. Each of these particles comes in particle and anti-particle versions. These two basic types of quarks, together with electrons and neutrinos, make up what we think of at a non-quantum level as "matter."

There are also three known kinds of "force carrier" particles, one for each of the fundamental forces other than gravity. Photons mediate electromagnetism. Gluons mediate the "strong force" that holds atomic nuclei together. W particles (and their kin, the Z particle) mediate the weak nuclear force that causes the radioactive decay of large atoms. Gluons are never observed free-standing and must be inferred. W and Z particles are heavy and short-lived. Unlike "matter" particles, force carrier particles have not been observed to come in heavier versions of themselves.

There are also two hypothesized but as yet undiscovered particles associated with the standard model: the graviton, which would mediate gravity, and the Higgs boson, which would explain why particles have inertia (mass).

All currently observed fundamental particles have a spin of one-half (the particles in the same category as quarks, neutrinos and electrons) or one (photons, gluons, Ws and Zs). The proposed Higgs particle would have a spin of zero; the proposed graviton would have a spin of two.

All currently known fundamental particles have an electric charge, measured relative to the magnitude of an electron's charge, of zero (photons, gluons, neutrinos, Z particles, their anti-particles, and gravitons), plus or minus one-third (down, strange and bottom quarks and their anti-particles), plus or minus two-thirds (up, charm and top quarks and their anti-particles), or plus or minus one (electron variants, W particles and their anti-particles). The proposed Higgs particle of the standard model is electrically neutral, although some extensions of the standard model also propose charged versions.
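For readers who like to see it laid out in one place, here is a minimal sketch, in Python, of the spin and charge assignments just described. It is purely illustrative (it collapses the three generations into one entry each and states charges in units of the proton's charge, so the electron is -1); nothing in it comes from the experiments discussed below.

```python
# Illustrative summary of the spin and electric charge assignments described above.
# Charges are in units of the proton's charge (so the electron is -1);
# anti-particles carry the opposite charge.

FUNDAMENTAL_PARTICLES = {
    # "Matter" particles: spin 1/2, three generations each.
    "up-type quarks (up, charm, top)":          {"spin": 1 / 2, "charge": +2 / 3},
    "down-type quarks (down, strange, bottom)": {"spin": 1 / 2, "charge": -1 / 3},
    "electron and its heavier kin (muon, tau)": {"spin": 1 / 2, "charge": -1.0},
    "neutrinos (three kinds)":                  {"spin": 1 / 2, "charge": 0.0},
    # Force carriers: spin 1.
    "photon (electromagnetism)":                {"spin": 1.0, "charge": 0.0},
    "gluon (strong force)":                     {"spin": 1.0, "charge": 0.0},
    "W boson (weak force)":                     {"spin": 1.0, "charge": +1.0},
    "Z boson (weak force)":                     {"spin": 1.0, "charge": 0.0},
    # Hypothesized (as of this 2009 post), not yet observed.
    "Higgs boson (hypothetical)":               {"spin": 0.0, "charge": 0.0},
    "graviton (hypothetical)":                  {"spin": 2.0, "charge": 0.0},
}

if __name__ == "__main__":
    for name, props in FUNDAMENTAL_PARTICLES.items():
        print(f"{name}: spin {props['spin']}, charge {props['charge']:+.2f}")
```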

There are many types of compound particles as well. Two-quark particles (more precisely, a quark paired with an anti-quark) are called mesons. Most matter is made up of two of the many possible types of three-quark particles: protons and neutrons. All compound particles other than protons and neutrons are highly unstable and quickly decay into something else.

Standard Model Symmetry Background

One of the important rules of quantum physics is that certain symmetries are observed. "Parity" is a property of particles that is conserved in all situations except weak force interactions. The first parity violations were discovered in 1956.

"Charge parity" symmetry violations, which holds that if parity is not conserved in a set of particles, that electrical charge is reversed in that set of particles, moreover, is true in all but two known kinds of weak force interactions, neutral kaon decay (a type of meson), and B meson decay. The first CP symmetry violations were discovered in 1964 in netural kaons. CP violations in B mesons were discovered in the 1990s and definively confirmed in 1999.

All 140 or so of the two quark combinations (i.e., mesons) are unstable, with mean lifetimes on the order of a hundred millionth of a second or less. W and Z particles are likewise very short-lived, with a mean life of about 3 × 10^−25 seconds. CP violations have been observed in only a handful of these 140 or so mesons (there are four kinds of neutral kaons and a similarly small number of kinds of B mesons).

Current theories predict that CP violations would exist in strong nuclear force interactions, but they are not observed. A hypothetical particle called an axion could explain why this is the case and is also a candidate to explain the dark matter that is indirectly observed in cosmological observations.

These two rare types of CP symmetry violating, weak force mediated decays still preserve "charge parity time reversal" symmetry (CPT). The combination of these three properties of particles is preserved in all known circumstances.

The Experimental Results

New particle accelerator experiments have produced data that suggest that a new particle may have to be added to the roster.

The evidence involves 230 instances in which a rare type of weak force mediated decay of a B meson was observed. The type of B meson decay studied is already of great interest to physicists because it is one of the few situations in which charge parity symmetry is broken.

The data were out of line with what would have been predicted from precisely understood quantum mechanical laws and all of the known particles expected in this type of B meson decay.

The B meson has a mass of about 5.279 giga-electron volts (the electron volt is an energy unit that, via Einstein's E=mc^2 relationship, is commonly used as a mass unit for fundamental particles). The experimental results point to a particle that would be somewhat heavier than a B meson. Particles that heavy are also always very short-lived, lasting only a tiny fraction of a second before decaying.
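To make the unit concrete, a rough conversion of that quoted mass into everyday units, using standard values for the electron volt and the speed of light (the arithmetic is mine, not the experiment's):

$$ m = \frac{E}{c^2} = \frac{5.279 \times 10^{9} \times 1.602 \times 10^{-19}\ \mathrm{J}}{(2.998 \times 10^{8}\ \mathrm{m/s})^2} \approx 9.4 \times 10^{-27}\ \mathrm{kg}, $$

or roughly 5.6 times the mass of a proton (which weighs about 0.938 GeV).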

Not all particle properties of the hypothetical new particle have been determined from the experimental results so far. So, we don't know the spin or charge of this hypothetical particle, at this point. If further experiments or data analysis could pin down these properties of the hypothetical new particle, we would be much closer to determining where it fits in the particle zoo.

What is it and what does it mean?

Basically, this is one of those "who ordered that?" moments. Scientists were expecting to fine-tune their understanding of CP violations in these experiments. They weren't expecting a new particle to be discovered, and scientists are still exercising great caution in the conclusions that they draw from these experiments until they know more.

By comparison, the heaviest quarks (top and bottom) have masses of 171.2 GeV and 4.2 GeV respectively, the heaviest neutrino weighs less than 15.5 MeV, the heaviest version of the electron (the tau) weighs 1.777 GeV, the W particle weighs 80.4 GeV, and the Z particle weighs 91.2 GeV. Gluons and photons are massless.

The data seem to rule out the possibility that some sort of Higgs particle has been observed, and are not consistent with the hypothetical axion particle. There is no reason to believe that it would be a graviton. It would also be a poor fit for any other proposed dark matter particle, as it is unstable. The data suggest a particle that is too light to be a fourth generation quark or electron. And, the particle would weigh less than a W or Z particle, so it can't be a heavier version of those particles. The properties of almost all possible combinations of known fundamental particles are already well documented, and none of them seem like likely fits.

Hypothetical particles that could fit the bill include a fourth generation neutrino, a preon or a rare form of compound preon (preons are hypothesized subcomponents of the standard model's fundamental particles), a supersymmetric particle of some kind, or something entirely different. The results could also reflect some new quantum mechanical law. Then again, they could be the product of a subtle problem with the data analysis or with the application of existing quantum mechanical law, a problem with the experimental setup, or a statistical fluke.

A short-lived, heavy particle that is observed only during a rare form of weak force mediated exotic B meson decay would probably be more or less devoid of any direct practical implications. The CP violations that we observe in experiments can't explain more than a small percentage of the matter-antimatter imbalance that we observe in the real world, and they have no current practical implications.

But, because a new particle would expand the standard model of particle physics, this experiment could rule out physics theories that don't predict its existence, and these experimental findings could open up new lines of theoretical physics inquiry.

The fact that this potential new particle was observed in one of the few little corners of the world of physics where CP violations are observed is probably not coincidental. These particle decays act in a weird way for some reason that we don't yet fully understand, so, it is a natural place to look for new physics. These observations could point to a new fundamental force of physics, new quantum physical law, or a more elaborate understanding of the weak force that better explains why CP violations happen.

Stay tuned.

26 August 2009

Zoning Theory

The theoretical basis of zoning laws is critically important for Denver right now, as the city considers adopting a major overhaul of its zoning laws that focuses more on form than on use, an approach that is more restrictive (because form restrictions can be implemented on top of traditional zoning restrictions).

The proposed Denver zoning code, in particular, places a premium on what might best be described as reducing "aesthetic harm" and visual deviance from established architectural norms in most of Denver's neighborhoods. Detractors argue that this isn't a right and shouldn't be protected, i.e. that the homeowner's expectations are unreasonable. Supporters argue that the economic impacts on neighboring homeowners are as real as the impacts that development limits have on a developing property owner's values, and so their concerns ought to be addressed with regulation.

But, the laws also potentially represent a move toward an analysis more focused on what I believe is the core inquiry of zoning, which is the externalities associated with particular land uses. My argument has long been that developers ought to have a right to "cure" the externalities that they create, and should have a right to develop property if they do.

A new book by Lee Anne Fennell, THE UNBOUNDED HOME: PROPERTY VALUES BEYOND PROPERTY LINES (excerpts available via SSRN) undertakes a sophisticated analysis of the key theoretical issues behind the question of when zoning restrictions are legitimate, a core values dispute in many, if not most, real life zoning disputes.

The Unbounded Home grapples with a core modern reality - that the value and meaning of a home extend beyond its property lines to schools, shops, parks, services, neighbors, neighborhood aesthetics, and market conditions. The resulting tension between the homeowner’s desire for personal autonomy at home and the impulse to control everything that could affect the home’s value fuels continual conflict among neighbors and communities. The home’s unbounded nature carries implications for nearly every facet of residential life, from the financial vulnerability of homeowners to the persistence of segregation by race and class. This book shows how innovations that increase the flexibility of property law can address critical issues of neighborhood control and community composition that have been simmering unresolved for decades - and how homeownership itself can be reinvented to better deliver on its promises.


As a footnote, an SSRN release of excerpts is also a very savvy way to publicize a new book from a university press, with a significant academic audience.

In a nutshell, the argument is that homeowner concerns about their larger neighborhoods flow from very real economic impacts, but that different tools, similar to carbon impact trading, together with changes in the nature of the home equity investment risks faced by homeowners that would make these kinds of externalities less important, could help remedy the very limited array of tools provided by zoning codes and restrictive covenants.

The result has been confusion about what property ownership means, and equal measures of outrage against intrusions on one’s prerogatives as an owner and as an interested neighbor.

Second, even if individual communities can reach internal agreement about excluding particular land uses from their midst, the overall pattern of land use choices within a larger metropolitan area can create additional negative effects. Because excluding land uses (such as multifamily homes) often amounts to excluding households (those who cannot afford single-family homes), associational patterns in metropolitan areas are deeply impacted by the use of these property tools.

This book considers how society might design alternatives to existing property instruments that would address both localized extraparcel impacts and the larger-scale dilemmas produced by efforts to control those localized impacts.

In broad terms, these alternatives involve reconfiguring property so that it does a better job of aligning the homeowner’s returns with the homeowner’s choices. These reconfigurations require us to move beyond the binary choices that have dominated the metropolitan residential experience —banning or permitting uses, allowing or forbidding exclusion, renting or owning a home. Conceiving conflicts like those faced by the Middletons as resource dilemmas not entirely unlike those surrounding resources like clean air or a sustainable fishery allows us to expand the menu of policy options.

One reconfiguration approach involves developing new forms of alienable entitlements, rather than simply banning or allowing a particular activity. Drawing on innovations in environmental law, we can imagine devising tradable entitlements to engage in acts with aesthetic impacts, and even (in carefully delineated contexts) tradable entitlements relating to association with preferred neighbors and peers. These instruments would allow responsibility for inputs into common environments to be more precisely allocated and priced.

Another, quite different, approach would attenuate homeowners’ vulnerability to off-site impacts by scaling back their investment exposure so that it more closely aligns with their effective sphere of control. Here, building on an exciting line of work by Robert Shiller and his collaborators (among others), I examine the potential to reconfigure homeownership in a way that decouples the investment volatility associated with off-site factors from the homeowner’s bundle. . . .

Land use controls, as they exist today, operate mainly in a binary manner—either a use is banned, or it is allowed. There is almost never the openly acknowledged possibility that households could pay for the privilege of engaging in an unusual but especially valued use, such as adding a garage apartment, or that governing bodies could be required to pay for the privilege of banning a particular land use, such as multifamily dwellings. Moreover, few have thought creatively about the set of risks that the standard homeownership bundle should and should not contain as a default matter. For example, must homeowners be exposed to housing market risks that they have no power to control, or might these risks be more efficiently held by investors within diversified portfolios? By failing to probe such questions, property law has developed without a coherent understanding of the home as a resource.


Put another way, bribery by developers, or by those opposed to development, isn't corrupt; instead, it is fair and desirable as a middle ground solution to otherwise intractable problems. I am reminded of an example of how a Japanese developer handled the fact that his building interfered with the neighbors' television reception. He kept the building design the same and paid for them to have cable TV instead.

The role of homeowners' associations in maintaining a commons is relevant, and so is the interesting idea that property valuations that reflect situations that could be involuntarily changed could be discounted for tax purposes.

Also interesting is a citation to Hansmann's article "Theory of Status Organizations" which explains that many organizations impart value largely through your fellow members and are often organized as non-profits to prevent the organizer from exploiting member status for their personal gain.

The analysis is also notable because it uses approaches that are applicable to regulation in general, building on broader views of takings clause jurisprudence (i.e. the duty of the government to compensate property holders for the fair market value of property it seizes involuntarily).

That current legal arrangements require homeowners to gamble on matters far beyond their sphere of influence and expertise is, on reflection, rather remarkable. Homeownership is widely viewed as one of the most important stabilizing forces in society, but it comes packaged with an enormous dose of investment risk that homeowners are almost entirely powerless to insure against or diversify away. Homeowners typically have no other asset, aside from their own human capital, that makes up a larger share of their portfolios. Thus, households routinely plow a hefty chunk of their wealth into what amounts to stock in a single, risky enterprise—the neighborhood housing market. Placing all of the household’s eggs in one basket not only runs counter to basic principles of portfolio diversification but also motivates basket-guarding behaviors that can have high social costs. . . .

Even with the best available spillover-management tools in place, households may not be the parties best positioned to bear the residual risks. Accordingly, I consider here the prospects for scaling back the homeowner’s exposure to off-site risks that she cannot efficiently bear.


The conclusions aren't all included, and the theory isn't completely spelled out in the excerpts, but the bits that are available are tantalizing as a new way of thinking about the issues. The ferment of a major new conceptualization of property rights in something as basic as a house also opens the door to the already boiling cauldron of rethinking the paradigm of intellectual property as property rights, instead of, for example, a right to prevent unjust enrichment by third parties from your works.

The Early Federal Courts

The early federal courts had only a small number of judges. When the constitution was established there were 19 Article III judgeships. In 1891, after the U.S. Courts of Appeals were established, there were 72 Article III judgeships. There are now at least 866 federal judgeships.

The U.S. Supreme Court initially had six justices, who had both appellate duties and sat as part of trial courts alongside U.S. District Court judges, with one justice assigned to each judicial circuit. A seventh justice was added in 1807, and the Court reached nine justices in 1837. In 1863, a tenth justice was added, but in 1866 the court's size was capped at seven, to be reached by attrition, and the court actually fell in size to eight justices. Since 1869, the U.S. Supreme Court has had nine justices. A court-packing plan proposed by FDR was scrapped when the U.S. Supreme Court backed down from an aggressive stance of declaring unconstitutional federal laws targeted at economic matters that were arguably merely unwise.

Each state had one district judge (thus there were originally thirteen of them), until 1812 when New York State got two, which was then expanded to three in 1903. There are now 678 U.S. District Judges. For reference purposes, Louisiana was admitted as the 18th state in 1812. California was admitted as the 31st state in 1850, followed by Minnesota in 1858. Wyoming was admitted as the 44th state in 1890. Utah was admitted as state 45 in 1896. Oklahoma was admitted as state 46 in 1907. Hawaii and Alaska were the last states admitted, both in 1959.

Many serious federal cases and appeals were handled by circuit courts. From 1789-1801, from 1802-1855, and from 1863-1869, there were no separate circuit judges. These cases were handled by U.S. District Court judges and U.S. Supreme Court justices sitting together. There were seventeen circuit court judges in addition to the U.S. District Court judges from 1801-1802, positions that were swiftly revoked by Congress, there was one circuit judgeship from 1855-1863 (for California) which was then revoked, and there were nine circuit judgeships (one for each circuit) from 1869-1911 (from 1891-1911 serving only as trial judges). From 1789 to 1911, with the exception of the period from 1801-1802, circuit courts always borrowed judges from a U.S. District Court, the U.S. Supreme Court, or both. The circuit courts were abolished at the end of 1911.

When the U.S. Courts of Appeals were established in 1891, the nine existing circuit judges were transferred to it, and one more judge was appointed to each of the nine circuits. A U.S. District Court judge would sit on the three person panel with the U.S. Court of Appeals judges in each case. The U.S. Courts of Appeals now have 179 judges.

A court of claims was established in 1855 to hear claims against the United States. It served only in an advisory role until 1863, and by 1866 it operated as an Article I legislative court, with appeals to the U.S. Supreme Court permitted. From 1953 to 1982 it served as an Article III court with three judges and seven special masters called commissioners. It was replaced by a U.S. Court of Federal Claims with sixteen judges in 1982.

An Article I customs court, whose decisions were appealable first to the circuit courts and then to the U.S. Courts of Appeals, was in existence from 1890 to 1956, when it became an Article III court (its judges had job security from 1948); it was converted into the Article III U.S. Court of International Trade, with nine judges, in 1980.

A court of customs appeals was established in 1909. Two other specialized courts were created in 1910 (for patent cases and for commerce cases, respectively). Separate bankruptcy courts weren't established until 1984.

Appellate review as of right in federal criminal cases was essentially limited to writs of habeas corpus until the 1891 Judiciary Act, which established nine U.S. Courts of Appeals for their respective judicial circuits.

These lifetime appointees also weren't very aggressive in serving as a constitutional check on Congress. While the 1803 case of Marbury v. Madison established the federal courts' power to strike down federal statutes, this wouldn't happen again until the 1857 case of Dred Scott v. Sandford.

The Unrecognizable Early Congress

According to political scientist Randall B. Ripley, in his book "Congress: Process and Policy":

Until the 1880s the length of service of the average senator and representative remained at a low and fairly constant level -- representatives averaged two years (after a higher level of around three years during a peak of House influence on national policy in the 1810s and 1820s) and Senators around four years. Members of both chambers, especially the House, routinely left Congress for other opportunities, both governmental and private. Mid-term resignations were common.

Beginning about 1880 the average years of service rose dramatically in both houses, doubling in the Senate [to eight years] in less than two decades and almost tripling in the House [to six years] in three decades. The upward trend continued, with breaks for political turnovers, until the late 1960s.


The Senate filibuster wasn't established as a procedural tool in the U.S. Senate until 1841. Filibusters were very rare until the 1960s and were used mostly to block civil rights legislation.

Until the 17th Amendment to the United States Constitution was adopted in 1913, U.S. Senators were appointed by state legislatures, not elected, an appointment that almost always reflected the state legislature's political makeup. The U.S. Senate also started small, initially having only twenty-six members.

Elections back then were questionable things in any case. From 1789 to 1908 the House of Representatives resolved 382 contested elections (keep in mind that the House of Representatives started out with just 65 members), of which only three were given to the minority party candidate in the House. Since 1910, House of Representatives resolution of contested elections has grown much more rare, and state elections officials and courts almost always resolve the disputes before the House acts.

At first, only men could vote outside New Jersey (the 19th Amendment gave women the right to vote nationwide in 1920). Women weren't the only ones disenfranchised:

During the Revolution states with poll taxes and taxpayer franchises (such as North Carolina and New Hampshire) established nearly universal free male suffrage. When suffrage qualifications were tied to the value of an estate, wartime inflation eroded barriers. All states ended religious restrictions on voting. At war's end, the eligible electorate numbered from 60 to 90 percent of free males, with most states edging close to the high end of that range. . . .Between 1790 and 1860 almost every state, old and new, disfranchised free blacks while expanding the electorate to include almost all white adult male citizens.

The suffrage requirements of the frontier states were more democratic than eastern ones. Beginning with Kentucky in 1792, all but two western states embraced white male adult suffrage; in the East, all but two states retained either a property or taxpaying qualification for all or part of the period from 1820 to 1860. Several western states even enfranchised aliens who had established permanent residence and Indians who had given up tribal citizenship. . . .

[The] Rhode Island . . . requirement that a man own a $134 freehold in order to vote became increasingly restrictive as the Rhode Island economy shifted from agriculture toward industry. By the early 1840s more than half of the state's male population was disfranchised. But a reform effort led by Thomas Dorr, after holding an extralegal constitutional convention in 1841 and an extralegal gubernatorial election, sparked some liberalization of suffrage requirements. . . .

Black male suffrage became national in 1870 when the Fifteenth Amendment prohibited states from discriminating against potential voters because of race or previous condition of servitude (but not sex). . . . For most of Reconstruction, blacks voted and often used their resulting political power to protect their other rights. Although white Democrats overthrew Reconstruction by forcibly keeping blacks from the polls, thereafter they generally eschewed violence. Instead, they relied upon discriminatory apportionment and election laws to limit black and poor white political influence, and in the 1890s and 1900s, imposed poll taxes and literacy and property requirements through constitutional revisions. Thus they disfranchised virtually all black men and many poor whites, and thereby ensured Democratic hegemony.


Your vote was a public matter. The secret ballot was an innovation of the Progressive era:

In the United States, most states had moved to secret ballots soon after the presidential election of 1884. However, Kentucky was the last state to do so in 1891, when it quit using an oral ballot. Therefore, the first President of the United States elected completely under the Australian ballot was president Grover Cleveland in 1892.


Often an entire state's Congressional delegation was elected at large:

[M]ost of the original thirteen states used multi-member districts in the first congressional elections . . .

Congress passed a series of apportionment acts, primarily after each decennial census and representatives were added to the House as new states were admitted to the Union, but Congress remained silent as to the ways in which the states elected their representatives. In 1842, six states were electing representatives at-large and twenty-two states were electing representatives by single-member district. Three states had only one representative. This arrangement changed with an apportionment act in 1842 (5 Stat. 491). This act set the House membership at 223 members and contained a requirement for single-member districts. . . . In the first election after the passage of the 1842 act four states -- Georgia, Mississippi, Missouri, and New Hampshire -- continued to elect representatives at-large rather than by districts.


The members of Congress elected in those states were seated and the constitutionality of the 1842 act was questioned by the House committee resolving the credentials dispute.

An apportionment act passed in 1850 (9 Stat. 433) increased the size of the House to 233 but dropped provisions requiring elections by districts. However, an act in 1862 (12 Stat. 572) restored the provisions of the act of 1842 requiring districts composed of contiguous territory.

An apportionment act in 1872 (17 Stat. 28) again reiterated the requirement of districts composed of contiguous territory and added that they should contain "as nearly as practicable an equal number of inhabitants." The apportionment act of 1882 (22 Stat. 5) and an act in 1891 repeated the provisions of contiguous territory and equal population of the 1872 act. An apportionment act in 1901 (31 Stat. 733) added that districts should not only be of equal population and contiguous but also be of "compact territory." These provisions were also included in 1911's apportionment act (37 Stat. 13).

In 1929 Congress passed a combined census-reapportionment bill which established a permanent method for apportioning House seats according to each census. This bill neither repealed nor restated the requirements of the previous apportionment acts -- that districts be contiguous, compact, and equally populated.

It was not clear if these requirements were still in effect until the Supreme Court ruled in 1932 in Wood v. Broom that the provisions of each apportionment act affected only the apportionment for which they were written. Thus the size and population requirements, last stated in the act of 1911, expired immediately with the enactment of the subsequent apportionment act. . . . [This] allowed states to abandon districts altogether and elect at least some representatives at large, which several states chose to do, including New York, Illinois, Washington, Hawaii and New Mexico. In the 88th Congress (in the early 1960s), for example, 22 of the 435 representatives were elected at-large.

In 1967 Congress passed a law (PL 90-196) which prohibited at-large and other multi-member elections by states with more than one House seat. Only two states, Hawaii and New Mexico, were affected by this legislation: all other states by this time were using elections by districts.


Sessions of Congress were in any case short; members had little to do. The first Congress met seventeen months out of twenty-four implementing the new constitution. The Civil War and Reconstruction Congresses met between ten and twenty-two months. In every other two year session of Congress until the one commencing in 1911, Congress met for less than twelve of the twenty-four months of the session.

And, the House of Representatives was a tough place. In the 19th century, physical violence on the House floor was not infrequent, and guns and knives were occasionally carried onto the floor.

1962-2009

Ted Kennedy was elected to the U.S. Senate in 1962 at age 30, the youngest age allowed by the U.S. Constitution.

Ted Kennedy's election came two years after his brother, John Fitzgerald Kennedy, was elected President of the United States. Ted Kennedy filled the vacancy his brother's ascension to the Presidency created.

Ted Kennedy's other brother, Robert Kennedy, served as his brother JFK's United States Attorney General from 1961 to 1964. JFK's term was cut short by an assassin on November 22, 1963. In the election following the JFK assassination, Robert Kennedy ran for the U.S. Senate in New York and won. He served until his assassination in 1968, while running for the Democratic Presidential nomination.

Ted Kennedy served the state of Massachusetts in the U.S. Senate for the rest of his life, which ended this week. Until the very end, when his brain cancer kept him from attending all but the most critical votes, his service was exemplary, everything I could ask for in a U.S. Senator.

Still, 46 years of U.S. Senate service is a very long time. Perhaps Ted Kennedy deserves an exception from ordinary considerations. His brothers were cheated out of their legitimately won political terms by violent thugs.

Ted Kennedy's term in office was longer than the average life expectancy when the U.S. Senate was established in 1789. It is longer than the entire working career of most American adults.

It is longer than the tenure of most federal judges, who are actually appointed for life. As of 2005, the average length of service of Supreme Court justices from 1790 to 1970 was approximately 16 years. Since 1970, however, the average length has risen to more than 24 years. Lower federal court judges resign when they become eligible to collect their pensions much more often than U.S. Supreme Court Justices do.

It also isn't unusual. When Congress convened in 2007, the average length of service in the House, at the beginning of the Congress, was about 9.3 years (slightly over 4.5 terms); in the Senate, 12.1 years (two terms). But, of course, over their entire careers, most members of Congress serve about twice as long, and many Senators previously served in the House of Representatives. The average U.S. Senator will spend almost thirty years in the House and Senate combined over the course of his or her career.

This wasn't always so. There was no need to regulate the terms of members of Congress in the nation's first century, because the problem then was convincing members of Congress to stay in office, not curbing excessively long tenures. The average Representative prior to 1880 served two years. The average Senator prior to 1880 served four years.

Voters in most U.S. states, and an even larger percentage of U.S. House of Representatives districts, strongly favor one political party or the other. Incumbent U.S. Senators and Representatives are almost never successfully challenged in primaries. A surprisingly large number of Representatives and Senators end up spending more time in Congress (sometimes split between years as a U.S. Representative and years as a U.S. Senator, a path Mark Udall is currently following in Colorado) than the average federal judge appointed for life spends in office.

The political tides shift the balance of power in Congress from liberal to conservative to liberal again, but most of the change in the political balance of Congress takes place in swing districts, swing states and open seats. The politics of personal attack are key tactics in Washington, because removing an incumbent on the merits in an ordinary election is such an insurmountable task in most of the country.

Turnover In The Federal Executive Branch

Following a precedent set by George Washington, the first President of the United States, all Presidents until Franklin Delano Roosevelt (1933-1945) limited their terms of service to eight years, if a lost election or death didn't force an earlier removal from office. Like Ted Kennedy, FDR was also a Democrat who was exemplary in his achievements and the public support he garnered as he guided America through the Great Depression and World War II until his death.

Still, the Twenty-Second Amendment to the United States Constitution, adopted in 1951, six years after his death, which limited a President to no more than ten years in office (after no more than two elections), rightly received wide bipartisan support.

As a result of the higher turnover in the Presidency, the Secretaries of federal government departments have, in most cases not involving a premature death, served for even shorter periods than the Presidents under whom they served. Most Presidents have replaced most of their initial cabinet officers at least once before leaving office. The average tenure of a cabinet secretary is around three years, give or take.

Colorado's Experience

Colorado has term limits in its legislature (effective for terms starting in 1991). Generally, in Colorado, one may serve no more than four two-year terms in the state house and no more than two four-year terms in the state senate. Term limits also apply to almost all other offices.

Term limits of six years for the U.S. House from Colorado, and twelve years for the U.S. Senate from Colorado, adopted by Colorado voters for the terms starting in 1995, were held unconstitutional by the U.S. Supreme Court: U.S. Term Limits, Inc. v. Thornton, 115 S. Ct. 1842 (1995).

In practice, the term limits in Colorado's legislature may be too strict. But term limits of some kind might help curb the nation's aristocratic tendencies.

N.B. Neither U.S. Supreme Court Justice Anthony Kennedy, nor Colorado State Treasurer Cary Kennedy, are members of the Kennedy political family.

The State Of The U.S. Housing Market

Denver real estate prices look like they are in a recovery phase. The latest report shows that in metro Denver, "home prices rose 2.5 percent compared with May but were down 3.6 percent compared with the same month a year ago[.]"

Two major cities, Dallas and Cleveland, are doing a little better than Denver, but seventeen other major cities have weaker housing markets. Markets like Detroit, Miami, and Las Vegas have seen declines compared to a year ago of more than 20%, enough to wipe out all of the equity in a home purchased a year ago with a conventional mortgage.
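The arithmetic behind that claim is simple. Here is a minimal sketch, assuming (as is typical for a conventional mortgage) a 20% down payment and ignoring a year's worth of principal payments; the purchase price is hypothetical.

# Rough arithmetic behind the "wiped out equity" claim. The purchase
# price is hypothetical, and a 20% down payment is an assumption about
# what a typical conventional mortgage requires.
purchase_price = 300_000
down_payment = 0.20 * purchase_price
loan_balance = purchase_price - down_payment   # ignoring a year of principal paydown

price_decline = 0.20                           # a 20% drop in market value
current_value = purchase_price * (1 - price_decline)

equity = current_value - loan_balance
print(equity)  # 0.0 -- a 20% price decline erases a 20% down payment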

Also, distressed sales of existing properties have made new homes uncompetitive. In normal times, there are about six existing homes sold for every new home sold. Right now, the ratio is twelve to one.

Fundamentally, housing bubble prices caused home builders to build too many new homes, and now our economy has more houses than it needs.

Our economy doesn't need more housing now. The hard lesson of the Soviet Union's economic collapse was that producing things that people don't need doesn't improve people's standard of living; it simply hides economic stagnation. Indeed, production that isn't needed is not only an unsustainable way to create jobs in the long run, it also tends to put people who would ordinarily be needed to produce what people actually need out of work in the future. The lesson of holes is to first stop digging. The housing market has done that now, in an extremely painful way.

GDP is a meaningful measure of standard of living only in economies that approximate a free market model closely enough that the goods and services produced can reasonably be assumed to have economic value.

This will continue to be the case until demand for housing that can be built at current, more normal prices (close to current prices in non-bubble markets like Denver, but possibly lower still in markets that had big housing bubbles) returns due to factors like population growth and income growth in particular local housing markets. A recovery in Denver and New York won't create a market for new housing in Las Vegas or Miami.

Given the extremes that the seven year long housing bubble reached in many areas, it could be a long time before home building companies have much work to do. This means that jobs in an economic recovery will have to come from other sectors of the economy.

It will be a long time before immigration by less skilled workers is a big issue again in the U.S. economy. Many of the jobs that immigrants filled were in the construction industry. Many more were in the equally ailing hospitality industry (e.g., hotel maids) and manufacturing industry (e.g., meat processing plants). Without these jobs, the incentive to immigrate goes away, and many foreign workers will give up and return to their home countries. For a while, at least, reconciling available immigration quotas under reform legislation with the number of people who want to legalize their status may be relatively easy.

There Is A Mandate For Health Care Reform

Community rating for health insurance and universal health insurance coverage are extremely popular with the public, with 86% support.

While there is dispute over whether our health care system should be more like Medicare, Canada and France (single payer), or more like the Dutch and Swiss systems and federal civil servant coverage (universal private health insurance coverage), there is widespread support for change from the status quo, which leaves 14% of people uninsured.

There is no great support, anywhere in the current debate, for a British style system, where the government runs the hospitals and employs doctors directly. The people who don't like the Democratic health care proposal (which is similar to the Dutch and Swiss systems), but who want some kind of meaningful health care reform, want a single payer system, like Canada's and France's, not a single provider system, like Britain's.

Politically, single payer is not on the table. This is true mostly because none of the Presidential candidates in 2008 backed that approach, not Obama, not Clinton, and not any of the Republican candidates. Elections have consequences. So, the real choice at this point is between universal private health insurance coverage for all, or nothing.

Does the "Public Option" matter?

In my view, the debate over a "public option" is a red herring. It doesn't matter.

To opponents, a "public option" sound like Congress is considering a British style system, but will do far less than its supporters think to control costs. To supporters, a "public option" sounds like Congress is considering some sort of single payer system, which it isn't. Congress is simply considering chartering a new non-profit health insurance company.

The people who support single payer health care reform aren't wrong. Single payer systems are cheaper. A universal private health insurance coverage system provides some of the benefits of a single payer system, but not all of them. Single payer's supporters have simply been outmaneuvered politically.

But the advocates of a single payer system, mostly progressive Democrats, are wrong to believe that a "public option" will address the health care cost concerns that they had hoped to address with a single payer system.

The extra edge that single payer systems have comes primarily from the single payer's superior bargaining power (similar to that of a labor union) and its administrative simplicity, not from the inherent efficiency of government bureaucrats relative to private sector bureaucrats.

Competitive markets, even with the small number of competitors found in health insurance markets, do a great deal to moderate excessive premium prices that are within the control of the insurer. The presence of non-profit participants in that market further limits their ability to overcharge.

But adding one more government run player to a health insurance market that already has several important non-profit participants doesn't provide either the superior bargaining power or the administrative simplicity of a single payer system. And the need for a subsidized public insurer of last resort for people with pre-existing conditions (like Cover Colorado) disappears when health insurance companies are no longer allowed to consider pre-existing conditions when writing health insurance policies (i.e., when community rating is adopted).

The administrative simplification that a single payer system makes possible is far less feasible with half a dozen or more private insurers still in the health insurance market. Medicare is administratively simple because every provider for elderly patients uses a single set of forms and it covers almost everything. A public option doesn't change this situation.

A public participant in a market among many, even if that participant can cut corners on marketing and doesn't have to make a profit, still can't unilaterally control prices. Marketing can persuade people to pay a little bit more for a private health insurance plan than for a public one. And a public health insurance company that doesn't pay what other players in the market do will face the same problem that Medicaid has had in trying to get providers to accept below market rate compensation for their services.

The Politics Of A Public Option Get Easier, Not Harder

Advocates of a public option also argue that the current reform bill is the last best chance to get a public option. If they are right, and they are also right that a public option matters, this is a concern. They are wrong in this political calculus as well.

Once the other parts of health care reform are in place, chartering a government owned, non-profit, not meaningfully subsidized health insurance company, which is all that the public option proposes to do, is going to be much harder to fight legislatively. There is no opportunity, with that kind of simple, narrow legislation, to attribute costs from other parts of the health care reform plan to this piece of the puzzle, or to argue convincingly that it amounts to a government takeover of private health insurance. It becomes an issue of allowing competition or not, in a market that already has non-profit competitors, in exchange for the possibility of reduced health care costs without meaningful additional public spending.

Indeed, another way to get a public option, if one is not convinced of my analysis (particularly for states that have highly concentrated health insurance markets), is to allow states to create health insurance plans if they wish (a power that they probably already have in most states). Where the market is concentrated and support for a public option is high, states could act. Where the market is not concentrated or support for a public option is low, states could do nothing.

Against Billable Hours (Or Not)?

Law firms have a business model more like defense contractors creating new weapons systems than like Toyota. They operate on a cost plus basis with results not guaranteed.

This system is called the "billable hour" model. The billable hour system, in addition to being popular with lawyers, is also favored by plumbers and accountants.

An important variant on the billable hour model is a system in which lawyers make binding estimates of their charges that cease to apply if surprises come up in the case, a bit like a typical automobile repair shop or a general contractor doing a renovation.

The leading alternatives to billable hours include flat fees and contingency fees. Toyota builds cars for flat fees based on the value of the car. Realtors typically charge a contingency fee based on the sales price (although this is arguably a fee based upon the size of the matter).

Also not unprecedented are fees based upon the size of the matter (common for investment managers), a flat fee per month (sometimes called a "true retainer") like your trash collection bill, and a fee for task model (popular with online advertising agencies and construction subcontractors).
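To make the differences concrete, here is a minimal sketch of how the fee models described above work out arithmetically. Every number and rate below is a hypothetical assumption for illustration, not an actual fee schedule.

# Hypothetical comparison of the fee models discussed above. All figures
# are illustrative assumptions, not real fee schedules or market rates.
hours_worked = 100
hourly_rate = 250                     # billable hour model
flat_fee = 20_000                     # negotiated up front, fixed regardless of hours
recovery = 90_000                     # amount recovered for the client
contingency_rate = 0.33               # typical personal injury contingency
matter_size = 500_000                 # e.g., value of an estate or securities offering
percent_of_matter = 0.03              # fee based on the size of the matter
monthly_retainer = 2_500              # "true retainer," billed monthly
months = 12
task_fees = {"plea bargain": 5_000, "trial": 15_000, "appeal": 10_000}

print("billable hours: ", hours_worked * hourly_rate)        # 25,000
print("flat fee:       ", flat_fee)                          # 20,000
print("contingency:    ", contingency_rate * recovery)       # 29,700
print("size of matter: ", percent_of_matter * matter_size)   # 15,000
print("true retainer:  ", monthly_retainer * months)         # 30,000
print("fee for task:   ", task_fees["plea bargain"] + task_fees["trial"])  # 20,000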

Current Practice

A debate over billable hours, per se, misses the point. In the vast majority of cases, the vast majority of the time, there are ordinary and customary approaches to billing clients for particular types of work that become a stable industry standard. There are many kinds of cases that are almost always handled with alternative fee arrangements. There are other types of cases that are almost always handled on a billable hour basis. Part of the real debate over billable hours is simply haggling over the size of the fee. The other part is over whether the categories of cases assigned to the billable hour system are too broad.

Most of the debate involves billing for "commodity work," in other words, fees in relatively routine cases that don't put the survival of big clients at risk so long as they are done in a reasonably workmanlike fashion.

So, what is the status quo?

Billable hours are most common in litigation and negotiations, where it is hard to determine what the opposing party will do, taking fees out of the lawyer's control. General Dynamics insists on cost plus contracts for new military systems for the same reason. It is hard to know how much a project will cost in the end, the project is likely to be shut down before it is carried to its natural conclusion, and clients often change their minds about what they want. Some work is too unpredictable to handle profitably on a fixed flat fee basis.

The use of binding estimates with exceptions to the general rules is common in "commodity litigation" like insurance defense litigation in automobile accidents and employment litigation for large employers over non-executive employees. Hourly fees subject to a maximum compensation cap are a variant of this approach, common when the government retains a private attorney to do work that might otherwise have been done by a public employee (like criminal defense work for indigent clients).

Flat fees are common for drafting non-negotiable contracts, routine estate plans, personal bankruptcies, routine corporate documents, and intellectual property filings, where the initial engagement dictates the scope of the work. Flat fees are not uncommon for appellate work (where settlement is unusual and the amount of work is highly predictable).

Flat fees are also sometimes used by high end firms in "bet the company" cases or in the defense of very serious criminal charges, where the value to the client exceeds anything that the lawyer could plausibly bill on a billable hour basis and the client is paying a premium to maximize the likelihood of a good result. Often these cases are contingent cases for all practical purposes because the ability to pay may be destroyed if the client loses the case.

Contingency fees are common in personal injury cases and in collections cases; I've also seen them in disputed tax cases (e.g., property tax assessment cases or defenses of multi-issue audits).

Fees based on the size of the matter are common in some states in probate and trust administration cases, and securities offerings.

True retainers are popular for lobbying, and are sometimes used by municipal attorneys or by lawyers serving as general counsel for a medium sized business that can't afford full time in-house counsel. In-house counsel, prosecutors, public defenders and other government attorneys are often paid in this way as well.

A fee for task model is common in criminal law, where there is often one fee for a plea bargain, an additional one if the case goes to trial, and yet another if a case is appealed. It is also common in guardian ad litem practice, with fixed fees for activities like appearing at a hearing or conferring with a client, and in mass produced foreclosure litigation that involves a low risk of losing despite a high amount of money at stake. This used to be the mainstay of bar association fee schedules and remains common in litigation practice in Europe. In these contexts, there would be fees per motion, per hearing, per witness examination, per trial day, and so on.

The Case Against The Billable Hour

Very succinctly stated, in a long and informed post on the topic:

What's wrong with the billable hour?

From my fundamental economic perspective, all you need to know is that it starts and ends the pricing determination based on "cost of production" rather than "value to client." In my book, that's per se irrational.


Karl Marx's labor theory of value hasn't been popular in business schools as a descriptive model of how an economic marketplace works for the last half century, at least. And, in big law firms, most of the people that the lawyers ultimately report to went to business school sometime in the past half century.

Why do lawyers use billable hours anyway?

A cost plus deal with a solvent client never produces a loss, and lawyers are risk-averse. While lawyers deal with risk for their clients all the time, they greatly prefer to avoid bearing risk themselves. Lawyers don't like assuming risks that are fundamentally a result of the problems their clients bring to them, particularly when their own client and the opposing party have the ability to make cases more expensive to litigate.

Also, even in billable hour cases without a binding estimate of fees, lawyers are required ethically, and as a matter of good business, to scale their efforts to the value of a case to the client. A very typical conversation in a litigation partner's day involves telling a client that he can pay his lawyer a minimum of $150,000 to fight a lawsuit, or he can settle it for $30,000. A billable hours approach protects a lawyer from clients who might otherwise be inclined to do something stupid. Lawyers are willing to fight the rare economically irrational case that a client wants to pursue anyway in exchange for a decent profit, but clients who routinely decide to act in an economically irrational way are usually shown the door. Sooner or later, they won't be able to pay the fees if they operate irrationally.

Lawyers agree to alternative billing rates because they are comfortable that they can manage the unprofitability risk in that case, because the value is at least as great as the cost plus involved in producing the work, and because the client prefers the predictability or can't afford to pay otherwise.

Alternative billing gives lawyers a strong incentive to do work efficiently to give the client the same value at a lower cost, something that the lawyer is in a better position to do than the client in most cases. Typically, in flat fee and contingent work, a greater share of the work is done by paralegals and junior associates, the work is more form driven, lead attorneys take settlement more seriously as an option, and unproductive discovery is trimmed in litigation.

Equally important, in the end, it all boils down to price. Shifts to alternative billing are, in part, simply a face saving way to allow firms to negotiate with their clients over the price they are charging for legal services. In negotiations between in-house counsel and an outside law firm, which is a common situation, both sides have a good idea what the expected kind of work costs on a billable hour basis. The in-house counsel often spent years doing precisely that kind of work on a billable hour basis at the firm that is now outside counsel. A shift to a flat fee arrangement allows the client to reduce cost, while offering the law firm an opportunity to win back the profits sacrificed by becoming more efficient.

Once efficiencies have been implemented at the firm in question, the sophisticated client may very well return to a billable hour fee arrangement, while paying lower bills for the same work.

Law is a service industry. In the long run, lawyers are paid what the market will bear. The market will not, in the long run, accept 33% contingencies to do fully secured prime mortgage foreclosures. The market will accept $1,000 per hour billing rates if the typical total cost of a deal is considerably less than the value the client gets from the work, particularly if the client can be persuaded that less expensive lawyers would provide less value.

"Biglaw" can afford to pay high salaries and still make big profits because they work on matters that have high value to their clients, often because their clients are big businesses whose significant entity level cases reflect their immense scale. It is easier to do high quality work that creates value while remaining profitable by making a multi-billion dollar financing transaction for a publicly held company work smoothly, than it is to do the same thing for a business that needs hundreds of thousands of dollars of financing.

Firms that deal with "human scale" cases like automobile accidents, rank and file employment matters, unpaid mortgages, and estate plans for mere millionaires, still manage to make nice upper middle class incomes if they can bring in enough work, but they don't make what big business senior executives do handling it.

Firms that do work for individuals often have no choice but to limit their fees to the often low per case value of the cases that they handle, and to accept payment only when the value is produced, because that value (be it a real estate closing or a personal injury settlement) is the only reason that the client has an ability to pay at all.

The post I cite also describes push back from clients who don't want to subsidize extreme profits for the lawyers, and this may have merit. The executives that big law firms work for have seen unprecedented declines in their incomes, and that impacts what they think it takes to hire someone who handles decisions similar in gravity to those they handle themselves. CEOs are willing to let their lead lawyers earn rates of pay similar to what their senior managers earn on the kinds of matters that the CEO deals with directly. When senior managers in big corporations make less money, they are less comfortable with hiring lawyers who make more money than they do to handle anything but the very most dire matters.

25 August 2009

How Far Out Of Balance Is The Federal Budget?

Forget the national debt; suppose that you just want to break even and pay the nation's expenses as they come due over the next ten years. How much would that cost?

It turns out, an almost 40% increase in federal tax revenues would be required.

This year's budget deficit is predicted to be $1.6 trillion. Over the next ten years, the White House projects a $9 trillion cumulative deficit, while Congress is predicting a $7 trillion cumulative deficit.

[T]otal tax receipts, including individual, corporate, excise, etc., were only around $2.5 trillion in 2008. Next year, and probably in the few years that follow, tax revenue should fall.


For the budget to break even over the next decade with the same amount of spending, the increase in tax revenues needs to be 28% if revenues stay constant and the low end deficit figure is used, and 36% if revenues stay constant and the high end deficit figure is used. If tax revenues stumble, as expected, the tax increase needs to be more like 40%.
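The arithmetic behind those percentages is straightforward. This is a minimal sketch; the "revenues stumble" case assumes, purely for illustration, a roughly 10% dip in baseline revenues.

# Back-of-the-envelope arithmetic for the percentages above.
baseline_revenue = 2.5              # trillions of dollars per year (2008 receipts)
low_annual_deficit = 7.0 / 10       # Congress's $7 trillion cumulative deficit, per year
high_annual_deficit = 9.0 / 10      # White House's $9 trillion cumulative deficit, per year

print(low_annual_deficit / baseline_revenue)    # 0.28 -> a 28% revenue increase needed
print(high_annual_deficit / baseline_revenue)   # 0.36 -> a 36% revenue increase needed

# If revenues "stumble" -- say, a 10% dip, an assumption for illustration --
# the required increase approaches 40%.
stumbled_revenue = baseline_revenue * 0.9
print(high_annual_deficit / stumbled_revenue)   # 0.4 -> a 40% revenue increase needed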

Politically, this won't happen. Congress and the President will not balance the budget over the next ten years. Taxes will not be increased that much without new programs to use the revenues. But, ultimately, all spending must be paid for somehow. All public spending that doesn't itself carry with it a right to repayment (e.g., a company bailout or student loan) is a tax increase in the indefinite future.

The increase in the total tax burden at all levels of government needed to balance the aggregate government deficit would be smaller, because state and local governments generally have to have balanced budgets and finance their borrowing with pre-planned repayment plans.

Med Mal Costs Still Don't Drive Health Care Costs

• Medical malpractice premiums, inflation-adjusted, are nearly the lowest they have been in 30 years.

• Medical malpractice claims, inflation-adjusted, are dropping significantly, down 45 percent since 2000.

• Medical malpractice premiums are less than one-half of one percent of the country’s overall health care costs; medical malpractice claims are a mere one-fifth of one percent of health care costs. In over 30 years, premiums and claims have never been greater than 1% of our nation’s health care costs.

• Medical malpractice insurer profits are higher than the rest of the property casualty industry, which has been remarkably profitable over the last five years.

• The periodic premium spikes that doctors experience, as they did from 2002 until 2005, are not related to claims but to the economic cycle of insurers and to drops in investment income.

• Many states that have resisted enacting severe restrictions on injured patients’ legal rights experienced rate changes (i.e., premium increases or decreases for doctors) similar to those in states that enacted severe restrictions on patients’ rights, i.e., there is no correlation between "tort reform" and insurance rates for doctors.


Via this diary.

Health care costs are out of control, consistently rising faster than inflation. But, the medical malpractice system has nothing to do with it.

Windows Smashed At Colo Dems HQ (updated x2)

All of the plate glass windows at the Colorado Democratic Party's store front headquarters at 8th and Santa Fe in Denver were smashed sometime before 3 a.m. this morning, causing an estimated $10,000 of damage. Pictures from the CDP are available here.

This hits close to home for me and for many politically involved Democrats in the state as I frequently attend meetings there and used to live about three blocks away.

Remarkably, "Police said the person responsible is in custody." This is unusual because vandalism cases frequently go unsolved. My guess is that someone was either caught in the act, or was caught on videotape (perhaps a traffic camera or video from a nearby convenience store).

According to the Denver Post:

The Denver Police Department has a man in custody, though they are not yet releasing the suspect's name, said Det. Vicki Ferrari.

An officer on patrol spotted the vandal in the act around 2:20 a.m. and took him into custody after a short foot pursuit, she said.


Redstateblues, a sometimes reader, has posted a photo of an anti-health care flier ("National Socialist Healthcare" reads the headline) from the scene at Colorado Pols.

So, we may yet get a definite answer regarding the motive for the crime, which Colorado Democratic Party chair Pat Waak suggests was related to opposition to the President's health care plan.

I'll also be the first to say that this matter should be handled as criminal vandalism and not as some sort of political offense (e.g., terrorism). Dignifying the cause of someone who engaged in property destruction with a purpose doesn't help.

UPDATE #2 (first update reflected in original text): The perpetrator who was caught appears to be Maurice Schwenkler, age 24. He isn't registered to vote, did some election work for a 527 group that favored Democrats shortly before the election in 2008, "signed an online 2005 petition to free anti-war Christian protestors who were captured in Iraq," and is active in the Derailer Bicycle Collective (the collective's address is the one he cites as his address in the contribution report; it is in the same neighborhood as the Colorado State Democratic Party HQ). The Collective is a bit of an anarchist-leaning, random-acts-of-kindness liberal group. In other words, he's a usually harmless hippie.

Another suspect escaped the scene on a bicycle.

Clearly, this particular incident was not a case of right wing political violence. It may actually have been an attack calculated to engender sympathy for the Democrats, or it could have been some sort of more personal grievance.

Indirect Land Use Control

The predominant way for government to regulate land use in the United States today is with zoning laws that mandate that only certain kinds of property uses be allowed in certain places. But, this isn't the only possible tool.

Water As A Limitation on Development

In Grand Junction, Colorado, and in Douglas County, Colorado, the local municipal water agency frequently has more power in land use decisions than zoning officials. The supply of drinking water is modest and getting approval for a tap for a new structure is often more of a barrier to development in these places than what is legal to build in an area.

Access As A Limitation on Development

Another way to control development is by limiting access.

On public lands, Bill Clinton's famous "roadless rule" is designed to work that way (the "roadless rule" was gutted in the Bush Administration, but appears to have been reinstated by a court finding that the revocation was not done using the proper procedures).

Another example is the controversy in Bells Bend, Tennessee over a proposed second downtown called "May Town Center" on what is now agricultural land. Lack of access has protected the land from development so far, and opponents of the project prefer this approach to the open space greenbelt that the developer has suggested.

The current proposal is "to build at least two (and probably three) bridges to haul traffic in and out of currently remote and inconvenient Bells Bend." Planners favored the proposal, stating:

Staff has evaluated May Town Center’s substantial economic impact, its aggressive land conservation plan, and its developers’ commitment to constructing public roads and bridges over the life of the project to manage off-site traffic impacts.


Opponents complain that:

[The] chief planner seems to assume--naively defying all past rezoning realities--that a mere belt of undeveloped green space around May Town Center will insure open space conservation in Bells Bend better than the bridgeless Cumberland itself. Where did he get the notion that greenbelts provide anything but token resistance to greenbacks? Engineered open space would hardly be a match for the avarice folded into high returns on cheap undeveloped land and the developer-friendly tendencies of the Planning Commission.


Sacred Groves And Environmental Protection

These cases also bring to mind a study done by one of my father's environmental science students in another country (IIRC, India). The study, set in a country where the local area was largely undeveloped, compared open space preservation in "sacred groves" protected by locals on religious grounds, with little government involvement in the preservation effort, to public land preserves where harmful uses were subject to government regulation and abuses were subject to criminal sanctions.

The sacred groves ended up being better preserved, from an environmental perspective, than the public land preserves. This happened despite the fact that the sacred grove rules, which were religious in nature, weren't expressly designed to preserve open space, plants and habitats. Some commentators even describe sacred groves as biodiversity hot spots where otherwise extinct species are found.

A quick search finds a study on sacred groves and environmental protection in Ghana and in Southeastern India. Similar situations in Tanzania are the subject of ongoing inquiry.