30 May 2014

Sometimes The Law Is Just Plain Illogical

The life of the law has not been logic; it has been experience... The law embodies the story of a nation's development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.
- Oliver Wendell Holmes, Jr., "The Common Law" (1881) at 1.
We have never addressed the question of whether the inclusion of the words “In God We Trust” on United States currency violates the Constitution or RFRA and write today to clarify the law on this issue. Four other circuit courts have ruled on this question, however, and have found that the statutes at issue do not contravene the Constitution. . . . Gaylor v. United States, 74 F.3d 214, 216 (10th Cir. 1996) (holding that the “statutes establishing the national motto and directing its reproduction on U.S. currency clearly have a secular purpose” and that “the motto’s primary effect is not to advance religion; instead, it is a form of ‘ceremonial deism,’” and, therefore, the statutes do not violate the Establishment Clause).
- Newdow v. Peterson, Case No. 13-4049-cv (2nd Cir. May 28, 2014), Slip Op. at 3.

U.S. coins bear the national motto, "In God We Trust", which was adopted during the Eisenhower Administration in the 1950s, motivated by the dominance and unity of large Christian and Jewish religious denominations, in contrast to the overtly atheist Communist enemies of the United States in the Cold War.  The same burst of religious motivation put Ten Commandments displays in governmental buildings across the nation and slipped "under God" into the previously secular Pledge of Allegiance to the Flag.

Under relevant U.S. Supreme Court dicta, the United States Court of Appeals for the Second Circuit concludes in a short per curiam opinion, using reasoning succinctly summarized in the quote above, that placing the motto on currency does not constitute an establishment of religion in contravention of the First Amendment to the United States Constitution.  This is consistent with past precedents and will probably not even be reviewed further by the U.S. Supreme Court.  In other words, legally, this is probably a correct ruling.

A recent U.S. Supreme Court decision upholding expressly Christian legislative prayer sessions (a move that provoked widespread criticism) makes clear that the highest court in the land remains untroubled by this sort of anti-atheist expression. Town of Greece v. Galloway, ___ U.S. ___, No. 12-696, 2014 WL 1757828 (May 5, 2014) (holding that the town’s practice of “offering a brief, solemn, and respectful prayer to open its monthly meetings” did not violate the Establishment Clause) (a 5-4 decision cited in Newdow v. Peterson, supra at Footnote 4).

This said, the notion that “statutes establishing the national motto and directing its reproduction on U.S. currency clearly have a secular purpose” and that “the motto’s primary effect is not to advance religion; instead, it is a form of ‘ceremonial deism,’” as the 10th Circuit held in the Gaylor case in 1996, is still patently absurd.  All forms of so-called "ceremonial deism" clearly support religion over atheism.  They were enacted for this clearly religious purpose, a fact supported overwhelmingly by the legislative and social histories of the movements that put these laws in place.

There is simply no way to logically reconcile this conclusion with the more general legal standard that governs establishment clause cases:
The First Amendment of the Constitution provides that “Congress shall make no law respecting an establishment of religion.” In Lemon v. Kurtzman, 403 U.S. 602 (1971), the Supreme Court held that, in order to comply with the Establishment Clause: “First, the statute [at issue] must have a secular legislative purpose; second, its principal or primary effect must be one that neither advances nor inhibits religion; finally, the statute must not foster an excessive government entanglement with religion.” Id. at 612-13 (internal citations and quotation marks omitted).
- Newdow v. Peterson, supra, Slip Op. at 4.

The "ceremonial deism" exception to the Establishment Clause is nothing more or less than a blatant and absurd legal fiction in which the courts have chosen to deliberately favor the religious over the non-religious (and other non-monotheistic religious beliefs). It was adopted at a time when so few people shared this religious identification, publicly anyway, that the snub seemed benign and harmless.

Now, there are almost as many people who identify as something other than Christian as there are people who identify as Roman Catholic in the United States, and given current trends, that line could be crossed any day now.  But, the precedents are there and the current U.S. Supreme Court seems disinclined to change its position any time soon.

The Second Circuit in Newdow v. Peterson picks up on U.S. Supreme Court dicta suggesting that it is hard to be "neutral" without slipping into active hostility towards religion.  Newdow v. Peterson, supra, Slip Op. at footnote 2.

But, the ceremonial deism cases themselves, factually, make clear that this concern doesn't hold water.  Removing "In God We Trust" from currency and "under God" from the Pledge of Allegiance would address the Establishment Clause concerns without any hostility towards religion.  Neutrality is easy to achieve in these contexts.

Newdow v. Peterson's ceremonial deism holding is simply a particularly clear case of a bad precedent continuing to make bad and illogical law, despite being crazy on its face.  Unfortunately, this is how the law works.  Precedents talk, logic walks.  Fortunately, in this situation, while the bad precedent is offensive and illogical, it at least doesn't do much material harm to non-monotheistic Americans, who in much of the world would face far more oppression.  Many other terrible U.S. Supreme Court precedents do far more harm.

29 May 2014

Terrain and Spread of Military Technology Were The Keys To The Rise Of Civilization

[A] model with only a few simple parameters was incredibly good at fitting the genuine growth and evolution of complex societies over 3,000 years. More quantitatively about two thirds of the variation spatially in ‘imperial density’ can be accounted for by the spread and emergence of military technology and the ruggedness of the landscape.
- From here, citing Turchin et al., "War, space, and the evolution of Old World complex societies" (September 23, 2013).

My intuition from similar studies of this type is that one or two measures of climate history, probably followed in turn by diffusion of agricultural technology, would add the most to the predictive value and chronological range of the model.
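
For the numerically inclined, here is a minimal sketch of what "accounting for two thirds of the variation" means in studies of this kind: compare a model's predicted "imperial density" to the observed historical value cell by cell, and compute the share of spatial variance explained (R-squared).  The grids below are toy stand-ins I made up, not Turchin's actual data or model:

    # Toy illustration of scoring a model's explanatory power in the
    # R-squared sense.  These numbers are invented stand-ins, not the
    # actual data from Turchin et al. (2013).
    observed  = [0.9, 0.7, 0.1, 0.0, 0.4, 0.8]   # observed "imperial density" per map cell
    predicted = [0.8, 0.6, 0.2, 0.1, 0.5, 0.9]   # a model's prediction for the same cells

    mean_obs = sum(observed) / len(observed)
    ss_total = sum((o - mean_obs) ** 2 for o in observed)
    ss_resid = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    print(f"share of spatial variance explained: {1 - ss_resid / ss_total:.2f}")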

The fact that models like this can come reasonably close to the actual course of human history also supports my general tendency to see the broad course of human history as largely the deterministic result of technological advances and natural conditions, which in turn drive economic forces, which in turn drive cultural and political developments.

This economic determinism school of historical causation is at odds with the view that history is largely driven by the unpredictable decisions of "Great Men" who were the proximate causes of what people did at key moments in history.  I see circumstances as driving decisions far more powerfully than I see people creating the key circumstances (with the possible exception of inventors, although their discoveries, too, are driven far more by circumstances and by what is natural to investigate given what came before them than is frequently acknowledged).

The Case For Law As An Undergraduate Degree

A post at the Legal Watchdog blog discusses at length the possibility of making law school a two year undergraduate program, rather than a three year graduate degree that follows a four year undergraduate degree with no particular prerequisites, and with law school admissions standards that are sometimes lax relative to what it takes to have a prayer of passing the bar exam.

For what it's worth, I would really favor a slightly less radical four year undergraduate degree path to a pre-professional law degree, modeled on an engineering or bachelor of science program rather than a bachelor of arts degree.  This would contain two to three years of pre-professional law classes and another one to two years of the general education classes that almost all current lawyers, regardless of major field of study, take now before earning a bachelor's degree and starting law school.

Rather than restate the Legal Watchdog blog post's points, I'll raise four additional arguments in favor of this conversion that the post doesn't state as decisively as I would like.

The Feminist Argument For Undergraduate Legal Education

Women make up 60% of law school graduates, appropriately so, because there are significantly more women than men who are extremely high performing in the verbal and writing abilities pertinent to the practice of law (Wai 2010).  But only about 17% of equity partners in large law firms are women (Wittenberg-Cox 2014).  The percentage of women declines at each intermediate step in the large law firm career path.  Women have made up about 45% or more of law school graduates for more than thirty years, so the pipeline arguments are exhausted.

The reason for this is really not a mystery.  Men who work at large law firms, marry, have children, and have what it takes to advance on that career ladder continue to devote "Big Job" class time commitments of sixty hours a week or more to their jobs.  Women who work at large law firms, marry, and have children, and who are capable of doing the work necessary to advance on that career ladder, take time off for a number of years while they have young children if they can, and pay a punishing economic price for doing so for the remainder of their careers, both in earnings and in job advancement.

After the ordinary seven years of higher education needed to obtain a law degree, a typical newly admitted attorney is 25 or 26 years old.  Women who have kids before completing graduate school are much more likely not to earn their degrees at all and almost never get onto the large law firm career track.  Further, the cream of the crop of law school graduates who go on to be partners at large law firms are also often expected to spend a year or two as judicial branch law clerks before entering private law firms as associates, a sort of final on the job training process for the best and the brightest that provides insights into future trial practice.

Promotion at a large law firm from associate attorney to "of counsel" or "non-equity partner", the next steps up in the large law firm career ladder, typically takes seven to eight years as an associate and then "senior associate" attorney in that firm, working sixty to eighty hours a week.  This puts a would-be non-equity partner at 32 to 35 years old.

A woman's biological clock makes it optimal for her to have children in her twenties or early thirties, after which fertility rates start to decline (see also Wikipedia) and adverse outcomes like birth defects grow progressively more common.  Throw in the negative effects associated with advanced paternal age and the fact that professional women tend to marry professional men of similar age or a bit older.

It is almost impossible to commit to an associate attorney's job the relentless long hours necessary to stay on partner track at a large law firm if you are pregnant and then give birth to a couple of kids spaced a few years apart.  This is particularly true if you want to breast feed for at least the medically recommended length of time (about twelve months) and do not wish to feel like you are being a terrible mother.

Taking six months to a few years off immediately after being promoted to "of counsel" or "non-equity partner" is likewise not a recipe for keeping your job in a large law firm, no matter what your reasons may be.

This leaves a woman who is otherwise perfect law partner material, and who wants to have children before reaching advanced maternal age, without fertility treatments and other biological clock problems, with a window of three years or less in which to have kids while still securely reaching the penultimate step on the career ladder as "non-equity partner", which is pretty much the lowest perch from which you can return to the firm after an extended leave of absence and have any hope of ever becoming an equity partner in a large law firm.

Indeed, many women with an aptitude for law become paralegals or legal secretaries rather than lawyers, because the earlier start is friendlier to their aspirations and desire to be parents at a reasonably young age while still having time to establish a meaningful career that they can return to without undue penalty once all of their children are ready for preschool.

Making law an undergraduate degree, and disregarding the tradition of judicial clerkships for top law school grads seeking to become partners in private law firms, turns a zero to three year window into a four or five year window, and maybe even a six year window if a woman takes enough AP classes, IB exams, and local college courses while in high school to finish an undergraduate degree in three years rather than four.  (A back-of-the-envelope sketch of this timeline arithmetic follows.)  Indeed, the additional three to five years in this window also make it much more feasible for women who want to have children to make it all of the way to equity partner in a law firm before doing so.  And, holding onto the economic rewards of your career following an interruption in your working life for a few years is much easier for someone who has attained the status of equity partner than it is for anyone in a less senior position at a law firm.
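
Here is that back-of-the-envelope sketch.  The ages and durations are the rough figures used above (college entry at 18, a seven year associate track, and a soft deadline of about age 35 for child bearing), not hard data:

    # Back-of-envelope timeline arithmetic for the child bearing window.
    # All ages and durations are the rough figures from the text above.
    DEADLINE_AGE = 35   # soft biological-clock deadline assumed in the text

    def age_at_nonequity_partner(higher_ed_years, clerkship_years, associate_years=7):
        return 18 + higher_ed_years + clerkship_years + associate_years

    # Status quo: 4-year BA + 3-year JD, a 1-year clerkship, 7 years as an associate.
    status_quo = age_at_nonequity_partner(7, 1)   # age 33 in the best case
    # Proposal: 4-year undergraduate law degree, no clerkship, 7 years as an associate.
    undergrad = age_at_nonequity_partner(4, 0)    # age 29 in the best case

    print(DEADLINE_AGE - status_quo)   # ~2 year window under the status quo
    print(DEADLINE_AGE - undergrad)    # ~6 year window under the proposal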

If the biological clock theory advanced here is correct, it is fair to estimate that a shift of legal education from a graduate degree to an undergraduate degree would roughly double the number of women who become equity partners in large law firms.

The Gender Difference Feminism Case For Allowing Women But Not Men To Take Undergraduate Pre-Professional Law Degree Programs

If you are willing to look at the psychology literature on gender differences, and abandon the credo that education and employment should be entirely gender blind, there is even an argument that undergraduate law degrees should be made available to women, even if they are not available to men.

The argument here is that, first, in a balancing analysis, women face much more demanding biological clocks with respect to child bearing than men do, so compromising undergraduate general education is more justified in their case than it is for men.

Then, consider these data points.  The psychological case that women and men differ overall in peak adult intellectual, cognitive and behavioral aptitudes is controversial and quite weak (Halpern 2012 at 34).  But, the developmental psychology literature firmly establishes that women, on average, reach their peaks at a younger age than men in social maturity (Cohn 1991) and in cognitive skills such as verbal and reasoning skills (Gur et al. 2012), which are critical to the practice of law.  Psychological studies similarly tend to show that women make mature decisions about their careers in life at a younger age (Patton and Creed 2001; Luzzo 1995).  (See also, e.g., Bramen et al. 2011 and Bramen et al. 2012, looking at comparative brain tissue development by age and gender.)

Arguably, the main reason to make law a graduate degree rather than an undergraduate degree, a decision that was largely made before the early 1970s, when more than 95% of law students were men, is that it is best to defer pre-professional legal education until students have reached peak adult levels of social maturity, verbal skills and reasoning skills.  That peak comes later in life for men than it does for women and, for men, doesn't have to be balanced against other compelling reasons to start earlier.

There is also a fair amount of educational psychology research suggesting that single sex education is particularly beneficial to women in discussion oriented classes like many of those taught in law school.

Thus, there is a decent argument from the psychology literature and from work-family balance considerations that it would make sense for women to earn undergraduate law degrees, while men, who are still maturing cognitively and behaviorally during their undergraduate years, would continue to follow the current norm of earning a pre-professional law degree only after earning an undergraduate degree.

The Economic Equity Argument

There is overwhelming empirical evidence that the cost of higher education results in much lower levels of college attendance and completion for poor students than for more affluent students with the same test scores and grades.  The most academically talented poor students are only about as likely to earn a college degree as the least academically talented affluent students.

Cutting three years and $150,000 or more of education costs (in the form of student loan debt for most poor, working class, and middle class law students) off the investment in human capital necessary to become an attorney would dramatically expand access to the profession for students from less affluent families.

Put another way, given a choice between a 26 year old associate attorney applicant with one year of experience and one with four years of experience, almost all employers would find the latter more valuable, and the change would make the lifetime earnings of all attorneys, net of education costs, substantially higher.

On a related point, lower levels of student debt would make it more viable for law school graduates to pursue governmental or public interest law careers out of law school, rather than being driven by debt to pursue the position with the highest possible starting salary.

The Comparative Argument

The United States is exceptional in making a pre-professional law degree a graduate degree rather than an undergraduate degree.  Almost every other country in the world that imposes higher education requirements on becoming a lawyer, in the English common law tradition and the European civil law tradition alike, makes a law degree an undergraduate degree.

This is solid evidence that there would be few, if any, detrimental effects to making a law degree in the United States an undergraduate degree rather than a graduate degree.  There is really nothing radical about making legal education an undergraduate enterprise except for institutional inertia.

A Footnote Related To Legal Education In Less Developed Economies

Even with both a typically four year bachelor's degree and a three year professional degree, as well as a bar exam, as prerequisites, the American economy manages to have enough resources to provide a legal education to almost every law school applicant who is capable of passing a state bar exam and is astute enough to apply to safety schools with sufficiently lenient law school admissions standards.  (Of course, admissions standards at some law schools are even lower if you are politically connected.)

Likewise, very low levels of full time, law degree required employment for recent law school graduates (particularly those with relatively poor academic credentials from less prestigious law schools), unprecedented associate attorney layoffs during the financial crisis that were still continuing at low levels into 2014 (also here, perhaps with actually quite significant layoffs in 2014), and low pay for significant subsets of entry level lawyers (e.g., Massachusetts deputy district attorneys make less per year on average than courthouse janitors, and public defenders there, who on average have more experience, make only slightly more) all tend to support the conclusion that the higher education system is not under-producing new lawyers at the margin in the United States.

In most less developed economies, this is not the case.

High school graduates are rare and college graduates are scarcer still, while many people in these countries are not just functionally illiterate in the official language of the country, but totally illiterate in their native languages.

In these circumstances, allowing people to enter the full fledged practice of law with a year or two of post-secondary training in law may make a great deal of sense.  It is better to have a sufficient cadre of lawyers with some formal legal training to administer a functional legal system than to have a much smaller cadre of lawyers trained to developed country standards whose numbers are completely inadequate to operate a functioning legal system.  The former may produce more mistakes of law than would be optimal, but the latter is effectively a system of lawless anarchy, because it can't handle the demands it needs to serve.

Also, in a less developed economy setting, it is probably more sensible to categorically limit the jurisdiction of courts that conduct Western style legal proceedings to the subset of the total judicial docket that the available supply of lawyers can manage, choosing for that subset the cases that are most critical to have handled by well trained legal professionals (e.g. serious felonies and real estate disputes), while reserving the minor cases of the sort handled in courts of limited jurisdiction in the United States (e.g. misdemeanors and minor debt collection and residential eviction cases) to traditional dispute resolution processes or lay adjudication of some other type.

It may also make sense, if the supply of formally trained lawyers is small, to deploy the lion's share of formally trained lawyers as judges to maximize the accuracy of ultimate decision making based upon the cases presented to them, and to dispense with any formal licensing process for people assisting litigants in this process.

27 May 2014

Marriage and Social Class After the Civil War

A century and a half after the U.S. Civil War, questions concerning how this historical legacy has shaped modern America remain central to an American political discourse that is still starkly split along "Red State, Blue State" lines.

As our nation is in the throes of expanding the definition of marriage to include same sex couples and is growing more open to the notion of polygamy, the Civil War and Reconstruction era offer a dramatic natural experiment in how the institution of marriage in the Southern U.S. responded to the wartime deaths of 18% of its adolescent, young adult and early middle aged white men.

Only in the last few years have really definitive answers to the question emerged.  Confederate widows who supported themselves without remarrying, "Sugar Daddy" widowers, and Southern women who became "cougars" by deferring marriage and then courting younger men, as well as relaxed standards for what constituted a man worth marrying, all contributed to bridging this gap, while more obvious options like interracial marriage and de facto polygamy turned out to play very minor roles.

Meanwhile, our economy is reaching levels of concentration that recall the plantation aristocracy of the South (indeed, it is probably more concentrated now than then), and our politics have taken a deeply partisan tone in which the GOP has become dominated by white Southerners.  So the question is, to what extent are today's political ideologies in the South legacies of the plantation aristocracy's antebellum political influence?

This question, too, can increasingly be answered accurately and with a certain degree of nuance.  Much of the South's backwardness today, economically, culturally and politically, can be attributed to the political staying power of the descendants of plantation elites in the parts of the South where they were most dominant.  Their commitment to plantation farming conducted by ill educated workers has held back Southern economic, political and cultural development.

Hat Tip to a recent post by Tyler Cowen at Marginal Revolution on Civil War reparations and economic winners and losers.  This post was largely prompted by my efforts to check my own facts in a response to that post, and the research involved has significantly modified my understanding of the post-Civil War role of the planter elite in the development of the "New South".  This research also filled in the blanks of unanswered questions I had had about the impact that gender imbalance had on marriage in the Reconstruction era, in a surprising way.  It is really astonishing how resilient marriage institutions were in this era.

Male Serial Monogamy Resolved The Post-War Shortage Of Men

About 18% of white men in the South aged 13 to 43 died in the American Civil War (1861-1865) and Reconstruction, and many more were crippled as a result, for example, with amputated limbs.  Yet, by 1890, there was no excess of never married women in the South.

An open access article from 2010 tells much of the story (pdf version here).*

There were isolated incidents of bigamy.  A few women also married across racial lines.  But, both of these responses were extremely uncommon.  About 7% of women never married at all, but this is barely elevated over the roughly 5% of each gender that never marries in ordinary times, so the widely expected epidemic of spinsterhood didn't occur either.

What did happen?

Southern women married during the war and then stayed widowed.

First and most importantly, women married Confederate soldiers at a frenzied pace during the war, so that they were widows rather than spinsters after the war.  Few of these widows remarried.
Approximately one in five southern-born women aged 40–49 in the South were currently widowed in 1880, compared with just one in nine among northern-born women. [A] county-level map of widowhood in 1880, highlights quite clearly the sectional impact of the war on subsequent widowhood. The prevalence of widows in the South appears to be concentrated around urban areas and along the Mississippi River, suggesting a degree of geographic mobility among widows to areas offering greater access to wage labor and social support networks and higher overall mortality rates in counties adjacent to the river.

Percentage of White Women from 40 to 59 Years Old Currently Widowed, 1880 Census.

Does the large percentage of Southern widows hide more than it reveals?

This key point does call for a second more piercing look at some point in the future, however.

The fact that Civil War widows often did not remarry may conceal the most interesting parts of the story, however.  (In 1880, 20% or more of women aged 40-59 were currently widowed in much of the South, more than 40% in many counties along the Mississippi River, and about a third of Southern women in this age bracket as a whole.)

A large share of these widows would have married during the Civil War and lost their husbands in the process.  A good guess would be that about 5%-6% of widows in this age group in the North and South alike were non-war widows (based on the ratio of the percentage of white men lost to the Civil War in each region).  War widows, in turn, probably made up about half of Northern widows in this age group, about three-quarters of widows in this age group in the South, and more like 80%-90% of widowed women in urban areas along the Mississippi River.
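
A minimal sketch of that back-of-the-envelope arithmetic, using the rounded widowhood prevalences quoted above; the 5.5% baseline for peacetime widowhood in this age group is my guess from the text, not a census figure:

    # What share of currently widowed women aged 40-59 in 1880 were war
    # widows, if roughly 5.5% of women in this age group would have been
    # widowed even without the war?  (The baseline is a guess, not data.)
    BASELINE = 0.055

    for region, widowed in [("North", 1 / 9), ("South", 0.20),
                            ("Mississippi River counties", 0.40)]:
        war_share = (widowed - BASELINE) / widowed
        print(f"{region}: ~{war_share:.0%} of widows were war widows")
    # -> roughly half, three-quarters, and 85%+, matching the guesses above.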

The phenomenal rise in the number of not currently married women participating in the market economy and wage labor in the Reconstruction era and the couple of decades that followed probably represents a high water mark of female labor force participation that was not matched until World War II, and then again in the 1970s with the sexual revolution (see also, e.g., here).  Surely, there was more tolerance of female participation in the market economy and wage labor force in the South then than at any previous point in American history.  Otherwise, these widows would have starved.

But, it seems doubtful that the 20% to 40%+ of the women in these Southern counties who were widowed stayed universally celibate for decades, even though they did not remarry, and it would be interesting to see statistics on child bearing by widows in this time period to corroborate that inference.  Few war brides could have had more than two or three children during the war, and many would have had just one or none.  But, I suspect that many of these women had children after their husbands died in the war.

Perhaps many of these women in the age range of 40 to 59 were celibate by the time they were entering menopause (as many of them were at that point), but most women who were widows in the 1880 census had been widows for at least 15 years, since the days when they were 25 to 44 years old and decidedly fertile.  Then again, life expectancy at birth for men and women combined, for whites in the U.S. as a whole, was just 43.6 years in the 1860 census, before the war, and didn't break 50 for whites until the 1900 census.

Yes, total fertility rates (i.e., roughly speaking, the average number of children to whom a woman gives birth in a lifetime) declined from 1860 to 1870 to 1880.  But, this was part of an overall secular trend, even if the decline was somewhat steeper from 1860 to 1870 than in the decades before or after.  Even at the height of the Baby Boom, fertility did not return even to the levels of 1900.  While the end of the Baby Boom and the "sexual revolution" are often attributed to the advent of oral contraceptives and other modern contraceptive devices, total fertility rates in 1970 were very similar to what they had been in 1930, and by 1990 total fertility rates were back on the secular trendline.  Total fertility fell by 4.59 births per woman per lifetime from 1800 to 1930, gradually each decade, without the assistance of modern contraceptives.  (Similarly, the germ theory of disease started to dramatically reduce mortality rates from infectious diseases long before antibiotics or vaccines were widely available.)

The total fertility rate in the United States for white women in 1800 was 7.04, about the norm for third world women in Afghanistan and the poorer parts of Africa today.  It declined every decade thereafter, without fail, through 1940 (when it reached 2.22), until it increased with the Baby Boom in 1950 (2.98) and again in 1960 (3.54), and then declined steadily again to an all time low in 1980 (1.77) before recovering in 1990 and 2000 (to 2.05).  Generally speaking, black women followed the same trend through 1990 (except for a slight upward blip in 1870, after emancipation), but in 2000 total fertility declined for black women while increasing for white women.  Since 2000, the total fertility rate for white women has surpassed that of black women for the first time in U.S. history for which reliable government statistics are available.
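
For reference, a minimal tabulation of the white total fertility figures quoted in the last two paragraphs; the 1930 value is implied by the stated 4.59 birth decline from 1800, not separately sourced:

    # White U.S. total fertility rates as quoted above (births per woman).
    white_tfr = {1800: 7.04, 1940: 2.22, 1950: 2.98, 1960: 3.54,
                 1980: 1.77, 2000: 2.05}   # recovering through 1990 and 2000

    implied_1930 = 7.04 - 4.59             # implied by the stated 4.59 decline
    print(f"implied 1930 rate: {implied_1930:.2f}")                    # 2.45
    print(f"average decline, 1800-1930: {4.59 / 13:.2f} per decade")   # 0.35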

The authors of the current study dismiss the possibility that unacknowledged interracial relationships resolved the celibacy issue in the following well-sourced passage (citations omitted):
Southerner Anna Bragg related to her husband news of a widower with three children remarrying and also described the wedding of Captain Paine to Miss Mary Frincks. “Some say he has a wife and child living,” Anna Bragg noted. A Union chaplain turned down the request of a woman who “had the hardihood to ask me to marry her to a man who confesses that he has a wife in Reading Pa. and who says his wife has had a ‘nigger baby’ since he came to the army,”

After the war, white southerners responded to interracial marriage with violence. In 1870 Frances Harper, who had been an abolitionist, described a conversation with a black man whose son had “married a white woman, or girl, and was shot down, and there was, as I understand, no investigation by the jury; and a number of cases have occurred of murders, for which the punishment has been very lax, or not at all … ." 
Widespread fears that emancipation would increase the incidence of interracial sexual encounters led states to pass more laws prohibiting interracial marriage “during the Civil War and Reconstruction than in any comparably short period.” The deaths of so many young men during the war probably contributed to such fears. John Blassingame, for example, has argued that the death of white men in the war led to a postwar increase in sexual contacts between white women and black men in New Orleans. The number of interracial unions no doubt remained quite small. Although instances of interracial marriage and cohabitation occurred during Reconstruction in numbers large enough to suggest some initial level of toleration from white neighbors, the vast majority of white women—confronted with the possibility of violence, rigid enforcement of miscegenation laws, and the vast social distance between themselves and black men—married white men.
A culture of covert interracial and non-marital sexual relationships with white widows that did not extend to vaginal intercourse during fertile periods, particularly in Mississippi River towns, seems very likely.

A substantial number of those widows may not have remarried because there were social welfare benefits to being a war widow that could not be secured in the event of a remarriage (as well as greater levels of control over a late husband's property and greater tolerance of labor force participation by widows), rather like the tendency of divorced alimony recipients with significant others not to remarry, so as not to jeopardize that means of economic support.  A discussion of this for Union widows is found here.  Widows and orphans are more sympathetic beneficiaries of charity and government largess than married women and their husbands' stepchildren.

Sugar Daddies, Cougars and Lower Standards

Second, while marriage prospects for women who were widowed were poor, marriage prospects for widowers and poor men in that era were very good relative to the situation before the war.

Single women much more frequently married widowers (who, unlike widows, rarely remained single), or deferred marriage and then became "cougars" pursuing men younger than themselves (taking advantage of the fact that, as a result of population growth, younger cohorts were larger than older ones).

The standards that made a man worth marrying had been relaxed out of necessity.  So, young Southern women married carpetbaggers from the North despite the negative social association of doing so, disabled war veterans who would have had trouble getting married in other eras, poor landless men who also would have previously had poor marriage prospects, and older men who might otherwise have never married at all because of the social failings that kept them single when they were younger.
Southern women aged 20–24 in 1870 were more likely to be married to a younger man (the percentage who did so increased from 6.4 percent in 1860 to 8.6 percent in 1870) or a much older man (up from 6.4 to 8.0 percent). Southern women aged 20–24 were also more likely to be married to a man born in a northern census region (up from 4.8 percent in 1860 to 5.6 percent in 1870), a foreign-born man (up from 2.4 to 3.1 percent), or a man with little or no real estate wealth (up from 58.4 to 67.2 percent).
Some Southern women in occupied areas married Union soldiers and returned to the North with their new husbands. Some Southern women moved West where there was a shortage of women.

Some of these coping patterns weren't new.  Even before the war, there was a 23%-25% surplus of white women aged 20-24 in the South relative to the white men aged 25-29 who were their typical marriage partners, due in part to a growing population that made younger cohorts larger than older ones.

Subsequent marriages of widowers, together with low remarriage rates for widows, rather than polygamy, were the long standing mechanism for balancing a shortage of never married men relative to never married women in the South.  The American South's iconic figure of the "Sugar Daddy", a wealthy older man who takes up with a much younger woman, was a necessary linchpin of social stability for decades.

One point about sources for social history in this article is also notable.  The nation, Northern and Southern alike, kept copious diaries and journals during the Civil War itself, which was seen as an obviously important historic moment.  But, when the war ended in horrible losses and postwar hardship, Southern diary and journal keeping declined dramatically.  The fad ended, and with it ended intimate access to a great deal of the social history of that period.

Social Class After the Civil War

Some accounts of the Reconstruction era and the "New South" focus on the extent to which plantation owners lost their land to Northern bankers.  This certainly happened in many places, and it did displace planter elites in places where this aristocracy was already relatively marginal by Southern standards.  But, more comprehensive accounts reveal that, in places where the planter elite was powerful in the first place, this elite was able to maintain de facto economic and political dominance through its land ownership despite the fact that it no longer owned slaves.**

Where these elites were strongest and retained control, the effects are visible into the 1960s.  These areas:

* more swiftly returned to plantation style cash crop farming while discouraging subsistence farming and other kinds of economic development,
* replaced slave labor with a "gang labor" system,
* successfully discouraged and retarded the implementation of widespread and advanced public education, leading to higher illiteracy levels but also less dissent from their agenda,
* retaliated against black elected officials at much higher rates than the 10% rate for the Reconstruction era South as a whole,
* dominated the constitutional conventions that drew up state constitutions during and after Reconstruction,
* maintained relatively high land prices,
* offered blacks protection from lynchings (which were less common in these areas), improved housing and improved medical care, in exchange for greater productivity and refraining from political opposition to them,
* had higher proportions of black tenant farmers, and
* were much poorer than areas where planter elites had been less dominant.

Planter elite policies of eschewing widespread public education and non-plantation economic development meant that areas with above average relative planter elite wealth in 1860 had, for each two standard deviation increase in relative wealth, 7% lower productivity at the turn of the century and 23% lower productivity by 1950.

Those areas where the planter elite was most dominant before the war clung to that less productive business model, while areas where the planter elite was weaker before the war managed to rebuild themselves into the more productive business model of the "New South".

Key References

* J. David Hacker, Libra Hilde & James Holland Jones, "The Effect of the Civil War on Southern Marriage Patterns," 76(1) Journal of Southern History 39-70 (Feb. 2010).

** Philipp Ager, "The Persistence of De Facto Power: Elites and Economic Development in the U.S. South, 1840-1960" (November 2012).  Another important economic historian's confirmation of this hypothesis is found in Gavin Wright's "Old South, New South" (1986), which is cited by Ager; the hypothesis had been advanced by Wright as early as 1970 in articles also cited by Ager.

See Also These Posts At This Blog:

* Economic Inequality In the 19th Century South (October 24, 2013)
* Modern India Compared To The New South Of The 19th Century (October 16, 2013)
* The Price Of Freedom (February 28, 2013)
* Impressions From Albion's Seed (October 6, 2012)
* Lobbyists for Liberty (November 8, 2011)

16 May 2014

Heroic Cat Saves Kid From Evil Dog

The linked video shows a heroic cat saving her family's four year old kid from a vicious puppy that nearly killed the child.  This serves to exemplify the universally known truth that cats are good and dogs are for soup.

Quote of the Day

Pollen is essentially "plant sperm".  Therefore, this makes hay fever an STD. Since no one voluntarily takes in pollen, I've concluded, we are all being raped by trees.
- From here.

05 May 2014

Against Freedom of Contract

American law affords people great freedom to determine the nature of their legal liabilities to each other by entering into contracts.  Indeed, it very nearly approaches an ideal that is not just libertarian, but anarchist, despite the fact that neither of these political ideologies has wide political support in the general public.

Contracts can alter the extent to which liability is imposed as a consequence of negligent acts that cause injuries, change the measure of damages that will be imposed upon the parties in the event of a breach of a legal duty, and modify the statutes of limitations for enforcing rights.  They can determine whether or not a party will be entitled to attorneys' fees in the event of a dispute that is litigated.  They can eliminate a right to punitive damages, despite the fact that such damages can only be imposed in cases of the kind of misconduct for which liability itself cannot be waived as a matter of public policy.  Promissory notes routinely impose higher interest rates for the time value of money when the notes are in default than when they are not, and very little law limits the size of the penalty rate imposed after a default.  The parties to a contract can also agree to a great variety of matters concerning the manner in which disputes between the parties will be resolved - waiving a right to a civil jury trial, waiving access to the federal courts, waiving a right to pursue a claim as part of a class in a class action, authorizing someone not involved in the underlying transaction to bring suit based upon it, or even waiving access to the court system entirely in favor of only minimally regulated arbitration systems.

General Mills would like to impose an arbitration agreement, in the event of a lawsuit arising from a purchase of Cheerios or Chex, on anyone who likes its Facebook page.  Home inspectors routinely try to insist that any lawsuit against them for harm suffered as a result of their failure to do their job properly is limited to a refund of their fee, and that even that remedy can be secured only in arbitration.

Freedom of contract is not absolute.  Statutes and public policy considerations impose certain limitations.  They generally prohibit waivers of liability for intentional misconduct, willful and wanton misconduct, and gross negligence.  They prohibit liquidated damages clauses in circumstances when actual damages are easily determined, or when the penalty is grossly disproportionate to a difficult to quantify harm.  Contracts must afford some means of resort to a third party for dispute resolution or contract enforcement that meets certain minimum standards.  Interest rates cannot exceed rates defined as constituting usury.  Consumer defendants cannot be bound to respond to debt collection lawsuits in geographically distant venues.  Outside highly regulated sports and medical contexts, one cannot consent not to sue someone for intentionally causing one physical harm, for example, in the context of a duel.  Contractual limits on the grounds upon which a married couple may obtain a divorce are generally void, as are agreements made in advance concerning post-separation or post-divorce parental responsibilities or child support payments, although there are only modest limitations on contractual agreements regarding property division and maintenance upon a divorce.

Closely akin to the issue of freedom of contract is the impact of disclosure on legal liability.  Products liability law is unduly focused on failure to warn, rather than on the merits of whether a product is dangerous or defective.  Medical malpractice litigation tends to focus on whether a patient was told that something really bad could happen even in the absence of negligence, rather than on whether the physician took appropriate steps to reduce the likelihood that those bad outcomes would actually occur. Securities fraud law permits firms to avoid liability by formally warning investors of risks that all of the other conduct of the sellers pushes buyers to ignore.

As an attorney, it is my stock in trade to draft contracts that do all of these things, and to litigate in light of these terms when they exist.

But, I am deeply skeptical of the proposition that freedom of contract with regard to the nature and extent of tort liability, or with regard to dispute resolution details and terms, does more good than harm.

The firms that are sophisticated enough to systemically enter into contracts that minimize their legal liability are often the very same firms that would be in the best position to take the necessary care to prevent negligent harms from occurring in the first place and to refrain from taking actions that would breach their contracts.

Provisions that inflate remedies for contractual defaults, like late fees and high default interest rates, rarely influence how much is actually paid by the defaulting party, who often can't pay the principal amount of the debt and non-default interest, let alone the late fees and default interest amounts that are owed.  Their usual practical effect is instead to unfairly prefer those contractual debts in bankruptcy vis-a-vis third party creditors who did not agree to that contract and who hold debts of the same principal and non-default interest amount.

In the overwhelming majority of cases in which there is an arbitration clause, the clause was included to discourage the non-drafting party from asserting that party's substantive rights in the event of a breach of contract or tort, or to otherwise provide that party an inferior forum in which to obtain a fair remedy for wrongdoing by the drafting party, rather than out of any legitimate concerns regarding privacy, litigation costs, delay, or a potentially unfair public court forum.

It is not at all obvious to me that our economy would be less healthy, or less efficient, if all contractual provisions regarding tort liability, damages in the event of breach of contract or tort, or the dispute resolution process were per se void as a matter of public policy.

An immense amount of deadweight loss in transaction costs in our economy is devoted to paying people like me to game the system, money that would be better spent on funding insurance purchases and reserves for contract liabilities when torts and breaches of contract inevitably happen.  The notion that meaningful express consent to contractual terms is secured via shrink wrap agreements, liability waivers, terms of service, and the other pervasive contracts of adhesion that fill our lives is dubious at best and often outright absurd.

For example, even if it makes sense to hold swimming instructors liable only for willful and wanton misconduct, gross negligence or intentional conduct, rather than for mere negligence, there is no reason at all to believe that doing so on a transaction by transaction basis, with non-negotiable liability waiver contracts signed by parents on behalf of their children, is a more efficient way to address this issue than a generally applicable statute concerning swimming instructor liability or a common law rule applicable to that situation.  Hundreds of thousands of dollars of liability for a personal injury sustained by a child learning to swim in connection with that student's instruction should not primarily depend upon whether the administrative employee in the front office remembered to have that student's parent sign a form.  The amount of legal and administrative expense that goes into preparing, executing, and maintaining records of those waivers of liability for negligence is not insubstantial and is wildly inefficient.

Of course, these terms are also found in vigorously negotiated, individualized contracts involving sophisticated commercial firms that are bargaining at arms length.  There is often no question in those cases that there has been the kind of meaningful and knowing consent to these terms that the legal theory of contract law contemplates when it justifies its freedom of contract principles.  But, even in these circumstances, the desirability of this freedom is questionable.

The transaction costs involved in this part of the negotiations, relative to the stakes involved in the transaction, are often much higher than the transaction costs involved in consumer contracts of adhesion.  The negotiation process over these terms inevitably prevents a not insignificant percentage of large scale, economically valuable deals that would otherwise have been entered into from being concluded.  The way that the issues addressed by these provisions are resolved, when contracts contain provisions that differ greatly from the default rules of law, is often unfair and creates systemic incentives for misconduct by the party whom the negotiated contract tends to favor (the notion that parties to a contract are economic equals is almost never true).  And, there is virtually no meaningful empirical evidence to show that business would not be capable of proceeding efficiently and productively under a decent set of universally applicable default rules, without the freedom to contract around them.

Of course, to the extent that we have bad default rules of law, an inability of private parties to negotiate around them encourages legislators to fix the problems.  That benefits not just those who would otherwise have put the new rules into their contracts, but also those who weren't savvy enough to draft contracts with better rules regarding how disputes over breaches of contract are resolved, both in terms of process and substance.

Of course, given the race to the bottom federalism considerations involved, such policies would require federal intervention under the commerce clause power and/or the bankruptcy power of Congress to be viable, so as not to create intense choice of law problems.

What would the rules look like?

* The prevailing party in disputes involving express contracts would have a right to recover their attorneys' fees and costs (something that is already a default rule as a matter of law in the case of leases in Colorado).

* Default interest rates in excess of the non-default rate would be prohibited outright as a penalty against public policy.  An intermediate position would be to subordinate those debts in bankruptcy to all other debts, but that would still disadvantage third party creditors to the extent of pre-petition payments of default interest that reduce the size of the pie.  Another intermediate position would be to impose a uniform premium in addition to the contract rate in the event of a default, for example, four percentage points per annum or the prime rate.  Thus, the default interest rate on a contract with no stated interest rate would be 4%, while a contract with a 5% stated interest rate would have a default interest rate of 9% (see the sketch after this list).  If all creditors got the same default interest rate premium relative to the non-default status quo, in and out of bankruptcy, the prejudice to third party creditors would be eliminated.

* Late fees might be capped at a one time fee of 5% of the payment then due and would not apply to accelerated balances until their non-default due date.

* A variety of boilerplate terms might be implied as a matter of law into certain kinds of written agreements unless otherwise provided, e.g. regarding contract interpretation, definitions, etc.  Colorado does this already, for example, in the case of powers of attorney and other grants of fiduciary powers.

* The right to a civil jury might be eliminated in most breach of contract claims based upon written contracts that do not involve allegations of fraud.  (There is already rarely a right to a trial by jury for fact and credibility intensive rescission claims.)  Common law already makes contract interpretation a matter of law in most cases, and breach is generally well defined in such cases, so the main shift would be in giving judges more authority to determine contract damages.  Contract claims already make up only about 25% of civil jury trials, despite the fact that written contract claims vastly outnumber personal injury tort claims on court dockets.

* Statutory law would draw clearer lines regarding the measure of damages in a variety of particular industry contexts such as defective software, inaccurate home inspections, etc.

* The circumstances under which arbitration clauses would be permitted would be greatly narrowed, the conduct that could submit someone to an arbitration agreement would be greatly formalized along the lines of recent uniform legislation on marital agreements, and the occupational activity of arbitrating disputes would be much more heavily regulated to ensure neutrality and fairness in the process.

* Waivers of liability for negligence would be void as against public policy, but a set of statutes would outline circumstances where there would not be liability for negligence (e.g. volunteer emergency assistance and inherently dangerous activities).

* Punitive damages waivers would be prohibited.  But, punitive damages and statutory damages ought to be subordinate to all other claims in a bankruptcy, or in the event of a dispute between judgment creditors over the same income or assets outside of bankruptcy.

* In some cases, like medical malpractice, the negligence based tort process could be replaced by a no fault regime of strict liability for bad outcomes, with strictly compensatory damages and mandatory insurance to cover that liability, rather than a fault based regime that covers non-economic damages, with exceptions for willful and wanton, reckless or intentional misconduct, and with automatic referrals for disciplinary action on a license in cases of gross negligence.

* Contractual terms giving rise to a default in the absence of a breach of the substantive obligations of the parties to perform the contract ought to be disfavored.

* Failure to warn liability ought to be reformed, both to make unreasonable or unlikely to be read warnings ineffective, particularly if rebutted by other communications or advertising, and to make warnings of obvious dangers unnecessary.

* Some minimum substantive securities regulation standards ought to be imposed, so that certain kinds of offerings are prohibited even if investors are warned about the risks in question.  Alternatively, a doctrine that subordinates formal written disclosures to other communications with investors might apply.

* In part to discourage venue shopping, and in part out of the federalism notion of subsidiarity, I would favor the elimination of federal court jurisdiction in all ordinary diversity cases involving U.S. citizens (currently allowed when the amount in controversy exceeds $75,000 and the parties are diverse in state citizenship) and in all federal question cases involving private parties who are U.S. citizens (as citizenship is defined for diversity of citizenship purposes) (currently allowed in all cases with a federal question).  Thus, almost all employment litigation involving private parties (now federal question litigation), almost all contract disputes involving private parties, and almost all personal injury cases involving private parties would be limited to the state courts.  Cases with international diversity of citizenship (e.g. between a non-U.S. company and a U.S. company), special diversity of citizenship cases (e.g. multi-state class actions and interstate interpleader cases), and special federal question cases that don't come under the general federal question statute (e.g. intellectual property and civil rights cases), as well as cases involving the U.S. government as a party, would remain in federal court.
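
As promised in the default interest bullet above, a minimal sketch of the uniform default rate premium idea, using the flat four percentage point example from that bullet:

    # Uniform default-interest premium: every contract's default rate is
    # its stated rate plus the same flat premium (4 points in the example).
    DEFAULT_PREMIUM = 0.04   # four percentage points per annum

    def default_rate(contract_rate):
        return contract_rate + DEFAULT_PREMIUM

    print(f"{default_rate(0.00):.0%}")   # 4% on a contract with no stated interest
    print(f"{default_rate(0.05):.0%}")   # 9% on a contract with a 5% stated rate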

Against Timid Politics

One of the defining aspects of President Obama's campaign in 2008 was how inspiring his speeches were.  The same could be said for many of our current incumbents in office.  As we head into the 2014 primary season, I don't feel that anymore, from anyone.

The fiery rhetoric seems reduced to weary croaks.  Everyone, the President, members of Congress, Senators, candidates and incumbents alike, just sounds so tired.  They are tired of their own rhetoric.  They are tired of stalemate.  They have no new ideas.  And, timid politics only favor Republicans.  Democrats are the party of hope.  Republicans are the party that believes government can't do anything right.  The gridlock and gross irresponsibility of Republicans playing chicken with the national debt that they created only serves to help them and hurt Democrats in the coming elections.  People who think Congress is screwing up want to throw the bums out, even when many of those bums are crucial to setting things right.

Often, alarming developments abroad, like the civil war in Ukraine, the war-crime-filled civil war in Syria, and the kidnapping of hundreds of school girls by Boko Haram in Northern Nigeria, would galvanize the country into action.  Instead, our leaders, and the leaders of developed countries in Europe and Asia, are confounded.  We want to avoid World War III, which an open conventional war involving both the United States and Russia could easily become.  On the other hand, we have let Ukraine, a country whose integrity and sovereignty we were treaty-bound to protect, be dismembered with scarcely a peep; we have taken the mildest of possible responses to the use of chemical weapons and the bombing of civilians with aircraft in Syria, after taking a much bolder stance in Libya that led Syrian rebels on; and while we have a variety of low-profile military and intelligence involvements across war-torn parts of Africa, we have kept them off the media radar screen when we could have taken decisive action.

Ukraine and Syria may put the U.S. at risk of World War III if we play our hand too strongly.  We also have to be careful what we wish for.  While sovereignty is a fine principle in Ukraine, a rump Ukraine, shorn of Crimea and some Eastern Ukrainian territory, would be a more politically stable regime that is more pro-Western in the long term.  While the Syrian regime is led by ruthless war criminals, a rebel victory might create a new Islamic fundamentalist state and lead to mass ethnic cleansing and segregation like that seen in Iraq.

But, deployment of military force in aid of Northern Nigeria is another story: our military opponent would have no international patron with real military clout, and a victory there would clearly lead to an outcome that the U.S. would prefer.  Intervention in that conflict would show the minorities in the Democratic political base that the U.S. gives a damn about people whose skin is not white.  Conservatives would be pleased to see the U.S. intervene to help Christians, and also moderate Muslims, who are being slaughtered and raped and kidnapped, resist fundamentalist Muslims who want to create new theocracies, and who supply a villain as good as any that Hollywood's central casting could provide.  Nigeria is one of the few sources of oil, natural gas, and uranium, all critical to our energy future, that is controlled by Western-aligned, democratically elected politicians, and it is the most populous country in sub-Saharan Africa.  It is in our interest, both economically and from the perspective of our national values, to back them militarily.  And, after more than a dozen years fighting counterinsurgency actions in Iraq and Afghanistan, we have never been better positioned to fight another such action against less militarily capable forces in Northern Nigeria, and we have never had combat-tested military leaders more able to distinguish between fundamentalist Muslim foes and moderate Muslim allies.  To the extent that Boko Haram can be linked to al-Qaeda, the intervention could even be framed as part of the ongoing "long war" on terrorism authorized by the Authorization for Use of Military Force (AUMF), originally directed at Afghanistan.  Of course, such bold military action would also draw attention away from stale political sideshows and bring front and center a conflict that patriotic, pro-military, interventionist, religious, and morally driven foreign policy thinkers in conservative political circles would be hard pressed to oppose very seriously.  A nation at war always strengthens an incumbent President's hand.  A focus on this conflict would also discourage American politicians from rising to the bait of saber rattling in Asia, something that can only end badly.

In part, it is this failure to take bold action elsewhere that has allowed Republicans to keep beating the dead horse of decisions made a level or two below the pay grades of the President and Secretary of State Hillary Clinton, as part of a military and diplomatic campaign in Libya that was actually a foreign policy success, and to let that controversy echo through the media vacuum, together with another controversy over a low-level decision about allegedly politically motivated audits of conservative non-profits that were mostly crossing the line of what the law permitted them to do.  The President, the courts, and Congress continue to wrangle over ongoing efforts to stay the course on a handful of detainees at Guantanamo Bay, Cuba, many of whom were low-level foot soldiers (if guilty of anything at all prior to their detentions), and over a handful of military trials that have proved to be far from the decisive and swift tools for meting out justice to terrorists that their proponents claimed, despite the notion that military trials are a forum that ought to give the President more power, rather than less.

President Obama's unwillingness to seriously change course on the misguided military, intelligence, and anti-terrorism policies of George W. Bush that he campaigned against is puzzling.

The President has taken a very deliberate pace in addressing a sea change in public opinion on the decriminalization of marijuana and the war on drugs.  But with new changes in the sentencing guidelines, crack cocaine sentencing reforms, a set of new, more lenient (although still muddled) policies for U.S. attorneys in the drug war, and rumors that a major new batch of pardons for excessively sentenced drug offenders is being considered, this may simply be a matter of political timing: saving a hot-button issue for after the midterm elections, at moments in the two-year Congressional election cycle when the political fallout is slightest.

Some of the dearth of political ideas is a product of political deadlock in Washington.  But wouldn't a vibrant, common-sense agenda, one that could advance if only the people denied Republicans their majority in the House of Representatives and increased the Democratic majority in the U.S. Senate by a few seats, mobilize the public to vote along partisan lines to achieve that agenda, rather than based on dissatisfaction with the existing deadlock, or on personalities rather than parties?

Why can't Democrats, led by President Obama, unite around their version of a Contract with America and turn the 2014 elections into a national referendum?  Their policies are profoundly more in touch with the views of the public, as revealed in opinion polls, than those of the GOP.  Even if the push only led to break-even results or merely mitigated the usual midterm setback for an incumbent's party, the effort would reveal how out of touch the Republican party is with the beliefs of ordinary Americans, and would win the political identity war with the demographic groups that are growing, while the elderly, white Southerners who are at the core of the GOP are diluted.