History and prehistory, from the Neolithic Revolution roughly 10,000 years ago through the Industrial Revolution that gathered steam about 200 years ago and ran its course in developed nations sometime in the later 20th century, have a common theme. As larger and larger shares of the population have been freed from producing food, it has been possible for those people to develop technology, larger scale social organization, and, more generally, more non-food goods for people to consume.
In the developing world, this trend hasn't yet run its course. Hunter-gatherer societies are virtually extinct, but for a significant minority of the world's population, pre-industrial agriculture or nomadic pastoralism remains a way of life. Mechanized farming with modern biotechnological enhancements that greatly increase productivity per farmer is the norm in the developed world, however, and is in the process of replacing traditional food production methods everywhere else.
In the developed world, less than two percent of the population continues to be employed in agriculture and other forms of food production like fishing. Even including other natural resource production sectors like forestry and mining, the percentage of people employed in these primary resource exploiting industries is under four or five percent.
Efficiency gains continue, but they have increasingly little relevance to the structure of our society, which is overwhelmingly non-agricultural now and will remain so. We have reached the point where all of the food the world needs can be produced by a small percentage of the world's population. Not everyone in the world has access to abundant food supplies, but this isn't because we aren't capable of making enough food to feed everyone. We make enough food to feed everyone, even if we don't always distribute it in a way that actually does feed everyone adequately.
This is a relatively recent development in the big picture of history. In Colonial era America, two hundred to three hundred years ago, something like ninety percent of the population was engaged in food production. The decline in the percentage of the population engaged in agriculture over this time period has been a steady one.
In the United States, from the beginning of the Industrial Revolution in the 1700s until sometime around the 1970s, a growing share of the population not engaged in direct food production was engaged in the manufacturing and construction sectors, making things.
Japan has been the cardinal illustration of an economy that followed the logic of the Industrial Revolution to its extreme, recognizing that natural resource inputs have become such a small share of the total manufacturing process that one can have a prosperous economy despite having to import almost all of the natural resources needed for manufacturing and having to export goods in order to find demand for the goods produced.
But, sometime around the 1970s, the share of the population engaged in manufacturing and construction started to shrink. The reasons for this change have been muddy. On one hand, there has been a dramatic improvement in efficiency due to automating technologies and other technological developments (e.g. the rapidly shrinking amount of physical resources that go into a computer). On the other hand, some of the manufacturing sector's decline has been due to the offshoring of large parts of the manufacturing sector.
Globalization has degraded the usefulness of conventional, nationally collected economic statistics in understanding what is going on in the economy. The United States economy could not exist in its current form without natural resources and work forces abroad to produce the goods it imports, and without export markets to which it can sell its own goods. For example, about half of our national imports consist of fossil fuel resources.
Still, despite the complication created by the growing scale of modern economies as a result of globalization and international trade, the trend towards more productive manufacturing that we have seen in the United States seems sure to spread globally, just as industrial era agriculture has.
Our abundance of goods may, sometime not so far in the future, reach a point where an increasingly small percentage of the world population can make all the goods that the people of the world need. As was the case with food, not everyone in the world may actually have all the goods that they need. But, this may come to be more of a function of fairness in distribution than it is of an inability to produce everything that is needed.
Traditional economic indicators don't contemplate, and hence do not capture very distinctly, the idea that a person, let alone a whole population, can have enough goods. While microeconomists are comfortable with the idea of decreasing marginal utility for particular goods, macroeconomists, by and large, operate on the implicit assumption that more is always better. Welfare economists note that the rich need additional goods less intensely than the poor (the intellectual basis for progressive taxation), but have been reluctant to quantify the idea.
The result has been that while sociologists, economists and politicians are aware that the developed world has entered a "post-industrial" era, there is little consensus about what this means. The uncertainty about where we are headed has caused us to view our post-industrial prospects with considerably more skepticism about their desirability than we do our post-agricultural prospects.
In other words, if we have enough food and we have enough stuff, what should those of us who aren't necessary to producing food and other stuff be doing?
For the most part, we are so hung up on the idea that the purpose of the economy is to make stuff that the question is difficult even to comprehend. Aphorisms like "you can't be too rich" come to mind.
There are multiple answers to the question. We could purchase leisure time in lieu of goods. We could purchase services in lieu of goods (and perhaps services should be broken into subtypes with different trend lines, with food service, business services and movie production, for example, falling into different categories). We could opt for economic security or greater sustainability.
There is nothing fundamentally wrong with letting politics and market decisions help us figure out what precisely we want to do with the prosperity that comes from increased productivity. But, to the extent that our prosperity comes from improved technology rather than exploitation of people outside the system, and at the global level this is surely true, the trend is very likely a good one and not one to be instinctively feared.
Progress is more than a simple moral judgment: economies tend to develop from primitive to more advanced. The most "advanced" economy may not be the best for every time and place, but when it is available, people overwhelmingly choose it over the alternatives.
The trick in making good political and economic decisions going forward is to have some relatively nuanced sense of where we are going. Knowing you are in a post-industrial society is all well and good, just as it is all well and good to know you are in a post-hunter-gatherer society. But, those descriptions don't provide much insight into the basic logic of the direction in which society is heading.
What will we do when 95% of the population is engaged neither in food production nor manufacturing and construction? What will we do when fewer than 10% of the population is engaged in making or selling goods?
Are we making our society vulnerable to collapse and/or a lack of future innovation when an increasingly small percentage of the population is involved in making goods, so that most people lack the knowledge needed to do so, and when no one person or small, geographically local group of people knows how to make the goods that we need? How many people in the United States could build a television from scratch?
28 February 2010
26 February 2010
My Grandchildren's World Part I: Climate Change
What will the world be like in the lives of my children and grandchildren? In other words, what will the world be like in the time frame of 2010 to 2140, more or less. These are long term predictions. Some of that future can't be predicted. But, we know enough to make some reasonable guesses.
Climate
Global Warming Will Happen
We are in the midst of a human caused period of rapid global warming. It appears completely unrealistic politically for there to be effective global political action sufficient to stop or reverse this trend in the time frame that scientists say is available.
China's economy is too wedded to coal and growing too fast to shift to cleaner fuels in the several decades we have for really effective action.
China, India and many other places in the developing world are going to find the urge to dramatically increase their consumption of motor vehicles to be irresistible. More cars will mean more global warming inducing air pollution, even if developing world cars are more fuel efficient, more intensely used, less polluting and less numerous per capita than those of the United States, Australia, New Zealand and Europe. Japan's level of car use is a more plausible model. But that still means far more cars being used in the world and with them more air pollution.
Deforestation is another trend that is unlikely to reverse itself in the next few decades sufficiently to crimp global warming. In the places where it is happening, the need of burgeoning local populations for agricultural income and mineral resources will not be held back indefinitely by any altruistic motives.
The developed world will make a concerted effort to reduce its emissions. Deforestation will be slowed. The developing world will have an industrial revolution that is more environmentally sound than the original. Most importantly, "peak oil" will produce dramatic changes in how our finite supply of fossil fuel resources is used, which will lead to widespread adoption of new technologies and the reorganization of the economic sectors most at fault. There will be steps taken that actively counter the global warming trend. And, the ecology of the world will react to buffer the changes we are creating as natural global cooling factors in the biosphere kick in.
So, the problem will ultimately be addressed. Earth is not on track to become the next Venus. But we simply won't be able to change soon enough to avoid significant irreversible further change, and probably not soon enough to avoid crossing a global climate tipping point of some kind. It will be too little, too late to arrest further human contributions to global warming in the next thirty to forty years. My optimistic prediction is that humanity will cease contributing to global warming, with a big boost from peak oil effects and advanced transportation and non-fossil fuel energy technology, sometime after my grandchildren reach adulthood, in the range of 2050 to 2080.
Selective Impacts From Global Warming
What will global warming mean to my children and grandchildren in their lives?
1. The sea level will rise significantly. In real life terms some of the implications of this sea level rise will include the following:
Much of coastal Louisiana will fall below sea level. The most historic parts of New Orleans will be preserved, at great expense, as a Venice or Netherlands on the Gulf of Mexico, with epic engineering efforts. Its connection to the continental United States will be as tenuous as that of Key West to Florida today.
Much of the Everglades and Southern Florida will fall below sea level. Miami and other major coastal cities will be preserved only with major engineering efforts that preserve a thin peninsula on Florida's east coast from Miami to Cape Canaveral.
Many low lying islands in Oceania and the Caribbean will be flooded, forcing their residents to relocate. Most will move either to higher ground in Oceania, or to places in the developed world with which they have ties as a result of former colonial links, religious missionaries, or past tourism ties. The populations involved will be small enough, and the diaspora far flung enough, that this will have only a modest impact on the rest of the world.
Prime coastal real estate worldwide, resorts and homes built near sea level right at the shore, will be irreplaceably destroyed as the beach retreats inland.
Warmer sea surfaces will also fuel more and more powerful hurricanes, typhoons and cyclones, hitting particularly hard the coastal American South, the Caribbean, the Gulf Coast of Mexico and Central America, Japan, Coastal China, the Philippines, Bangladesh and coastal India.
2. Skiing will move North and up; glaciers will vanish.
Global warming will leave the winter ski seasons in existing ski resorts looking more like their fall ski seasons. The ski season will start later in the fall, and end earlier in the spring. In North America, Colorado and Utah will start to lose ground to Wyoming and Canada. In Europe, the Alps will become less attractive for skiers. In Japan, skiing opportunities will degrade. Skiing resorts may prosper in the Alaska Range, the Yukon, the Himalayas, Central Asian mountains and the Kolyma mountains of Siberia.
Glaciers are retreating rapidly everywhere in the world that they are found from Scandinavia to New Zealand, from Mount Kilimanjaro to the Alps, from the Rockies to the Himalayas.
Arctic species like polar bears have a dim future.
3. Canada's populated belt will expand.
The vast majority of the people who live in Canada live quite close to the U.S. border. As the global climate warms, the Northern parts of British Columbia, Alberta, Saskatchewan, Manitoba, Ontario and Quebec will all become more attractive for both agriculture and urban settlement.
4. Diseases, plants and fauna will migrate North and up.
Malaria, yellow fever, killer bees, termites, tropical parasites, hot weather plants, subtropical animals and tropical animals will all migrate significantly north and south from the equator, and to higher elevations than those at which they were previously known.
Kudzu will make a slow but sure migration North. Cotton will be grown further north in the American South than it used to be, while Egyptian cotton will be pressured by hotter, drier conditions, with that crop possibly migrating to the Levant or the Northern Mediterranean.
Similarly, temperate crops will migrate north, with the line between deciduous and evergreen forests moving northward, for example.
Areas of the United States that are already marginal for agriculture, like the areas used by dryland farmers in the arid West, will tend to revert to open space as farming becomes hopeless outside irrigated areas. In the American South, tropical and subtropical plant varieties will become increasingly viable.
5. The Sahara will expand.
The Sahara desert in Africa will expand, most notably to the South into the Sahel, where many people who speak Nilo-Saharan or Afro-Asiatic languages, most often Muslims, live now. Climate change will put strong pressure on these peoples to migrate South into territory previously controlled by Christian and animist peoples who do not speak Arabic and are seeing their traditional farming methods fail in the face of an increasingly drier, hotter climate.
We are already starting to see military conflicts, increased ethnic clashes, and famines breaking out in these regions. Somalia is a failed state divided politically along a North-South axis. This climate trend is a major driving force behind the genocidal warfare in Sudan, from which a new nation called South Sudan is likely to secede in the next few years. Southern Chad, Northern Nigeria, Niger and Mali are all under these pressures as well.
Broad-based fronts in clash-of-civilizations conflicts like the one developing as the Sahara expands are historically an important motivator for the unification of previously independent states into empires. I expect to see strengthening political and military alliances between Northern Sudan, Chad, Niger, the Islamic-controlled Northern states of Nigeria, Mali and Mauritania, which would also fund insurgent forces of Islamic minorities in their Southern neighbors in West and Central Africa, Uganda and Kenya. Likewise, I expect to see strengthening military and political ties among the coastal nations of West Africa, the Central African Republic, Southern Sudan, Uganda and Kenya. Burkina Faso, which is evenly divided between the two civilizations, is at particularly grave risk of civil war. Minority populations on both sides of the divide are at grave risk of being expelled as refugees to neighboring countries where their side of the clash is in the majority, in brutal campaigns of ethnic cleansing on a much larger scale than those seen from the Bosnian Serbs or the Algerian Islamist efforts to expel foreigners. The genocidal campaign seen in Darfur is likely to be the more relevant model for these conflicts.
These wars are likely to be particularly brutal, and the region is likely to remain a military hot spot for most of the lives of my children and grandchildren. I would be surprised if Nigeria survives this era of Sahel conflict in one piece. It seems more likely that the Islamic states of Nigeria's North will unite and secede, particularly as the volume of Nigeria's oil production begins to decline. Nigeria will cease to be able to maintain an oil based economy as oil decreases in economic importance in the years running up to roughly 2055-2075. The prospect of sharing oil revenues is an important part of the glue holding many currently oil rich nations, like Iraq and Nigeria, together today.
One winner in Africa may be the Khoisan speaking peoples, who may face less threat from expanding urbanization as the Kalahari Desert grows less and less suitable for settled occupation. But, declines in arable land to the North will increase deforestation pressures in the greater Congo area, further threatening Africa's Pygmy populations.
6. Inland seas will dry up.
Expect the Caspian Sea, the Aral Sea, the Dead Sea, Lake Chad, Lake Tana in Ethiopia, Lake Turkana in Kenya, Lake Victoria, the Great Salt Lake in Utah, and the North American Great Lakes to shrink or dry up as global temperatures rise. These changes will greatly impact local communities, but will rarely have major geopolitical impact.
25 February 2010
Mortgage Rates, 10 Year Treasuries and Fed Policy
The dominant factor in determining interest rates on thirty year mortgages is the interest rate on ten year Treasury bonds.
This isn't too surprising. The average conventional thirty year mortgage is actually in place for about ten years, and both are among the lowest risk investments in the financial system. Treasury bonds capture both the hypothetical risk free rate of return, in terms of real time value of money preferences, and the market's assessment of inflation expectations over a ten year period.
The deviations from the rule are mostly at the high end, when Treasury bond rates are around 10% or more, and mortgage interest rates are around 12% or more. This may reflect the unpredictability of inflation and expectations of strategic behavior by mortgage holders when interest rates are temporarily very high.
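To make that relationship concrete, here is a minimal sketch in Python of the rule of thumb described above, modeling the thirty year mortgage rate as the ten year Treasury yield plus a spread. The spread value and the high-rate adjustment are illustrative assumptions on my part, not figures taken from any dataset.

```python
def estimated_mortgage_rate(treasury_10yr: float, spread: float = 1.7) -> float:
    """Rough 30-year mortgage rate implied by the 10-year Treasury yield.

    Models the mortgage rate as the Treasury yield plus a fixed spread; the
    1.7 percentage point default is an illustrative assumption, not a quoted
    figure. When Treasury yields reach roughly 10% or more, the rule tends to
    break down and mortgage rates run higher still, so a small extra premium
    stands in for that deviation here.
    """
    rate = treasury_10yr + spread
    if treasury_10yr >= 10.0:
        rate += 0.5  # placeholder for the high-rate deviation described above
    return rate


if __name__ == "__main__":
    for yield_pct in (3.7, 5.0, 10.5):
        print(f"10-yr Treasury {yield_pct:.1f}% -> est. 30-yr mortgage "
              f"{estimated_mortgage_rate(yield_pct):.1f}%")
```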
The Federal Reserve has recently embarked on a more than trillion dollar campaign to buy mortgage backed securities, an unprecedented effort that should lower mortgage interest rates, and it has indeed had that effect in a measurable way. But, does it really make sense to make the kind of massive investment in mortgages that the Fed has made simply to secure the result, a roughly half percentage point reduction in conventional thirty year mortgage interest rates for roughly a year?
The big picture lesson here, in my view, is not that massive efforts by the Fed can produce measurable results. Instead, it is that even Fed efforts on an unprecedented scale that involve a large share of the entire market have only a slight impact on mortgage interest rates. Individual participants in large financial markets, even participants as big as the Fed, have little ability to fight the powerful inherent tendencies of the market. Market trends are very difficult and very expensive to counter through market participation. Market participation is a very inefficient way for government to intervene in the economy.
Truancy Laws Prevent Juvenile Delinquency
Does increasing the minimum dropout age reduce juvenile crime rates?
Yes. Minimum dropout age requirements significantly reduce property and violent crime arrest rates for youths aged 16 to 18.
Analysis of county-level arrest data for the U.S. between 1980 and 2006, viewed in the context of minimum dropout age laws in different places at different times, shows a significant and robust link between arrest rates for teens and minimum dropout age requirements.
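The analysis described above is essentially a county-by-year panel regression of teen arrest rates on the minimum dropout age in force at each time and place. Here is a minimal sketch of that kind of specification, using entirely synthetic data; the column names and the assumed effect size are made up for illustration, and in real data the dropout age would vary at the state level as laws change.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic county-by-year panel (1980-2006): teen arrest rates and the
# minimum dropout age in force. All numbers here are invented.
rng = np.random.default_rng(0)
rows = []
for county in range(50):
    baseline = rng.normal(40.0, 5.0)          # county-specific baseline arrest rate
    for year in range(1980, 2007):
        min_dropout_age = int(rng.choice([16, 17, 18]))
        arrest_rate = (baseline
                       - 2.0 * (min_dropout_age - 16)  # assumed effect, illustration only
                       + rng.normal(0.0, 3.0))
        rows.append({"county": county, "year": year,
                     "min_dropout_age": min_dropout_age,
                     "arrest_rate": arrest_rate})
df = pd.DataFrame(rows)

# Two-way fixed effects: county and year dummies absorb local conditions and
# national trends; the coefficient on min_dropout_age is the quantity of interest.
fit = smf.ols("arrest_rate ~ min_dropout_age + C(county) + C(year)", data=df).fit()
print(fit.params["min_dropout_age"], fit.pvalues["min_dropout_age"])
```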
Why? In part, because kids in school have less time available to commit crimes. Crime rates are consistently responsive to factors that involve "an incapacitation effect."
High school dropouts are exceedingly likely to end up involved in crime, compared to other demographics. Even if they don't learn a thing by being required to go to school for a few more years, they are likely to be better off, because staying in school reduces their likelihood of being arrested for juvenile delinquency, something that is almost never a positive for the person arrested, or for the victim of the associated crime.
This crime reduction happens without criminal justice system involvement. Reduced juvenile justice system costs associated with reduced arrest rates for juveniles save the states that pay for them significant amounts of money.
This may require additional education funding, but since schools typically get part of their state funding based upon student attendance rates, the increased funding happens more or less automatically. It is part of a state level entitlement to K-12 education commitment that has already been made in every U.S. state.
Legislatively, the change is easy to make. Basically, a single number in a single law has to be changed. All of the implementation systems are already in place.
In 2006, Colorado lifted the compulsory school attendance age from 16 to 17. The impact it has had on high school participation and completion in the state has been great. The number of 12th graders enrolled is up more than 20% in districts in Denver, Aurora and Adams County that have had high dropout rates. Statewide, the number of 12th graders enrolled is up by about 5%.
But, do the kids who are now in school instead of on the streets cause trouble in the schools?
No. This is a plausible guess, but the facts don't support this hypothesis. In the Denver Public School District, which has seen a 23% increase in the number of 12th graders, largely as a result of the one year increase in the compulsory school attendance age, "the number of out-of-school suspensions is falling — decreasing about 44 percent over the past six years."
Perhaps dropouts aren't proving to be a bad influence on kids still in school. Perhaps kids who know they have to stay in school have less of an incentive to blow it off and get into trouble. Perhaps requiring kids to go to school undermines gangs. Perhaps greater attendance makes a generalized social norm of conformity stronger. When you get down to it, it really doesn't matter why it happens. But, reducing the dropout rate also reduces the number of instances of serious misbehavior that call for out-of-school suspensions.
Now is also an optimal time for states to make a change like this one. Unemployment rates are near record highs, and unemployment rates for teens who have dropped out of high school are particularly high. Removing them from the labor market, at least during the school day, has the indirect effect of opening up jobs suitable for others who have the hardest time finding employment. The impact is considerable. In the Denver metropolitan area, this change took thousands of people out of the bottom of the local "during school hours" labor market.
Increasing the mandatory school attendance age takes just a year to open up a significant number of jobs, on a permanent basis, for those who have the hardest time finding them, while reducing crime and in-school delinquency. It is hard to find another government program that provides such unequivocally positive results while requiring no new expenditure that a state hasn't already committed to make as part of its K-12 education entitlement.
GOP: Unearned Income Should Be Untaxed
If you didn't earn it, you shouldn't have to pay tax on it. This is the basic idea behind the latest Republican approach to tax reform.
The proposal would end the corporate income tax and gift and estate taxes. It would also end all taxes on investment income. Under the plan:
* Provides individual income tax payers a choice of how to pay their taxes – through existing law, or through a highly simplified code that fits on a postcard with just two rates and virtually no special tax deductions, credits, or exclusions (except the health care tax credit).
* Simplifies tax rates to 10 percent on income up to $100,000 for joint filers, and $50,000 for single filers; and 25 percent on taxable income above these amounts. Also includes a generous standard deduction and personal exemption (totaling $39,000 for a family of four).
* Eliminates the alternative minimum tax [AMT].
* Promotes saving by eliminating taxes on interest, capital gains, and dividends; also eliminates the death tax.
* Replaces the corporate income tax – currently the second highest in the industrialized world – with a border-adjustable business consumption tax of 8.5 percent.
So, everybody gets a new 8.5% value added tax and loses deductions and credits like the mortgage interest deduction, the deduction for state and local taxes, the charitable deduction, the per child tax credit, and the earned income tax credit, which would be replaced with a larger standard deduction and personal exemption for a family of four.
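As a rough illustration of how the quoted two-rate schedule would work for a joint filer, here is a back-of-the-envelope calculator. It assumes the full $39,000 deduction applies, that the 10 and 25 percent brackets apply to income net of that deduction, and that all of the income is earned income; those are my simplifying assumptions, not details spelled out in the plan.

```python
def simplified_income_tax(gross_income: float, joint: bool = True,
                          deduction: float = 39_000.0) -> float:
    """Back-of-the-envelope tax under the quoted two-rate schedule.

    Assumes the $39,000 standard deduction/exemption (family of four, joint
    filers), that the 10%/25% brackets apply to income net of that deduction,
    and that all income is earned income (interest, dividends and capital
    gains would be untaxed under the plan). Simplifying assumptions for
    illustration only.
    """
    bracket_top = 100_000.0 if joint else 50_000.0
    taxable = max(0.0, gross_income - deduction)
    return 0.10 * min(taxable, bracket_top) + 0.25 * max(0.0, taxable - bracket_top)


if __name__ == "__main__":
    for income in (60_000, 150_000, 500_000):
        print(f"${income:,} earned -> ${simplified_income_tax(income):,.0f} income tax")
```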
The change would be dramatically regressive, shifting tax burdens from the rich, who have the lion's share of unearned income, to the middle class that spends a larger share of its income on consumables that the VAT would capture than those who have higher incomes.
Middle School Letter Week
This is the week that 5th graders across the Denver Public Schools are getting their middle school letters from charter schools and from programs within ordinary schools that require special applications.
The dreadful complexity of the process, the numerous bureaucratic steps that must be completed, and individual preferences narrow the pool at each school a great deal. Some school programs, like the Denver School of Science and Technology, are grossly oversubscribed. But, that is the exception. The vast majority of students are getting their first choice, and the 10%-15% or so that do not get their first choice frequently have a carefully considered Plan B at the ready that is likely to work out. My sense of how the system works in the 5th grade to middle school process, which involves perhaps 7,000 kids, is that perhaps 4,000 will simply go with the flow and attend their home school, perhaps 3,000 will try to exercise some sort of right in the process, about 2,550 or so will get their first choice, about 400 will find an acceptable Plan B within the system, and 30-60 students will not get an option that is satisfactory to them, some of whom may end up trying to arrange a private school or homeschool arrangement, will give up and go to a school that they aren't happy with, or will haggle with the administration.
Denver Public Schools also probably loses more kids to private schools and to transfers out of the district (often kids who don't participate in the regular process) at this stage than at any other point in the process. Finding a tolerable environment in a small neighborhood elementary school turns out to be easier than finding a tolerable middle school, at an age when life is almost guaranteed to be miserable in some way or other that also impacts a child's peers, even in the best environment.
The biggest benefit of the system may have only a little to do with the extent to which what is taught, and how it is taught, is tailored to the particular child choosing a school. There is a benefit in having a process in which a student participates and has a choice, because that process secures buy-in to the middle school attended, regardless of the quality of the actual choices (likewise, studies show that people feel better about their country after voting, even if the race was uncontested or none of the candidates were particularly appealing).
School choice is also good for Denver real estate. As long as parents believe that their child can attend a good school in Denver with enough attention to the school choice process, wherever that child lives, those parents can feel comfortable living in Denver without having to pay for private school tuition. In contrast, in a system of strict neighborhood schools, real estate in attendance areas for schools where students perform poorly academically (and Denver has a great many such schools) becomes very unattractive to families with school aged children.
Proximity to good schools is still relevant, but only for the fundamental reason that it impacts the time it takes to get to school. My experience with where children we know have chosen to go to school strongly suggests that perceived school quality and offerings are a sufficiently powerful draw that middle class parents will largely ignore transportation concerns within Denver to have their child attend what they see as a good school, a sensible stance since Denver is sufficiently geographically compact that even the longest trips aren't impossibly onerous.
On the whole, I support options like school choice and charter schools (and the Colorado practice of providing some of the in-state economic support available to public college students to students attending private colleges in the state). Milton Friedman was mostly right when he observed that parents have more effective power, of a kind that is relevant to them, in the power to make a school choice (even if it is hemmed in by bureaucracy that excludes some from the process and isn't entirely unlimited) than they do through the political process of electing school board officials. They may be at a disadvantage either way, but here and now, politics is even less of a level playing field than the process of navigating a complex school choice system. The risk of having parents choose not to send their children to your school adds a disciplining pressure to perform to every school in the system, public neighborhood, charter and private alike, without regard to the quality of management taking place at the district level at the behest of the school board.
There are plenty of ills that school choice does not solve. It does not itself solve achievement gap problems that are already mostly in place by the third grade and earlier. It does little to assure adequate public education funding and may even undermine it, because it is easier to rally and organize political support for institutions than for mere spending formulas. It may lead to more, rather than less, demographic segregation in schools. It doesn't, by itself, bring more qualified teachers into the system. But, it does open up more room for experimentation disciplined by parental choices, it does decouple real estate decisions from education decisions, it does secure buy-in from parents and students, and it does provide a much easier and more reliable way for administrators to identify failing schools: parents choose not to send their children there. Simply by forcing school districts to address those schools that are failing relative to other options, the choice system brings up minimum standards for the district as a whole.
24 February 2010
Jane Norton Who?
Jason Salzman makes a persuasive case that the Denver Post is taking a decidedly lax approach to its coverage of GOP U.S. Senate candidate Jane Norton.
Regional Crime Hot Spots Are Whack A Moles
For some areas repeatedly hit hard with crime, police intervention can shut down lawlessness and keep it down. But for others, police involvement just shifts the trouble around.
A mathematical model, based on Los Angeles data, helps distinguish the two kinds of hot spots (citing this paper). It turns out that there are two "sharply distinct" categories of crime hot spots.
Crimes in one location encourage more crime in that location from other criminals. "Data has shown, for example, that the house next door to a house with a broken window is more likely to be robbed." When criminals are drawn only locally to a location by prior crime, police action can wipe out a crime hot spot for good.
When criminals are drawn to a crime hot spot from distant locations by prior crime, for example, to a place with a reputation for being a drug market, police action will only cause the hot spot to relocate (if more than one police suppression action is present, usually to somewhere roughly midway between them). There is a critical value for this distance that distinguishes the two kinds of hot spots, producing the distinct categories.
Burglaries typically have an impact on crime rates within a 2 kilometer radius, and tend to be highly local, according to the supporting data for the study.
So, if you want to know if police action will reduce total crime, you need to know how far away from the crime hot spot criminals come to commit crimes. If this distance is more than the critical value, police action at the hot spot is unlikely to be effective at reducing total crime. If the distance is less than the critical value, the broken windows theory works.
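In code, the decision rule described above boils down to comparing how far offenders travel to reach a hot spot against the model's critical distance. The sketch below is just that comparison; the critical value itself would come out of the underlying model in the cited paper and is treated here as a given input, and the example numbers are made up.

```python
def hot_spot_response(typical_travel_distance_km: float,
                      critical_distance_km: float) -> str:
    """Classify a crime hot spot's likely response to targeted police action.

    Per the rule described above: if offenders are drawn only locally (travel
    distance below the critical value), suppression can eliminate the hot spot
    for good; if they come from farther away, suppression tends to displace it.
    The critical distance is a given input here, not something this sketch
    derives from the underlying model.
    """
    if typical_travel_distance_km < critical_distance_km:
        return "suppression likely eliminates the hot spot"
    return "suppression likely displaces the hot spot"


if __name__ == "__main__":
    # Burglary example from the post: effects within roughly a 2 km radius.
    # The 3 km critical value is a made-up figure for illustration.
    print(hot_spot_response(typical_travel_distance_km=2.0,
                            critical_distance_km=3.0))
```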
Traumatic Brain Injury Common
Up to 20 percent of combat soldiers and an estimated 1.4 million U.S. civilians sustain traumatic brain injuries each year.
From here.
SCOTUS Adopts Nerve Center Test
For diversity jurisdiction purposes, corporations (but not limited liability companies or partnerships) are citizens of their place of incorporation and their principal place of business.
Where is the principal place of business?
In the "nerve center" of the corporation where its executive headquarters is really located in fact. Specifically:
[T]he phrase "principal place of business" refers to the place where the corporation’s high level officers direct, control, and coordinate the corporation’s activities. Lower federal courts have often metaphorically called that place the corporation’s "nerve center." We believe that the “nerve center” will typically be found at a corporation’s headquarters.
Alternatives were considered, such as the state with the biggest sales. This would have meant that most multi-state corporations would have been citizens of California. This approach was unanimously rejected. A key point in the opinion was that "place of business" referred to a place within a state, not an entire state.
This means fewer diversity jurisdiction cases in non-California jurisdictions, and more diversity cases in California. It also focuses the scope of evidence that is relevant in a diversity jurisdiction determination.
Saab Deal On, Hummer Deal Off
General Motors announced today that an effort to sell its Hummer brand to a Chinese company has failed and that it will simply be shutting down the brand.
The commercial market for the Hummer brand dwindled as greater concern about fuel efficiency combined with tight consumer spending. An uncertain future killed what was left of the brand:
Saturn, Pontiac, Saab and HUMMER combined volumes represented 1.2 percent of total [GM U.S.] sales in January, compared with 12 percent in May 2009. Inventories for the combined brands totaled 4,212 units at January month-end, representing a 96 percent decrease compared to the end of May 2009 (112,141 units).
A 1.2% figure amounts to about 2,500 vehicles a month for the four brands combined. There were 2,493 Hummers in inventory at the end of January 2010.
The U.S. military has also discontinued its Humvee purchases this year in favor of newer, better armored and more blast-resistant vehicles; the defense business was sold in 2003 to defense contractor General Dynamics.
The initial Hummer in 1992 had been a civilian version of the military model (GM bought the brand in 1998). Later models migrated toward civilian SUVs and large pickups. The Hummer brand had a component of patriotism tied up in its status (like Jeep's) as a military spinoff, which a Chinese company might not have been able to maintain with this niche product. (GM will continue to honor warranties for existing Hummers, a liability assumed in the bankruptcy by the reorganized company.)
But earlier this week, GM announced that the on-again, off-again sale of its Saab division appeared likely to close. An earlier effort to sell the Saturn brand failed, so that brand is also being shut down. GM discontinued the Pontiac brand without any effort to sell it.
This leaves GM with four remaining core U.S. brands: Chevrolet, GMC, Cadillac and Buick. It has also shed GMAC, its financing arm.
GM also has four more brands used abroad: GMDaewoo (70.1% owned), Holden, Opel, and Vauxhall. Opel/Vauxhall's fate is currently being negotiated (Vauxhall makes right-hand-drive versions of Opel models for the U.K. market); the pair may be spun off or sold. Holden is the Australian-based subsidiary of GM and is also the direct parent company of South Korea-based GMDaewoo.
GM has yet to show a turnaround despite its 2009 bankruptcy and despite weakness in Toyota, a key competitor, due to recalls. GM continues to lose market share, post poor sales, and lose money. But the most recent detailed sales figures reveal that much of its declining sales are now concentrated in discontinued or spun off brands, rather than in the core brands that it is retaining.
The bankruptcy has made it less important for GM to emphasize sales volume over profits, because it has shed much of the fixed legacy cost burden, like bond payments and retirement obligations, that weighs less per vehicle sold when sales volumes are high. But, since it is now majority owned by the government and unions, both of which are concerned primarily with preserving as many jobs as possible, GM's owners may not be interested in a strategy that calls for downsizing in the name of higher profitability per vehicle sold.
General Motors sold about 60% fewer vehicles in the United States in 2009 than it did in 1999, and sales declined from the prior year in every year from 2000 through 2009, as the quick check following the table confirms.
1999-- 5,017,150
2000-- 4,953,163 ▼1.3%
2001-- 4,904,015 ▼1.0%
2002-- 4,858,705 ▼0.9%
2003-- 4,756,403 ▼2.1%
2004-- 4,707,416 ▼1.0%
2005-- 4,517,730 ▼4.0%
2006-- 4,124,645 ▼8.7%
2007-- 3,866,620 ▼6.3%
2008-- 2,980,688 ▼22.9%
2009-- 2,084,492 ▼30.1%
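A minimal sketch of that arithmetic, using only the figures reported in the table above:

```python
# Quick check of the sales table above: the overall 1999-2009 decline and the
# year-over-year changes, using the figures as reported in the post.
gm_us_sales = {
    1999: 5_017_150, 2000: 4_953_163, 2001: 4_904_015, 2002: 4_858_705,
    2003: 4_756_403, 2004: 4_707_416, 2005: 4_517_730, 2006: 4_124_645,
    2007: 3_866_620, 2008: 2_980_688, 2009: 2_084_492,
}

overall_drop = 1 - gm_us_sales[2009] / gm_us_sales[1999]
print(f"1999-2009 decline: {overall_drop:.1%}")  # about 58%, i.e. roughly 60% fewer vehicles

for year in range(2000, 2010):
    change = gm_us_sales[year] / gm_us_sales[year - 1] - 1
    print(f"{year}: {change:+.1%}")
```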
Ford appears to have turned the corner without either a bankruptcy or a government bailout.
Chrysler, in contrast, while continuing to show success in selling the minivans that it invented and some of the vehicles in its Jeep line, has not shown positive sales trends in most of its other lines and is experiencing 40+ year low sales, despite being free of legacy costs, despite having Fiat as a business partner, and despite the weakness of GM and Toyota in the market. Chrysler seems to be the furthest of the Big Three automakers from a recovery.
23 February 2010
Quest For Ancient Genetic History Reviewed
What have we learned about the pre-history of the human race from genetics?
A special issue of Current Biology linked at Dienekes' Anthropology blog, summarized the findings to date in this rapidly advancing field.
UPDATE: A number of interesting points are made in the articles.
* Climate is the major driving force in human pre-history and ancient history.
* Modern humans appear to have made their way to the Near East from Africa in the period from 130,000 to 75,000 years ago, with documentation of that presence at least as far back as 100,000 years ago in the Levant. But, coincident with climate change, this population may have gone extinct or retreated to Africa and been replaced by Neanderthals from 75,000 to 45,000 years ago, when desert conditions no longer blocked modern human access to the Levant. Southern Coastal migration leaving Africa via the Southern end of the Red Sea to India, Southeast Asia and Australia may have commenced 70,000 to 60,000 years ago while deserts blocked access to the Levant.
* While modern humans entered Europe 45,000 years ago or so, there was a retreat from Northern Europe and a decline in the sophistication of tool making for several thousand years leading up to the Last Glacial Maximum around 20,000 years ago. The recolonization of Europe by modern humans from Southern European areas (e.g., Iberia, Italy, and the Balkans) started around 11,500 years ago, probably about a thousand years before migration into Europe associated with the Neolithic revolution (i.e., the domestication of plants and animals for use in agriculture).
* Khoesan (East African and Southern African hunter-gatherer ancestors) and Pygmy (tropical African) populations in Africa appear from genetic evidence to have diverged from each other around 35,000 years ago, and their common proto-population appears from genetic evidence to have diverged from other African populations (from whom Eurasian and American indigenous people trace their roots) around 70,000 years ago. The Eastern and Western Pygmy populations, in turn, appear to have been divided around 18,000 years ago. This suggests that the extinct Pygmy language, which probably went extinct during the early Bantu expansion in Africa (ca. 5,000 years ago) or earlier, may have been a Khoesan-like click language.
* The Nilo-Saharan languages probably have their roots in Sudan more than 8,000 years ago and expanded both East and West from there. Some speakers of Chadic languages appear to be genetically closely related to Nilo-Saharan speakers in East Africa and probably adopted a Chadic language as a population without much genetic change around 8,000 years ago.
* The linguistic and genetic evidence show remarkably little commonality between pre-Austronesian expansion New Guinea (i.e., before 3,400 years ago) and Australia, despite the fact that they were geographically linked by land until 8,000 years ago and despite a link by an island chain that was navigated and supported a thin level of trade for at least several thousand years prior to European contact. This points to either deep isolation of the regions from very early after the settlement of Australia 50,000 years ago, or a separate migration into New Guinea from the West (i.e., from Indonesia) rather than from the South (i.e., from Australia).
* Northeast Siberia, from which all indigenous human populations of the Americas derive, was populated prior to the Last Glacial Maximum around 20,000 years ago. This population retreated or went extinct at the peak of the last ice age in Northeast Siberia, and the region was then repopulated as the climate warmed again. It isn't entirely clear to what extent it is the pre-ice age or the post-ice age population of Northeast Siberia that provided the ancestral population for the Americas. Genetic evidence tends to favor a split between Northeastern Siberian and American gene pools not later than 17,000 years ago, but it is possible that there was a Beringian refugium population (i.e., in Alaska and the exposed land bridge across the Bering Strait) derived from the pre-ice age population of Northeast Siberia that was the core population for both, and that Northeastern Siberia was recolonized in whole or in part from the East rather than from the South. This would delay the split from a population genetics perspective.
* There is genetic evidence of a split between Eastern North American and Pacific Coastal populations in the Americas. The Eastern group lacks the X haplotype, while the Pacific group has it. The Clovis culture archeology seems to be a better fit with the Eastern branch, implying a colonization of North America via a Northern route and an Atlantic North American migration pattern that turned inward. There are no Clovis sites in South America, and they are more common in the Eastern United States than in the West. The oldest archeological sites in the Americas are found near the Pacific Coast of South America (as far back as 14,000 years) and significantly pre-date the oldest Clovis sites (13,000 years old). This genetic and archeological evidence supports a two wave theory of migration into the Americas, with a first wave along the Pacific coast a thousand years or so earlier than the wave of migration indicated by the Clovis culture.
19 February 2010
Overgenerous Fired CEO Settlement Undone
The 2005 revision to the bankruptcy code was specifically designed to make it easier to undo severance payments to executives on the eve of bankruptcy. A new case from the 5th Circuit put that provision into action, affirming a clawback of $2.2 million of a $3 million severance bonus paid to a former CEO.
His employment contract provided that he would get $3 million if he was fired without cause, $1.5 million if he was fired for good cause, and $0 if he resigned. He negotiated a deal with the company to pay him $3 million in installments, and then resigned after the deal was struck. The trial court found that the company had good cause to fire him and knew it, so it found that the company did not get "reasonably equivalent value" when it agreed to pay the former CEO the full $3 million.
The settlement was the last straw that forced the company into bankruptcy.
The analysis used, in theory, could apply outside bankruptcy as well.
Difficulties that might have been raised about the value of his services were avoided because there was an employment contract in place which could be measured against the settlement reached.
Ranks Of Non-Religious Growing
The millennials — those born after 1980 who began to reach adulthood around the year 2000 — are less likely to claim a religion than their parents and grandparents were at the same age. . . one in four, or 25 percent, do not identify with a denomination or faith. They describe themselves as either atheistic, agnostic or "nothing in particular." Among Generation X, whose younger members were young adults in the late 1990s, one in five, or 20 percent, were unaffiliated. For baby boomers, who were young adults in the late 1970s, that figure was about one in eight, or 13 percent. . . . About 64 percent say they are absolutely certain of God's existence, compared with 73 percent of their elders.
From here.
Europe has experienced a similar trend already, which appears to be stable and permanent, and the steady, sustained growth in the share of the population identifying as non-religious is confirmed by other sources.
Rhode Island's Civil War
In 1841 and 1842, "unfortunate political differences . . . agitated the people of Rhode Island." This is how the U.S. Supreme Court described that dispute over which government was the legitimate government of the state of Rhode Island, that divided the state into armed camps.
The legitimacy issue had been resolved before the case reached the U.S. Supreme Court, and the dispute hasn't made it into many history books or even constitutional law texts. But, the story is laid out at length in a majority and dissenting opinion discussing an intentional tort lawsuit brought in federal court in the wake of this near civil war in the early days of Rhode Island's accession to the United States.
If you still aren't sated, you can read about the constitutional law relevance of dueling.
The legitimacy issue had been resolved before the case reached the U.S. Supreme Court, and the dispute hasn't made it into many history books or even constitutional law texts. But, the story is laid out at length in a majority and dissenting opinion discussing an intentional tort lawsuit brought in federal court in the wake of this near civil war in the early days of Rhode Island's accession to the United States.
If you still aren't sated, you can read about the constitutional law relevance of dueling.
Health Insurance And Pregnancy
One of the biggest divides between group health insurance (from an employer) and individual health insurance is the way the policies treat pregnancy.
Group plans invariably cover pregnancy costs, and this inclusion doesn't add a huge amount to the total claims cost of the plan, because while many covered people get pregnant more than once in a lifetime, employers with group plans rarely have all of their employees experience a pregnancy at once.
In contrast, individual health insurance almost always excludes pregnancy (even if you represent that no one covered is pregnant at the time) unless you buy a rider on the policy that costs something very close to the cash cost of all medical care for a full pregnancy, whether or not someone covered actually gets pregnant, or you accept a deductible for pregnancy costs that is at least as big as the typical cash cost of all medical care for a full pregnancy (and even then the rider isn't cheap).
This isn't any great mystery. Individual health insurance plans are protecting themselves from adverse selection. If they covered pregnancy for the price that group plans do, people would buy health insurance, promptly get pregnant, get all the pregnancy costs paid, and drop the policy, inevitably recouping at least as much, if not more, than they paid in premiums in health insurance benefits.
This wouldn't have to be so. There is a very subtle piece of health insurance regulation and insurance company economics that drives this adverse selection incentive. You can cancel future coverage under a health insurance policy at any time for any reason, and if you prepaid your premium, you get your premium back. The right is almost as universal and absolute as the right of an employee to quit; there may be breach of contract damages in very rare, individually negotiated employment deals, but even then, the employer can't force you back to work unless the employer is the military. A contrary requirement would be considered slavery or indentured servitude.
But the prohibition on irrevocable health insurance policies is not nearly so profound. While there are consumer risks to buying in for the long term and then getting stuck with poor service, and there are insurance company risks of having an insured who doesn't pay a premium or who has lots of kids with expensive care requirements, there are ways around this, if the regulatory environment were interested in making it happen.
The insurance company could offer a policy in effect for the rest of an insured's life (or through some age reasonably chosen to let almost all insureds reach menopause) that would cover one, two, three, or any number of pregnancies, with a price based on age and the number of covered deliveries chosen. The policy could be prepaid. There could be insurance company arranged financing that would not use the policy as collateral (possibly with the same kind of preferences in bankruptcy that student loans receive on the theory that education similarly is not a suitable form of collateral).
A policy for an eighteen year old that covers two deliveries through age forty-eight ought to cost significantly less than $10,000: the cash cost of providing two deliveries, reduced by the time value of money from the time the policy is purchased to the time it is used each time (and taking into account the costs avoided when someone with coverage decides not to have kids). Some people would pay up front; others would finance the policy over five, ten, fifteen, even thirty years. Insurance regulators and the right to bring bad faith claims would give insureds bargaining power to make sure that they got the coverage that they paid for, and insurance regulators would require insurance companies to set aside reserves to fund the plans. The policy could be quite cheap if purchased when a girl was born, perhaps $2,000 or less for a two pregnancy policy.
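A minimal sketch of that present value arithmetic is below. The delivery cost, the investment return, the ages at delivery, and the probability that the policy is ever used are all hypothetical assumptions chosen for illustration, not actuarial figures from the post.

```python
# Minimal sketch of the pricing intuition: a prepaid two-delivery policy can
# cost much less than the cash price of two deliveries because the premium is
# invested until the benefit is used. ALL parameters here are hypothetical
# assumptions chosen for illustration, not actuarial figures.

def prepaid_premium(cost_per_delivery, annual_return, years_until_each_delivery,
                    probability_policy_is_used=1.0):
    """Present value of the expected delivery benefits at purchase time."""
    pv = sum(cost_per_delivery / (1 + annual_return) ** years
             for years in years_until_each_delivery)
    return pv * probability_policy_is_used

# Hypothetical: $8,000 per delivery, a 7% annual return, deliveries at ages 27
# and 30, and an 85% chance the policy is ever used.
bought_at_18 = prepaid_premium(8_000, 0.07, [9, 12], 0.85)
bought_at_birth = prepaid_premium(8_000, 0.07, [27, 30], 0.85)
print(f"purchased at 18:    ~${bought_at_18:,.0f}")    # well under $10,000
print(f"purchased at birth: ~${bought_at_birth:,.0f}")  # on the order of $2,000
```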
This is not the only way one could cut the Gordian knot. One could have comprehensive universal health care in some form or another; any system that brings everyone or almost everyone into the risk pool would do. Medicare payroll taxes could be slightly increased (perhaps from 1.45% for employer and employee to 1.5%) to fund a Part E benefit that covered all pregnancies and deliveries and neonatal care. One could simply make it a government benefit available to all out of general revenues - what politician doesn't love babies?
But the advantage of the prepaid policy approach is that it would solve a market failure with a reasonably regulated market solution, one that otherwise can't be fixed without a government program that will depend from year to year on favorable federal funding.
What if some treatments, like abortion or fertility treatments, are too politically controversial to cover? One could set up a market for multi-year, irrevocable, pre-paid supplemental plans that would cover the costs associated with those services. To keep administrative costs down, these supplemental plans could be run something like AFLAC policies, which simply hand a flat, predetermined dollar amount to an insured when a covered event happens. To avoid concerns about creating bad incentives, the insurance company could, for example, sell "positive pregnancy test" insurance that pays the insured $1,000 upon a positive pregnancy test and lets the insured decide what to do with that money.
Maybe the time to market positive pregnancy test insurance would be when a girl is born, a bit like life insurance companies do. This way, the cost of the policy could be reduced by a decade or two or three of investment returns between the payout and the premium payment, and the risks of adverse selection would be lowest. Maybe a policy would cost a couple hundred bucks.
If some philanthropist, non-profit (perhaps Planned Parenthood), or government agency that can't afford to send every kid in a neighborhood to college wanted to give families in the area a long term boost, it could buy positive pregnancy test insurance for every girl born in the community at $200 each, a reasonably affordable proposition that would help a lot of young women at a possibly tough moment in their futures. Perhaps a non-profit like Planned Parenthood could even open a side business that offers the financial product, the way it got into the drug marketing business when the private sector wouldn't sell a controversial pill.
At its root, the issue is that a whole life insurance model may be a better approach than a casualty insurance model to funding certain kinds of health care, if one is going to use insurance at all.
The notion of endowing a child early on also doesn't have to be narrow. College funds are the norm for the middle class. People routinely save for retirement. Money can solve lots of problems that average people have in life. People save up for down payments on homes. It used to be commonplace to save for a daughter's wedding rather than her college education. There is something quite sensible about starting young adults out in life with a financial cushion set aside for them. The upper middle class of America sets up trust funds for their children for good reason.
It is cheaper, as a general rule, to start setting aside money for foreseeable expenses when you know that they are likely, rather than when payment on them is due. Then compound interest works in your favor, rather than against you. (Strictly speaking, no sensible consumer loan charges interest on interest; interest and some principal are paid regularly instead, and a substantial interest-on-interest component is one of the surest signs of an exploitative contract.)
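A short sketch of that save-early-versus-pay-later point, using hypothetical numbers (the expense, the rates, and the horizons are assumptions for illustration only):

```python
# Illustration of the save-early-versus-pay-later point. The expense, rates,
# and horizons are hypothetical assumptions, not figures from the post.

expense = 10_000          # a foreseeable expense due 18 years from now
savings_return = 0.06     # assumed annual return on money set aside today
borrowing_rate = 0.12     # assumed annual rate if the expense is financed instead
loan_years = 5

# Setting aside money now: how much must be deposited today to grow to the expense?
deposit_today = expense / (1 + savings_return) ** 18
print(f"set aside today: ${deposit_today:,.0f}")

# Financing later: level annual payment on a loan for the same expense.
r, n = borrowing_rate, loan_years
annual_payment = expense * r / (1 - (1 + r) ** -n)
print(f"total repaid if financed later: ${annual_payment * n:,.0f}")
```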
Tax Rates On Flush 400 Down
In 1992, the 400 highest income earners in the United States earned 0.52% of the nation's aggregate adjusted gross income. In 2007, they earned 1.59% of the nation's aggregate adjusted gross income, more than three times as large a share of the pie. In 1992, these high income earners paid an average of 26.4% of their AGI in taxes. By 2007, that had fallen to 16.6%. Only 71 members of the Flush 400 paid 25% or more of their AGI in federal income taxes.
On an after-federal-tax basis, the Flush 400 have seen their share of national AGI go from 0.40% to 1.25%, more than a threefold increase over fifteen years.
The average AGI of a member of the Flush 400 is $344.8 million per year; the cutoff for membership in the Flush 400 is $138,815,000. This cutoff has increased in both nominal and inflation-adjusted terms over the past sixteen years.
Two-thirds of that income (66.3%) was classified as a capital gain for tax purposes (in 1992 it was 36.1%). Income classified as salaries and wages was only 6.5% of the income of the Flush 400 (in 1992 it was 26.2%). Sole proprietorship and farm income fell from 5.2% of AGI in 1992 to 0.1% of AGI in 2007. Partnership and S corporation income has fallen from 17.7% of Flush 400 income in 1992 to 12.2% in 2007. In 1992, earned income accounted for 49.1% of the income of the Flush 400, in 2007, earned income accounted for 18.8% of the income of the Flush 400.
The key trend is the tax driven conversion of wage, salary, and closely held non-C corporation business income into capital gains income like stock options and carried interests. This accounts for 100% of the shift in the income mix toward capital gains income for the Flush 400. Other kinds of income, collectively, held a steady percentage of the total (although, of course, there were some variations in the exact mix in other categories of income).
Over sixteen years about a third of members of the Flush 400 appeared only in one year, while seven appeared every year. A little under half of them appeared three or more times in sixteen years.
President Clinton started the practice of releasing the information. George W. Bush refused to release the information. President Obama reinstated the public release and back filled the information gap from the Bush Administration.
Foreclosure Situation Update
The default situation for subprime mortgages got so bad that it has started to get better, but new records for prime mortgage defaults continue to be set (prime mortgages are about 75% of all mortgages, and almost 90% of them are fixed rate).
For the fourth quarter of 2009, about 10% of all prime mortgages were in default and a little more than 3% were in foreclosure. For fixed rate prime mortgages, the default rate is about 7.5% and the foreclosure rate is about 2%. This is a record high. In the 2005-2006 period before the financial crisis, defaults for all prime mortgages hovered at around 2.5% and foreclosures were around 0.2% of all prime mortgages, with slightly lower rates for fixed rate prime mortgages. This is presumably driven by long term loss of income and by vanishing home equity.
Subprime mortgage defaults peaked in the third quarter of 2009 and are now falling. But the absolute numbers are still very high. More than 40% of subprime mortgages are in default and more than 15% of subprime mortgages are in foreclosure. Back in 2005, subprime mortgage default rates were under 15% and subprime foreclosure rates were around 3.5%.
Keep in mind, too, that new subprime mortgages aren't being created in any significant numbers, while prime mortgages continue to be made at a slow but steady rate. The subprime mortgage concept is well on its way to becoming a historical relic over the next few years.
Overall 14% of all mortgages are in default and about 4.6% are in foreclosure.
New housing construction remains at very low levels by historical standards. Housing vacancy rates are at record highs (about 5.5% for all housing combined) so this situation is likely to persist for a while.
The short term future of commercial real estate is also not bright:
Nationwide, at least $1.4 trillion in commercial real estate debt is expected to roll over during the next three years. . . half of commercial real estate mortgages will be underwater by the beginning of 2011. A fifth of residential mortgages are underwater now. . . . Unlike residential mortgages, which often can be paid over 30 years, commercial real estate mortgages typically must be paid off or refinanced within five years. Commercial properties mortgaged in 2005, 2006 and 2007, at the height of the boom, are reaching their maturity date.
Commercial real estate prices have been falling for thirteen straight months. Hotel occupancy is down 17% from the 2005-2007 peak as a result of a two year long slump, and new hotels planned during the boom and opening now aren't going to help hotels get out of trouble any time soon (among property types, commercial mortgage backed security default rates are highest for hotel properties). Commercial real estate defaults are expected to continue to rise into 2012.
Commercial real estate that is underwater when its loan comes due has to be either surrendered to the lender or refinanced with new money put in by the owner or other new investors. New investors interested in putting money into commercial real estate that appraises as underwater are likely to be scarce; even if they do have money, why should they put it into a loan that is upside down when they can buy foreclosure properties at firesale prices? Owners have an incentive to put money in, because the alternative is a deficiency judgment on top of a foreclosure, but they may not have the cash on hand to do so.
About 27% of the nation's industrial capacity is idle.
Core inflation is negative for the first time since 1982.
(Note, mortgages in default as used here, includes mortgages in foreclosure).
The Future Will Have Less Grass
According to Denver Water, household water use breaks down as follows:
54% landscaping
13% toilets
11% laundry
10% showers and baths
6% faucets
5% leaks
1% dishwashers
The most painless way for most households to cut water usage is to reduce the amount used on landscaping.
There will be pressure on the region's water supply as Colorado and other parts of the arid West grow. But simply reducing landscaping water use can increase the carrying capacity of the Denver metropolitan area, from a water perspective, by up to about a million people.
Legalizing gray water systems to allow toilets to use recycled water not fit for cleaning or drinking could stretch supplies even further.
This set of statistics doesn't show it, but xeriscaping golf courses (or eliminating them), discontinuing irrigated agriculture in places where it is marginal, and increasing the efficiency of irrigated agriculture through steps like replacing broadcast sprinkler systems with drip systems also leave lots of room to stretch our current water resources further.
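The back of the envelope behind the carrying capacity claim can be sketched as follows. Only the 54% landscaping share comes from the breakdown above; the service population and the fraction of landscaping water actually saved are hypothetical assumptions.

```python
# Back-of-envelope version of the carrying-capacity claim. The service
# population and the fraction of landscaping water actually saved are
# hypothetical assumptions; only the 54% landscaping share comes from the post.

people_served_today = 2_000_000   # assumed current population on the same supply
landscaping_share = 0.54          # from the Denver Water breakdown above
landscaping_cut = 0.60            # assume 60% of landscaping use is eliminated

per_person_use_after_cut = 1 - landscaping_share * landscaping_cut   # about 0.68 of today's use
additional_people = people_served_today * (1 / per_person_use_after_cut - 1)
print(f"additional people served by the same supply: ~{additional_people:,.0f}")
# Roughly a million people, under these assumptions.
```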
18 February 2010
Conclusions From The Linguistic Big Picture
The big divide in the world of linguistics is between the "lumpers" (a.k.a. disciplinary liberals) like Joseph Greenberg and Murray Gell-Mann and the "splitters" (a.k.a. disciplinary conservatives), who probably hold a majority in the discipline.
My sense of the basis of the divide is that the priority of the lumpers is to find "genetic" links between languages (i.e., common origins) in pursuit of a phylogenetic tree with points of origin at the dividing points. The splitters, in contrast, are more concerned about clustering. Their operational definition of a language family is based upon mutual similarity.
This explains why Australian and American languages are so controversial for linguists.
For reasons I explain below, I think that Greenberg, who has advocated intensely controversial positions within the linguistic community, like lumping all but a couple of the language families of the Americas into a single Amerind superfamily with a genetic origin in Siberian languages, is right when it comes to the origin of these languages, and that all Australian languages likewise belong in a single macro-family. The American and Australian language family trees are probably far bushier, more intertwined, and less branching than their Old World counterparts.
But conservative linguists are right in observing that the languages of the Americas and Australia are so diverse that acknowledging that they all belong to their respective linguistic macro-families provides surprisingly little useful information about those languages, because the micro-language families that conservative linguists have constructed in these parts of the world, with a small number of exceptions traceable to early agricultural or proto-agricultural societies, have remarkably little in common and probably have ties to each other that are more remote in time than those among Old World language families, despite their common origins.
One of the big sources for the disciplinary divide is probably the failure of both sides to articulate why it makes sense that there should be little intermediate language family structure in the New World.
The Near Certainty of Genetic Links Between New World Languages
From the perspective of the lumpers, there is no persuasive evidence that people in either place ever sat around and created a new language from scratch. Only a couple of these peoples before European contact, in large, organized agricultural economies that arose much later than the Neolithic Revolutions of the Near East, the Indus River Valley, and China, even committed their own languages to writing, a task that takes less leisure and less linguistic insight than creating a new language from scratch. Also, humans who are raised without contact with fellow humans during the age range critical to acquiring language (particularly given the extended lactation period in hunter-gatherer societies) suffer major cognitive deficits from which they never fully recover, and die without adoptive care. Therefore, there is very good reason to assume that all modern oral languages evolved, in genetic relationships, from prior languages. There may be more creolization leading to new languages than is commonly assumed, but there is very good reason to assume that almost none of the languages of Australia, New Guinea, or the Americas were constructed or were influenced by outsiders during their populations' well established periods of near total isolation from the rest of humanity.
The conclusion that all or almost all pre-Columbian American languages have genetic links to a very small number of common proto-languages of ancient migrants (probably no more than four or five, and quite plausibly one or two), and that all pre-contact Australian languages similarly have genetic links to a very small number of common proto-languages of ancient migrants (probably not more than two), follows naturally from everything else we know from genetic and archeological evidence. The known small founding population sizes, the need for the founding population to have some level of cohesiveness in the migration period, and the known fairly narrow window for migration rule out any founding population that was significantly more linguistically diverse.
In the case of the Americas, there is even an established link between a language family found closest to the Bering Strait and one of the most ancient languages spoken on the other side of the Bering Strait. So, we can establish pretty definitively that some relative of the modern Ket language was one of the important languages of the ancient migrants.
Almost all pre-Columbian American languages must share deep genetic links to some, possibly extinct language spoken at the time of migration by Paleo-Eskimos in Northeastern Russia. The evidence is overwhelming. They are genetically linked in a superfamily relative to all other human languages. Similarly, all pre-contact Australian and Tasmanian languages must share deep genetic links to a common ancestor language.
At that level, the groupers are surely right, even if we can't find the linguistic evidence.
But, the splitters have a point as well.
Australia and New Guinea and the Americas clearly have more linguistic diversity than the Old World, and if one is to judge which languages do and do not belong in a single family based upon their mutual similarities, each of these places should be broken into many more linguistic families, and those linguistic families often lack obvious intermediate phylogenic connections to a common proto-language. They don't resolve easily into splitting branches of a family tree from a common origin.
The Remarkable Youth Of Old World Languages
The remarkable thing about the 28 language families of the Old World and its fewer than a dozen language isolates (a few tiny new ones might yet be discovered) is how young their common origins seem to be.
Modern humans evolved about 160,000 years ago and had reached the Near East about 100,000 years ago. They reached Australia (and one would think many places between the Near East and Australia) about 60,000 years ago. They reached Europe about 45,000 years ago. By 15,000 years ago they had reached Northeastern Siberia.
Dogs, the first domestic animal, were domesticated roughly 15,000 years ago, shortly before the ancestors of Native Americans migrated to America, but after Australians, Tasmanians, and New Guineans had been isolated from the rest of humanity by rising sea levels (a small number of dingoes would arrive by boat in Australia from Southeast Asia much later). Food crops and animals were domesticated around 10,000 years ago (after Native Americans, Australians, Tasmanians, and New Guineans were isolated from Eurasia by rising sea levels).
There was at least a 35,000 year period prior to the development of agriculture when modern humans were spread over an expanse of territory as large as Australia or the Americas, leaving plenty of time and isolation in which languages with little genetic relationship could have developed. Yet, almost all of our major modern language families appear to have much younger origins, some from less than 10,000 years ago (after farming developed), and several considerably younger than that, and a large share of them evolved in and around the far smaller geographic regions where food production evolved.
The common point of origin for the North Caucasian languages has been estimated at something on the order of 4,000 years ago. It is hardly a wild guess to suppose that all of the Dravidian languages of India probably have common roots in Harappan, the language of the Indus River Valley Civilization, which is no more than 7,000 years old. The Sino-Tibetan language family owes its great expanse to expansion from Neolithic China not much more than 9,500 years ago. Proto-Indo-European is no older and could be a couple of thousand years younger. We know that the Bantu language wasn't spoken widely outside West Africa until 3,500 years ago. Austronesian expansion from the island of Formosa (Taiwan) dates to 6,000 years ago, and the proto-Austronesian language is probably not more than 2,000 years older. These language families have had, at most, three to ten cycles of language divergence as great as that from Old English to modern English in which to become dissimilar from a common ancestor language.
It is reasonable to assume that Eurasia had just as much linguistic diversity as the Americas, Australia, and New Guinea did at first contact, and that the expansion of agricultural societies caused the vast majority of these languages to go extinct without leaving any real noticeable traces in the roughly 6,000 year period between the invention of agriculture and the beginning of written history. The first wave of this process would have been running its course in the early historical era, which recorded the extinctions of languages like Sumerian, Etruscan, Elamite, and Hattian. In at least a couple of cases, whole language families of early agricultural societies known to exist in historical times went extinct. This process was very complete, leaving us with just 40 language families (counting language isolates as their own families), many of which are on the verge of going extinct themselves.
While there was probably a large pool of potential languages from which a lucky winner could expand and form a language family, if some of the leading hypotheses about the formative locations and formation times of our major language families are correct, then Uralic, Indo-European, all of the language families of the Caucasus, Altaic, Dravidian, Afro-Asiatic, and Nilo-Saharan may all trace their roots to a geographic area that stretches only modestly beyond the Middle East (probably not farther than the Urals, Ethiopia, North Africa, and Western Iran) over a time period not longer than 5,000 years.
The habitable space for modern humans in this area and time frame was limited significantly by geographical barriers, and there was probably a fair amount of communication, intercourse, and word borrowing among these peoples, who had shared a common language in the distant past when they left Ethiopia. So it isn't unreasonable to think that the proto-languages for these language families may not have been all that distant from each other at the time they emerged. There are plausible factors that would have limited language divergence in this space through the time period when these proto-languages emerged from their very distant common origin in an Ethiopian proto-language.
Modern humans in the early Neolithic zone in the West didn't have nearly as much room to expand and ignore their neighbors as the people in Australia, the Americas, pre-agricultural India, pre-agricultural East Asia, and pre-agricultural continental Europe did. In these tighter quarters, the early agricultural region that gave rise to the language families may have had far fewer micro-communities of neighboring tribes exchanging brides and engaging in low level war and trade than the Americas and other pre-agricultural areas inhabited by modern humans did.
This is a weaker hypothesis than an early Neolithic proto-language hypothesis popular with some "lumpers," but would produce an only moderately weaker version of the same expected effects in modern and historically known languages.
The Lack of Evidence In The New World
Another important reasons that the splitters are comfortable with the widely accepted language families it that we have access to a number of now extinct proto-languages like Latin and Sanscrit, and to historical evidence of linguistically important migrations in China and Africa that make the case for a common origin for these language families much easier to make. We aren't limited to working only with the modern versions of these languages. The proto-languages of these families were a lot closer to each other relative to other languages than the modern versions of these languages.
It is reasonable to guess that language families with as much similarity between languages as these families are probably not much older than these language families. For languages that were never committed to writing, i.e. all but two or three distinct language lineages in the Americas and none of the languages of Australia or New Guinea, we have documented proto-languages, only proto-languages inferred from modern language evidence. Even if there were oral traditions that could have substituted for written historical documentation (some historical languages in the Old World are known from religious litanies that could have been reproduced orally rather than in writing in non-literate cultures), the roughly 95% population decline experienced by almost all Native American and Australian societies after first European contact seriously disrupted this oral tradition as well.
If one insists upon having direct linguistic evidence of genetic linkages to show language family connections, one simply isn't going to find it, even if those linkages exist, in the New World, because a lack of historical linguist evidence limits the degree to which the linguistic past can be resolved.
The Likely Lack Of Intermediate Structure Or Preservation Of Information Through Language Content In The New World
The Language Evolution Time Clock
We also have good reason to believe that nothing comparable in scope and intensity to the historical conquests that produced language extinction in the Old World ever took place in Australia, Tasmania or most of the Americas.
In Australia and Tasmania, the time period between the proto-language we know must exist and the first documentation of Austalian and Tasmanian languages extends for about 50,000 years, five times as long as the oldest proto-language of an established language family.
In the Americas, the genetic evidence regarding migration to the New World suggests that 12,000 years ago "every human who migrated across the land bridge came from Eastern Siberia, and that every Native American is directly descended from that same group of Eastern Siberian migrants." Recent evidence of a small Paleo-Eskimo migration to Greenland from Siberia about 4,000 years ago that we know from archeological and genetic evidence ultimately died out without leaving much of a trace behind them (much like the Viking colonists who did the same thing 3,000 years later), does not undermine the basic thust of this conculsion.
This make the common proto-language of the Americans at least two thousand years older than that of the oldest known Old World proto-language.
If each thousand year cycle of random linguistic drift produces a certain percentage change in a language, then language change happens at an exponential, albeit slow, rate and each additional iteration blots out evidence of its origins much more completely.
The Lack Of Communication As A Factor To Prevent Language Divergence
It is likely that ancient Americans (as illustrated by a lack of technology transfers between different regions) that they were more isolated from each other than the proto-Indo-Europeans or proto-Sino-Tibetans (the oldest Neolithic civilizations) were from each other.
Peoples in regular communication with each other can be expected to diverge less linguistically (as illustrated by reduced linguistic divergence during the Roman Empire and Chinese empires) than people who are more isolated from each other.
Instead, we have good reason to believe that American hunter-gather societies formed small groups that were isolated into micro-communities for long periods of time.
We have every reason to believe that the presence of equally advanced neighbors with whom relations may not have been entirely peaceful from documented examples of hunter-gatherer communities in New Guinea, the Amazon, first European contact Nepal, first European contact Australia and first European contact Africa and India would suggest. And we know from archeological evidence that most of the people in the Americas never formed large communities that collectively interacted with each other and shares common territory as a community, so there would be a reduced need for a common language for large groups of people.
Implications For Language Family Structure
This may have inhibited overarching language families from every evolving. A group that settled in any particular part of the Americas with a hunter-gatherer economy may have a people not have had any connections to anyone other than their immediate neighbors for tens of thousands of years after their group's initial arrival and the inital filling out of the continent that coinicided with the megafauna extinctions in the Americas and Australia.
On two counts, great community age and small community size, we would expect a much greater degree of linguistic divergence between languages in the Americas and Australia (communication between groups may have been greater in more compact New Guinea) than in the Old World. So, it is hardly surprising that the modern or recently extinct languages in these countries are harder to connect into language families.
The key missing link that allows us to mutually accomodate the groupers and the splitters may be that we also know that the whole of the habitable parts of Australia and the Americas were very quickly settled in linguistic time. It took no more than a couple of thousand years for people to expand to fill these continents from the initial migration, quite possibly, less. Very soon there was no virgin territory to expand into and barring large scale local die offs of tribes by newly emerging epidemics or food supply failures or war, no group with any distinct advantage over any other group. Thus, very early on, there may have been no large scale migrations going on.
Thus, there is good reason to think that in Australia and the Americas they may be no large scale intermediate language structure between the ultimate proto-language and the large number of highly local languages and language isolates found by linguists today that it waiting to be discovered. It may simply not exist.
In almost all of the Americas there may be twelve modern English to Old English cycles of almost completely random linguistic drift, which was independent in each locality, between proto-Amerind and the indigenous American languages of today. These twelve cycles may involve so much random linguistic drift that the information about the proto-language retained in its modern descendants may be all but erased. There may be nothing we can extract from language family grouping and proto-language reconstruction today, no matter how expert our methods, that is more probing than the one cross Bering Strait connection between American Indian languages and Siberian languages that has already been inferred.
Indeed, it may be more fruitful for linguists studying indigneous American languages to start from a presumed proto-language to see what insight it can give them into the modern languages and the independent drift processes by which they came into being, than it is to try to trace the modern languages back to common sources.
Australia Compared
The Australians, from what know about their pre-history, seem to have overlapped more with each other, been confined mostly to a geographically smaller area, and migrated over larger distances than was typical in the Americas. The interaction may have created large communities in which having a common language would be useful, making linguistic drift for different languages less independent.
But, Australia was a highly fractured society for all of its pre-European contact history (there was Austronesian contact for the last millenia or two at most, but this was very slight) which was much longer than that of the Americas. Again, there isn't much to drive phylogenetic tree style deep structure linking the proto-language of Australia with its successor languages. At the time scales involved there are so many cycles of random turnover that commonalities that do exist may have more to do with loan words that arose randomly and then were exchanged between Australian languages than a common source in a proto-language.
My sense of the basis of the divide is that the priority of the groupers is to find "genetic" links between languages (i.e. common origins) in pursuit of a phylogenetic tree with points of origin at the branching points. The splitters, in contrast, are more concerned about clustering. Their operational definition for a language family is based upon mutual similarity.
This explains why Australian and American languages are so controversial for linguists.
For reasons I explain below, I think that Greenberg, who has advocated intensely controversial positions within the linguistics community, such as lumping all but a couple of the language families of the Americas into a single Amerind superfamily with a genetic origin in Siberian languages, is right when it comes to the origin of these languages, and that all Australian languages likewise belong in a single macro-family. The American and Australian language family trees are probably far bushier, more intertwined and less branching than their Old World counterparts.
But, conservative linguists are right in observing that the languages of the Americas and Australia are so diverse that acknowledging that they are all part of their respective linguistic macro-families provides surprisingly little useful information about those languages, because the micro-language families that conservative linguists have constructed in these parts of the world, with a small number of exceptions traceable to early agricultural or proto-agricultural societies, have remarkably little in common and probably have ties to each other that are more remote in time than those between Old World language families, despite their common origins.
One of the big sources for the disciplinary divide is probably the failure of both sides to articulate why it makes sense that there should be little intermediate language family structure in the New World.
The Near Certainty of Genetic Links Between New World Languages
From the perspective of the groupers, there is no persuasive evidence that people in either place ever sat around and created a new language from scratch. With only a couple of pre-European contact exceptions, in large, organized, agricultural economies that came much later than the Neolithic Revolutions in the Near East, the Indus River Valley and China, none of these peoples even committed their own languages to writing, a task that takes less leisure and less linguistic insight than creating a new language from scratch. Also, humans who are raised without contact with fellow humans in the age range critical to acquiring language (a rare occurrence, particularly given the extended lactation period in hunter-gatherer societies) suffer major cognitive deficits, die without adoptive care, and never fully recover. Therefore, there is very good reason to assume that all modern oral languages evolved in genetic relationships from prior languages. There may be more creolization leading to new languages than is commonly assumed, but there is very good reason to assume that almost none of the languages of Australia, New Guinea or the Americas were constructed or were influenced by outsiders during their populations' well established periods of near total isolation from the rest of humanity.
The conclusion that all or almost all pre-Columbian American languages have genetic links to a very small number of common proto-languages of ancient migrants (probably no more than four or five, and quite plausibly one or two), and that all pre-contact Australian languages similarly have genetic links to a very small number of common proto-languages of ancient migrants (probably not more than two), follows naturally from everything else we know from genetic and archeological evidence. The known small founding population sizes, the need for the founding population to have had some level of cohesiveness in the migration period, and the known fairly narrow window for migration rule out any founding population that was significantly more linguistically diverse.
In the case of the Americas there is even an established link between a language family found closest to the Bering Strait and one of the most ancient languages spoken on the other side of the Bering Strait. So, we can establish pretty definitively that some relative of the modern Ket language was one of the important languages of the ancient migrants.
Almost all pre-Columbian American languages must share deep genetic links to some language, possibly now extinct, spoken in Northeastern Russia at the time of the migration. The evidence is overwhelming. They are genetically linked in a superfamily relative to all other human languages. Similarly, all pre-contact Australian and Tasmanian languages must share deep genetic links to a common ancestor language.
At that level, the groupers are surely right, even if we can't find the linguistic evidence.
But, the splitters have a point as well.
Australia, New Guinea and the Americas clearly have more linguistic diversity than the Old World, and if one is to judge which languages do and do not belong in a single family based upon their mutual similarities, each of these places should be broken into many more linguistic families, and those linguistic families often lack obvious intermediate phylogenetic connections to a common proto-language. They don't resolve easily into splitting branches of a family tree from a common origin.
The Remarkable Youth Of Old World Languages
The remarkable thing about the 28 language families of the Old World and its fewer than a dozen language isolates (a few tiny new ones might yet be discovered) is how young their common origins seem to be.
Modern humans evolved about 160,000 years ago and had reached the Near East about 100,000 years ago. They reached Australia (and one would think many places between the Near East and Australia) about 60,000 years ago. They reached Europe about 45,000 years ago. By 15,000 years ago they had reached Northeastern Siberia.
Dogs, the first domesticated animal, were domesticated roughly 15,000 years ago, shortly before the ancestors of Native Americans migrated to America, but after Australians, Tasmanians and New Guineans had been isolated from the rest of humanity by rising sea levels (a small number of dingoes would arrive by boat in Australia from Southeast Asia much later). Food crops and animals were domesticated around 10,000 years ago (after Native Americans, Australians, Tasmanians and New Guineans were isolated from Eurasia by rising sea levels).
There was at least a 35,000 year period prior to the development of agriculture when modern humans were spread over an expanse of territory as large as Australia or the Americas, leaving plenty of time and isolation in which languages with little genetic relationship could have developed. Yet, almost all of our major modern language families appear to have much younger origins, some from less than 10,000 years ago (after farming developed), and several considerably younger than that, and a large share of them evolved in and around the far smaller geographic regions where food production evolved.
The common point of origin for the North Caucasian languages has been estimated at something on the order of 4000 years ago. It is hardly a wild guess to suppose that all of the Dravidian languages of India have common roots in Harappan, the language of the Indus River Valley Civilization, which is no more than 7000 years old. The Sino-Tibetan language family owes its great expanse to an expansion from Neolithic China not much more than 9500 years ago. Proto-Indo-European is no older and could be a couple of thousand years younger. We know that the Bantu language wasn't spoken widely outside West Africa until 3500 years ago. The Austronesian expansion from the island of Formosa (Taiwan) dates to 6000 years ago, and the proto-Austronesian language is probably not more than 2000 years older. These language families have had, at most, three to ten cycles of language divergence as great as that from modern English to Old English in which to become dissimilar from a common ancestor language.
It is reasonable to assume that Eurasia once had just as much linguistic diversity as the Americas, Australia and New Guinea did at first contact, and that the expansion of agricultural societies caused the vast majority of these languages to go extinct without leaving any real noticeable traces in the roughly 6,000 year period between the invention of agriculture and the beginning of written history. The first wave of this process would still have been running its course in the early historical era, which recorded the extinctions of languages like Sumerian, Etruscan, Elamite, and Hattic. In at least a couple of cases, whole language families of early agricultural societies known to exist in historical times went extinct. This process was very complete, leaving us with just 40 language families (counting language isolates as their own families), many of which are on the verge of going extinct themselves.
While there was probably a large pool of potential languages from which a lucky winner could expand and form a language family, if some of the leading hypotheses about the formative locations and formation times of our major language families are correct, Uralic, Indo-European, all of the families of languages in the Caucasus, Altaic, Dravidian, Afro-Asiatic and Nilo-Saharan may all trace their roots to a geographic area that stretches only modestly beyond the Middle East (probably not farther than the Urals, Ethiopia, North Africa and Western Iran) over a time period not longer than 5000 years.
If we can also assume that the habitable space available to modern humans in this area and time frame was limited significantly by geographical barriers, and that there was a fair amount of communication, intermarriage and word borrowing between these peoples (who probably did have a common language in the distant past when they left Ethiopia, and may have maintained some contact with each other since), then it isn't unreasonable to think that the proto-languages for these language families may not have been all that distant from each other at the time they emerged. There are plausible factors that would have limited language divergence in this space through the time period when these proto-languages emerged from their very distant common origin in an Ethiopian proto-language.
Modern humans in the early Neolithic zone in the West didn't have nearly as much room to expand and ignore their neighbors as the people in Australia, the Americas, pre-agricultural India, pre-agricultural East Asia, and pre-agricultural continental Europe did. In these tighter quarters, the early agricultural region that gave rise to the major language families may have contained far fewer micro-communities of neighboring tribes exchanging brides and engaging in low level war and trade than the Americas and other pre-agricultural areas inhabited by modern humans did.
This is a weaker hypothesis than an early Neolithic proto-language hypothesis popular with some "lumpers," but would produce an only moderately weaker version of the same expected effects in modern and historically known languages.
The Lack of Evidence In The New World
Another important reason that the splitters are comfortable with the widely accepted language families is that we have access to a number of now extinct proto-languages like Latin and Sanskrit, and to historical evidence of linguistically important migrations in China and Africa, which make the case for a common origin for these language families much easier to make. We aren't limited to working only with the modern versions of these languages. The proto-languages of these families were a lot closer to each other, relative to other languages, than the modern versions of these languages are.
It is reasonable to guess that language families showing as much similarity between their member languages as these families do are probably not much older than these families. For languages that were never committed to writing, i.e. all but two or three distinct language lineages in the Americas and every language of Australia and New Guinea, we have no documented proto-languages, only proto-languages inferred from modern language evidence. Even if there were oral traditions that could have substituted for written historical documentation (some historical languages in the Old World are known from religious litanies that could have been reproduced orally rather than in writing in non-literate cultures), the roughly 95% population decline experienced by almost all Native American and Australian societies after first European contact seriously disrupted these oral traditions as well.
If one insists upon having direct linguistic evidence of genetic linkages to show language family connections, one simply isn't going to find it in the New World, even if those linkages exist, because the lack of historical linguistic evidence limits the degree to which the linguistic past can be resolved.
The Likely Lack Of Intermediate Structure Or Preservation Of Information Through Language Content In The New World
The Language Evolution Time Clock
We also have good reason to believe that nothing comparable in scope and intensity to the historical conquests that produced language extinction in the Old World ever took place in Australia, Tasmania or most of the Americas.
In Australia and Tasmania, the time period between the proto-language we know must have existed and the first documentation of Australian and Tasmanian languages extends for about 50,000 years, roughly five times the age of the oldest proto-language of an established language family.
In the Americas, the genetic evidence regarding migration to the New World suggests that 12,000 years ago "every human who migrated across the land bridge came from Eastern Siberia, and that every Native American is directly descended from that same group of Eastern Siberian migrants." Recent evidence of a small Paleo-Eskimo migration from Siberia to Greenland about 4,000 years ago, a population which we know from archeological and genetic evidence ultimately died out without leaving much of a trace behind (much like the Viking colonists who did the same thing 3,000 years later), does not undermine the basic thrust of this conclusion.
This makes the common proto-language of the Americas at least two thousand years older than the oldest known Old World proto-language.
If each thousand year cycle of random linguistic drift produces a certain percentage change in a language, then language change happens at an exponential, albeit slow, rate and each additional iteration blots out evidence of its origins much more completely.
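To make the compounding concrete, here is a minimal sketch. The 86% retention rate per millennium is a purely illustrative assumption (roughly in the spirit of classic glottochronology figures), not a measured value for any of these languages, and the time depths are the rough ones discussed above.

```python
# Illustrative only: compounded loss of ancestral basic vocabulary under a
# constant, hypothetical retention rate per millennium. Real drift rates vary
# by language and are contested; 0.86 is a placeholder assumption.
RETENTION_PER_MILLENNIUM = 0.86

def retained_fraction(millennia: float, retention: float = RETENTION_PER_MILLENNIUM) -> float:
    """Fraction of an ancestral basic word list still recognizable after `millennia`."""
    return retention ** millennia

time_depths = [
    ("oldest established Old World proto-languages (~9.5 ka)", 9.5),
    ("proto-language of the Americas (~12 ka or more)", 12),
    ("Australian proto-language (~50 ka)", 50),
]
for label, age in time_depths:
    print(f"{label}: ~{retained_fraction(age):.2%} retained in any one descendant")
# Note: between two sister languages drifting independently, the shared signal
# decays roughly twice as fast as these single-lineage figures.
```

Under an assumption like this one, the Old World families sit in a range where a detectable signal survives, while fifty millennia of independent drift leaves essentially nothing above chance resemblance.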
The Lack Of Communication As A Factor To Prevent Language Divergence
It is likely that ancient Americans were more isolated from each other (as illustrated by a lack of technology transfers between different regions) than the proto-Indo-Europeans or proto-Sino-Tibetans, the peoples of the oldest Neolithic civilizations, were from each other.
Peoples in regular communication with each other can be expected to diverge less linguistically (as illustrated by reduced linguistic divergence during the Roman Empire and Chinese empires) than people who are more isolated from each other.
Instead, we have good reason to believe that American hunter-gatherer societies formed small groups that were isolated into micro-communities for long periods of time.
We have every reason to believe, as documented examples of hunter-gatherer communities in New Guinea, the Amazon, and Nepal, Australia, Africa and India at first European contact suggest, that these groups had equally advanced neighbors with whom relations may not have been entirely peaceful. And we know from archeological evidence that most of the people in the Americas never formed large communities that collectively interacted with each other and shared common territory as a community, so there would have been little need for a common language for large groups of people.
Implications For Language Family Structure
This may have inhibited overarching language families from ever evolving. A group that settled in any particular part of the Americas with a hunter-gatherer economy may not have had any connections to anyone other than its immediate neighbors for tens of thousands of years after its initial arrival and the initial filling out of the continent that coincided with the megafauna extinctions in the Americas and Australia.
On two counts, great community age and small community size, we would expect a much greater degree of linguistic divergence between languages in the Americas and Australia (communication between groups may have been greater in more compact New Guinea) than in the Old World. So, it is hardly surprising that the modern or recently extinct languages in these regions are harder to connect into language families.
The key missing link that allows us to mutually accommodate the groupers and the splitters may be that we also know that the whole of the habitable parts of Australia and the Americas were settled very quickly in linguistic time. It took no more than a couple of thousand years, quite possibly less, for people to expand from the initial migration to fill these continents. Very soon there was no virgin territory to expand into and, barring large scale local die-offs of tribes from newly emerging epidemics, food supply failures or war, no group had any distinct advantage over any other group. Thus, very early on, there may have been no large scale migrations going on.
Thus, there is good reason to think that in Australia and the Americas there may be no large scale intermediate language structure, between the ultimate proto-language and the large number of highly local languages and language isolates found by linguists today, that is waiting to be discovered. It may simply not exist.
In almost all of the Americas there may be twelve modern English to Old English cycles of almost completely random linguistic drift, which was independent in each locality, between proto-Amerind and the indigenous American languages of today. These twelve cycles may involve so much random linguistic drift that the information about the proto-language retained in its modern descendants may be all but erased. There may be nothing we can extract from language family grouping and proto-language reconstruction today, no matter how expert our methods, that is more probing than the one cross Bering Strait connection between American Indian languages and Siberian languages that has already been inferred.
Indeed, it may be more fruitful for linguists studying indigenous American languages to start from a presumed proto-language and see what insight it can give them into the modern languages and the independent drift processes by which they came into being, than it is to try to trace the modern languages back to common sources.
Australia Compared
The Australians, from what we know about their pre-history, seem to have overlapped more with each other, been confined mostly to a geographically smaller area, and migrated over larger distances than was typical in the Americas. That interaction may have created large communities in which having a common language would be useful, making linguistic drift for different languages less independent.
But, Australia was a highly fractured society for all of its pre-European contact history (there was Austronesian contact for the last millennium or two at most, but this was very slight), which was much longer than that of the Americas. Again, there isn't much to drive phylogenetic-tree-style deep structure linking the proto-language of Australia with its successor languages. At the time scales involved there are so many cycles of random turnover that the commonalities that do exist may have more to do with loan words that arose randomly and then were exchanged between Australian languages than with a common source in a proto-language.
The Case For The Altaic Language Family
A new analysis makes a solid empirical case that the Turkic, Mongolian, Tungusic, Korean, and Japanese languages are all members of a "genetically related" Altaic language family, using a tool called "Consonant Class Matching" that was validated with an analysis of Indo-European and Semitic languages.
There are 30 Turkic languages spoken by about 180 million people, most notably in Turkey and Central Asia. There are at least nine Mongolian languages spoken by 5-6 million people mostly in Mongolia and neighboring parts of China. There are about 75,000 speakers of thirteen Tungusic languages, many of which have only a few hundred or fewer speakers. About 78 million people speak Korean. More than 130 million people speak Japanese or closely related languages.
About Genetic Relationships in Linguistics and Language Families
Languages change over time. Over many centuries, a spoken language typically changes enough to cease to be clearly the same language. For example, most speakers of modern English find that it takes effort to read Shakespeare (flourished 1589-1613), that it is very difficult to read the Middle English original works of writers like Chaucer (flourished in the late 1300s), and that Old English (ca. 400s to 1100s) reads as another, only vaguely familiar looking language.
When people who speak the same language are mostly isolated from each other for extended periods of time, the source language changes in ways that differ for each group of speakers. For instance, when the Western Roman Empire fell (476 AD is a date commonly used as the defining moment), the common Roman version of Latin spoken in the Roman Empire, which had avoided breaking into mutually unintelligible dialects through political unity, began to diverge. The result was the Romance languages: French, Portuguese, Spanish, Italian, Romanian and many less well known languages that are spoken today.
Some languages have well established "genetic" links. We can show through a combination of historical and linguistic evidence, and sometimes surviving evidence of intermediate versions of the languages, that they are all primarily derived from a common source language (although there may be borrowed words and influences from other languages). Two of the best established language families are Indo-European (which includes the Germanic languages, the Romance languages and the Sanskrit-derived languages of India) and Semitic (which includes both Hebrew and Arabic).
The Evidence For Altaic
Not all languages fit into a consensus language family; there are classification controversies. One such controversy concerns the validity of the hypothesis that the Turkic, Mongolian, Tungusic, Korean, and Japanese languages make up a single Altaic language family. Wikipedia sums up this debate, citing the state of play as of 1999:
These language families share numerous characteristics. The debate is over the origin of their similarities. One camp, often called the "Altaicists", views these similarities as arising from common descent from a Proto-Altaic language spoken several thousand years ago. The other camp, often called the "anti-Altaicists", views these similarities as arising from areal interaction between the language groups concerned. Some linguists believe the case for either interpretation is about equally strong; they have been called the "skeptics" (Georg et al. 1999:81).
Another view accepts Altaic as a valid family but includes in it only Turkic, Mongolic, and Tungusic. This view was widespread prior to the 1960s, but has almost no supporters among specialists today (Georg et al. 1999:73–74). The expanded grouping, including Korean and Japanese, came to be known as "Macro-Altaic", leading to the designation by back-formation of the smaller grouping as "Micro-Altaic". Most proponents of Altaic continue to support the inclusion of Korean and Japanese.
Micro-Altaic would include about 66 living languages, to which Macro-Altaic would add Korean, Japanese, and the Ryukyuan languages for a total of about 74. (These are estimates, depending on what is considered a language and what is considered a dialect. They do not include earlier states of language, such as Old Japanese.) Micro-Altaic would have a total of about 348 million speakers today, Macro-Altaic about 558 million.
The new study compared languages using a tool called Consonant Class Matching (CCM), which rates the similarity of two languages by the percentage of word pairs, drawn from a standard one-hundred-word list, whose first two consonants fall into the same consonant classes.
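To make the mechanics concrete, here is a minimal sketch of a CCM-style similarity index. The consonant classes and miniature word lists below are hypothetical placeholders, not the actual sound classes or 100-word lists used in the study.

```python
# A toy Consonant Class Matching (CCM)-style comparison: two words "match" when
# the classes of their first two consonants agree; the similarity index (SI) is
# the percentage of shared meanings that match. Classes and data are invented.
CONSONANT_CLASSES = {
    "p": "P", "b": "P", "f": "P", "v": "P", "w": "P",   # labials (simplified)
    "t": "T", "d": "T", "s": "T", "z": "T",             # dentals/sibilants (simplified)
    "k": "K", "g": "K", "q": "K", "x": "K",             # velars (simplified)
    "m": "M", "n": "N", "r": "R", "l": "R",             # nasals and liquids (simplified)
}

def signature(word: str) -> tuple:
    """Class labels of the first two consonants of a word (padded if fewer)."""
    classes = [CONSONANT_CLASSES[c] for c in word.lower() if c in CONSONANT_CLASSES]
    return tuple((classes + ["-", "-"])[:2])

def similarity_index(list_a: dict, list_b: dict) -> float:
    """Percentage of shared meanings whose words match on both consonant classes."""
    shared = [meaning for meaning in list_a if meaning in list_b]
    if not shared:
        return 0.0
    matches = sum(signature(list_a[m]) == signature(list_b[m]) for m in shared)
    return 100.0 * matches / len(shared)

# Miniature stand-ins for 100-word lists (meaning -> word), purely illustrative.
lang_a = {"water": "voda", "two": "dva", "name": "imya", "eye": "oko"}
lang_b = {"water": "woda", "two": "dwa", "name": "imie", "eye": "glaz"}
print(f"SI = {similarity_index(lang_a, lang_b):.0f}%")  # 3 of 4 meanings match -> 75%
```

With real 100-word lists, the study reports that chance alone produces an SI of only a percent or two, which is why double-digit SIs between proto-languages are treated as a strong signal.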
The method was validated with Indo-European and Semitic languages.
Applying the procedure to 21 modern Indo-European (IE) languages . . . we find that it reliably identifies such branches as Indic, Slavic, Germanic, and Romance (SIs varying between 45 and 77%, all statistically significant at P < 10⁻⁶). By contrast, similarity between languages belonging to different branches is much lower (between 1 and 21%). A particularly interesting comparison is between Germanic and Indic languages. The SIs are very low, between 1 and 7%. Half of the comparisons are not significant at the 0.05 level, while all but one of the rest are weakly significant at 0.01 < P < 0.05. [Ed. A 1-2% similarity is expected from random chance.]
Both the Indic and the Germanic groups reveal themselves beyond any doubt, while the genetic relation between these two groups is not convincingly demonstrated [with a one on one comparison of the languages]. We recall that the validity of the IE family was originally established not on the basis of modern languages but rather by comparing ancient ones, which are much closer to each other. The results of the CCM method reflect the greater degree of similarity (all comparisons are significant at least at P < 0.02 level, and most at much higher significance levels).
The SI between Old High German and Old Indian, in particular, is 14%. The probability of this overlap happening by chance is vanishingly small (<10⁻⁶). When we apply the CCM approach to several ancient Semitic languages we find that SIs for all comparisons are highly significant (P << 10⁻⁶). . . .
When we apply the CCM method to the proto-languages of four IE branches, we obtain the same pattern as for attested ancient languages. For example, the SI between the Proto-Iranian and the Proto-Germanic languages is 13%. By contrast, in pairwise comparisons between five modern Germanic languages (German, English, Dutch, Icelandic, and Swedish) and two modern Iranian languages (Kurdish and Ossetian) it ranges between 5 and 10% (average = 7%).
Using reconstructed proto-languages can sometimes yield even better results than using attested old languages, as is shown in the Iranian–Germanic comparison. The SIs between Old High German and Avestan or Classical Persian are only 9–10%, whereas the overlap between Proto-Germanic and Proto-Iranian is 13% (and the statistical significance of the result increases by several orders of magnitude). This improvement is at least partially due to the greater age of Proto-Germanic and Proto Iranian compared with Old High German and Classical Persian respectively.
So, how do the proposed languages of the Altaic language family fare?
Next, we use the CCM approach to test the reality of the Altaic family. We have four independent reconstructions: Proto-Turkic, Proto-Mongolian, Proto-Tungus, and Proto-Japanese (Korean dialects are too similar to one another to justify a reconstruction of Proto-Korean). We also calculated the degree of similarity between these four languages and Proto-Eskimo, because Mudrak proposed that Eskimo languages are closely related to the Altaic family. The SIs for the four Altaic proto-languages range between 6 and 11% (average = 8.7%). This range of values is lower than that for the IE family. Nevertheless, the significance levels range between 0.01 and less than 10⁻⁵, and this is strong evidence for historical connections among the four linguistic groups.
Note that when we run the test on modern languages, the degree of similarity between them is greatly attenuated. For example, comparing five modern Turkic languages (Turkish, Tatar, Chuvash, Yakut, and Tuvinian) with two modern Japanese ones (Tokyo and Nasa) we detect a statistically significant relationship only in two out of ten cases (P-values are 0.03 and 0.01). The SI between the proto-languages, however, is significant at P < 0.001 level. This is the same pattern that we have already noted in the context of the IE family. Interestingly, we find support for the hypothesis of Mudrak that there is a relationship between Altaic and Eskimo.
So, there is strongly statistically significant evidence that the relationship between the languages is not random. But what if loan words, rather than a genetic relationship, link the languages?
What remains, however, is the second objection: that the proto-languages of these families could have acquired similar lexicons “due to a prolonged history of areal convergence." One possible response to this alternative explanation is that borrowings into the basic lexicon (100-word lists) are rare. Thus, we expect that languages belonging to different linguistic families will have low SIs, even when they have coexisted in the same region for a long period of time. We test this proposition empirically.
First, we looked at comparisons of languages belonging to different families that were located in spatial proximity: (a) Old Chinese vs. the proto-languages within Altaic; (b) Turkish vs. modern languages of people that inhabited the Ottoman Empire (1378–1914); and (c) Turkish vs. Classical Persian and Arabic. The last comparison is particularly interesting because these three languages have coexisted in close cultural interaction at least since the Seljuk Sultanate (eleventh century), and many educated persons in the Middle East were trilingual.
The SIs . . . are somewhat higher than expected under the null hypothesis: three out of eleven comparisons are significant at the 0.05 level, and the maximum SI is 6%. What is important for our purposes, however, is that prolonged contact yields much lower SIs than those observed between proto-languages within Altaic (such as the SIs of 11% observed in comparisons of Proto-Mongolic with Proto-Turkic or Proto-Tungus). This observation is contrary to the hypothesis that the observed similarities between Altaic languages are entirely due to borrowings.
More generally, in the 66 comparisons between Altaic and Semitic languages the SIs ranged between 0 and 5% and there were only two significant P-values (whereas we expect 3.3; . . . ). This pattern is precisely what should happen when languages are so distantly related that most “signal” has been lost and there were no cross-borrowings into the basic lexicon.
In the 363 comparisons between Altaic and IE languages, however, there were 45 significant values (versus the expected 18). There is, thus, evidence for either some limited degree of cross-family borrowing or else deeper genetic connections between the Altaic and Indo-European families, as was proposed by Illich-Svitych in the context of his Nostratic superfamily, or both.
The main point, however, is that the evidence for internal connections between the Altaic languages is orders of magnitude stronger. (To test the superfamily idea properly using CCM it will be necessary to compare the reconstructed proto-languages of Indo-European, Altaic, and so forth.) The maximum observed SI in comparisons of modern languages or proto-languages within Altaic to those within IE was 8% (between Albanian and Nasa, no doubt caused by chance: the bootstrap-estimated probability of getting at least one SI=8% or better in the 363 comparisons is P > 0.7). By contrast, in the comparisons between the proto-languages within Altaic we observe SIs up to 11%. The . . . probability of getting two SIs of 11% in six comparisons . . . by chance is much less than 10⁻⁶. . . .
The evidence for the common origin of the Altaic languages, at least with respect to word-list comparisons, is thus nearly as strong as that for the Indo-European languages. If the Indo-European family is accepted as real, the same conclusion should also apply to the Altaic family.
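As a quick sanity check, the "expected by chance" counts quoted above are just the false-positive counts implied by a 0.05 significance threshold applied to many independent comparisons; a minimal sketch, using only numbers from the excerpts:

```python
# Expected number of spuriously "significant" results at alpha = 0.05 if every
# pairwise comparison were pure noise, versus the counts reported in the study.
alpha = 0.05
comparisons = [
    ("Altaic vs. Semitic", 66, 2),          # 66 comparisons, 2 significant
    ("Altaic vs. Indo-European", 363, 45),  # 363 comparisons, 45 significant
]
for label, n, observed in comparisons:
    expected = alpha * n
    print(f"{label}: expected ~{expected:.1f} by chance, observed {observed}")
```

The Altaic-Semitic comparisons land at or below the chance expectation, while the Altaic-Indo-European comparisons exceed it severalfold, which is the pattern the authors read as either limited cross-family borrowing or a deeper (e.g. Nostratic) connection.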
This study would, among other things, find a language family home for the world's largest language isolate (Korean).
With this conclusion, what does the Big Picture look like?
If this evidence is correct in showing that there is a Macro-Altaic language family, then the largest language families in the world (by the percentage of the world population that speaks them) would be as follows:
*Indo-European languages 46% (Europe, Southwest to South Asia, America, Oceania)
*Sino-Tibetan languages 21% (East Asia)
*Niger-Congo languages 6.4% (Sub-Saharan Africa)
*Afro-Asiatic languages 6.0% (North Africa to Horn of Africa, Southwest Asia)
*Austronesian languages 5.9% (Oceania, Madagascar, maritime Southeast Asia)
*Altaic languages 5.8% (Central Asia, Northern Asia, Anatolia, Siberia, Japan, Korea)
*Dravidian languages 3.7% (South Asia)
*Austro-Asiatic languages 1.7% (mainland Southeast Asia)
*Tai-Kadai languages 1.3% (Southeast Asia)
These nine language families include the languages of 97.7% of the world's population. (The Semitic language family discussed above is a subgroup of the Afro-Asiatic language family.)
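Summing the rounded shares listed above is a quick check on that total (the small difference from 97.7% presumably reflects rounding of the individual figures):

```python
# Sum of the rounded population shares listed above.
shares = {
    "Indo-European": 46.0, "Sino-Tibetan": 21.0, "Niger-Congo": 6.4,
    "Afro-Asiatic": 6.0, "Austronesian": 5.9, "Altaic": 5.8,
    "Dravidian": 3.7, "Austro-Asiatic": 1.7, "Tai-Kadai": 1.3,
}
print(f"Total: {sum(shares.values()):.1f}% of the world's population")  # ~97.8%
```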
Old World Languages Overview
A complete list of living language families of the world, excluding Australia, New Guinea (and a few neighboring islands) and the Americas would comprise the following 28 language families, 9 language isolates, and 8 unclassified small African languages.
At least six proposals to merge some of these language families are being discussed and some of the languages and language families are at risk of becoming extinct. If all six proposals were adopted and proposals to classify four of the language isolates and two unclassified languages were adopted, there would be just 22 language families, 5 language isolates and 6 unclassified languages outside the Americas, Australia, New Guinea and the nearby islands.
The languages are arranged by geography and suggestions of relatedness.
African/Near Eastern Language families and language isolates (11+2):
* Afro-Asiatic languages (formerly Hamito-Semitic)
* Nilo-Saharan languages
* Kadu languages (probably Nilo-Saharan) nine languages
* Koman languages (perhaps Nilo-Saharan) 47,000 speakers of five or six languages in Ethiopia and Sudan
* Songhay languages 2.6 million speakers of many languages around the Niger River
* Niger-Congo languages (sometimes Niger-Kordofanian)
* Mande languages (perhaps Niger-Congo)
* Ubangian languages 2-3 million speakers of 70 languages in and around the Central African Republic
* Khoe languages (part of the Khoisan proposal) 300,000 speakers of eight languages.
* Tuu languages (part of Khoisan) 4,200 speakers of two languages.
* Juu-ǂHoan languages (part of Khoisan) 45,000 speakers of two languages.
* Hadza language isolate (Tanzania) (perhaps Khoisan) Fewer than 1,000 speakers.
* Sandawe language isolate (Tanzania) (may be related to Khoe) 40,000 speakers
The last five languages/language families are all click languages spoken by peoples with hunter-gatherer economies in the recent past or the present.
European, Central Asian, South Asian and North Asian language families (12+7):
* Basque language isolate (Spain, France) (related to extinct Aquitanian) 1,063,000 speakers
* Northwest Caucasian languages (often included in North Caucasian) 1.7 million speakers of four living languages
* Northeast Caucasian languages (often included in North Caucasian) 4 million speakers of about thirty-two living languages
* South Caucasian languages 5.2 million speakers of four living languages
* Altaic languages (see map and description at top of post)
* Uralic languages (Finland, Hungary and Northern Russia) 25 million speakers of 39 languages
* Yukaghir languages (Eastern Siberia in Russia) 200 speakers of two languages (possibly related to Uralic)
* Indo-European languages (Europe, Iran and India)
* Nihali (aka Kalto) language isolate (Maharashtra, India) (sometimes linked to Munda) 2000 speakers (spoken by a "tribal" people of India indigenous to local tropical jungles, probably as hunter-gatherers)
* Dravidian languages (Southeast India)
* Great Andamanese languages (part of the Andamanese proposal) (islands near India) 50 speakers of 1-2 languages.
* Ongan languages (part of the Andamanese proposal) (islands near India) 296 speakers of two languages.
* Kusunda language isolate (Nepal) 8 speakers (sometimes linked to Andaman and West New Guinea) (a moribund hunter-gatherer tribe physically dissimilar to neighboring groups)
* Shompen language isolate (Nicobar Island) (little known; appears to be two languages) 400 speakers (currently interdicted hunter-gatherers with minimal outside contact)
* Chukotko-Kamchatkan languages (Northeast Siberia in Russia) 11,000 speakers of four living languages
* Nivkh or Gilyak language isolate (Russia) (sometimes linked to Chukchi-Kamchatkan) 1,000 speakers
* Ainu language isolate (Japan, Russia) (like Arabic or Japanese, the diversity within Ainu is large enough that some consider it to be perhaps up to a dozen languages while others consider it a single language with high dialectal diversity) 100 speakers (indigenous people of Northern Japan who had substantial trade with the Nivkh people)
* Dené-Yeniseian languages (Dené is a discontiguous Native American language group; Yeniseian is an Eastern Siberian language family whose last living language, Ket, has about 500 speakers)
* Burushaski language isolate (Pakistan, India) (sometimes linked to Yeniseian) 87,000 speakers
Tibetan, Chinese, Southeast Asian and Pacific language families (5+0):
* Austronesian languages (part of the Austro-Tai proposal) (Indonesia, Madagascar, the Pacific)
* Sino-Tibetan languages (including China and Tibet)
* Tai-Kadai languages (part of Austro-Tai proposal) (Southeast Asia and South China)
* Hmong-Mien languages (Southeast Asia)
* Austro-Asiatic languages (Southeast Asia)
Collectively, language isolates mostly have few speakers. The eight language isolates other than Basque have fewer than 131,000 speakers combined, and most of those languages have hypothetical links to existing language families.
There are about eight living languages in Africa that are not currently classified:
* Ongota (perhaps Afro-Asiatic)
* Gumuz (perhaps Nilo-Saharan)
* Bangi-me (ethnically Dogon)
* Dompo
* Mpre
* Jalaa
* Laal
* Shabo
There are also a number of notable language isolates and unclassified languages in this region that are extinct and there is at least one extinct language family.
There are some notable extinct language isolates in this region:
* Elamite (Iran) (sometimes linked to Dravidian)
* Sumerian (Iraq)
* Hattic (Turkey) (sometimes linked to Northwest Caucasian)
The Hurro-Urartian and Tyrsenian language families are also extinct.
There are some notable extinct unclassified languages in this region:
* Iberian (Spain)
* Tartessian (Spain, Portugal)
* North Picene (Italy)
* Kwadi (perhaps Khoe)
* Meroitic (variously thought to be Nilo-Saharan or Afro-Asiatic)
* Quti
* Kaskian
* Cimmerian
Murray Gell-Mann of the Santa Fe Institute is one of the authors of the new study, and a post related to him discusses proposals for linking many of the world's remaining language families and language isolates with deeper genetic relationships.
New World Languages Overview
There is strong circumstantial evidence to suggest that all the language families and language isolates in Australia, New Guinea and the vicinity are really descended from the language of the founding population of Australia, the language of the founding population of Taiwan, or are creoles of languages in those two families. These languages are currently classified into 36 language families and about 20 language isolates.
Linguists currently break Native American languages (North, Central and South American) into about 85 language families and 57 language isolates, despite the fact that it is clear that all pre-Columbian Native American peoples who left any linguistic trace (e.g. excluding Norse settlers who arrived around 1000 AD and whose colonies ultimately failed without leaving a linguistic trace in neighboring areas) share a common descent from a small number of waves of immigration from Siberia. Such common origins strongly suggest that all of the pre-Columbian languages of the Americas could be grouped into a small number of genetically related macro-language families.
But, common genetic linguistic roots for Native American languages are hard to discern. Like Australia and New Guinea, these peoples spent well over ten thousand years in isolation, and only a small number of geographically large political units emerged in the pre-Columbian era. The New World regions did not experience, as the Old World did, hosts of language extinctions in the pre-historic era. As a result, there is much more linguistic diversity in the Americas than there is in the Old World.
UPDATE (2/24/2010): The study did not analyze Korean linkages despite discussing it as a possible member of the language family.
There is a hierarchy in the strength of relationships between the proposed Altaic language family languages. The strength of the language relationships between the non-Japanese proto-languages is greatest between geographically adjacent languages. Thus, Proto-Turkic is most closely related to Proto-Mongolian, which in turn is most closely related to Proto-Tungus (i.e. Manchurian), which in turn is most closely related to Proto-Eskimo, on an anticipated West to East axis. Proto-Turkic is barely distinguishable as related to Proto-Eskimo at all when compared directly; only the intermediate language relationships show the extended Altaic language relationships.
Japanese is the exception. It is as closely related to Proto-Turkic as Proto-Mongolian is, and the strength of the relationship of the Japanese languages to all other language families in the Altaic superfamily appears to be derivative of their relationship to the Turkic languages, despite the great distance between Anatolia and Japan compared to the geographical distances associated with the other languages.
Proto-Japanese probably reflects the language of the mounted military invaders of Japan from the north (Korea is frequently described as the source) ca. 2400 years ago (perhaps up to 500 years earlier). The relationships between the proposed Altaic languages and modern Japanese follow the same hierarchy, but are about half as strong. Modern Japanese is likewise more closely related to the Anatolian Turkic languages than it is to the Eastern Turkic languages. This is a linguistic mystery.
The speakers of proto-Japanese were a bronze age civilization. The last bronze age civilization in Anatolia was that of Troy, 3000 BC-700 BC, whose end period coincides roughly with the beginning of the Yayoi period of colonization. The Trojans, of course, were a seagoing civilization, although not known to have traveled as far as the Far East. They, however, probably spoke an extinct Indo-European language. Phrygia was also an Anatolian civilization at the time, but the Phrygians also spoke an Indo-European language, as did the neo-Hittites.
The earliest written evidence of a Turkic language comes from Mongolia in the 700s AD. Turkic did not originate in modern Turkey, but instead expanded into that region during the Middle Ages. Proto-Turkic is assumed to date from about the 300s AD. The history of the Turkic expansion can be summed up as follows:
The Turkic migration as defined in this article was the expansion of the Turkic peoples across most of Central Asia into Europe and the Middle East between the 6th and 11th centuries AD (the Early Middle Ages). Tribes less certainly identified as Turkic began their expansion centuries earlier as the predominant element of the Huns. Their prehistoric point of origin was the hypothetical Proto-Turkic region of the Far East including North China, especially Xinjiang Province and Inner Mongolia with parts of Mongolia and Siberia possibly as far west as Lake Baikal and the Altai Mountains. They may have been among the peoples of the multi-ethnic historical Saka known as early as the Greek writer Herodotus.
Certainly identified Turkic tribes were known by the 6th century and by the 10th century most of Central Asia, formerly dominated by Iranian peoples, was settled by Turkic tribes. The Seljuk Turks from the 11th century invaded Anatolia, ultimately resulting in permanent Turkic settlement there and the establishment of the nation of Turkey. Meanwhile the other Turkic tribes either ultimately formed independent nations, such as Kyrgyzstan, Turkmenistan, Uzbekistan and Kazakhstan or formed enclaves within other nations, such as Chuvashia. Turkics also survived on the original range as the Uyghur people in China and the Sakha Republic of Siberia, as well as in other scattered places of the Far East and Central Asia.
The Proto-Turkic homeland is geographically close to Tibet, which would explain how a Y chromosome haplotype that is now common in Tibet came to be frequent in a conquering class in Japan that spoke a language related to proto-Turkic.
Inner Mongolia's history at the relevant time was as follows:
During the Zhou Dynasty, central and western Inner Mongolia (the Hetao region and surrounding areas) were inhabited by nomadic peoples such as the Loufan, Linhu, and Dí, while eastern Inner Mongolia was inhabited by the Donghu. During the Warring States Period, King Wuling (340–295 BC) of the state of Zhao based in what is now Hebei and Shanxi provinces pursued an expansionist policy towards the region. After destroying the Dí state of Zhongshan in what is now Hebei province, he defeated the Linhu and Loufan and created the commandery of Yunzhong near modern Hohhot. King Wuling of Zhao also built a long wall stretching through the Hetao region. After Qin Shihuang created the first unified Chinese empire in 221 BC, he sent the general Meng Tian to drive the Xiongnu from the region, and incorporated the old Zhao wall into the Qin Dynasty Great Wall of China. He also maintained two commanderies in the region: Jiuyuan and Yunzhong, and moved 30,000 households there to solidify the region. After the Qin Dynasty collapsed in 206 BC, these efforts were abandoned.
The Donghu people in particular look plausible as proto-Japanese settlers, as do the neighboring Dongyi people. Also plausible are the Xiongnu, to the west of the Donghu, whom the Great Wall was erected to keep out. This would have been near the boundaries of the Silk Road and preceded any civilization politically conceived of as Tibetan.
An overview of theories of the origins and classifications of the Japanese language can be found here; it generally favors a relationship between Japanese and an extinct language spoken in Korea before Korean became dominant there.
Genetically, Japan (also here) and Korea (and here) are very similar in mtDNA and are each distinct from surrounding regions based on Y chromosome analysis, although not quite as similar in Y chromosomes as in mtDNA.
For example, mtDNA type C is common in Siberia and found in smaller numbers in Mongolia, but rare elsewhere in East Asia, including Japan and Korea. mtDNA types F1 and M are rare in Japan and Korea but common in Southeast Asia and the adjacent Indonesian islands.
Y haplotypes that are found in two-thirds of Japanese men (D-M174 and O-P31) are found in on the order of a third of Korean men and are rare in Siberia, Mongolia and China. The D-M174 haplotype, found in about a third of Japanese men and perhaps one in twenty Korean and Thai men, is the dominant haplotype in Tibet, where it is found in perhaps 85%-90% of Tibetan men. It is virtually absent everywhere else in East Asia and Siberia. "D-M174 is derived from African haplogroup DE-M1 (Yap+), is found at highest frequency in Andamanese, Tibetans and Japanese, and only sporadically elsewhere, and has been dated to about 60 kya."
The O-P31 haplotype that is found in about a third of Japanese men is found in about a quarter of Korean men and is common in Southeast Asia and the neighboring Indonesian islands, but is rare in Siberia, Mongolia, China, Tibet, Taiwan, the Philippines and the Moluccas.