01 October 2021

What Expertise Matters When Making Good Predictions?

Smart generalists rock!

This is just one chapter in a larger story. 
At many points in the war, the coalition had access to the insights of people who had graduated from the world’s best universities and brought highly specialized knowledge to issues (state building, counterterrorism) that the United States was facing in Afghanistan. The last president of the American-backed government, Ashraf Ghani, has a Ph.D. from Columbia and was even a co-author of a book titled “Fixing Failed States.” But for all their credentials, they were not able to stop a swift Taliban takeover of the country.

What Afghanistan shows is that we need a new definition of expertise, one that relies more on track records and healthy cognitive habits and less on credentials and the narrow forms of knowledge that are too often rewarded. In an era of populism and declining trust in institutions, such a project is necessary to put expertise on a stronger footing.

It’s true that many experts also opposed the Afghanistan war and thought that the United States was seeking unrealistic goals. But the individuals with the most subject-matter expertise tended to get things the most wrong. That included generals with counterinsurgency experience in Iraq and Afghanistan as well as many of the think tank analysts who focused most closely on those conflicts.

Perhaps we shouldn’t be surprised. Philip Tetlock, a psychologist, has famously shown that subject-matter experts are no better at accurately forecasting geopolitical events relevant to their field than those with training in different areas. In another study, the intelligence community, with access to classified information, proved less accurate than an algorithm weighted toward the views of amateurs who had no security clearances but a history of making accurate forecasts.
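That weighting idea is easy to picture. Below is a minimal sketch, in Python, of one way to aggregate probability forecasts by forecasters’ past accuracy; the Brier-score weighting scheme and all the numbers are illustrative assumptions, not the actual method or data from the study mentioned above.

# Illustrative sketch: combine probability forecasts by weighting each
# forecaster according to past accuracy (lower Brier score = more weight).
# Everything here is hypothetical; it is not the study's method or data.

def brier_score(probs, outcomes):
    # Mean squared error between predicted probabilities and 0/1 outcomes.
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def weighted_forecast(current_probs, past_scores):
    # A lower past Brier score earns a larger weight on today's forecast.
    weights = [1.0 / (s + 1e-6) for s in past_scores]
    return sum(w * p for w, p in zip(weights, current_probs)) / sum(weights)

# Hypothetical track records: three forecasters, three past yes/no events.
past_probs = [[0.8, 0.3, 0.9], [0.6, 0.4, 0.7], [0.9, 0.9, 0.1]]
past_truth = [1, 0, 1]
scores = [brier_score(p, past_truth) for p in past_probs]

# Their probabilities for a new event; the aggregate leans toward the
# forecaster with the best track record, regardless of credentials.
todays_probs = [0.7, 0.5, 0.2]
print(round(weighted_forecast(todays_probs, scores), 2))

The point of such schemes is simply that past forecasting performance, not credentials, determines how much weight a view receives.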

So “just trust the experts” is the wrong path to take. But simply deciding to ignore them can lead us down rabbit holes of conspiracy theories and misinformation. The subject-matter experts in Mr. Tetlock’s research couldn’t beat informed amateurs, but they did defeat random guessing, or the epistemological equivalent of monkeys throwing darts.

This is in part because the divisions we create between fields are, in a sense, artificial. As radical as it sounds, the fact that someone has a Ph.D. in political science or speaks Pashto does not make that person better at predicting what will happen in Afghanistan than an equally intelligent person whose knowledge appears less directly relevant. Anthropology, economics and other fields may offer insight, and it is often difficult to know ahead of time which communities of experts have the most relevant training and tools for a particular problem.

Academia is in some ways nearly ideally suited to producing the wrong kinds of expertise. Scholarly recognition rewards narrow specialization, the right pedigree and the approval of colleagues through peer review rather than performance against any external standard.

From the New York Times
