The Specter of Civil War
The end of World War II in 1945 was, and continues to be, a major line of demarcation in twentieth-century political studies across several important categories, perhaps none so important as intrastate conflict. In the decades since 1945, geopolitical conflict has shifted dramatically away from major interstate conflicts toward an alarming upward trend in intrastate conflicts – also known as “civil wars.” As noted in the study by James Fearon and David Laitin, roughly 25 interstate conflicts – wars between two nations – were waged between 1945 and 1999, with a death toll of approximately 3.33 million. In the same period there were 127 intrastate wars with more than 16.2 million dead: a ratio of roughly 5:1 in both the number of conflicts and the loss of life. A related study found a considerable difference between the durations of interstate and intrastate wars, with the former lasting, on average, 480 days and the latter 1,665 days: a ratio of roughly 3.5:1. In short, civil wars have become the deadliest form of warfare on the planet since the end of World War II. Perhaps more daunting than the statistics associated with civil war is the simple fact that the causes of intrastate conflict are still not well understood. Despite some genuine, well-funded and initially promising efforts to forecast and prevent civil wars, the “silver bullet” of forecasting and prevention remains elusive. The specter of civil war continues to haunt this planet, from the policy-makers in the most powerful halls of government to ordinary citizens trapped in regions of political instability.
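These figures are easy to sanity-check; a quick back-of-the-envelope calculation over the totals cited above (as reported in this essay, not re-derived from the original data sets) reproduces the ratios:

```python
# Back-of-the-envelope check of the ratios cited above (Fearon & Laitin figures).
interstate_wars, interstate_deaths, interstate_days = 25, 3.33e6, 480
intrastate_wars, intrastate_deaths, intrastate_days = 127, 16.2e6, 1665

print(intrastate_wars / interstate_wars)      # ~5.08 -> roughly 5:1 in conflicts
print(intrastate_deaths / interstate_deaths)  # ~4.86 -> roughly 5:1 in deaths
print(intrastate_days / interstate_days)      # ~3.47 -> roughly 3.5:1 in duration
```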
In political science, there are generally three approaches to understanding political conflict: a. the qualitative method of interviewing subject matter experts (SMEs) and aggregating and analyzing their reports and predictions, b. the quantitative method of formulating robust statistical probability models that measure key factors (usually by proxy markers for larger political indicators) and, c. a combination of both. There are, to be sure, advantages and disadvantages to each method with regard to political conflict in general and intrastate conflict in particular. The first method has the advantage of first-hand, experiential knowledge of a political hot spot. SMEs are paid to be “on the ground”, out among the local population, reading the newspapers and keeping a close eye on potentially inflammatory situations. They provide a potentially critical human intelligence element to information gathering. The major disadvantage of relying primarily on SMEs is that they, of course, cannot be everywhere at once. Their perspective, however well trained, is still a limited first-person one. Each must act as his or her own filter of information and, as such, the reliability with which pertinent information reaches the proper individuals is understandably hampered. There is also, of course, the cost of keeping these people on the payroll and the hidden cost of keeping them hidden in plain sight. Moreover, the motivation of money can be double-edged: an SME under pressure to provide useful information in exchange for payment may provide less-than-reliable information or, worse, be paid to provide false information to authorities and agencies. And the potential payoff may be very limited. As Phil Tetlock notes in his book Expert Political Judgment, the discrimination and calibration scores of SMEs – or “Hedgehogs”, as he refers to them – are barely higher than the average scores of a UC Berkeley political science undergraduate.[1]
All of these factors have led many countries, the United States among them, to develop quantitative methods for capturing and analyzing data in hopes of predicting conflict more accurately and cost-effectively than SMEs can. These methods usually take the form of highly sophisticated statistical probability models developed by mathematicians and computer programmers. Without resorting to stereotype, it should be obvious what limitations mathematicians and computer programmers may have in the political sciences. The quality of a model is determined not only by the quality of its code but, more importantly, by the “statistical significance” associated with each variable and by the information fed into the model itself. “Garbage in, garbage out,” as the old saying goes. As with many contrasting methods, the best answer probably lies somewhere in combining the two: striking the “perfect” balance that maximizes the advantages of both while simultaneously minimizing the disadvantages. This is, as you may suspect, not nearly as easy as it sounds. Finding a researcher with equal parts mathematical brilliance and geopolitical knowledge who is also willing to tackle a problem as elusive as intrastate conflict is, in many respects, rather like drafting an NFL Hall of Fame quarterback. The statistical odds of success decrease dramatically as one attempts to find enough of these individuals to fill an entire research group. The few research groups and studies that currently exist in this field are complicated by an understandable, yet highly counter-productive, competitive antagonism between the quantitative and qualitative methodological camps. Factoring in personality differences, competition in academic pedigree and prevailing ideological worldviews, the apparent hopelessness of the task becomes nearly overwhelming. Yet in the midst of such impressive obstacles, several groups have emerged with promising studies. The papers and studies I will be discussing are: “How Much War Will We See? Explaining the Prevalence of Civil War” by Ibrahim Elbadawi of the World Bank and Nicholas Sambanis of Yale University; “Greed and grievance in civil war” by Paul Collier and Anke Hoeffler of Oxford University; “Ethnicity, Insurgency, and Civil War” by James Fearon and David Laitin of Stanford University; “When and How the Fighting Stops: Explaining the Duration and Outcome of Civil Wars” by Patrick Brandt of The University of Texas at Dallas, T. David Mason of The University of North Texas, Mehmet Gurses of Florida Atlantic University, Nicolai Petrovsky of Cardiff University and Dagmar Radin of Mississippi State University; and, finally, “The Perils of Policy by P-Value: Predicting Civil Conflicts” by Michael Ward and Brian D. Greenhill of the University of Washington with Kristin Bakke of Leiden University.
Each of these essays contributes – in varying degrees – to the third, combined method discussed in the introduction. I say “in varying degrees” mostly because all of them fall much harder on the quantitative side of the question than the qualitative. Moreover, each study centers on civil wars as a whole rather than on a particular “hot spot” or theatre of conflict, as one would expect from a report filed by an SME. The qualitative contributions of each study come mostly from each group’s interpretation of the data and the policy recommendations that accompany it. That said, each paper is devised with the underlying premise that quantitative modeling is a crucial element of understanding political conflict, an opinion that must be noted up front, since political scientists across the globe do not universally share it. With this essay, I hope to present and analyze both the groundbreaking contributions and the problems of each study, together with a recommendation for the direction of further studies into the causes and, ultimately, the prevention of intrastate wars.
Before discussing the findings specific to each study, it may help to go over the obstacles that arise in any attempt to compare or contrast these studies. The first obstacle is that each study is, at least in some ways, built on the back of the research, findings and recommendations that came before it. Like most of the applied mathematical sciences, these projects are never done in a vacuum. Every researcher brings a lifetime of accrued knowledge and imports, for good or bad, the history of conflict into the projections. Moreover, when dealing with projects of this kind it quickly becomes apparent that there is a definite chasm of opinion between the groups: each paper is delivered from a different literal and metaphorical page than the paper before it. As the reader will see early in the comparison, there are deep-seated divisions between the groups not only on methods and models, but on foundational worldviews that speak to each researcher’s own theory of the function of government and, in some studies, of what it means to be human. Since this is a paper neither on political theory nor on foundational ontological philosophy, I hope to present these interpretations only in the limited sense in which they pertain to the studies themselves. The second obstacle, and perhaps the hardest one to address, comes in the same vein: none of these research groups uses the same data set as the others. For instance, Collier and Hoeffler’s data on civil war runs from 1960 to 1999 and covers only 79 civil conflicts. Fearon and Laitin’s research runs from 1945 to 1999 and analyzes 127 civil wars – roughly 38% more years and 60% more cases than Collier and Hoeffler. Brandt et al.’s data runs from 1945 to 1997 and covers 108 civil wars, slightly fewer than Fearon and Laitin but far more than Collier and Hoeffler. The paper delivered by Elbadawi and Sambanis does not provide specific data sets but appears to use a data set similar to Brandt’s. Finally, the critical analysis provided in the Ward, Greenhill and Bakke paper discusses the data and findings of both Collier and Hoeffler and Fearon and Laitin, but uses a third model to analyze the effectiveness of the first two as if they were studies done on equal terms, which they are not.
In conjunction with the lack of consistency in data sets, there is also – though less surprisingly – a lack of consistency in the models used. No two groups rely on the same model to analyze the data. Each paper presents not only its own set of data but a whole new proprietary model with which to analyze it. As one might expect, the statistical likelihood of two research groups producing similar results or recommendations when different data and models are used is effectively zero. The meaningful quality of these papers is immediately hampered by this lack of consistency. This is not to say that they cannot be useful on their own terms, or even compared with one another, but it is important for any researcher to keep this in mind when addressing these papers critically. Another common problem is that every study begins with the presumption that major factors for political stability (or instability) can be understood by substituting “proxy” indicators. Specific examples will be discussed later in the paper, but it is of the utmost importance to understand that each of these models was developed: a. with the assumption-driven hypotheses of the researchers in mind, b. under the assumption that the factors believed to contribute to in/stability actually do and, c. on the premise that the proxies set up for each factor are both generally accurate and appropriate to the proportional significance of the factor. Simply put, these models are founded on the assumptions of the researchers, and the empirical measurements set up to test those assumptions may be pointed in the “wrong” direction. Perhaps more dangerous than the charge that these models produced inaccurate findings is the overarching charge that the models themselves are designed to produce findings that only confirm the initial hypotheses. Again, crudely stated, the potential output of any model is inherently limited by the predetermined and hypothetical factors on which the model is based. In other words, each of these research groups must be able to demonstrate – and, to date, has not demonstrated – that its model was not designed to produce only those results which the researchers wanted it to. While I do not intend to charge any of these groups with outright falsification – nor would I agree with anyone who levies that charge without indisputable proof – the phantom of impropriety must be resolutely dispelled in any quantitative analysis before the veracity of its results can be relied upon for policy decisions that affect the lives of millions. For the purposes of this paper, however, I will address each study on its own terms without an undermining suspicion of its authenticity. It should be noted that while there is an obvious chronology to the papers I will discuss, I will not be discussing them in chronological order. Some of the papers have gone through several revisions from their earliest forms, and I intend to present not a historiography of the research but a qualitative analysis of it. In this respect, I hope to show that chronology is not as material as content.
Ibrahim Elbadawi and Nicholas Sambanis’ paper “How Much War Will We See? Explaining the Prevalence of Civil War” appeared in the Journal of Conflict Resolution in 2002. While this is the earliest of the five papers discussed in this essay, its date matters not simply for chronology but for context, for reasons explained below. Elbadawi and Sambanis do not provide sample information for their analysis, but they do mention that they are looking at 108 civil wars. While their paper was released six years before the Brandt study, it appears likely that they used a similar or identical data set to the one the Brandt paper used in 2008.[2] As mentioned before, each of these papers has gone through a number of versions and updates, including this study and the papers of the other scholars sourced in this essay, such as Collier and Hoeffler and Fearon and Laitin. What is interesting about the publication date of this particular article is that it comes on the heels of al Qaeda’s attacks on New York, Washington and Shanksville on September 11, 2001 and the subsequent involvement of U.S. and N.A.T.O. forces in the Afghan civil war between the Taliban and the Northern Alliance. While I will address the significance of the timing later in this section, I believe the recommendations Elbadawi and Sambanis make were not made in a vacuum.
According to the introduction of the project, Elbadawi and Sambanis set out to highlight what is, in their opinion, a major flaw: overestimating the economic impact on civil war prevalence at the expense of studying ethno-religious fractionalization.[3] The study operates on three fundamental hypotheses: a. an increase in economic opportunity within a state will decrease the prevalence of civil war, b. an increase in democratic polity ratings within a state will decrease the prevalence of civil war and, c. the closer ethnic fragmentation approaches a middle amount, the higher the prevalence of civil war will be.[4] In their study, they explore the hypothesis that all rebellions are beholden to the amount of financing they can secure; therefore natural resources and the availability of finances are the largest contributors to civil wars.[5] They claim that despite “rising averages in world income and democracy levels” the world is “less safe [now] than 40 years ago”.[6] What is important about this particular admission is that it appears to run counter to their claim that the quality of democratic polity ratings is central to the prevalence of civil wars. It appears that the closer the ethnic diversity ratio between two groups within a state approaches 50% apiece, the higher the risk of civil war, especially in conjunction with a high national population. While the significance of ethnic fragmentation seems to be the heaviest statistical factor weighed by this study, it is not the only factor discussed. In analyzing the polity ratings of states with previous civil war data, the study suggests that a low democratic polity rating also implies that the population of the state has a significant lack of options by which to peacefully articulate grievances.[7]
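For readers unfamiliar with how hypothesis (c) is typically operationalized, the standard tool in this literature is a fractionalization index: the probability that two randomly drawn citizens belong to different groups, which for a two-group state peaks at a 50/50 split. A minimal sketch, with hypothetical group shares:

```python
def fractionalization(shares):
    """Fractionalization index: probability that two randomly drawn
    citizens belong to different groups (1 - sum of squared shares)."""
    return 1 - sum(s ** 2 for s in shares)

# Hypothetical examples: a near-homogeneous state vs. a 50/50 split.
print(fractionalization([0.95, 0.05]))  # ~0.095 -> low fragmentation
print(fractionalization([0.50, 0.50]))  # 0.5 -> maximal two-group fragmentation
```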
This political reality, according to the study, is tantamount to inevitable military conflict. It is from this deduction that Elbadawi and Sambanis arrive at the pre-eminence of democratic regimes in preventing political conflict, admittedly in spite of this particular factor having been dismissed by a number of previous – and subsequent – studies.[8] It is interesting that when the results of the model do not ultimately bear out their initial hypotheses (or support their subsequent conclusions and recommendations), they argue that the factors in their models that do not appear to be statistically significant may be so interdependent with factors that are statistically significant that the insignificant variables are, themselves, significant.[9] A confusing and potentially counter-productive argument, to be sure, all the more so considering they do little to clarify or adjust for this claim in their own research. In the end, it appears that Elbadawi and Sambanis were convinced from the beginning that certain factors were undeniably involved in political in/stability and ignored the lack of evidence in their own research, arriving, in circular fashion, no further than where they began. While I do not call into question the sincerity of their convictions, I believe their conclusions were too strongly influenced by the political climate of the U.S. and Europe in late 2001 and early 2002 to provide anything of lasting utility to the question of political and intrastate conflict. Indeed, their recommendations represent the politically “safe” position that would ultimately form the foundation of the so-called “Bush doctrine” of foreign policy: a. it is important to improve the number and quality of democratic regimes in the world, b. it is important to improve the economic opportunity and growth of impoverished nations by increasing per capita income, and c. improving the political conditions of a state is more feasible than attempting to develop advanced, high-yield economic infrastructure in a repressive regime with a low polity ranking.[10] The essence of their policy recommendation is that, whatever the statistical correlation between a high polity ranking and a lack of intrastate conflict, policy-makers should engage in what amounts to the age-old practice of “nation building” in order to decrease the overall risk of civil war. Of course, the two primary efforts of the United States to increase economic and democratic values within troubled regions of the world – Afghanistan and Iraq – are still ongoing and, currently, without any substantial exit strategy. More than preventing a civil war in either country, the U.S. appears to have created or prolonged two of them by following this recommendation.
The next study I wish to discuss comes from the Oxford Economic Papers: a 2004 article entitled “Greed and grievance in civil war” by Paul Collier and Anke Hoeffler of Oxford University. The work by Collier and Hoeffler is, without a doubt, mentioned most often by the other sources in this essay, and it appears that the work of these two scholars is highly respected even by those who disagree with their findings. In the 2004 paper, Collier and Hoeffler attempt to offer an “econometric” model of predicting civil war, relying on a blend of “motive and opportunity” in the belief that rebellion, like murder, requires both.[11] Collier and Hoeffler use a data set that includes civil wars only from 1960 to 1999. This period covers 13 to 15 fewer years than the contemporary models, roughly a 25-27% smaller window than the other studies in this essay. Similarly, they analyze only 79 civil wars, as opposed to 108 for Brandt et al. and 127 for Fearon and Laitin, a 27-38% smaller sample than other contemporary studies.[12] They also do a particularly good job of summarizing the ongoing debate between political scientists and economists on the question of intrastate conflict, noting that political theorists have long argued that civil wars happen because of grievances, while economists have begun to theorize that civil wars more closely resemble a new industry that uses political violence as a means of collecting resources.[13]
As in Elbadawi and Sambanis’ argument, Collier and Hoeffler argue that grievance may not be the best explanatory factor for rebellion, since all countries have social groups with grievances against the predominant regime.[14] Instead, they analyze “quantitative indicators of opportunities”, including primary commodity export dependence, rebel financing from foreign diasporas, and rebel financing from hostile governments.[15] As I alluded to before, some of the factors and proxied indicators can leave policy-makers dissatisfied, and these are certainly in that risky group. Their proxy for the measurement of a diaspora’s effective influence on its country of origin is the number of immigrants from that country currently living in the United States, as provided by U.S. census data.[16] Is this really the best data to use? What about diasporas living in Europe and in industrialized East Asian countries such as China, Japan and Korea? This proxy could fairly be accused of myopia and narrow scope. Moreover, it appears to betray a belief that international citizens find the United States the most attractive place to live and that the U.S. census provides an accurate picture of non-American diasporic communities. I likewise question using data from the Cold War as a proxy for financing from hostile governments in a post-Cold War environment.[17] While the data is certainly analogous, the motivations and the amounts since the dissolution of the U.S.S.R. have almost certainly changed. Yet even if they remained constant after 1990, Collier and Hoeffler do nothing to reinforce the accuracy of this proxy or address these concerns.
While many of the opportunity factors listed are eventually whittled down to measure their statistical significance within a combined model, the final factors that remain in the model are: economic dependence on a primary commodity export, funding for the insurgency provided by foreign diasporas and/or foreign governments, per capita income, male secondary schooling, economic growth rate, and population demographics.[18] The most statistically significant factor of the group is, by far, state dependence on primary commodity exports. According to the published study by Collier and Hoeffler, the “risk of conflict peaks when [primary commodity exports] constitute 33% of GDP”, thus, “primary exports are highly significant.” At “peak danger” of 33% of GDP there is a 22% chance of civil war, compared to 1% for countries with no primary commodity dependence.[19]
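A risk curve that peaks at an interior value like 33% implies a logit specification with both a linear and a squared export-dependence term. The sketch below is not Collier and Hoeffler’s estimated model; its coefficients are hypothetical, chosen only to reproduce the published figures of roughly 1% risk at zero dependence and 22% at the 33%-of-GDP peak:

```python
import math

# Hypothetical coefficients chosen only to reproduce the published curve:
# ~1% risk at zero export dependence, peaking at ~22% risk at 33% of GDP.
b0, b1, b2 = -4.60, 20.17, -30.57  # peak at -b1 / (2 * b2) = 0.33

def conflict_risk(export_share):
    """Logit risk with linear + squared export-dependence terms."""
    z = b0 + b1 * export_share + b2 * export_share ** 2
    return 1 / (1 + math.exp(-z))

for x in (0.0, 0.33, 0.66):
    print(f"exports = {x:.0%} of GDP -> risk = {conflict_risk(x):.1%}")
```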
In the end, Collier and Hoeffler close their recommendations on a peacekeeping note, urging policy-makers to help maintain peace in countries that have already seen intrastate conflict since, as they interpret the final factor, “time heals”: the longer the duration since the last intrastate conflict, the more chance a state has to recuperate and reinforce the positive forces that discourage civil conflict.[20] At the risk of oversimplifying each series of policy recommendations into a dominant ideological worldview, it would not be inappropriate to call these recommendations strongly pro-U.N. and pro-N.A.T.O. The underlying suggestion is that international organizations and governmental coalitions should engage actively in peacekeeping operations in troubled regions of the world in order to reinforce “peace episodes”, thus reducing the likelihood of a resurgent rebellion. It would also not be inappropriate to tie this opinion, and the timing of the paper’s release, to growing international criticism of the Bush administration’s foreign policies in the Afghan theatre and the newly launched Iraq war.
In their 2003 paper in the American Political Science Review, James Fearon and David Laitin took up the question of the extent to which ethno-religious fractionalization plays a role in the development of intrastate conflict. Inspired in part by the ethno-religious conflict in Afghanistan, India and Pakistan, the Kurdish conflicts in Turkey and Iraq, the Arab-Persian wars of the 1980s, the ethnic violence in the Balkans and eastern Europe and the continued ethnic violence in Africa, a significant body of scholarly opinion holds that the high prevalence of conflict and civil war since 1945 owes much to long-standing ethnic and religious fractionalization in under-developed nations. Fearon and Laitin disagree with this assumption and set out in this paper to test it against a model they developed to measure “conditions that favor insurgency”.[21] According to their model, the problem of civil violence cannot be attributed simply to democracy, religion or ethnic composition. Instead, the seemingly elusive causes of civil war arise from a deeply complex and integrated set of conditions that contributes, according to their theory, to political instability and civil violence of all kinds. In short, regardless of ethno-religious diversity or antagonism within a state, the higher the per capita income, the lower that state’s risk of insurgency. It seems that, in the view of Fearon and Laitin, the old colloquial adage that you can simply rub the money “wherever it hurts” holds true. Moreover, Fearon and Laitin are not particularly convinced that factions identifying themselves as “ethnic” or “nationalist” differ substantially from any other form of insurgency.[22] It would be misleading, however, to say that Fearon and Laitin dismiss ethnic diversity as wholly insignificant: they do argue that ethnic diversity can indirectly lead to conditions that provide prime ground for insurgency, even if it does not influence the insurgency directly.[23] This position on regime type and ethnic fragmentation is a noteworthy shift away from grievance, and toward opportunity, as the main motivation for intrastate conflict. Their model of insurgency is based almost entirely on the premise that wherever there are opportunities for insurgency (i.e. a sufficient number of factors favoring it), regardless of motivation, an insurgency will emerge to challenge the government. These factors include, but are not limited to: a newly independent state, a politically unstable central government, a substantial national population, a territorial base separate from the central seat of government (such as East Pakistan/Bangladesh), the willingness of foreign governments or diasporas to provide funding and/or weapons to insurgents, the presence of low-weight, high-value natural resources that insurgents can exploit to fund their activity, and/or the presence of oil.
The foundational premise for using per capita income as a proxy for government strength is the notion that the lower a nation’s income, the less money there is for the government to tax or appropriate for public services, police and infrastructure. Tellingly, the Fearon-Laitin model shows that a decrease of $1,000 in per capita income can produce an increase of 41% in the odds of civil war. Furthermore, Fearon and Laitin argue that weakened economic conditions (including low per capita income) make recruiting easier for insurgent factions.[24] Nearing the conclusion of their study, Fearon and Laitin suggest that states with low economic production should be classified as their own regime type (regardless of polity rankings) known as “anocracies”: governments whose central authority is weak or non-existent. These anocracies, lacking government revenue from economic production, do not have the resources – outside of foreign aid – to stamp out an insurgency.[25] The only obvious exception to this hypothesis is states with extensive oil reserves, which provide the central government with sufficient “easy” money without the need to develop a strong social infrastructure, leaving dispersed populations, in theory, isolated enough to develop an insurgency without the watchful eye of the government.[26]
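In a logit model, a claim like this is most naturally read as a statement about odds: a 41% increase per $1,000 drop corresponds to a coefficient of roughly ln(1.41) ≈ 0.34 per $1,000. A minimal sketch of the interpretation (the 5% baseline probability is hypothetical; for rare events the odds and the probability move almost in lockstep):

```python
import math

beta = math.log(1.41)  # ~0.344 per $1,000 decrease in per capita income

def updated_probability(p_baseline, income_drop_thousands):
    """Scale the odds by exp(beta) per $1,000 drop, then convert back."""
    odds = p_baseline / (1 - p_baseline)
    odds *= math.exp(beta * income_drop_thousands)
    return odds / (1 + odds)

# Hypothetical baseline: a 5% chance of civil war onset.
print(updated_probability(0.05, 1))  # ~6.9% after a $1,000 drop in income
```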
All of these findings are wholly contingent on the accuracy of the proxies assigned to each factor and – as one might expect – the proxies are certainly not bulletproof. The counter-insurgency power of a central government is proxied by the state’s estimated GDP. The most obvious problem is that these numbers, while generally reliable, are still estimates. For the countries found most often under the lens of research, official record-keeping and data collection are very difficult. Most of the research is done in highly industrialized Western countries where – as Fearon and Laitin put it – “socially intrusive” infrastructure and bureaucracies reach far and deep. Researchers will quickly find, however, that the availability of reliable records in under-developed countries like Afghanistan or Zimbabwe is slim to none. A wholly acceptable criticism would be: if an “anocracy” lacks the resources to field police or military forces sufficient to protect its citizens or interests, why should it be expected to keep accurate records on national finances? Moreover, this proxy assumes that the funds that make up GDP are available for state collection, or that the state has a mechanism in place to collect taxes and fees. Afghan farmers who produce poppies for opiates are generating revenue, but does that revenue always translate into tax revenue for the state? Considering these factors, GDP as a proxy for the strength of a state is probably not as accurate as Fearon and Laitin would like it to be for the purposes of their model. This initial conclusion matters because Fearon and Laitin’s model is almost entirely predicated on this one “statistically significant” variable. This is not to say that its predictive power – as the Ward, Greenhill and Bakke study addresses – is completely lost, but it is important for researchers in this field to understand how thin the ice is on which they stand. As I crudely put it earlier in this essay, “garbage in, garbage out.” That said, the suggestion that “anocratic” states are most susceptible to insurgencies seems to be on sure footing, as instances of countries with a strong central government facing insurgent forces and rebellions have been very few in history, especially since 1945. Where Fearon and Laitin’s anocratic governments seem most vulnerable appears to depend less on GDP than on the age of the state and the time it was given during “peace periods” to develop “socially intrusive” infrastructure and bureaucracies. States younger than two years are 5.25 times more likely to have a civil war than others.[27] This is best seen, perhaps, in the formation of the UN and the disintegration of most European colonial systems at the end of WWII.[28] In addition, countries with at least 50% mountainous terrain have roughly twice the chance of experiencing a civil war (13.2% versus 6.5%) compared to otherwise similar countries.[29] The terrain itself provides little explanation aside from being a proxy for the difficulty and expense of the government’s attempts to expand infrastructure outside major population zones. If a government is unable to penetrate the hostile topography within its own borders, it is unlikely that it will: a. be able to develop those lands effectively for economic exploitation and, b. patrol those lands in counter-insurgency and police operations.
The conclusions of this study, such as they are, do appear to be reasonable ones, even if the questions and concerns about the model go unresolved. Fearon and Laitin find that it was decolonization in the aftermath of WWII and the early decades of the UN that created poor and weak states. These “anocracies” were susceptible to all kinds of insurgencies, regardless of the motivation behind them. As such, to focus on ethno-religious fractionalization or polity ratings for a given state is a mistake. Rather, the spread of democracy and policies of ethno-religious tolerance should be encouraged because they are generally good for people, not because they are believed to be “magic bullets” for ending or preventing civil wars.[30] The recommendation of the Fearon-Laitin study is that, in order to decrease the statistical likelihood of civil war within a country, central governments in high-risk scenarios must be strengthened, well funded and aided in the development of a socially “intrusive” bureaucracy.[31] In the end, countries that have proven incapable of successful self-government should be candidates for UN “neotrusteeship”.[32]
The study by Brandt, Mason, Gurses, Petrovsky and Radin takes the question of civil wars in a slightly different direction, hoping to show that understanding how civil wars end can be as significant as how they begin.[33] In the paper published in Defence and Peace Economics in 2008, “When and How the Fighting Stops: Explaining the Duration and Outcome of Civil Wars”, Brandt et al. establish that, without question, civil wars are not only the costliest form of war since 1945 in terms of casualties but also the most disruptive in terms of time spent in violent conflict. The first thing Brandt et al. do is firmly establish that the data for their model comes from the Correlates of War (COW) project. This is important for purposes of transparency, which – according to the footnotes on the first page of their introduction – appears to matter to this ongoing research question: they both acknowledge the many suggested revisions from readers and editors and accept responsibility for any remaining errors in the research, model and published material. According to the COW data, civil wars have caused almost four times as many deaths as interstate wars since 1945 and have lasted almost four times as long on average. They record 23 interstate wars between 1945 and 1997 with a casualty total of 3.3 million and an average duration of 480 days. Intrastate conflict, on the other hand, was seen in 108 different episodes, produced 11.8 million casualties and lasted, on average, 1,665 days. The only data point in this study where interstate fighting exceeds its civil war counterpart is average deaths per conflict, with interstate wars at approximately 143,000 per incident against a civil war average topping out at approximately 105,000. All of these statistics point, unquestionably, to the overwhelming deadliness of civil wars since 1945. Correspondingly, any success in shortening the duration of civil wars will significantly decrease not only the overall conflict occurring at any given point in time, but also the political, collateral and human losses that accompany those conflicts.[34]
These statistical points lead Brandt et al. to suggest that it is just as important to analyze how – and when – civil wars end as it is to analyze other factors about them, since when they end usually has a significant impact on how they end.[35] Setting up a cross-section of possibilities, Brandt’s team argues that the four potential resolutions are: a. the rebels quit fighting and the government crushes the insurgency, b. the government quits fighting and the rebellion takes control of the disputed facilities and infrastructure, c. both parties choose to quit fighting and the conflict ends in a negotiated settlement or, d. neither party quits and the fighting continues.[36] The rest of their argument is predicated on the assumption that negotiations are always less preferable to combatants than total victory, not least because negotiated settlements usually take the longest to reach, draining both sides of precious resources and population (i.e. military manpower and economic production) and continuing to put necessary infrastructure at risk.[37] Beginning with these assumptions, the team devises a number of hypotheses to test the correlation between the duration of a war and its outcome. These hypotheses contribute, in part, to how the model produces conclusions, though seemingly not nearly as much as in the previously examined models. In any event, the model provides the following rules for civil war duration: a. the larger the casualty rate, the shorter the war, as the existing pool of resources is depleted more quickly, b. the larger the government forces, the shorter the war, c. the involvement of external financial and military forces increases the duration of the conflict regardless of which side the support is given to, d. the percentage of mountainous terrain positively affects the duration of conflict and e. civil wars of secession last longer than civil wars of revolution. The last point is the first and only mention among all of the analyzed models of the difference between an insurgency of secession and an insurgency of revolution, which is surprising considering that the “end goal” of an insurgency is a primary factor for understanding how it begins in the first place. The motivations, whether “greed” or “grievance”, are very different for each kind of rebellion, especially when one considers that some of the factors in previous models have dealt with questions of ethno-religious fractionalization, democratic polity, access to wealth and “socially intrusive” infrastructure. As such, it is highly significant that the Brandt study revealed that secessionists appear to be much more determined and patient than revolutionaries with regard to extended conflict.
The following conclusions are drawn from the Brandt data: a. in the first five years of a conflict, government and rebel victories are equally likely, b. from five to seven years a government victory is most likely and, c. from seven years onward the most likely outcome is a negotiated settlement. The most obvious fact in these conclusions is that a rebel force is never given the best odds of victory: at no point in the “under seven years” time-scale of a civil war are rebel forces given a high likelihood of victory against the government. Indeed, their best chances are in the first five years and, comparatively speaking, this is not a particularly large window in which to recruit, engage and overthrow even a generally weak government. The explicit policy recommendation of this paper is that our attitudes should never reflect a belief that civil wars simply “burn themselves out” or that we should just “give war a chance”. But there is also a potentially unspoken recommendation here, one that policy-makers are sure to “read between the lines”: in order to keep civil wars short (decreasing casualties and keeping destruction of infrastructure to a minimum), external governments should do what is necessary to bolster countries in danger of developing an insurgency, ensuring that government forces in those countries are overwhelmingly stronger than any potential opposition and that the means to create a “socially intrusive” infrastructure are available to those countries in the form of foreign aid. This underlying recommendation, intentional or not, flies directly in the face of Brandt et al.’s express finding that external involvement in a war only serves to increase its duration.
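Paraphrased as a lookup, the pattern reads as in the sketch below; this is a restatement of the conclusions above, not the authors’ actual duration model:

```python
def most_likely_outcome(duration_years):
    """Most likely civil war outcome by elapsed duration, paraphrasing
    the Brandt et al. pattern as summarized in this essay."""
    if duration_years < 5:
        return "government or rebel victory (roughly equal odds)"
    elif duration_years < 7:
        return "government victory"
    return "negotiated settlement"

for years in (2, 6, 12):
    print(f"{years} years -> {most_likely_outcome(years)}")
```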
The final paper I wish to discuss is the study with which I can find the least fault, as it was designed to be a critical study of the Fearon and Laitin and Collier and Hoeffler models. The study, titled “The Perils of Policy by P-Value: Predicting Civil Conflicts”, is slated for publication in the Journal of Peace Research and was written by Michael Ward, Brian Greenhill and Kristin Bakke.[38] This aptly titled study focuses on the problems that arise from relying on models that “postdict” statistical significance at the expense of producing out-of-sample predictions.[39] Ward’s group argues that, without interpretation, statistical summaries can easily be oversimplified and misleading, and that the true value of a model rests solely on how well it makes predictions.[40]
Turning to the papers published by Fearon and Laitin and Collier and Hoeffler, they surmise that the statistically significant factors of existing models, however interesting their results, aren’t worth much if the model’s predictive power does not translate to out-of-sample conflicts.[41] In making this argument, Ward et al. bring to the forefront an issue that has not been addressed by any of the previous models, including Brandt’s work: all of these models’ “statistically significant” factors – along with the highly rated success of their indicators – are based on a compilation of all of the existing data that went into the model. If, as Ward et al. suggest, a case does not belong to the existing data group, the model’s ability to predict it is cut significantly. Arguing that the models of Fearon and Laitin and Collier and Hoeffler suffer from research design flaws, Ward’s group demonstrates that both models – when asked to predict at high error tolerance thresholds – produce more “false positives” than real ones and – when asked to predict at low error tolerance thresholds – produce no real positives at all, suggesting that the discriminating power of the models is abysmally low.[42]
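The complaint is easiest to see in code: fit a model on one slice of the data, score the held-out slice, and count true against false positives at different thresholds. The sketch below runs this exercise on synthetic data – the covariates, sample size and thresholds are all invented for illustration and bear no relation to either group’s actual variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic country-years: two weak covariates, rare "onset" outcome (~4%).
X = rng.normal(size=(2000, 2))
y = (rng.random(2000) < 1 / (1 + np.exp(-(-3.3 + 0.5 * X[:, 0])))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
p = model.predict_proba(X_test)[:, 1]  # out-of-sample predicted risk

for threshold in (0.05, 0.30):  # lenient vs. strict error tolerance
    pred = p >= threshold
    true_pos = int(np.sum(pred & (y_test == 1)))
    false_pos = int(np.sum(pred & (y_test == 0)))
    print(f"threshold {threshold:.2f}: {true_pos} true vs {false_pos} false positives")
```

A lenient threshold flags many country-years and buries the true positives in false alarms; a strict one flags almost nothing, which is precisely the discrimination failure Ward et al. describe.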
The key, according to Ward et al., to refining the models is the incorporation of out-of-sample counterfactual data to keep the models from “overfitting” and to improve both discrimination and calibration.[43] While Ward’s group intended only an in-depth discussion of the models’ design flaws, they also conclude that there is room for other researchers to improve on problems of mis-specification in both models – particularly by incorporating the diffusive effects of regional stability patterns and by lowering the level of data aggregation to a more localized level, instead of treating all nations as though conflict adheres to the mostly arbitrary political boundaries of post-WWII decolonization.[44] In that respect, they point to the importance of looking not only at national-level data but at more localized data, trends and patterns, reflecting conflict epicenters that spread outward rather than erupting everywhere in a country at once.[45] This appears to be a very appropriate recommendation, especially when one considers that the violence between narco-cartels and federal police agencies in northern Mexico has resulted in more casualties than all of the official U.S. casualties in Iraq. Mexican officials have been assassinated, “insurgent” forces from the cartels have directly engaged military personnel from both Mexico and the U.S., and – at least rhetorically – the Mexican government has declared a “war” against these very dangerous organizations. Despite the technical criteria of conflict classification – and the vocal objections of the Mexican government – the situation just south of the United States’ border shares a number of the most dangerous similarities with previous and current intrastate conflicts. As became obvious in the aftermath of the September 11, 2001 attacks in the U.S., many of the classifications of conflicts and combatants need to be updated in order to adequately assess a changing field.
As can be seen from even a brief comparison of these studies, scholarly criticism can often be sharp and unforgiving. With years’ worth of work and reputation balanced against the potential accolades of developing the next great leap in political theory, political theorists can become unintentionally rigid in their methods and conclusions. Despite the myriad temptations to lose sight of the goal, the eye must remain fixed, the mind open and the will determined in order to meet the challenge laid at one’s feet by generations of failed attempts. The projects, papers and studies discussed in this essay represent the furthest step yet toward the goal of protecting and cultivating peace. Yet as far as these efforts have brought the study of political conflict, further steps are necessary. Further refinement of the models – from assumptions to design, data to specifications, conclusions to recommendations – is still needed. I concur with Ward’s group that regional influence must be taken into consideration and that conflict zones must be understood in a more local, epicentral way rather than through potentially misleading national aggregates. I also agree with Ward et al. that “statistically significant” is ultimately insignificant if the model cannot produce accurate predictions for out-of-sample or counterfactual cases. While the choice of proxy indicators for major factors is limited, and the data to feed those proxies even more so, researchers must do better than using rough estimates and Cold War-era figures to proxy twenty-first-century factors. Perhaps most important of all, they must work harder to keep foundational assumptions about how the world works from influencing the models they build. It would also help to work less with those with whom they see eye to eye and more with those who represent opposite, even hostile, worldviews, in order to achieve a broader vision of conflict inauguration and conflict resolution. As we have come to expect, technology is a powerful assistant in these efforts, but it cannot overcome every obstacle on its own: it is important to retain and reinforce human involvement where necessary, bolstering the apparatus of SME-dom with the most sophisticated probability forecasting models. Ultimately, however, it seems that the possibilities of these accomplishments are capped only by human imagination and the resolve to put that imagination to work.
[2] Elbadawi, Ibrahim, and Nicholas Sambanis. “How Much War Will We See? Explaining the Prevalence of Civil War.” Journal of Conflict Resolution 46 (2002): 307-334. Print. (Henceforth referred to as “ES” in footnotes.)
[11] Collier, Paul, and Anke Hoeffler. “Greed and grievance in civil war.” Oxford Economic Papers 56 (2004): 563-595. Print. (Henceforth referred to as “CH” in footnotes.) CH, 563.
[21] Fearon, James, and David Laitin. “Ethnicity, Insurgency and Civil War.” American Political Science Review 97.1 (2003): 75-90. Print. (Henceforth referred to as “FL” in footnotes.) FL, 75.
[33] Brandt, Mason, Gurses, Petrovsky and Radin will be referred to as “Brandt”, “Brandt’s group” or “Brandt et al.” henceforth in the body of the text.
[34] Brandt, Patrick, T. David Mason, Mehmet Gurses, Nicolai Petrovsky, and Dagmar Radin. “When and How the Fighting Stops: Explaining the Duration and Outcome of Civil Wars.” Defence and Peace Economics 19.6 (2008): 415-434. Print. (Henceforth referred to as “Brandt” in footnotes.) Brandt, 416.
[38] Ward, Greenhill and Bakke will be referred to either as “Ward’s group” or “Ward et al.” henceforth in the body of the text.
[39] Ward, Michael D., Brian D. Greenhill, and Kristin Bakke. “The Perils of Policy by P-Value: Predicting Civil Conflicts.” Proc. of 50th Annual Convention of the International Studies Association, New York, 2009. Print. (Henceforth referred to as “Ward” in footnotes.) Ward, 2; FL, 76.