Weekly Response Paper: why ‘hard evidence’ in the ‘soft sciences’ has been taken as true, and why it matters

Graduate school generally involves a lot of writing. For most of my classes, with the notable exception of “Fundamentals of Modelling”, I am required to submit a 2-5 page response on a weekly or bi-weekly basis. These responses generally review the assigned readings in the context of major themes explored in class. In the last week of my core Comparative Politics seminar, we were assigned a set of papers, interviews, essays, etc. that took on the ‘controversial’ topic of quantitative evidence in the social sciences. I wrote the following response and thought it had a sufficiently “blog-able” style as to be worth posting here. Please note that if my professor didn’t think I was being overly glib, neither should you!

Why ‘hard evidence’ in the ‘soft sciences’ has been taken as true, and why it matters

In this brief response paper, I discuss research concerning the flawed – and potentially harmful – assumption that empirical evidence in the social sciences constitutes ‘hard science’. I focus on two important themes in the extant literature problematizing the way that quantitative data and formal theoretic methods are used in the ‘soft sciences’: intellectual hubris and hasty innovation.

Hubris, Ambition, and Cleverness

Jon Elster (2009) makes the claim that intellectual hubris and ambition have driven economists – and other publishers of formal models and quantitative data – to the top of the intellectual food chain, an apex from which they cast down neatly packed but ill-understood nuggets of wisdom to be adopted as methodologically rigorous and empirically sound dogma. The Drunkard’s Walk (2008) provides a particularly good example in support of Elster’s claim that over-reliance on so-called ‘hard’ evidence can have disastrous results. Therein, Mlodinow suggests that DNA evidence may incorrectly match a suspect to a crime in roughly 1 out of 100 cases, primarily due to human error. Thus, “the oft-repeated claims of DNA infallibility are exaggerated” (Mlodinow, 2008, p.40); as are claims that models in the social sciences can accurately predict or explain economic, social, or political outcomes. Elster’s argument is presented in such a way as to critique both the egotistical model builders and the wider public for accepting – often blindly – the products of their ambition. This is not to say that formal models have no place in contemporary academic studies, but rather that “the main value of the bulk of rational-choice theory is conceptual rather than explanatory[i]” (Elster, 2009, p.4).
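To make the arithmetic behind Mlodinow’s point concrete, here is a minimal sketch (in Python; the figures are assumed purely for illustration, not taken from the book) of why a roughly 1-in-100 human error rate swamps the one-in-a-billion ‘random match’ probabilities often quoted for DNA evidence:

```python
# Illustrative assumption (not Mlodinow's own numbers): a tiny coincidental-match
# probability combined with a ~1-in-100 chance of human/laboratory error.
random_match = 1 / 1_000_000_000   # oft-quoted chance of a coincidental DNA match
lab_error    = 1 / 100             # assumed chance of a human or laboratory error

# A reported match is wrong if EITHER source of error occurs.
p_false_match = 1 - (1 - random_match) * (1 - lab_error)

print(f"P(false match) = {p_false_match:.4f}")
# ~0.0100: the result is driven almost entirely by the human-error term,
# not by the headline one-in-a-billion figure.
```

However small the coincidental-match probability, the chance that a reported match is wrong can never fall below the chance of a laboratory mistake.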

Elster ultimately argues that economists – and others in the hard/soft school as I shall henceforth call it – suffer from an excess of hubris that drives them to pass off models as dogma in much the same way that a pious missionary might hand out the Bible. Scheiber (2007) makes a related claim: that economists suffer from an excess of cleverness that drives them to study arbitrary, random, and often silly policies and situations simply because they offer a greater chance of uncovering causal mechanisms. With particular reference to economist Steven D. Levitt, Scheiber bemoans the resulting surplus of “papers on such topics as point-shaving in college basketball, underused gym memberships, and the parking tickets of U.N. diplomats… Japanese sumo-wrestling…. [and] racial discrimination in the ‘Weakest Link’” (ibid, p.2). He describes this as a trend towards “cute-o-nomics” (ibid, p.6), a trend that simultaneously distracts from serious research on questions of great normative and positive interest[ii] and creates the false impression that economists can explain any phenomenon so long as the causal mechanisms are clear enough. After all, what could be more dangerous than a conflagration of hubris and cleverness (other than perhaps a conflagration of hubris and ignorance)?

Ignorance or Haste

Stepping back from the issue of over-confidence in the development of explanatory models, Christopher Achen (2005) takes on another troubling aspect of hard/soft science: the ubiquity of ‘kitchen-sink’ regressions in quantitative research. Looking carefully at the types of large-N regressions favoured by quantitative social scientists (so-called ‘quant-jocks’) over the past decade, Achen notes that “big, mushy linear regression and probit equations seem to need a great many control variables precisely because they are jamming together all sorts of observations that do not belong together” (ibid, p.337). The backlash against kitchen-sink regressions has, to my mind, produced three distinct reactions: (1) methodological entrenchment – particularly at universities where undergraduates are often exposed to nothing but linear regression models; (2) methodological innovation – which may lead researchers into the sort of cleverness and irreverence described by Scheiber (2007); and (3) improvement of regression tools – as Achen notes, “a preferable approach is to separate the observations into meaningful subsets—internally compatible statistical regimes” (ibid, p.337).
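To see why jamming incompatible observations together distorts inference, here is a hedged sketch (with invented data, not drawn from Achen’s paper) in which a single pooled regression badly mis-estimates a slope that separate fits on internally compatible subsets recover easily:

```python
# Illustration of Achen's point: two "statistical regimes" share the same slope
# but have very different intercepts. Pooling them into one regression mixes the
# between-regime shift into the slope estimate; fitting subsets separately does not.
import numpy as np

rng = np.random.default_rng(0)

# Regime A and Regime B: identical true slope (2.0), very different intercepts.
x_a = rng.uniform(0, 5, 200)
y_a = 1.0 + 2.0 * x_a + rng.normal(0, 1, 200)
x_b = rng.uniform(5, 10, 200)
y_b = -20.0 + 2.0 * x_b + rng.normal(0, 1, 200)

def ols_slope(x, y):
    """Slope of a simple least-squares fit of y on x."""
    return np.polyfit(x, y, 1)[0]

print("Pooled slope:   ", round(ols_slope(np.r_[x_a, x_b], np.r_[y_a, y_b]), 2))
print("Regime A slope: ", round(ols_slope(x_a, y_a), 2))
print("Regime B slope: ", round(ols_slope(x_b, y_b), 2))
# The pooled fit absorbs the intercept shift between regimes and reports a
# misleading (here, negative) slope; the subset fits recover roughly 2.0.
```

No amount of additional control variables in the pooled equation addresses the underlying problem, which is exactly why Achen recommends separating the observations rather than piling on regressors.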

The first reaction – methodological entrenchment – is illogical at best. While there is certainly intellectual merit to understanding the linear regression – particularly in the social sciences, where it remains the primary tool of empirical analysis – undergraduates are rarely exposed to a critical analysis of the tool itself. Basic methods classes at this level should include a more thorough review of the controversies surrounding the use of particular methods. Indeed, no scholar should expect to present a quantitative analysis of any form without addressing the relative merits of the tool used; though said discussion should not overshadow substantive questions of interest, lest technique become a substitute for “substance and thought” (Deaton, 2009, p.1).

The second reaction – methodological innovation – is to be praised in that it has given rise to novel tools capable of unpacking causal mechanisms in a manner never before seen in the social sciences. On the other hand, innovation has led certain academic and policy circles to adopt methods as dogma without due consideration. For example, development researchers are now largely concerned with the use of randomized controlled trials (RCTs) “to accumulate credible knowledge of what [policies] work, without over-reliance on questionable theory or statistical methods” (Deaton, 2009, p.1). However, as Deaton argues:

Randomized controlled trials cannot automatically trump other evidence, they do not occupy any special place in some hierarchy of evidence, nor does it make sense to refer to them as “hard” while other methods are “soft”. These rhetorical devices are just that; a metaphor is not an argument.

This same basic logic applies to other methods (field experiments, laboratory experiments, quasi-experiments, etc.) recently adopted to replace the ‘kitchen-sink’ regression in, amongst other fields, political science and economics. Oftentimes, such techniques come along and wipe out or marginalize others in a sort of semantic and methodological bloodbath. Such appears to have been the case with qualitative methods in the study of American politics. As Pierson (2007) notes, the adherence of scholars of American politics to ‘hard science’ innovation and methodological rigour – particularly of the ‘clever’ variety described above – has led to the marginalization of ‘soft science’ and multi-method approaches. Like Scheiber, Pierson warns that the promotion of style and technique over substance and thought renders the field of American politics vulnerable:

Big questions, larger contexts, and long-term transformations have receded ever farther from view. This kind of truncated political science risks cutting itself off from the concerns that justifiably engage the interest of broader audiences.

Perhaps more optimistically, Przeworski states his own interest in a mixed-methodological approach and in the pre-eminence of substance over technology:

I don’t think everything should be done with game theory, or with statistics, or with structural analysis, or with stories. Methods are tools, and some methods are good for some questions and other methods are good for other questions. I am driven by substantive questions, and I try to answer them as well as possible. This leads me to use different methods. (Snyder, 2003, p.33)

However, methodological reductionism and the movement away from questions of normative and positive interest are not the least of our worries – particularly where methodological innovation is concerned. For example, Miller (2008) calls attention to a political science study extrapolating the attitudes of undecided voters towards presidential candidates from fMRI images of their brains, taken while the subjects viewed photographs and videos relevant to the 2008 elections. What is remarkable about this study is “the way the authors inferred particular mental states from the activation of particular brain regions”, despite the fact that scientists believe they are years away from being able to predict behavioural patterns or beliefs from fMRI images. In this case, it is unclear whether the authors suffered from hubris, ignorance, or an abundance of cleverness; but whatever the case, they got things impressively wrong and were quickly cut down to size by mobs of angry neuroscientists.

What should concern us is that other tools (other than fMRI images, that is) used to draw causal links between behaviour and belief are not subjected to the same rigorous methodological critiques, especially in cases where the tools are developed and used exclusively by social scientists. However, as Krugman notes, top students often enter the social sciences “from the technological end” (2012, p.1) and, as such, are driven to innovate methodologically rather than substantively – a trend that defies logic given what is at stake. This is particularly true in economics and, to some degree, political science. Krugman further notes that those students “who reflect glory on their teacher… often prefer advisers who are more methodical and less intuitive, and [that he] all too often scare[s] students off by demanding that they use less math and more economics” (ibid, p.7). Whether ignorance or haste (generally haste to publish) drives scholars to use the right tools incorrectly, new tools too soon, or the wrong tools altogether, probing issues of methodological entrenchment and methodological innovation must be prioritized if substantive and not technological questions are to remain at the forefront of social scientific research.

WORKS CITED

Achen, C. (2005). Let’s Put Garbage-Can Regressions and Garbage-Can Probits Where They Belong. Conflict Management and Peace Science, 22(4), 327–339. Retrieved from http://taylorandfrancis.metapress.com/Index/10.1080/07388940500339167

Deaton, A. S. (2009). Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development (NBER Working Paper No. 14690). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w14690

Elster, J. (2009). Excessive Ambitions. Capitalism and Society, 4(2). Retrieved from http://www.degruyter.com/view/j/cas.2009.4.2/cas.2009.4.2.1055/cas.2009.4.2.1055.xml

Krugman, P. (2012). How I Work. Retrieved December 1, 2012, from http://web.mit.edu/krugman/www/howiwork.html

Miller, G. (2008). Growing Pains for fMRI. Science, 320(June), 1412–1414. Retrieved from http://www.sciencemag.org/

Mlodinow, L. (2008). The Drunkard’s Walk: How Randomness Rules Our Lives. New York, NY: Pantheon Books.

Pierson, P. (2007). The Costs of Marginalization: Qualitative Methods in the Study of American Politics. Comparative Political Studies, 40(2), 146–169. Retrieved from http://cps.sagepub.com/cgi/doi/10.1177/0010414006296347

Scheiber, N. (2007, April 2). Freaks and Geeks: How Freakonomics Is Ruining the Dismal Science. The New Republic. Washington, DC. Retrieved from http://www.tnr.com/article/freaks-and-geeks-how-freakonomics-ruining-the-dismal-science

Snyder, R. (2003). Adam Przeworski: Capitalism, Democracy, and Science – Interview with Adam Przeworski. New York, NY.

Snyder, R. (2007). The Human Dimension of Comparative Research. In G. L. Munck & R. Snyder (Eds.), Passion, Craft, and Method in Comparative Politics. Baltimore, MD: The Johns Hopkins University Press.

 


[i] As Elster notes, “scholars are [often] happy if they ‘get the sign right’” (2009, p.7).

[ii] “How was it that these students, who had arrived at the country’s premier economics department intending to solve the world’s most intractable problems–poverty, inequality, unemployment–had ended up facing off in what sometimes felt like an academic parlor game?” (Scheiber, 2007, p.2)

 

Snyder (2007) also bemoans social scientific scholarship that focuses on technological innovation at the expense of substantive questions exploring the “human dimension”. To explain the copy-cat behaviour of social scientists – such as that observed by Scheiber – he quotes Yale Professor James C. Scott: “… if you’re just reading in political science and only talking with political scientists, it’s like having a diet with only one food group. If that’s all you do, then you’re not going to produce anything new or original” (ibid, p.17).
