To draw conclusions about individuals from aggregate data (that is, to commit the ecological fallacy) is scorned by the academy as a misguided use of evidence. The microtargeter lives and breathes an understanding of the ecological fallacy, even while sometimes judging that aggregate data is good enough, especially in uncompetitive contests or when money is short.

The attention that microtargeting gives to data quality and appropriate analytical techniques warms the hearts of academics. Data giant Catalist talks about the long-term process of building and refining a list; Malchow emphasizes the importance of documentation and of maintaining a codebook. The statistical analyses that form the basis of the predictive models are the same techniques that the scholarly community highlights and teaches to its students.
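To give a sense of what such a predictive model looks like in practice, here is a minimal sketch in Python of a turnout score built with logistic regression. Everything in it is invented for illustration: the features, the simulated outcome, and the coefficients are hypothetical, and nothing here reflects Catalist's or any vendor's actual models.

```python
# Sketch of the kind of predictive model behind a turnout score:
# a logistic regression fit on synthetic voter-file features.
# All data and coefficients below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical voter-file features: age, past votes (0-4), contacted flag.
age = rng.integers(18, 90, n)
past_votes = rng.integers(0, 5, n)
contacted = rng.integers(0, 2, n)
X = np.column_stack([age, past_votes, contacted])

# Simulated outcome: turnout driven mostly by vote history.
logit = -2.0 + 0.015 * age + 0.8 * past_votes + 0.3 * contacted
turned_out = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, turned_out)
scores = model.predict_proba(X)[:, 1]  # per-voter turnout probability

# A campaign would rank the file by score and target accordingly.
print(f"top-decile cutoff: {np.quantile(scores, 0.9):.3f}")
```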

If social science lurks under the surface of microtargeting, it’s out in plain sight in experimentation. Look no further than the ever-present catchphrase “randomized, controlled experiments” that marks the coverage of the 2012 contests. But what’s noteworthy about the push for experimentation in campaign politics is that there is a direct path from the academic world to the real world.

The Role of Experimentation

Experimentation has been a hallowed technique in social science for a very long time, due in large part to its ability to isolate causation and to measure the effect of a causal mechanism. However, a traditional experiment, structured by the random assignment of subjects into control and experimental groups and the application of a “treatment” to the latter, is simply not practical for many political questions.
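The logic of that design is easy to state in code. The sketch below randomly assigns half of a hypothetical voter file to a treatment group and estimates the treatment’s effect as the difference in turnout rates between groups; the voter names, base turnout rate, and effect size are all simulated stand-ins.

```python
# Minimal sketch of a randomized controlled field experiment on
# simulated data: random assignment, then a difference-in-means estimate.
import random

random.seed(42)  # reproducible assignment

subjects = [f"voter_{i}" for i in range(10_000)]  # hypothetical voter file
random.shuffle(subjects)
treatment = set(subjects[: len(subjects) // 2])   # half get the "treatment"

def voted(subject: str) -> bool:
    """Stand-in for the observed outcome: did this person turn out?
    Simulates a 45% base rate plus a 3-point treatment effect."""
    base = 0.45 + (0.03 if subject in treatment else 0.0)
    return random.random() < base

turnout = {s: voted(s) for s in subjects}

def rate(group):
    group = list(group)
    return sum(turnout[s] for s in group) / len(group)

treated_rate = rate(treatment)
control_rate = rate(s for s in subjects if s not in treatment)
print(f"treated:  {treated_rate:.3f}")
print(f"control:  {control_rate:.3f}")
print(f"estimated effect: {treated_rate - control_rate:+.3f}")
```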

Some political questions have been more amenable to experiments, especially to field experiments. Among these are questions of voter mobilization, explored as early as the 1920s by Harold Gosnell. After a long hiatus, the field-experimental approach to voter turnout was given new life by the work of Donald P. Green and Alan S. Gerber, along with the scores of graduate students they trained and others they inspired. Gerber and Green’s seminal work, “Get Out the Vote,” first published in 2004, represents the rare case of academic scholarship making a splash in the practical world of politics. People actually read it and took its lessons to heart.

Field experiments undertaken by the academy represent one catalyst for the new experimental practices in politics. The other is the digital sector. For over a decade, Silicon Valley has been preoccupied with A/B testing: basic, single-variable experiments with random assignment and a control. Google, Amazon, Netflix and other web giants are “A/B addicts,” according to Brian Christian of Wired. The Obama campaign was introduced to this experimental approach by Dan Siroker, then the campaign’s director of analytics, who first used it in December 2007 to optimize the Obama website.
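An A/B test of this sort reduces to a small amount of arithmetic. The following sketch compares sign-up rates for two hypothetical page variants with a two-proportion z-test; the visitor and conversion counts are invented, and commercial testing tools wrap essentially this same logic in dashboards.

```python
# Minimal A/B test sketch: compare sign-up rates for two page variants
# (counts are invented) with a two-proportion z-test.
from math import sqrt, erf

# Hypothetical results: (visitors, conversions) for each variant.
a_n, a_conv = 5_000, 420   # variant A
b_n, b_conv = 5_000, 480   # variant B

p_a, p_b = a_conv / a_n, b_conv / b_n
p_pool = (a_conv + b_conv) / (a_n + b_n)               # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n)) # standard error
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

print(f"A: {p_a:.3%}  B: {p_b:.3%}  z = {z:.2f}  p = {p_value:.4f}")
```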

A/B testing is now a mainstay of email, video, web and mobile strategies in the political world. It’s also a much-promoted component of products offered by vendors. Read the position descriptions for the jobs in the industry, and you’ll see plenty of references to web analytics, A/B testing, and even SPSS skills.

The online world emphasizes the role of experimentation in optimizing the impact of appeals to the voter, donor, activist and the consumer. But perhaps nowhere is the nod to social scientific norms more pronounced than in Catalist’s own description of its 2008 voter contact in the offline world on behalf of a number of progressive organizations: “Rigorous proof of … effects, and measurements of [the contact’s] actual magnitudes, can only be shown through carefully designed experiments using appropriate control groups.”

A Cautionary Note

Most academics don’t set out to train campaign professionals, preferring to leave that to, well, the campaign professionals themselves. But in emphasizing the basic tenets of social science, the professors may be doing a little more of that training than they realize. Still, there may be even more that the scholarly community has to offer: a cautionary note.

Abraham Kaplan, philosopher of science, quipped that if you “give a small boy a hammer … he will find that everything he encounters needs pounding.” This law of the instrument applies well to political scientists. Give a political scientist training in survey research, and every question posed gets addressed by surveys.

Or consider the corollary: only questions answerable by surveys get posed. There is an obvious analog in campaign politics, with the new tools of evidence-based decision making conceivably driving decisions on strategy.

On the surface, microtargeting and experimentation are tactical tools, adopted after the decision has been made, for instance, to engage in a targeted mobilization campaign or to wage a mobile effort. They are tools of refinement, insofar as they help the campaign make data-driven decisions that optimize impact once the strategy is set. But there is a potential risk in the emphasis that the new analytics place on trackable results.

Techniques associated with readily generated metrics, and even with the capacity for testing, might become the equivalent of the small boy’s hammer. This is especially the case in the online world, given the warp speed at which these procedures can be applied and the real-time adjustments they permit. It may also be the case in campaigns whose donors are particularly focused on results; the veneer of scientific efficiency and measurable impact could be especially compelling. All of this is to say that some careful thought should be given to the ramifications of evidence-based practices.

The business world has already grappled with some of these questions. Over the last decade or so, the pages of the Harvard Business Review have been peppered with articles weighing the merits of decision making based on “gut versus data” or “intuition versus evidence.”

Conceivably, some things are just not measurable. Then again, intuition and evidence may pose a false dichotomy. Nobel Prize-winning economist Herbert Simon held that intuition is nothing more than “analyses frozen into habit.”

When the dust settles in November, and after the first-semester grades are in, practitioners and academics alike might sit back for a moment and think about the possible flip side of these data-based practices.

Barbara Trish is an associate professor and chair of the political science department at Grinnell College.