It often seems that academics and practitioners reside in parallel universes, both interested in the same phenomena, yet seeing things differently.

But with the advent of new evidence-based practices and their intersection with practical politics, there is a bit more overlap than usual. It just may be that training in social science is directly applicable to campaign politics.

After a 2008 presidential campaign that was considered a watershed for data-driven politics, it is already clear that 2012 will again revolutionize the way campaigns are conducted. Microtargeting and experimentation, prominent techniques developed and refined over the past 15 years, mark the intersection of social science and practical politics, rendering academics and operatives not-so-strange bedfellows.

Campaigns driven by data are nothing new. In fact, it’s difficult to imagine even a 19th century party machine without information stored, at the very least, in the heads of the party workers. But these days, evidence-based practices represent a cultural phenomenon of sorts—a seismic shift in decision making that extends from the world of business, where it began, to sports and politics. It even extends to the personal sphere, with people tracking their diets, fitness regimens and work habits, and even doing a little experimentation—all as an approach to personal discovery.

In the world of politics, the Obama and Romney campaigns are leaders in the new campaign analytics, using data to guide decisions. The extent of microtargeting in Obama’s 2008 effort is legendary. Lest the Democrats think they have predictive modeling or targeted voter appeals wrapped up, candidate Romney brings a long history to the enterprise, having first microtargeted in his 2002 run for Massachusetts governor. The candidate himself shows an affinity for evidence, telling The Wall Street Journal editorial board in 2007, “I love data.”

Both campaigns are also known for their experimental methods, used to measure impact and refine appeals. In 2008, the Obama campaign and the Analyst Institute were running randomized, controlled experiments in both the online and offline worlds. And two years before that, in Texas, Gov. Rick Perry had essentially run his first reelection campaign as an experiment; his 2010 effort was then tailored to respond to the findings of that set of 2006 experiments.

Now in 2012, this mood of experimentation, previously focused on direct voter contact and the digital world, has extended even to broadcast appeals. Sasha Issenberg’s reporting for Slate documents much of this. His book on the new science of campaigns, “The Victory Lab,” is due out this fall.

It’s tempting to consider this approach to campaigns as a natural progression in politics—the newest set of techniques, fueled by ever-more-sophisticated lists and technologies that permit real-time feedback and refinement. In fact, the trajectory of the approach is marked by a familiar pattern. First, the business world stakes out new ground, and then entrepreneurial political types follow suit.

However, this science of campaigning is a little different in the extent to which it is also rooted in social scientific practices and norms, even influenced by academicians. In many ways, a user guide for campaigns under this model would read like a political science methods text.

Hal Malchow’s “The New Political Targeting” has a clear social-scientific subtext. The Democratic direct mail and microtargeting guru touts the merits of individual-level data for understanding the voter. He emphasizes the development of good data and the application of appropriate statistical tools to identify targets, and he conveys the importance of returning to the data after the election to update them with new information gathered about the voter. Though Malchow’s message is primarily pragmatic, it is couched in social-scientific values and practices.
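
To make that workflow concrete, here is a minimal sketch, in Python, of the sort of individual-level predictive modeling Malchow describes. The voter-file fields, canvass labels and numbers are hypothetical, invented for illustration rather than drawn from any actual file or from Malchow’s own methods.

```python
# A minimal sketch of individual-level predictive modeling for voter targeting.
# Field names, canvass labels and every number here are hypothetical; real
# voter files and vendor models are far richer, but the workflow is the same:
# build individual-level data, fit a model, score the whole file, pick targets.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# A hypothetical voter file: one row per registered voter.
voters = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "primary_votes_cast": rng.integers(0, 4, n),   # past primary participation
    "urban": rng.integers(0, 2, n),                # 1 = urban address
})

# Canvass results serve as training labels (1 = identified supporter);
# here they are simulated so the example runs on its own.
logit = -3 + 0.02 * voters["age"] + 0.6 * voters["primary_votes_cast"] + 0.8 * voters["urban"]
voters["supporter"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

features = ["age", "primary_votes_cast", "urban"]
model = LogisticRegression().fit(voters[features], voters["supporter"])

# Score every individual on the file: the "support score" a targeter would use.
voters["support_score"] = model.predict_proba(voters[features])[:, 1]
targets = voters.sort_values("support_score", ascending=False).head(1_000)
print(targets[["support_score"] + features].head())
```

After the election, the same file would be refreshed with new canvass and turnout information and the scores re-estimated, which is the updating step Malchow stresses.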

The microtargeter operates at the individual unit of analysis, sharing the bias of many scholars who try to understand political behavior. The voter lists, the canvassing and consumer data, the scores derived from predictive modeling—all of these reflect individuals or apply directly to them. Neither the practitioner nor the scholar dismisses the vote distribution at, say, the state level or the demographic breakdown of a census tract. However, for both the practitioner and the scholar, the attitudes and behaviors that mark individuals are key.

To draw conclusions about individuals from aggregate data—that is, to commit the ecological fallacy—is scorned by the academy, seen as a misguided use of evidence. The microtargeter lives and breathes an understanding of the ecological fallacy, even if sometimes judging that aggregate data is good enough, especially in uncompetitive contests or when money is short.
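
For readers who want the fallacy made concrete, the toy example below, with entirely made-up precincts and numbers, shows how an aggregate-level relationship can run opposite to the individual-level behavior that produced it.

```python
# A toy illustration of the ecological fallacy, using made-up precincts.
# Richer precincts favor the candidate in the aggregate, yet within every
# precinct richer individuals are less likely to be supporters.
import numpy as np

rng = np.random.default_rng(1)

precincts = []
for mean_income in (30, 50, 70):          # hypothetical precinct mean incomes ($000s)
    income = rng.normal(mean_income, 8, 500)
    support_prob = np.clip(0.15 + 0.01 * mean_income
                           - 0.03 * (income - mean_income), 0, 1)
    support = rng.binomial(1, support_prob)
    precincts.append((income, support))

# Individual-level relationship, estimated within each precinct: negative.
for i, (income, support) in enumerate(precincts):
    r = np.corrcoef(income, support)[0, 1]
    print(f"precinct {i}: individual-level r = {r:.2f}")

# Aggregate-level relationship across precinct means: positive.
mean_incomes = [inc.mean() for inc, _ in precincts]
support_rates = [sup.mean() for _, sup in precincts]
print("aggregate r =", round(np.corrcoef(mean_incomes, support_rates)[0, 1], 2))
```

Read only the precinct averages and you would conclude that higher-income voters favor the candidate; the individual-level records inside every precinct say the reverse.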

The attention that microtargeting gives to data quality and appropriate analytical techniques warms the hearts of academics. Data giant Catalist talks about the long-term process of building and refining a list; Malchow emphasizes the importance of documentation and of maintaining a codebook. The statistical analyses that form the basis of the predictive models are the same techniques that the scholarly community highlights and teaches to its students.

If social science lurks under the surface of microtargeting, it’s out in plain sight in experimentation. Look no further than the ever-present catchphrase “randomized, controlled experiments” that marks the coverage of the 2012 contests. But what’s noteworthy about the push for experimentation in campaign politics is that there is a direct path from the academic world to the real world.

The Role of Experimentation

Experimentation has been a hallowed technique in social science for a very long time, due in large part to its ability to isolate causation and to measure the effect of a causal mechanism. However, a traditional experiment, structured by random assignment of subjects into control and experimental groups and the application of a “treatment” to the latter, is just not practical for many political questions.
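
The underlying logic is simple enough to sketch. The snippet below simulates a hypothetical get-out-the-vote mailer test; the sample size and effect are invented, but the steps (random assignment, treatment of one group and a difference-in-means estimate) are the design described above.

```python
# A minimal sketch of the classic design: random assignment to control and
# treatment groups, a "treatment" applied to the latter, and the effect read
# as a difference in means. The mailer, turnout rates and effect size are
# hypothetical numbers chosen purely for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000                                  # registered voters in the study

# Random assignment: roughly half get the hypothetical GOTV mailer.
treated = rng.integers(0, 2, n).astype(bool)

# Simulated turnout: a 40% baseline plus an assumed 3-point mailer effect.
turnout = rng.random(n) < np.where(treated, 0.43, 0.40)

effect = turnout[treated].mean() - turnout[~treated].mean()
se = np.sqrt(turnout[treated].var() / treated.sum()
             + turnout[~treated].var() / (~treated).sum())
print(f"estimated turnout effect: {effect:+.3f} (standard error {se:.3f})")
```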

Some political questions have been more amenable to experiments, especially field experiments. Among these are questions of voter mobilization, which were explored as early as the 1920s by Harold Gosnell. After a long hiatus, the field-experimental approach to voter turnout was given new life by the work of Donald P. Green and Alan S. Gerber, along with the scores of graduate students they trained and others they inspired. Gerber and Green’s seminal work, “Get Out the Vote,” first published in 2004, represents the rare case of academic scholarship making a splash in the practical world of politics. People actually read it and took its lessons to heart.

Field experiments undertaken by the academy represent one catalyst for the new experimental practices in politics. The other is the digital sector. For over a decade, Silicon Valley has been preoccupied with A/B testing, the basic, single-variable experiment with random assignment and control. Google, Amazon, Netflix and other web giants are “A/B addicts,” according to Brian Christian of Wired. The Obama campaign was introduced to this experimental approach by Dan Siroker, then the campaign’s director of analytics, who first used it in December 2007 to optimize the Obama website.
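
An A/B test applies the same logic on a compressed timescale. The sketch below, with hypothetical variant labels and counts, shows the kind of comparison an analytics team might run on two versions of a sign-up page; it illustrates the general technique, not any campaign’s actual test.

```python
# A minimal sketch of a web A/B test: visitors randomly split between two
# page variants, sign-up rates compared with a two-proportion z-test.
# The variant labels and counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

visitors = {"A": 50_000, "B": 50_000}       # randomly assigned page views
signups = {"A": 4_100, "B": 4_420}          # conversions observed per variant

p_a = signups["A"] / visitors["A"]
p_b = signups["B"] / visitors["B"]

# Pooled standard error under the null hypothesis of equal sign-up rates.
pooled = (signups["A"] + signups["B"]) / (visitors["A"] + visitors["B"])
se = sqrt(pooled * (1 - pooled) * (1 / visitors["A"] + 1 / visitors["B"]))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {p_a:.2%}  B: {p_b:.2%}  lift: {p_b - p_a:+.2%}  z = {z:.2f}  p = {p_value:.4f}")
```

A two-proportion z-test is one standard way to read such a result; the essential ingredients are the random split and the control.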

A/B testing is now a mainstay of email, video, web and mobile strategies in the political world. It’s also a much-promoted component of products offered by vendors. Read the position descriptions for the jobs in the industry, and you’ll see plenty of references to web analytics, A/B testing, and even SPSS skills.

The online world emphasizes the role of experimentation in optimizing the impact of appeals to the voter, donor, activist and the consumer. But perhaps nowhere is the nod to social scientific norms more pronounced than in Catalist’s own description of its 2008 voter contact in the offline world on behalf of a number of progressive organizations: “Rigorous proof of … effects, and measurements of [the contact’s] actual magnitudes, can only be shown through carefully designed experiments using appropriate control groups.”

A Cautionary Note

Most academics don’t set out to train campaign professionals, preferring to leave that to, well, the campaign professionals themselves. But in emphasizing the basic tenets of social science, the professors may be doing a little more of that training than they realize. Still, there may be even more that the scholarly community has to offer: a cautionary note.

Abraham Kaplan, philosopher of science, quipped that if you “give a small boy a hammer … he will find that everything he encounters needs pounding.” This “law of the instrument” applies well to political scientists. Give a political scientist training in survey research, and every question posed gets addressed by surveys.

Or consider the corollary: only questions answerable by surveys get posed. There is an obvious analog in campaign politics, with the new tools of evidence-based decision making conceivably driving decisions on strategy.

On the surface, microtargeting and experimentation are tactical tools, adopted after the decision is made, for instance, to engage in a targeted mobilization campaign or to wage a mobile effort. They are tools of refinement, insofar as they help the campaign use data to optimize impact once the strategy is determined. But there is potentially some risk in the emphasis that the new analytics put on results that are trackable.

Techniques associated with readily generated metrics, and even with the capacity for testing, might become the equivalent of the “small boy’s hammer.” This is especially the case in the online world, given the warp speed at which these procedures can be applied and the real-time adjustments they permit. It may also be the case in campaigns whose donors are particularly focused on results; the aura of scientific efficiency and measurable impact could be especially compelling. All of this is to say that some careful thought should be given to the ramifications of evidence-based practices.

The business world has already grappled with some of these questions. Over the last decade or so, the pages of the Harvard Business Review have been peppered with articles weighing the merits of decision making based on “gut versus data” or “intuition versus evidence.”

Conceivably, some things are just not measurable. And intuition and evidence may, of course, pose a false dichotomy. Nobel Prize-winning economist Herbert Simon held that intuition is nothing more than “analyses frozen into habit.”

When the dust settles in November, and after the first-semester grades are in, practitioners and academics alike might sit back for a moment and think about the possible flip side of these data-based practices.

Barbara Trish is an associate professor and chair of the political science department at Grinnell College.