In a campaign, tension tends to surface between the creative types and the numbers people, particularly when it comes to television advertising. But the Obama campaign, true to its “no drama” motto, managed to defuse that tension during the 2012 cycle thanks to a rigorous system of ad testing.
“With advertising, it’s always more effective and more memorable to show somebody something than to tell somebody something,” Larry Grisolano, whose firm, AKPD Message and Media, led Obama’s media effort in 2012, tells C&E. “The problem with showing somebody instead of telling them is there are ambiguities, and you’re not always sure how it’s going to be received and how it’s going to be processed.”
The Obama camp’s TV advertising targeting abilities have been well chronicled. Using a system dubbed the “Optimizer,” the campaign was able to, in the words of Republican ad buyer Will Feltus, “put more lead on the target” than Mitt Romney’s camp, and in so doing reached more of those hard-to-find persuadable voters in swing states. But that was just half the battle.
Obama’s team needed the right combination of images and message to help sway undecided voters once it reached them. Does the president seem like a strong leader in this ad? Does this spot convey that he has a jobs plan? Ask a room full of consultants—Obama’s media team consisted of AKPD, GMMB, Adelstein Liston, Dixon Davis, Putnam Partners and Mike Donilon—and they’ll come up with any number of conflicting answers. So the campaign turned to the Internet—online focus groups, specifically.
“The volume was absolutely unprecedented,” says David Binder, whose San Francisco-based firm was contracted to conduct the testing. For each test, Binder’s firm assembled online groups of 400 to 500 volunteer participants who had the desired voting records and were demographically representative of the electorate in, say, Wisconsin. Some participants were shown the ad and asked for their responses, while a control group that wasn’t exposed to the spot was asked the same questions.
“The control was essential to tell us whether there was effectiveness at all,” explains Binder. The online testing was a departure from what Obama’s 2008 campaign had done.
“In 2008 we did a little bit more dial-group testing,” Grisolano recalls. “They’re larger groups, they’re in person, and so they dial up with what they like and they dial down with what they don’t like.”
Part of the reason for the change is that the campaign needed to be more nimble. In 2008, the Obama campaign focused its advertising on 15 states. The focus narrowed to nine states in 2012. But the campaign had a similar amount of money, “so we were burning through spots faster,” says Grisolano.
The change meant the campaign needed more creative, and online testing proved the most useful because the campaign could have a three- to four-day turnaround. Grisolano singled out Wisconsin, which was kept on the sidelines of the presidential race by its late primary and gubernatorial recall races.
“We had already run five months of advertising in all the rest of the [swing] states” once Wisconsin’s primary and recall ended, Grisolano says. “So we went back to the past ads that we did. We did a really extensive test in Wisconsin alone to see what would be the best sequence of ads to bring them up to speed on all the stuff they’d missed over the summer.”
The campaign also did Ohio-specific ad testing because the issue spectrum for voters there was unique. Obama won both states handily.
“You never want to test message in ad tests, so we developed our message with our normal research tools of polling and focus groups,” says Grisolano. With only minor tweaks, the Obama campaign charted out a narrative for its advertising that ran from May through September. “Then we used ad tests to test different creative treatments to see if they delivered on moving the needle,” he says.
A control group also took part in the tests to make the data as scientific as possible.
“You’re not really looking for validation all the time,” says Grisolano. “You’re kind of looking for, ‘where’s the fine tuning that can really make sure this hits?’”
That wasn’t all the campaign did to test-drive its advertising. The Obama camp also employed a series of mini focus groups to tweak the message.
“Separately we almost always did mini-focus groups with three or four people,” says Grisolano. “There is something about being able to see how people react. They can type in a score or a number, but when you’re sitting there in a conversation with somebody and they’ve seen an ad and you see them squirm, or you see a furrowed brow, or you see a smile come up, that is something that is as telling as, you know, 6.2.”
The campaign would run through a “large number” of the mini-focus groups in a day. “We rarely ran spots without having two or three options like that,” Grisolano says. Mark Putnam, whose media firm worked for Obama in 2008 and 2012, says the testing meant the ads that made it to air were rock solid.
“They had the resources and the ability to be very rigorous about testing and they had a lot of confidence in every ad they put on,” Putnam says. Online testing is “really quantitative and that’s what’s helpful to a campaign.”
Focus groups are an artificial environment where people can sometimes turn into Roger Ebert, feeling they have to hunt for things to criticize in the ad, or where the group can wind up swayed by the loudest person in the room. “You don’t watch TV with six or eight of your neighbors you don’t know,” Putnam says.
Going forward, Grisolano believes campaigns are going to “take the rigor of creative testing very seriously.” But that’s not to say that the creatives have totally lost out, or there’s no risk to over-testing a spot.
“Somebody in the campaign has to be the message cop who is singularly concerned with the message track that the campaign needs to be on,” Grisolano says. “The message cop has to basically decide even before you go to testing, ‘does this spot serve the narrative and this step in the narrative that’s being created?’ I would never test something that I don’t want to put on the air.”
Sean J. Miller is a contributing editor to Campaigns & Elections magazine.