Multivariable testing is a statistical tool that can gauge the impact of changes on processes from oil refining to retail sales, and it’s the most powerful tool for gaining knowledge I’ve ever known. But I can’t get most political consultants to believe that. In fact, most don’t even return my calls. 

You can accuse me of hyperbole, but MVT works because it’s driven by data. I realize that scares some people, particularly those who have a vested interest in the sanctity of their expertise. Still, the numbers don’t lie.

We’ve done some 250,000 experiments with more than 1,000 organizations and have found a nearly inviolable pattern: fully half of suggested changes have no impact, 25 percent have a positive impact, and 25 percent have a negative impact. The constant: no one can consistently guess which changes are going to be in which group.   

I founded my company, QualPro, after many years as a statistician and division quality manager at the National Nuclear Weapons Complex in Oak Ridge, Tennessee. As a statistician, I rely on complex mathematics to quickly and efficiently solve problems that appear intractable. The term multivariable testing was coined by Forbes magazine, which has studied and reported on the method.

MVT uses carefully designed statistical experiments to achieve results from small, quick tests involving many variables. We can then take apart the results analytically and isolate what variable or combination of variables caused the effects. The result is a powerful and efficient way to test potential improvements to complex processes and then learn which changes have the most impact.    


Despite the process's potentially widespread benefits, the obstacle is that many experts don't want to find out that what they thought they knew is actually wrong, nor do they want to believe that a mathematical process can replace their expertise. But we use all sorts of tools to hone our understanding, and MVT is one that political consultants should embrace. Like scientists who come up with ideas, test them, and then revise their theories, consultants should be testing their own ideas more rigorously to achieve more consistent outcomes for their clients.

It’s why I jumped at the chance to work on a political campaign that appeared to be a lost cause: Tennessee Republican John Ragan’s 2010 bid for state House. 

A Campaign Approach

Ragan found himself facing a well-funded Democratic incumbent, state Rep. Jim Hackworth. He was running uphill in a traditionally Democratic district, and he was losing. Ragan was polling at just 32 percent, and he didn't have the time or money to turn his campaign around. So he decided to send out one direct mail piece to voters in his district, and it had to be good. He refined his message line by line based on the results of a QualPro-led experiment.

Though I have a strong interest in politics, my opinions and biases were not important in this case, nor are they in the application of our process in any other case. The MVT Process scientifically determines what content catalyzes a respondent’s reaction. It doesn’t say why something works or even whether it should, only that it does. Our tests generate information about how people respond to various words, images and layouts at a particular time.

In Ragan's case, we were looking at two different mailer formats and testing the display of various pieces of information: party affiliation and logos; descriptions of Ragan's background, values, and legislative priorities; quotes and endorsements from other politicians; and selected facts about Tennessee state politics. The mailers weren't flashy, and they weren't going to be. That meant finding the optimal message would be key.

We decided to test 15 different variables: the type of card, the use of a follow-up phone call or visit, and 12 variations in the content and look of the mailing. A randomly selected group of 320 likely voters received the different versions of the mail pieces.

We lay out these variables in mathematically determined combinations across the right number of subjects, which allows our statisticians to reliably estimate the individual impact of each variable as well as the impact of different combinations of variables. In Ragan's case, 32 different versions of the postcard were mailed to the 320 subjects. Telephone polls by a professional survey firm, conducted before and after the mailings, measured the impact of the mailings on the likelihood that recipients would vote for Ragan.
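
To make "mathematically determined combinations" concrete, here is a minimal sketch, in Python, of one textbook way to run 15 two-level factors in just 32 versions: a fractional factorial design, with each factor's main effect read off afterward as a simple difference of averages. The factor labels, the simulated responses, and the particular construction are illustrative assumptions for this article, not the exact design we built for the Ragan campaign.

```python
import itertools
import numpy as np

# Full 2^5 design in five "base" factors: 32 runs, each factor coded -1/+1.
base = np.array(list(itertools.product([-1, 1], repeat=5)))      # shape (32, 5)

# Assign the remaining 10 factors to the ten three-way products of the base
# columns -- a standard 2^(15-10) fractional factorial construction.
triples = list(itertools.combinations(range(5), 3))               # 10 generators
generated = np.column_stack([base[:, i] * base[:, j] * base[:, k]
                             for i, j, k in triples])

design = np.hstack([base, generated])                             # shape (32, 15)
factor_names = [f"X{n + 1}" for n in range(15)]                   # card type, phone call, etc.

# Simulated responses stand in for the before/after polling shift of each
# mailer version; in a real study these come from the survey firm.
rng = np.random.default_rng(0)
true_effects = np.zeros(15)
true_effects[[2, 7]] = [4.0, -2.5]        # pretend only two factors matter
response = design @ true_effects + rng.normal(0.0, 1.0, size=32)

# Main effect of a factor = mean response at +1 minus mean response at -1.
effects = {name: response[design[:, c] == 1].mean()
                 - response[design[:, c] == -1].mean()
           for c, name in enumerate(factor_names)}

for name, effect in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {effect:+.2f}")
```

Run it and the two planted effects stand out clearly from the noise. The point of the arithmetic is that 32 carefully chosen versions are enough to estimate all 15 effects at once, which is what lets a small, quick test carry so much information.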

The results on voter intentions, as reported by the experiment's subjects, were dramatic, even though some of the changes would be barely noticeable to a casual observer. For instance, one variable compared a series of "Did You Know?" statements about illegal aliens to a list of Ragan's attributes, including experience and responsibility. Using that section seemed to focus voter anger and increased recipients' inclination to vote for Ragan by a few percentage points.

Combining this format with an endorsement from Bill Haslam—the Republican candidate for governor—increased the likelihood of a vote for Ragan by a full 8 percent. Additionally, we found the more expensive self-mailer format had no advantage over a cheap postcard. We did the test mailing, surveying, and analysis for the Ragan campaign in less than five weeks. The optimum mailers were then sent out three times starting the second week of October.
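
The Haslam result is the kind of combination effect, an interaction in the language of designed experiments, that the analysis is built to detect: the lift from a pairing beyond what each piece delivers on its own. Here is a tiny, self-contained illustration of how such an interaction is separated from the two individual effects; the cell values are invented for the example, not the campaign's numbers.

```python
import numpy as np

# Factor A: "Did You Know?" format (+1) vs. attribute list (-1).
# Factor B: Haslam endorsement included (+1) vs. not included (-1).
# Entries are made-up mean shifts in stated likelihood of voting for Ragan
# (percentage points), one per cell of the 2 x 2 layout.
#                        B = -1   B = +1
cell_means = np.array([[  0.5,     1.0],    # A = -1
                       [  3.0,     8.0]])   # A = +1

# Main effect of A: average response at A = +1 minus average at A = -1.
main_A = cell_means[1].mean() - cell_means[0].mean()
# Main effect of B, computed the same way across columns.
main_B = cell_means[:, 1].mean() - cell_means[:, 0].mean()
# Interaction: half the amount the effect of A changes when B flips sign.
interaction_AB = ((cell_means[1, 1] - cell_means[0, 1])
                  - (cell_means[1, 0] - cell_means[0, 0])) / 2

print(f"main effect of format:       {main_A:+.2f} points")
print(f"main effect of endorsement:  {main_B:+.2f} points")
print(f"format x endorsement:        {interaction_AB:+.2f} points")
```

A positive interaction in a table like this one is exactly the kind of "combination of variables" the analysis is designed to surface.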

After our tests, we predicted that Ragan would win with 52.8 percent of the vote. He won with 54 percent. And Ragan got his results for less than $2 a vote. The Hackworth campaign and the local media were blindsided by the results.

A Fundraising Approach

Along with message testing, we've seen results using the MVT process to increase fundraising in the corporate world and charitable giving in higher education. There are clear applications here for political fundraising, too.

One example comes from a large telecommunications firm looking to increase donations to its in-house political action committee. The company wanted to ramp up its PAC efforts in just five months to respond to an increasingly competitive landscape, so it asked its PR pros to work with its lobbyists, a combination not used before, to set up a series of fundraisers. They asked for our help. We tested variables on a group of employees invited for the purpose of supporting the PAC but clearly told that their giving decisions were not part of their performance reviews.

We involved the company's PR pros and lobbyists in talking about everything from the location of the event and the style of the invitation to which refreshments to serve and who the pitchperson should be. The group came up with an initial list of 103 ideas that we then whittled down to 19 factors that were easy, fast, and inexpensive to implement. The team ran events for a week, testing different combinations of variables on only a small percentage of the employees.

As in Ragan’s campaign, many of the findings were counterintuitive. Serving alcohol and suggesting a level of giving both had a negative impact. Having the company lobbyist give the pitch with a basic script, but one they could infuse with a little personality, was actually the most effective. Our efforts in refining their message and their fundraising process helped increase donations by 238 percent, according to their numbers.

We employed the same process in a similar effort for Lincoln Memorial University, testing 30 ideas involving its mail, email, telephone, and face-to-face solicitations. The experimentation identified a slew of helpful changes in the content, format, and timing of its mail and emails.

In the real world, good ideas are incredibly hard to separate from bad ones, and the benefit of being able to focus only on the good ideas is tremendous. It's no different in the campaign world. Our experience shows that no one, whether executives, political consultants, professors, or subject-matter experts, can reliably determine which ideas are the helpful ones. By testing dozens of ideas at once and then implementing only the 25 percent that actually help, however, campaigns can make positive outcomes highly likely.

It was Mark Twain who said, “The trouble with the world is not that people know too little, but that they know so many things that ain’t so.”

By testing improvement ideas, we can avoid the harmful and the inconsequential ones. And the results are based on science, not intuition.

Dr. Charles Holland is CEO and Founder of QualPro, Inc., a Knoxville-based consultancy. The firm has conducted more than 16,000 business improvement projects with more than 1,000 companies, including many of the Fortune 500.