Direct marketers love testing. It’s how we establish causality and gain confidence that results can be repeated across campaigns. It’s how we mitigate risk across our programs before rolling out new ideas. It’s how we gain insights about donor and customer behavior that allow us to build effective communication strategies that yield results. The list goes on and on.
And while most of us are likely in agreement that testing is “the bee’s knees,” it’s easy to fall into a trap where we are testing just to test, and we aren’t always planning our testing in a thoughtful way that will consistently allow us to maximize results across our programs.
Testing is far too valuable to be an afterthought. Planning and prioritizing your testing agenda in advance is just as important as executing the tests themselves.
In the DMAW webinar “How to Prioritize and Maximize Your Testing,” we cover testing from A to Z and offer new information for beginners and experts alike. Here’s an overview of our discussion.
When setting up each of your tests, make sure that you know the goal of the test and how you will measure success before the test is developed and executed. Determine whether you will get a solid directional read on results based on the quantities you’re testing. It’s even better if you can achieve statistical significance at the 95%+ level, so ask yourself in your planning: what do I need to do to get there? You may decide, for example, that a test needs to run across a series of communications to build up enough responses and data to achieve that significance, and you can then map out the test plan from there.
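To make the 95% threshold concrete, here is a minimal sketch of one common way to check it: a two-proportion z-test comparing a test panel’s response rate against the control. The function name and the mail quantities are illustrative assumptions, not figures from the webinar.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two response rates.

    conv_a / n_a: responders and mail quantity for the control panel;
    conv_b / n_b: responders and mail quantity for the test panel.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled response rate under the null hypothesis of no difference.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: control of 10,000 at a 1.0% response rate
# vs. a test panel of 10,000 at 1.3%.
z = two_proportion_z(100, 10000, 130, 10000)
significant_at_95 = abs(z) > 1.96  # two-tailed 95% confidence threshold
```

Running the numbers before the mailing goes out, rather than after, tells you whether your planned quantities can even reach significance; if the expected z-value falls short of 1.96, that is the signal to extend the test over additional communications.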
As your tests are developed, isolate variables so that you can accurately establish causality and know without a doubt that a particular change in creative, message, or targeting is what caused the outcome. If you are looking to test multiple variables, consider multivariate testing as an option.
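One way to picture a multivariate setup is a full-factorial design: every combination of the variables becomes its own test cell, and the audience is split evenly across the cells. The variables and donor IDs below are hypothetical placeholders for illustration.

```python
import itertools
import random

# Hypothetical variables for a multivariate mail test.
envelopes = ["window", "closed-face"]
asks = ["$25", "$35"]
teasers = ["urgent", "no teaser"]

# Every combination becomes its own test cell, so each variable's
# effect can be read on its own, and interactions can be spotted.
cells = list(itertools.product(envelopes, asks, teasers))

random.seed(7)  # fixed seed so the sketch is reproducible
audience = [f"donor_{i}" for i in range(8000)]
random.shuffle(audience)

# Deal the shuffled audience evenly across the eight cells.
assignments = {cell: audience[i::len(cells)] for i, cell in enumerate(cells)}
```

With two options per variable, three variables produce eight cells of 1,000 donors each; the trade-off is that each added variable doubles the number of cells and shrinks the quantity behind each read.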
When it comes to selecting your test audience, it’s best to use a simple random sample for testing the overall effect on a population. But if you suspect that your test will disproportionately affect one segment of your audience, consider using a stratified sample, and testing on segments of your audience, to increase your confidence in the results.
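The difference between the two sampling approaches can be sketched in a few lines. The file composition here (9,000 low-dollar and 1,000 mid-dollar donors) is an invented example; the point is that the stratified draw guarantees the small segment is represented in proportion to its size, while a simple random sample only does so on average.

```python
import random
from collections import defaultdict

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical file: 9,000 low-dollar and 1,000 mid-dollar donors.
donors = [{"id": i, "segment": "low" if i < 9000 else "mid"}
          for i in range(10000)]

# Simple random sample: every donor has an equal chance of selection.
simple = random.sample(donors, 1000)

# Stratified sample: draw 10% from each segment separately, so the
# mid-dollar group is guaranteed exactly proportional representation.
by_segment = defaultdict(list)
for d in donors:
    by_segment[d["segment"]].append(d)

stratified = []
for segment, members in by_segment.items():
    stratified.extend(random.sample(members, len(members) // 10))
```

Both samples total 1,000 names, but only the stratified one is certain to include exactly 100 mid-dollar donors, which is what lets you read that segment’s results with confidence.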
And lastly, when it’s time for the data to come in and you’re analyzing the results, let the data tell the story. We’ve all been there before. We just know a test is going to do exactly what we want it to do. And then, it doesn’t. But that doesn’t mean all is lost. Sometimes, even if you haven’t proven your hypothesis, there are other gems in the results that may lead you to a new, winning strategy. So even if you don’t have a winner this time, you’re definitely one step closer.
Courtney Lewis is the Vice President of Client Services at Chapman Cubine and Hussey and can be reached at 703-248-0025 or firstname.lastname@example.org.