You have a new product or campaign and it is time to execute a DRTV test, but where do you start? How do you ensure you get a good read and actionable results?
With every product in the direct response marketplace, two types of seasonality are at work: product seasonality and marketplace seasonality. It is tempting to set yourself up with ideal testing conditions by, for example, waiting until Q1 to begin a DRTV test. This looks like the smart approach on paper but, in reality, you may be wasting an opportunity. If January is your peak season but you are ready to test in August, go ahead and test in August. Waiting until January may improve your test results, but you will spend the bulk of your peak period determining the best media mix for a roll-out when you could figure that out in Q4 and hit the ground running in Q1. If you test in an off-peak period, be prepared to treat the results as only a directional read; even so, the test will let you tweak your approach and position you to take full advantage of Q1 once it rolls around.
There are, however, some notable caveats to consider when planning a test, namely heavily contested elections and major television events such as the Olympics. Because inventory shrinks in these windows, it is best to avoid testing during them, as it will be impossible to get a fair read on results.
A common mistake advertisers make when approaching a new test is not setting aside enough budget. They feel it’s best to dip their toe in the testing water and not risk too much of their total budget so they can dedicate more to the roll-out. What they end up with is a test that is too short and a result that can’t be accurately interpreted or built upon. Advertisers ultimately spend more money to determine if the first week’s results were an anomaly or a trend. The more data points you can gather during testing, the more stable and actionable the read will be.
Companies on the verge of launching a new direct response campaign often get overzealous and test everything at once: stations, programs, creative treatments, lengths, offers and premiums. The problem with this approach is that, although you will get results, you will have no idea which variable delivered the best results. It’s best to determine all the variables in advance and develop a phased testing strategy to help you determine the best approach on all fronts.
A common rotation mistake advertisers make is failing to rotate test spots properly. If your spots have different creative treatments (e.g., a male host vs. a female host), it's best to rotate the spots evenly within the same time periods. This will show you which spot resonates with your target audience. If you are price testing via multiple offers (e.g., $19.99 vs. $29.99), run the first offer for a period of time and then run the second offer for a consecutive, equal amount of time. Offers that may be perceived as better than others should not air within the same week; viewers may feel they cannot trust the advertiser.
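The two rotation patterns above can be sketched in code. This is a hypothetical illustration only; the slot names, week counts, and function names are invented for the example, not part of any standard media-buying tool.

```python
# Sketch of the two rotation patterns described above (illustrative only).
from itertools import cycle

def even_rotation(spots, slots):
    """Creative tests: alternate spots evenly within the same time periods."""
    rotation = cycle(spots)
    return [(slot, next(rotation)) for slot in slots]

def sequential_offer_windows(offers, weeks):
    """Price tests: run each offer for a consecutive, equal block of weeks,
    so competing offers never air within the same week."""
    if weeks % len(offers) != 0:
        raise ValueError("weeks must divide evenly across offers")
    block = weeks // len(offers)
    schedule = []
    for i, offer in enumerate(offers):
        for w in range(i * block + 1, (i + 1) * block + 1):
            schedule.append((f"week {w}", offer))
    return schedule

# Creative test: male host and female host alternate in the same dayparts.
print(even_rotation(["male host", "female host"],
                    ["Mon 9am", "Tue 9am", "Wed 9am", "Thu 9am"]))

# Price test: $19.99 runs weeks 1-2, then $29.99 runs weeks 3-4.
print(sequential_offer_windows(["$19.99", "$29.99"], 4))
```

The key design difference mirrors the article's advice: creatives interleave within the same slots so audience conditions match, while offers occupy separate, equal-length windows so viewers never see two prices side by side.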
Once the test begins, it's important to interpret the results correctly. Many tests are misread by looking only at the bottom-line "Spot A vs. Spot B" totals. The aggregate does not account for a daypart that performed particularly well for one spot when no spots with the alternate copy ran in that daypart. The best approach for a fair read is to align the results and remove anything that aired for one creative but not the other. You should also remove spots whose results stand out from the average as markedly better or worse (typically a disparity of 50% or more). A single spot that performed far above or below the average return will skew the entire read and may send you in the wrong direction when it comes time to roll out.
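The fair-read process above can be sketched as a simple analysis step. This is a hypothetical illustration: the record layout (creative, daypart, cost-per-order) and function name are assumptions made for the example; only the daypart-alignment rule and the 50% outlier threshold come from the guidance above.

```python
# Hypothetical sketch of a "fair read" comparison between two test creatives.
from collections import defaultdict
from statistics import mean

def fair_read(airings):
    """airings: list of dicts with keys 'creative', 'daypart', 'cpo'
    (cpo = cost per order). Returns average CPO per creative after cleanup."""
    # 1. Keep only dayparts in which BOTH creatives actually aired.
    creatives_by_daypart = defaultdict(set)
    for a in airings:
        creatives_by_daypart[a["daypart"]].add(a["creative"])
    shared = {dp for dp, crs in creatives_by_daypart.items() if len(crs) >= 2}
    aligned = [a for a in airings if a["daypart"] in shared]

    # 2. Drop outlier spots: CPO more than 50% above or below the average.
    avg = mean(a["cpo"] for a in aligned)
    trimmed = [a for a in aligned if abs(a["cpo"] - avg) / avg < 0.5]

    # 3. Compare creatives on the cleaned, aligned data.
    by_creative = defaultdict(list)
    for a in trimmed:
        by_creative[a["creative"]].append(a["cpo"])
    return {cr: mean(cpos) for cr, cpos in by_creative.items()}
```

Note how a daytime airing that ran for only one creative would be excluded in step 1, and a single spot with a runaway CPO would be dropped in step 2, so neither distorts the head-to-head comparison.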
When building a test for a new product or service, it’s important to begin with the end in mind and work back from there. The goal is to gather as many data points as possible without muddying the water by testing too many variables at once. Tests are not typically perfect, but thorough planning can help ensure you will be on the road to a successful roll-out in no time. Once you are in roll-out mode, you should continue testing with the objective of beating your control creative or offer. Continuing to test will ensure you will always be a step ahead of the dreaded creative fatigue.
Let us start planning how to gain more market share for your brand.