How to design a successful test

Test design principles will guide you away from pitfalls and unclear, unreliable results. Kinase’s David Reed lays out how to plan properly - and how to test for success


“Sure, let’s just make that change and see what happens!” 

Sound familiar? Whilst this approach is common and feels like a way to maintain momentum, it puts businesses in danger of making poor marketing decisions. Piecing together results from a test without considered planning can lead to conclusions that are faulty, imprecise or both. At best you end up, for example, showing less engaging creatives. At worst, you decide to reduce investment in a profitable channel and lose out on a multitude of sales.

So, how do you set up a test correctly? As outlined by Kinase founder Richard Brooks, testing and experiments have never been more crucial to a successful digital marketing strategy. Here are three key steps to cover in your test set-up.

ONE - DETERMINE A MEASURE OF SUCCESS

The first step should be to ask, “What outcome would give us confidence that this test has been successful?” (Or, similarly, “What outcome would convince us that it hasn’t?”) This step allows us to determine the appropriate KPI for measuring the success, or failure, of the test hypothesis.

For example, the success of creatives is commonly measured via click-through rate (CTR), because CTR indicates how engaging the ads were (or how relevant the ad placements were). Ultimately, however, we may be looking to drive more leads or sales. So would you accept a lower CTR if you could drive more conversions for the same cost? Or, to rephrase that, would you prefer more people click on your ads, or the right people?
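To make that trade-off concrete, here is a minimal sketch in Python - all impression, click, conversion and spend figures below are invented purely for illustration:

```python
# Hypothetical figures for two creatives with the same spend,
# illustrating why CTR alone can be misleading.
creatives = {
    "A": {"impressions": 100_000, "clicks": 5_000, "conversions": 50, "spend": 2_000},
    "B": {"impressions": 100_000, "clicks": 3_000, "conversions": 75, "spend": 2_000},
}

for name, c in creatives.items():
    ctr = c["clicks"] / c["impressions"]  # engagement
    cpa = c["spend"] / c["conversions"]   # cost per conversion
    print(f"Creative {name}: CTR {ctr:.1%}, cost per conversion £{cpa:.2f}")

# Creative A "wins" on CTR (5.0% vs 3.0%), but Creative B converts
# more cheaply (£26.67 vs £40.00) - the right people clicked.
```

Creative A looks better on CTR alone, but Creative B delivers conversions at lower cost - which is usually the outcome that matters.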

We might be able to get a test result (see image left, below), but we want to aim for the result we need (image right, below).

Hitting the target (left), but was it the target you needed (right)?

We need the test to be meaningful because it will cost time and money - even if you’re not putting additional budget behind a test, you run the risk that what you’re testing drives worse performance. That risk can also be considered a cost.

TWO - DEFINE A CONTROL

Before a meaningful test is launched, the next element we need to decide on is, “What are we benchmarking against?” 

The downside of ‘just seeing what happens’ is that you won’t know whether results were caused by your test or by something else, such as seasonality, competitor activity, or a combination of other changes on your website.

You can overcome this by setting up a control group. A control group could be a set of campaigns, or a region of your target market, that is expected to behave in the same way as the test group. Geographic regions generally work well for test and control groups because they meet that requirement. The chosen control can be verified with historical data to demonstrate that it behaves in sync with your proposed test group - for example, a scattering of postcodes across the UK whose data aligns with a similarly configured test group.
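As a rough illustration, here is a minimal sketch of that historical check, assuming you can export weekly figures for each group from your analytics platform (the series and the 0.9 threshold below are hypothetical):

```python
# Verify a candidate control group against the proposed test group
# using historical weekly data. These series are hypothetical; in
# practice you would export them from your analytics platform.
import numpy as np

test_weekly_sales = np.array([120, 135, 128, 150, 142, 160, 155, 170])
control_weekly_sales = np.array([118, 132, 130, 148, 145, 158, 152, 168])

# Pearson correlation: how closely the two series move together.
corr = np.corrcoef(test_weekly_sales, control_weekly_sales)[0, 1]
print(f"Historical correlation: {corr:.3f}")

# The 0.9 threshold is an illustrative assumption, not a universal rule.
if corr > 0.9:
    print("Candidate control moves in sync with the test group.")
else:
    print("Look for a better-matched control group.")
```

The stronger and more stable this historical relationship, the more confident you can be that any divergence during the test was caused by the test itself.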

Alternatively, you could test something across half of your product categories. This requires caution and verification against your marketing plan. For example, if the categories in the test go into a sales promotion and the half we aren’t testing don’t, we won’t know whether the results were caused by our test or by the promotion.

THREE - MAKE IT SIGNIFICANT (AND ESCAPE THE NOISE)

To know if a change is statistically significant, we need sufficient data. This is achieved by a combination of volume and time: volume is the size of the change in our independent variable (e.g. spend), and time is how long the test runs for.

On the one hand, we could reduce spend by 50% for 5 weeks. This is a dramatic change, so we can expect to see results very quickly. However, assuming this spend is worthwhile, it could lead to a significant drop in performance for those 5 weeks.


Alternatively, we could cut spend by 25% and run the test for 10 weeks. This reduces the week-to-week impact on performance but gives us more time to determine whether there is a consistent change.

What’s key here is that the total difference between test and control groups in our independent variable needs to be large enough for us to see a notable difference in our dependent variable. Our two examples above produce roughly the same total difference in the independent variable (50% × 5 weeks = 25% × 10 weeks, as shown by the shaded areas below).
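A quick back-of-the-envelope check of that equivalence, assuming a hypothetical baseline weekly spend:

```python
# Both designs remove the same total spend versus the control - the
# "shaded area" in the charts. Baseline weekly spend is hypothetical.
weekly_spend = 10_000

design_a = 0.50 * weekly_spend * 5   # 50% cut for 5 weeks
design_b = 0.25 * weekly_spend * 10  # 25% cut for 10 weeks

print(design_a, design_b)  # 25000.0 25000.0 - the same total change
```

The trade-off is therefore not about the total size of the change, but about how sharply it shows up week to week versus how long you have to wait for a consistent signal.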

There is a caveat, though: whilst test spend has decreased (red line) and we can see a total reduction in spend (shaded area), spend never moves outside the maximum and minimum of what the grey line had achieved before the test. This means that our results may be in line with non-test performance and give us no significant result.
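As a rough sketch of that ‘escape the noise’ check, again with hypothetical weekly spend figures:

```python
# Is test-period spend actually outside the range seen before the test?
# All figures are hypothetical weekly spend values.
import numpy as np

pre_test_spend = np.array([9_200, 10_800, 9_600, 11_100, 10_300, 9_900])
test_spend = np.array([9_400, 9_600, 9_500, 9_300, 9_450])

lo, hi = pre_test_spend.min(), pre_test_spend.max()
outside = (test_spend < lo) | (test_spend > hi)

if outside.any():
    print("Test spend moves outside the pre-test range - a detectable change.")
else:
    print("Test spend stays within historical noise; expect no significant result.")
```

If the change you make never escapes the historical range, the results are likely to be indistinguishable from normal fluctuation.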

NEXT STEP - IMPLEMENTATION

With success clearly defined, a control group analysed and agreed, and a degree of significance built in, you’re ready to go.

With the design in place, you can start working out the details of your test implementation - timeline, targeting, control, and reporting set-up. Kinase’s Analytics team runs tests on marketing activity using Skai’s Impact Navigator platform, which is purpose-built for easy implementation and reporting once a test has been designed. Kinase can also help design the KPIs, control group and significance parameters.

Once a well-designed test concludes, it will deliver new information and insights - giving your test the impact it deserves.
