So, you’re probably here because you’re trying to figure out how effective your marketing automation campaigns really are and how you can improve upon them.
And let’s be honest, it can be a bit of a headache trying to figure that out. But fear not, in this article we’ll dive into the options and give you some examples too.
- What does Control Group Testing in marketing automation mean?
- What does A/B Testing in email marketing mean?
- What’s the difference between Control Group Tests and A/B Tests?
- How to determine the effectiveness of your marketing with testing
- The benefits of using Control Group Testing in addition to A/B testing
- Examples of Control Group Testing to try out for marketing automation
- Examples of A/B Testing to try out in email marketing
- Avoid these common mistakes
What does Control Group Testing in marketing automation mean?
Now, you might be wondering: what is a control test or a control group, and how is it relevant to marketing automation? The short answer is that it's really important and you should be using them!
In its simplest terms, a control group is a segment of your customer data that is excluded from whatever it is you're testing, such as channel effectiveness or campaign effectiveness.
By comparing the performance of those who receive your communications to those who don't, you get a much clearer picture of the direct impact you are having on your customers' behaviour and conversion.
Control Group Testing is quite a strategic activity. More on that later. First, let's take a look at the Control Test's more popular sibling: the A/B Test.
What does A/B Testing in email marketing mean?
You’ll most likely have heard of A/B testing, especially for email marketing as it has been around for a long time.
A/B testing is a cool and very accessible method for figuring out which version of your campaign is more effective. It’s basically like a science experiment, but for email marketing.
Here’s how it works: you take the segment for your campaign and then split that into two groups. One group gets Version A and the other gets Version B. You track the performance of each version by measuring metrics like open rates, click-through rates, and conversion rates.
It’s like a battle between versions A and B to see which one is most popular. You can split the list evenly 50/50 and take the results from there.
You can use that information to optimise your future campaigns and make data-driven decisions about what works best for your audience.
Or, if you’re more of an impatient type, you can use automated winners where the most popular will be sent to the rest of the segment.
To do this you'll run the A/B test on a certain percentage of the total segment, let's say 30%. This is split in two: 15% will receive Version A and 15% will receive Version B.
You'll select the winning metric – whether that be opens, clicks or conversions – and then run the test for a few hours; normally 2-4 is a decent window.
The winner will be crowned and then the system will automatically send out that version to the rest of your segment – in this example 70% of your audience.
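To make the mechanics concrete, here's a minimal sketch of that split-and-send logic. The segment size, the 30% test fraction and the use of click-through rate as the winning metric are illustrative assumptions, not any specific platform's API:

```python
import random

def run_ab_split(segment, test_fraction=0.3):
    """Split a segment into a test pool (divided evenly between
    A and B) plus a holdout that receives the winning version later."""
    shuffled = segment[:]
    random.shuffle(shuffled)                 # randomise to avoid bias
    test_size = int(len(shuffled) * test_fraction)
    test_pool = shuffled[:test_size]
    holdout = shuffled[test_size:]           # e.g. 70% of the segment
    half = len(test_pool) // 2
    return test_pool[:half], test_pool[half:], holdout

def pick_winner(clicks_a, sends_a, clicks_b, sends_b):
    """Crown the version with the higher click-through rate."""
    return "A" if clicks_a / sends_a >= clicks_b / sends_b else "B"

recipients = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b, holdout = run_ab_split(recipients)
print(len(group_a), len(group_b), len(holdout))  # → 150 150 700
```

After the test window closes, the platform sends whichever version `pick_winner` returns to the 700-person holdout.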
What’s the difference between Control Group Tests and A/B Tests?
So now we know that Control Tests and A/B Tests are both types of experimentation used in marketing automation to test different variations of campaigns and determine which version performs the best. But what are the main differences, and how should we use them in tandem?
The purpose of Control Testing is to determine the impact of campaign activity by comparing the performance of those who receive the campaign to those who don’t.
The purpose of A/B testing is to compare the performance of two different variations of a campaign (version A and version B) to determine which one is more effective.
Group size and testing windows
Control Testing involves creating a control group that is typically a smaller percentage of your total customer database size, let’s say 10%, and excluding them from your campaign whilst the remaining 90% are included in the campaign. Control Group testing tends to occur over a longer test window period, sometimes a couple of weeks, sometimes a few months.
A/B Testing involves splitting your segment and audience evenly into two groups and sending Version A to one group and Version B to the other. A/B Testing windows are narrow. Many tests are completed within 2-4 hours or 24 hours at most.
Method and application
Control Testing involves measuring the performance of the control group and the test group (those who are part of the campaign) to determine the impact the campaign as a whole has on customer behaviour and conversion. It plays an essential role in data-driven decisions about how future campaigns should be developed and implemented.
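That comparison boils down to a simple uplift calculation. Here's an illustrative sketch (all figures are made up) of how you might estimate the impact a campaign itself generated:

```python
def campaign_uplift(test_conversions, test_size, control_conversions, control_size):
    """Compare the conversion rate of the campaign group against the
    held-out control group to estimate the uplift the campaign drove."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    absolute_uplift = test_rate - control_rate
    relative_uplift = absolute_uplift / control_rate
    return test_rate, control_rate, relative_uplift

# Hypothetical figures: 90,000 customers in the campaign, 10,000 held out.
test_rate, control_rate, uplift = campaign_uplift(4500, 90_000, 350, 10_000)
print(f"test {test_rate:.1%}, control {control_rate:.1%}, uplift {uplift:.0%}")
# → test 5.0%, control 3.5%, uplift 43%
```

The control group's 3.5% here represents customers who converted anyway; only the difference can be credited to the campaign.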
A/B testing typically measures metrics such as open rates, click-through rates, and conversion rates to determine which version of an email campaign is more effective. A/B testing is often used to test specific elements of an email campaign such as subject lines or call-to-action buttons.
How to determine the effectiveness of your marketing with Control Group and A/B Testing
Quite simply, and like with any project you embark on, it’s important to establish clear metrics and goals before you begin.
This will allow you to measure the success and performance to make data-driven decisions about your future email marketing strategies.
The most common metrics used to measure effectiveness and success of campaigns include:
Open rate
The number of people who opened the email divided by the number of people who received it. However, since Apple's Mail Privacy Protection (MPP), open rates are no longer considered an accurate metric.
Click-through rate (CTR)
The number of people who clicked on a link in the email divided by the number of people who received it.
Conversion rate
The number of people who made a purchase after clicking on a link in the email.
Once you have established which metric your test is going to measure against you can then compare the performance of your control version against the test version.
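As a quick sketch, the three metrics above reduce to simple ratios. The figures here are hypothetical, and note that conversion rate is computed per recipient; some teams divide by clicks instead:

```python
def email_metrics(received, opened, clicked, purchased):
    """Compute the standard campaign metrics from raw counts."""
    return {
        "open_rate": opened / received,        # unreliable since Apple MPP
        "click_through_rate": clicked / received,
        "conversion_rate": purchased / received,
    }

metrics = email_metrics(received=10_000, opened=2_400, clicked=600, purchased=90)
print(metrics)
# → {'open_rate': 0.24, 'click_through_rate': 0.06, 'conversion_rate': 0.009}
```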
The benefits of using Control Group Testing in addition to A/B testing
Whilst A/B tests are akin to the cool kids finding out which fashion item will generate the most likes on TikTok, Control Group Tests are the geeky kid, book worming away in the corner researching history books on why things work the way they do.
Larger sample size
Control Group tests will use a larger sample size, which helps to ensure that the results are statistically significant and representative of your entire customer database.
Elimination of variables
In a Control Group test, the only variable being tested is the one being manipulated, while all other variables are held constant. This eliminates the possibility of extraneous variables influencing the results or muddying the waters.
Control Group tests allow for greater control over the test environment, which helps to ensure that the results are more accurate and reliable. They also provide a known standard against which to compare the results, which helps to ensure that the results are meaningful and relevant.
Identifying the true cause
Control Group testing helps to identify the true cause of any changes observed in the results, which helps to ensure that the results are accurate and reliable. This gives greater confidence in the results to inform your future marketing automation strategies.
Examples of Control Group Testing to try out
With Control Group Testing, you can run statistically significant experiments that provide insights into the effectiveness of your entire campaign strategy or elements of that strategy.
Then you can analyse the conversion rates of those included against the Control Group who are excluded. This will reveal the impact of the activity and how customer behaviour is affected. It will also show you how customers behave when excluded from these activities.
All marketing automation activity
At the top of the scale is excluding a percentage of your customer database from receiving any campaigns at all. This is an excellent test to really find out the true value of your campaigns and test attribution models.
Overall value of marketing channels
Run a test for between two to four weeks where a portion of your customer database are excluded from a particular channel e.g. Email, SMS or social channel retargeting.
Abandonment series campaigns
Run a test on your abandonment series, including abandoned basket, product and category, to find out how many of your customers were convinced to purchase by your campaigns in comparison to those who would have come back to the site and bought regardless.
Workflow length tests
Use Control Group testing to determine the most successful length of a customer journey for various automations. For example, the Welcome Journey: run a Control over a 2-3 month period by sending a certain percentage of your segment down a new journey path which has more touchpoints than the original.
The influence and costs of using voucher codes
Run a Control Group test by excluding a percentage of your segment which would normally receive a voucher code, whether that be from an abandoned basket or win-back campaign. Assessing the cost voucher redemption has on the profitability of a customer is invaluable. It may lead you to change when, where and how often you use voucher codes.
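That profitability comparison can be sketched in a few lines. All figures below are hypothetical, and the net-revenue definition is a simplification (it ignores margin and fulfilment costs):

```python
def voucher_profitability(rev_with, n_with, discount_total, rev_without, n_without):
    """Net revenue per customer for the vouchered group (after the
    discounts given away) versus the control group that got no code."""
    net_with = (rev_with - discount_total) / n_with
    net_without = rev_without / n_without
    return net_with, net_without

# Hypothetical: 9,000 customers received a 10% code, 1,000 were held out.
with_code, without_code = voucher_profitability(135_000, 9_000, 13_500,
                                                12_000, 1_000)
print(f"£{with_code:.2f} vs £{without_code:.2f} per customer")
# → £13.50 vs £12.00 per customer
```

If the two figures are close, the vouchers may simply be discounting purchases that would have happened anyway.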
Your tests shouldn’t come at a cost
It's important to ensure that you don't adversely affect your revenue and conversion when undertaking Control Group Testing.
Don’t run tests that will decrease your revenue earning potential. Use randomised samples of customers that rotate between testing periods at set times of the year – making sure to avoid peak sales periods. Excluding subscribers over a long period of time will not make anyone happy!
With Control Group testing you’ll have the power to paint a vivid and precise picture of just how much revenue your marketing automation strategy and email marketing campaigns are generating.
Examples of A/B Testing to try out
A/B Testing is only limited by your own imagination and creativity levels. There’s so much to test over a wide range of variables.
Subject lines
Subject lines are the most common A/B test in email marketing, and with good reason: it's the first thing a recipient will see in their inbox. Experiment with different copy, switch the sale percentages or brand names around, try a different tone of voice, use emojis, alter the length – the list goes on and on!
Sender name
Another thing the recipient sees first in the inbox is the sender's name. Many brands play around with different sender names. It's important to always include your brand name though, so they know it's coming from you. Try out different names for your newsletter, or send the email from a person, e.g. Andy at RedEye – just never use the sender name DoNotReply – please!
Preview text
Working in tandem with the subject line is the preview text. Not as many email marketers A/B test this, but it's just as important. With subject line space at a premium, the preview text is another golden opportunity to entice that open.
Images vs text
If you've always sent out image-heavy email creatives, then running a test period of creatives that use more text is a worthy experiment. Finding the right sweet spot between images and text for your customers could be a real winner.
Calls to action
Probably the most cited A/B test on the internet – the CTA. And it’s justified. Getting that all important click through and purchase is the reason for the campaign in the first place.
Testing CTA design in terms of height, length, position and, most importantly, the copy used will be a far more valuable experiment than simply testing the colour, which is the usual recommendation.
Email layout
Test the length of a creative: short with a single action versus long with many actions. Test different email layouts, such as a single-column versus a multi-column layout. Experiment with the hero image: large vs small, animated GIF vs static image.
Personalisation
Test the use of personalisation throughout. Add personalisation to the subject line, such as their name or, even better, the brands and products you know they purchase. Test personalisation in the hero image by showing the last brand they viewed.
Depending on how much customer behavioural data you hold you could test out generic content creatives vs fully dynamically generated content creatives.
Send times
Test sending emails at different times of the day or days of the week to see which results in higher open and click-through rates.
Avoid these common mistakes
Before conducting your A/B test, it’s important to have a clear hypothesis about what you’re trying to achieve and which variables you want to test. Without a clear hypothesis, your test will not yield meaningful results.
Test only one variable at a time. If you test more than one, you won't know which was responsible for the success or failure.
Make sure your tests are statistically significant. If they aren't, the results can't be fully trusted. This means you'll often need to run the same tests multiple times over a longer period to increase the confidence level.
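One common way to sanity-check significance is a two-proportion z-test on the two groups' conversion counts. Here's a minimal stdlib-only sketch; the sample figures are illustrative, and for borderline results a proper statistics library or calculator is the safer choice:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the gap between the two conversion
    rates bigger than random noise alone would plausibly produce?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# |z| > 1.96 roughly corresponds to 95% confidence (two-sided).
z = two_proportion_z(120, 1500, 90, 1500)   # 8.0% CTR vs 6.0% CTR
print(abs(z) > 1.96)  # → True
```

With smaller samples, say 12 vs 9 clicks out of 150 each, the same gap would not clear the bar, which is exactly why single small tests shouldn't be rolled out everywhere.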
Too many email marketers make the mistake of testing a variable once, selecting the winner and rolling it out on all future campaigns.
Another popular mistake to make is testing on a specific customer segment, and then rolling out that optimised campaign to all other segments too. It’s likely your other customers will not react in the same way.
Handpicking A/B test recipients can lead to bias. It’s more reliable to test on a larger, more varied customer segment than smaller sub-sets.
Don’t stop testing! Always look to improve the effectiveness of your campaigns through optimisation. Marketing automation platforms make it so easy to continue testing all year round.