What is A/B Testing?
Have you ever wondered whether a certain piece of content on your website is performing to its full potential? Whether the subject line on your email campaign is compelling enough to maximize open rates, or even whether a button on your website is the right color to attract a user to click? Many businesses and their marketing teams ask these and similar questions every day. Thankfully, these questions can be answered by utilizing A/B testing.
A/B testing, or split testing, as some people like to call it, is a controlled experiment whereby two or more variants are tested against each other to find which performs better. This commonly used approach allows marketers to make the most of existing traffic that has usually taken a lot of time and money to get in the first place.
The Stages of A/B Tests
There are a handful of stages to running a successful A/B test. These stages can vary depending on who you ask, but in general, four show up time and time again, and they can be framed as four questions.
Do I need to conduct an A/B test?
Testing random ideas just for the fun of it will more than likely be a waste of time. For this reason, it is highly advised to create a hypothesis first, based on research into where the problem lies. For example: “If I make this change, I expect to see this result.” This helps you learn not only what needs to change on your site, but also valuable information about your customers and their behavior.
What metrics will define the success of this test?
This stage is the most important, and thus should be given the most time and focus. You need to define the metric that will be used to judge whether the experiment group outperforms the control group. To decide, ask yourself what you are going to use the metric for. Metrics fall into two main categories: invariant metrics, which should not differ between your experiment and control groups and serve as sanity checks, and evaluation metrics, which measure the effect you actually care about. Evaluation metrics can be high level, such as an increase in revenue or percent of market share, or finer grained, looking at aspects of the user experience.
It is important to note that some metrics may not be measurable with complete accuracy due to factors such as the technology and demographics involved. For example, JavaScript may be disabled in certain web browsers, resulting in an incorrect click-through rate (CTR). As a result, filters may be needed to ensure data is not skewed and the chosen metrics actually measure what they are supposed to.
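To make the idea concrete, here is a minimal sketch of computing CTR as an evaluation metric with a simple filter applied first. The field names ("browser", "clicked") and the excluded "legacy" browser are illustrative assumptions, not from any particular analytics tool.

```python
def ctr(events, excluded_browsers=()):
    """Click-through rate over pageview events, skipping browsers
    known to under-report clicks."""
    usable = [e for e in events if e["browser"] not in excluded_browsers]
    if not usable:
        return 0.0
    clicks = sum(1 for e in usable if e["clicked"])
    return clicks / len(usable)

events = [
    {"browser": "chrome", "clicked": True},
    {"browser": "chrome", "clicked": False},
    {"browser": "legacy", "clicked": False},  # clicks never recorded here
    {"browser": "firefox", "clicked": True},
]

print(ctr(events))                                # 0.5, dragged down by "legacy"
print(ctr(events, excluded_browsers={"legacy"}))  # ~0.667 once skewed rows are filtered
```

Filtering out the browser that cannot record clicks changes the measured CTR noticeably, which is exactly why such checks matter before comparing variants.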
How to design an experiment?
Designing the experiment includes deciding on a unit of diversion, deciding on the size and characteristics of the population and how long the experiment will run for.
- The unit of diversion is the unit you use to split traffic between the variants you are comparing. Common choices are event-based (e.g. pageview), anonymous ID (e.g. cookie ID), or user ID. When the change is visible to users, it is important to divert by person rather than by event; otherwise a user could see the change, refresh the page, and be confused when it disappears. If you are measuring something invisible, such as a latency change, event-level diversion may be enough.
- Next, select the population of subjects that are eligible for the test. Not everyone who visits your site may be eligible, as you might only be looking at US traffic, or only at students, depending on what you are testing and why.
- Timing in A/B testing can decide whether the experiment is carried out correctly or not. When is the best time to run the experiment? During the holidays? At night? Weekdays versus the weekend? The answer depends on who the population is and what you are looking to achieve. Make sure the experiment runs long enough to gather a sufficient amount of data, but not so long that you miss the opportunity to show the better-performing page to all of your site visitors.
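Diverting by anonymous ID is often implemented by hashing a stable cookie ID, so a returning visitor always lands in the same bucket across page refreshes. The sketch below assumes a 50/50 split and a made-up experiment name; it is one common approach, not a prescribed one.

```python
import hashlib

def assign_variant(cookie_id: str, experiment: str = "homepage-button") -> str:
    """Deterministically map a cookie ID to a variant for one experiment."""
    digest = hashlib.sha256(f"{experiment}:{cookie_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable bucket in 0..99
    return "experiment" if bucket < 50 else "control"

# The same ID always gets the same variant, so a visitor never sees
# the change appear and disappear between page loads.
print(assign_variant("visitor-123"))
print(assign_variant("visitor-123"))  # identical to the line above
```

Including the experiment name in the hash keeps assignments independent across concurrent experiments, so being in one test's experiment group does not bias assignment in another.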
How to analyze data?
Tests can end in three different ways: the control wins, the experiment wins, or there is no change. Reading the result is for the most part easy, but it is important not to pat yourself on the back just yet. It is essential to dig deeper into the results and find out more about the behavior of your customers. As Bryan Clayton, CEO of GreenPal, explains: “Only with A/B testing can you close the gap between customer logic and company logic and, gradually, over time, match the internal thought sequence that is going on in your customers’ heads when they are considering your offer on your landing page or within your app.”
A/B Testing Examples
The retention of customers is an issue for many watch companies, as a customer is usually only in the market for a watch every few years. MVMT faced this issue and introduced a selection of interchangeable watch straps to their site. To ensure these straps increased customer retention, the way they were presented on the site was tested. The control in this experiment had no cross-selling of the straps, with two test variations: one with the straps above the watches and one with the straps below the watches. Through this test, MVMT were able to increase conversions by 5.5% for mobile shoppers and 2.2% for shoppers on desktop.
Work-management software company Asana used A/B testing to successfully redesign and rebrand their website, improving user experience along the way. To ensure users were not surprised by a sweeping redesign, Asana implemented the changes slowly over time, gradually optimizing the site for the best user experience. By breaking the work into two categories, Asana’s product team was able to first focus on core functionality features, implementing each once it had performed well in its test segments. After this, their rebranding team rolled out the site’s overall new branded look.
A/B testing is just one of the ways that our PPC team ensures our clients’ campaigns are optimized and operating to their fullest potential. You can check out our PPC services here.
Aidan graduated with a Master’s in Digital Marketing from the National University of Ireland, Galway in 2016, where he gained a strong understanding of online marketing strategies, marketing performance, and productivity. Prior to his move to the US to work with Circa Interactive, Aidan gained experience in a variety of industries, from festivals to medical devices. His current role on the Circa team is Jr. Digital Marketing Specialist, working with both the SEO team and the Marketing Analytics team to ensure the service we provide is above the high standard our clients expect.