The Top 3 A/B Testing Challenges That Prevent Marketers From Getting Big Lifts

A/B testing can generate impressive results because it allows marketers to discover what really works. And the results can be clearly measured and communicated to clients, business leaders, or partners – whether it’s an ecommerce test that generates 36% more cart completions or a healthcare marketing test that produces 638% more leads.

But those lifts don’t come easy. In working with companies to optimize conversion using A/B testing in MECLABS Institute Research Partnerships, we’ve noticed a few commonalities in the challenges companies face – whether big or small, B2B or B2C, ecommerce or lead gen.

Challenge 1: Knowing What to Test

You can’t just put any two landing pages into a splitter and expect a lift. Marketers quickly learn that some small changes, appealing because they are easy to implement, just aren’t impactful enough to generate a lift. And big changes might result in a loss (that’s not entirely bad; see Challenge 3).

To know what to test, first you must know where to test. This is where your customer data can be so powerful. Run a funnel metrics analysis to pinpoint where your sales funnel leaks. Where are customers dropping out of the funnel?

This is where you run your A/B tests for the opportunity to have the biggest impact.
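As a rough illustration, a funnel metrics analysis can be as simple as computing the continuation rate between each adjacent pair of stages and flagging the biggest drop-off. The sketch below is a minimal Python example; the stage names and visitor counts are hypothetical placeholders, not data from any real funnel.

    # Minimal funnel metrics sketch: find the leakiest step in a funnel.
    # Stage names and counts are hypothetical placeholders.
    funnel = [
        ("Landing page visits", 50_000),
        ("Product page views", 18_000),
        ("Add to cart", 4_500),
        ("Checkout started", 2_700),
        ("Order completed", 1_100),
    ]

    for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
        rate = next_count / count
        print(f"{stage} -> {next_stage}: {rate:.1%} continue, {1 - rate:.1%} drop off")

    # The step with the largest drop-off is usually the highest-leverage place to test.
    leakiest = min(zip(funnel, funnel[1:]), key=lambda pair: pair[1][1] / pair[0][1])
    print(f"Biggest leak: {leakiest[0][0]} -> {leakiest[1][0]}")

The step with the steepest drop-off is where an A/B test has the most room to move the overall conversion number.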

Once you’ve identified the where, then you want to ask what is keeping customers from taking the next step in your funnel. These shouldn’t be definitive statements. They are educated guesses (hypotheses). Here’s a framework that might help with those “guesses”: the MECLABS Conversion Sequence Heuristic.

The heuristic is not a formula to solve but rather a thought tool, and it gives you a language with which to discuss test ideas. For example, Aetna’s HealthSpire ran a test in which it decided to further emphasize the value of the conversion action (contacting call center agents) and reduce anxiety at the expense of increasing friction. It was a challenge to their previous approach, which is why they tested it and didn’t risk just implementing it straight away.
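For readers who haven’t encountered it, the heuristic is typically written as C = 4m + 3v + 2(i - f) - 2a, where C is the probability of conversion, m is the prospect’s motivation, v is the clarity of the value proposition, i is the incentive to take action, f is friction, and a is anxiety. The coefficients signal relative weight rather than numbers to calculate. In the HealthSpire test, the team essentially bet that gains in v and reductions in a would outweigh the added f.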

The result: that 638% increase in leads mentioned at the beginning of the article.

Challenge 2: Running Valid Tests

Marketers will discover Challenge 1 pretty quickly when they aren’t generating results or valuable customer insights from their testing. Challenge 2 is more pernicious, though: it can lead marketers to think they’ve discovered a way to increase conversion when they really haven’t, or cause them to overlook a real conversion increase.

A/B testing is a successful tactic because of its predictive power. For test results to truly be predictive, you have to make sure they reflect real customer behavior and that the change you made in the test is what actually caused the results. To achieve that, you have to set up and monitor the experiment in a scientific fashion and avoid validity threats like:

  • Instrumentation effects: For example, 10,000 emails don’t get delivered because of a server malfunction, or a piece of hidden code causes an abnormally long load time on one treatment.
  • History effects: For example, unexpected publicity around the product at the exact time you’re running the test, a marketing campaign that temporarily skews demand in one direction, running a test for only 20 hours on a Tuesday when weekend traffic behaves very differently, or running a test on your ecommerce site with highly motivated December holiday traffic and expecting to get the same results in January.
  • Selection effects: For example, another division runs a pay-per-click ad that directs traffic to your email’s landing page at the same time you’re running your test, or customers self-select which treatment they see.
  • Sampling distortion effects: This is a failure to collect a sufficient sample size to overcome random chance – for example, declaring a test valid based on just 100 responses. (A quick way to estimate the sample size you actually need is sketched below.)
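To guard against that last threat, it helps to estimate the required sample size before launching. The following sketch uses only Python’s standard library and the standard two-proportion sample-size formula; the baseline conversion rate, detectable lift, significance level, and power are hypothetical assumptions you would replace with your own numbers.

    # Rough per-treatment sample size for a two-proportion A/B test.
    # Baseline rate, detectable lift, alpha, and power below are hypothetical.
    from statistics import NormalDist

    def sample_size_per_arm(p_baseline, p_treatment, alpha=0.05, power=0.80):
        """Approximate visitors needed in EACH treatment to detect the difference."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
        z_beta = NormalDist().inv_cdf(power)            # statistical power
        variance = p_baseline * (1 - p_baseline) + p_treatment * (1 - p_treatment)
        effect = abs(p_treatment - p_baseline)
        return int(((z_alpha + z_beta) ** 2 * variance) / effect ** 2) + 1

    # Example: 3% baseline conversion, hoping to detect a lift to 3.6% (a 20% relative lift).
    print(sample_size_per_arm(0.03, 0.036))  # roughly 13,900 visitors per treatment

In this hypothetical case, each treatment needs on the order of 14,000 visitors – far more than 100 responses – before the result can be trusted to reflect anything beyond random chance.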

Challenge 3: Interpreting Test Results

Let’s say you handle Challenges 1 and 2 correctly and get a huge result. There’s still a fundamental question that needs to be answered – why? Why did customers behave that way? What did you learn about the customer, and how can you use this knowledge?


Daniel Burstein is the Senior Director, Content and Marketing at MECLABS Institute. Daniel oversees all content and marketing coming from the MarketingExperiments and MarketingSherpa brands while helping to shape the marketing direction for MECLABS — digging for actionable discoveries while serving as an advocate for the audience.


Publish date: September 27, 2018
{"taxonomy":"","sortby":"","label":"","shouldShow":""}