Things You Should Consider Before A/B Testing
A/B testing. It’s something we’ve probably all heard of, perhaps even used, but it’s often misunderstood and misused (even when approached with the best of intentions!). Today I’m going to take a look at what A/B testing can be used for,
some of the potential challenges you’ll face along the way, and some of the other strategies you can employ to achieve similar outcomes.
Consider The Purpose Of Your A/B Test And How Long It Might Take
The purpose of A/B testing is to determine which variant (“A” or “B”) of something (think app landing page, signup flow, etc.) is better than the other by running experiments. In these experiments, some users are shown variant A, and others are shown variant
B. Conversion rates are then measured for each variant and can be compared to determine which one performs better.
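To make that comparison concrete, here is a minimal sketch of the significance test that typically underpins it, a two-proportion z-test, using only the Python standard library. The visitor and conversion counts are hypothetical:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rates of variants A and B.

    Returns the z statistic and two-sided p-value under the null
    hypothesis that both variants convert at the same rate.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: A converts 120/2400 (5.0%), B converts 156/2400 (6.5%)
z, p = two_proportion_z_test(120, 2400, 156, 2400)
```

With these made-up numbers the p-value comes out just under 0.05, so B's lift would count as significant at the conventional 95% level, but note how large the per-variant samples already are.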
Conceptually, it’s reasonably simple and usually quite easy to implement technically, but the real issues with A/B testing lie in experiment design and the interpretation of results. The first big issue with experiment design is sample size.
To consider a result from A/B testing conclusive, you should be aiming for statistical significance in the measured outcomes. However, if you don’t have a suitably large userbase, reaching statistical significance may take a long time in practice
because you lack the traffic to narrow the margin of error quickly. The recent example of Optimizely shutting down their free tier to focus on their enterprise product line certainly suggests that small sample sizes really can make A/B testing impractical.
Another consideration with the experiment design is the time period it will run for. How do you measure when the experiment will be considered complete? If you are running it to the point of statistical significance then this may be many months. During this
time, how do you approach changes to the feature you are testing? You effectively have to stick with the variants you designed up front until the test concludes or you risk invalidating the outcomes by changing the variants as the test is running.
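To get a feel for how long a test might run, a back-of-the-envelope estimate helps. The sketch below uses the standard two-proportion sample-size approximation at 95% confidence and 80% power; the baseline rate, target lift, and daily traffic figures are all hypothetical:

```python
from math import ceil

def required_sample_size(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a lift from
    p_base to p_target at 95% confidence (z_alpha) and 80% power (z_beta)."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = ((z_alpha + z_beta) ** 2) * variance / (p_target - p_base) ** 2
    return ceil(n)

def estimated_duration_days(n_per_variant, daily_visitors, variants=2):
    """Days to completion, assuming traffic is split evenly across variants."""
    return ceil(n_per_variant * variants / daily_visitors)

# Hypothetical app: 3% baseline conversion, hoping to detect a lift to 3.3%
# (a 10% relative improvement), with 500 visitors a day
n = required_sample_size(0.03, 0.033)
days = estimated_duration_days(n, daily_visitors=500)
```

Under these assumptions you would need over 50,000 visitors per variant, which at 500 visitors a day works out to roughly seven months — a concrete illustration of why long-running experiments clash with rapid iteration.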
With this in mind, conducting meaningful A/B tests can be harder than it sounds when you’re building out a mobile app and trying to respond to the market quickly. Modern agile development practices are at odds with locking in long-running test variants before
making decisions for the next changes.
What Are The Alternatives To A/B Testing?
Following the above, I would suggest you explore some of these options as an alternative (or as a supplement) to more traditional A/B testing:
- Agile development with short release cycles, frequently combing through your backlog to improve the app rapidly
- Conducting user testing at the design stage to iron out any issues early on
- Using an ASO tool to stay on top of search rankings, ratings & reviews, and the competition
- Integrating a crash reporting tool to alert you to errors as they occur so you can respond before your users notice
- Staying on top of retention stats and key conversion metrics to see what impact your changes are having in real time
- Using automated push campaigns to re-engage users as they drop out
- Integrating a user feedback tool to gather actionable insights from real users
- Adopting a proactive stance toward change, whilst engaging your users in conversation about what works for them and what doesn’t
Whilst A/B testing might be a good choice for your project, it is worth checking your experiment design for soundness and considering how you might add other tools and techniques to your process to help you adapt to change more quickly.
What's been your experience to date?