Things You Should Consider Before A/B Testing
A/B testing. It’s something we’ve probably all heard of, perhaps even used, but it is often misunderstood and misused as well (even when approached with the best of intentions!). Today I'm going to take a look at what A/B testing can be used for, some of the challenges you’ll face along the way, and some of the other strategies you can employ to achieve similar outcomes.

Consider The Purpose Of Your A/B Test And How Long It Might Take

The purpose of A/B testing is to determine which variant (“A” or “B”) of something (think app landing page, signup flow, etc.) performs better, by running experiments. In these experiments, some users are shown variant A and others are shown variant B. Conversion rates are then measured for each variant and compared to determine which one performs better. Conceptually it’s reasonably simple, and usually quite easy to implement technically; the real issues with A/B testing lie in experiment design and in interpreting the results.

The first big issue with experiment design is sample size. To consider a result from an A/B test conclusive, you should be aiming for statistical significance in the measured outcomes. However, if you don’t have a suitably large user base, reaching statistical significance can take a long time in practice, simply because you lack the traffic to accumulate enough data quickly. The recent example of Optimizely shutting down their free tier to focus on their enterprise product line certainly suggests that small sample sizes really can make A/B testing tricky.

Another consideration in experiment design is the time period the test will run for. How do you decide when the experiment is complete? If you are running it to the point of statistical significance, this may take many months. During this time, how do you approach changes to the feature you are testing?
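To make the sample-size point concrete, here is a small Python sketch. All numbers are illustrative assumptions, not figures from any real product: it estimates how many users each variant needs using the standard normal-approximation formula for comparing two proportions, and how long that might take at a given traffic level.

```python
import math

def sample_size_per_variant(p_a, p_b):
    """Approximate users needed per variant to detect a change in
    conversion rate from p_a to p_b, using the normal-approximation
    formula for comparing two proportions. The z-values are fixed for
    the common choice of a two-sided 5% significance level and 80% power."""
    z_alpha = 1.96  # two-sided alpha = 0.05
    z_beta = 0.84   # power = 0.80
    p_bar = (p_a + p_b) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_a * (1 - p_a) + p_b * (1 - p_b))) ** 2
    return math.ceil(numerator / (p_a - p_b) ** 2)

def days_to_complete(n_per_variant, daily_visitors):
    """Rough calendar-time estimate, assuming traffic is split 50/50
    between the two variants."""
    return math.ceil(n_per_variant / (daily_visitors / 2))

# Example: detecting a lift from a 10% to a 12% conversion rate
# requires a few thousand users per variant.
n = sample_size_per_variant(0.10, 0.12)
for daily_visitors in (10_000, 1_000, 100):
    print(daily_visitors, "visitors/day ->", days_to_complete(n, daily_visitors), "days")
```

The loop shows why the same experiment can take days on a high-traffic site but months on a small user base, which is exactly the sample-size problem described above.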
You effectively have to stick with the variants you designed up front until the test concludes, or you risk invalidating the outcomes by changing them while the test is running. With this in mind, conducting meaningful A/B tests can be harder than it sounds when you’re building out a mobile app and trying to respond to the market quickly. Modern agile development practices are at odds with locking in long-running test variants before making decisions about the next round of changes.

What Are The Alternatives To A/B Testing?
Following the above, I would suggest you explore some of these options as an alternative (or as a supplement) to more traditional A/B testing:
Conclusion
Whilst A/B testing might be a good choice for your project, it is worth checking your experiment design for soundness and considering how you might add other tools and techniques to your process to help you adapt to change more quickly.
What's been your experience to date?
This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.