Stitch Fix: Suffering from a Non-inferiority Complex?

What if you don't need version B to be better than version A?

This is an amazing post that I'm surprised I've never seen written before. It goes fairly deep into the math, but you don't need to follow it there—the most important part is building the intuition.

Most A/B tests are in the service of conversion optimization: making your website push users toward some quantifiable goal more effectively. We therefore set up a statistical test to conclude that the new version is superior to the old one. But in many instances what you actually want is to show that the new version is no worse than the old one. That case is not covered in the standard "implement Optimizely and go to town" playbook.
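To make the distinction concrete, here is a minimal sketch of a one-sided non-inferiority test for two conversion rates, using only the standard library. The function name, the margin `delta`, and all the sample numbers are made-up illustrations, not from the linked post; the post itself goes much deeper into choosing the margin and the hypotheses.

```python
from math import sqrt
from statistics import NormalDist

def noninferiority_test(conv_a, n_a, conv_b, n_b, delta):
    """One-sided z-test for non-inferiority of B vs A.

    H0: p_b <= p_a - delta   (B is worse than A by at least the margin)
    H1: p_b >  p_a - delta   (B is no worse than A, within the margin)

    A small p-value lets you reject H0 and conclude non-inferiority.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Unpooled standard error of the difference in proportions.
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_b - p_a + delta) / se
    p_value = 1 - NormalDist().cdf(z)
    return z, p_value

# Hypothetical numbers: B converts slightly worse than A (4.95% vs 5.00%),
# but well within a 1-percentage-point margin, so the test favors H1.
z, p = noninferiority_test(conv_a=500, n_a=10_000,
                           conv_b=495, n_b=10_000, delta=0.01)
```

Note that a plain superiority test on the same data would fail to reject anything: the whole point is that the hypotheses, and therefore the conclusion you are entitled to draw, are different.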

If you've ever been involved in A/B testing, you'll likely have run across these scenarios. I have, often. This is the best post I've ever seen outlining how to effectively construct a test for them.
