A/B Testing an In-App Purchase

Over the last few months I have been asked a lot of questions about A/B testing, coming from all sorts of perspectives. They go something like this:

Have you ever done an A/B test?

Are A/B tests complicated?

Do you think that A/B testing is worth it?

So I thought I would run a little public experiment with one of my apps. These are the results.


The Setup

I have an app called 1RepMax. It is a weightlifting app for anyone following a percentage-of-max lifting program. The app has been around since 2010 and is fairly popular; it has some competition in the category, but I believe it is one of the top apps for this purpose.

This past year, I added three consumable in-app purchases as a way for users to show appreciation for the work I have done over the years by giving me a tip. I have received some tips, but nothing significant, so I thought I would use an A/B test to see if I could improve the numbers a little.

Currently the in-app purchases look like this:

[Screenshot: the current tip-style in-app purchase screen]
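For context, each tip is just a consumable StoreKit product. Below is a minimal sketch of how a purchase flow like this might be wired up; the class name and product identifiers are placeholders I made up, not the ones the app actually uses.

    import StoreKit

    // Hypothetical tip store: loads three consumable products and starts a purchase.
    final class TipStore: NSObject, SKProductsRequestDelegate, SKPaymentTransactionObserver {

        // Placeholder identifiers; the real app's product IDs will differ.
        private let productIDs: Set<String> = [
            "com.example.onerepmax.tip.small",
            "com.example.onerepmax.tip.medium",
            "com.example.onerepmax.tip.large"
        ]
        private var products: [SKProduct] = []

        override init() {
            super.init()
            SKPaymentQueue.default().add(self)   // observe transaction updates
        }

        func loadProducts() {
            let request = SKProductsRequest(productIdentifiers: productIDs)
            request.delegate = self
            request.start()
        }

        func productsRequest(_ request: SKProductsRequest, didReceive response: SKProductsResponse) {
            products = response.products   // show these on the tip screen
        }

        func buy(_ product: SKProduct) {
            SKPaymentQueue.default().add(SKPayment(product: product))
        }

        func paymentQueue(_ queue: SKPaymentQueue, updatedTransactions transactions: [SKPaymentTransaction]) {
            for transaction in transactions where transaction.transactionState == .purchased {
                // Consumable tip: thank the user, then finish the transaction.
                queue.finishTransaction(transaction)
            }
        }
    }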


The New Look

Very recently, Marco Arment, the owner of the Overcast podcasting app, changed his business model from an in-app purchase feature unlock to a patron model. The patron model is much like what I am doing in my app, except I call it a tip. It quickly struck me that framing this as patron support is a much nicer way to ask for money, so I thought I might switch to that language. And why not turn the switch into an A/B test?

After some quick research, I decided to follow Marco's lead and lay out my current three options just as he did: using the patron terminology and three multi-month tiers to describe what users are giving towards. The new screen looks like this:

[Screenshot: the new patron-style purchase screen]

One other thing I did before submitting the app to the App Store was to change the plus icon that launches the in-app purchase screen to a variation of the app icon itself. I did this to grab the attention of long-time users of the app. As I mentioned, this app has been around since 2010, so I thought the change made sense.

[Screenshot: the app-icon-style button that now launches the purchase screen]

I am using Optimizely, the A/B testing tool I am most familiar with, to run this experiment.
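At its core, the experiment just needs to assign each user to the A (tip) or B (patron) copy and keep that assignment stable between launches. I have not reproduced Optimizely's actual SDK calls here; the sketch below is a generic, made-up illustration of deterministic 50/50 bucketing on a stored identifier, which is roughly what a testing tool handles for you.

    import Foundation

    enum PurchaseVariant: String {
        case tip     // A: existing tip language
        case patron  // B: new patron language
    }

    // Stable FNV-1a hash so a user's bucket does not change between launches.
    func stableHash(_ value: String) -> UInt64 {
        var hash: UInt64 = 0xcbf29ce484222325
        for byte in value.utf8 {
            hash ^= UInt64(byte)
            hash = hash &* 0x100000001b3
        }
        return hash
    }

    // Hypothetical stand-in for the testing tool's bucketing logic.
    func assignVariant(userID: String) -> PurchaseVariant {
        return stableHash(userID) % 100 < 50 ? .tip : .patron   // 50/50 split
    }

    // Usage: create an identifier once, persist it, and branch the screen copy on it.
    let userID = UUID().uuidString   // in practice, stored and reused across launches
    let variant = assignVariant(userID: userID)
    print("Showing the \(variant.rawValue) version of the purchase screen")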

The Results

For the initial release I set the test up as a 50/50 split between the A and B segments. So far I have seen a twenty percent increase in the number of in-app purchases made, with the distribution between the three tiers staying roughly the same. With an improvement like that, I will be moving everyone over to the patron language very soon.
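For anyone who wants to sanity-check a result like this, the relative lift and a rough two-proportion z-test are quick to work out. The counts below are made up purely for illustration and are not my actual numbers.

    import Foundation

    // Hypothetical counts, not real data: users shown each variant and purchases made.
    let usersA = 1_000.0, purchasesA = 10.0   // A: tip language
    let usersB = 1_000.0, purchasesB = 12.0   // B: patron language

    let rateA = purchasesA / usersA
    let rateB = purchasesB / usersB
    let lift = (rateB - rateA) / rateA        // relative lift: 0.20 means +20%

    // Two-proportion z-test: is the difference bigger than chance alone would explain?
    let pooled = (purchasesA + purchasesB) / (usersA + usersB)
    let standardError = sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB))
    let z = (rateB - rateA) / standardError

    print(String(format: "Lift: %.0f%%, z = %.2f", lift * 100, z))
    // With samples this small, z stays well under 1.96, so a 20% relative lift
    // would not yet be statistically significant; it is worth letting the test run longer.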

The Conclusion

Looking back at those initial questions, I now have some answers. Have I done any A/B testing? I have done tests in the past, and now here is a specific, public example. Are A/B tests complicated? There are varying levels of complexity, so it is hard to give a yes-or-no answer; however, if the test is set up correctly with clear goals in mind, I think they are relatively simple to implement. Lastly, do I think it is worth it? In this case it certainly was: the change in language directly added to the money the app is generating. That will not always be the case, though, so I would recommend pairing A/B testing with a strong dose of intuition.