Quickies: A/B testing in the real world

£200 million button

Most web marketers know the story of how Amazon changed their check-out process: the wording on the button was changed from “Register” to “Continue” and a simple message was added (“You do not need to create an account to make purchases on our site. Simply click Continue to proceed to checkout. To make your future purchases even faster, you can create an account during checkout”). The results were staggering: purchases by first-time customers went up 45%, which in turn generated an additional $300,000,000 in the first year. If you’re not familiar with it, the full story is told in Jared Spool’s write-up, “The $300 Million Button”.

“One accurate measurement is worth more than a thousand expert opinions” – Admiral Grace Hopper

What is A/B testing?

A/B testing is a method of marketing testing in which a baseline control sample is compared against a variety of single-variable test samples in order to improve response or action rates. It is a classic direct-mail tactic that has recently been adopted in the digital world to test elements such as buttons, banner ads, emails and landing pages.

Significant improvements can be seen through testing elements like copy text, layouts, images and colours. However, not all elements produce the same improvements, and by looking at the results from different tests it is possible to identify those elements that consistently tend to produce the greatest improvements. Compounding those improvements across a number of online properties frequently makes a significant difference to overall performance. Most experts agree that A/B testing is NOT optional; it should be deeply ingrained in your web strategy.

Practitioners of A/B testing will distribute multiple versions of a test, including the control, to see which single variable is most effective in increasing a response rate or other desired outcome. To be effective, the test must reach an audience of sufficient size that there is a reasonable chance of detecting a meaningful difference between the control and the other variants.
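To make “sufficient size” concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the original article; the function name and numbers are hypothetical) of the standard two-proportion sample-size formula:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_control, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect the given uplift,
    using the standard two-proportion formula (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = p_variant - p_control
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# e.g. a baseline 3% conversion rate, and we want to be able to spot a lift to 4%:
print(sample_size_per_variant(0.03, 0.04))  # ~5,300 visitors per variant
```

The striking thing is how quickly the required audience grows as the uplift you want to detect shrinks: halve the expected lift and you need roughly four times the traffic.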

We are huge fans of A/B testing when it comes to getting quick and simple performance improvements to web-sites. There are a number of different methodologies and tools available, including Google’s free Website Optimizer, but if you are serious about using A/B testing as a weapon then you need to understand some basics before diving head-long into random testing. To start with, there are some golden rules that are useful to understand.

1. What is the objective of the test?

Before you even start thinking about what to test and how to run it, or start planning the myriad items that need to be done to run a test, you must start with a strong statement of what the test is trying to prove and a way to measure it absolutely. In fact you have to be a little bit anal at this stage, because if you are not careful you can make costly mistakes by not defining the purpose of the test well enough.

BAD: We want to improve conversion rates from our site

GOOD: We want to increase the number of consultation requests from our PPC Landing Page

The enemy of testing here is the vagueness of the bad objective. Unless you closely define the measurement criteria you cannot really decide which variant is the winner. Get your aim clear and don’t change it during or after the test, ever.

2. Keep it simple, stupid

There will be a day when you have run out of all the simple tests you can do, and when that day comes you can start to explore weird and wonderful testing ideas – but until that day, keep it simple. Nothing is more likely to stifle your testing regime than slow, cumbersome tests with vague outcomes. Start by looking at your web-site touch points (the places where visitors interact with your site) and thinking about how you can try new stuff.

Start with buttons, forms, navigation, layout and position. Change the colours, sizes and wording on all your Calls-to-Action (CTAs) and think about re-writing your copy.

3. Don’t be a HiPPO

Most senior marketers have some sort of qualification, maybe attended a university to study the subject, and might have 10 or 20 years of experience. They also have an opinion, so adding the two together gets you the “Highest Paid Person’s Opinion”. In most cases this will ruin the very best testing campaign: very good ideas won’t get tested, and very bad (but cool?) ideas will be tested and fail. On balance, what we see is that the overall testing regime is then deemed not to work.

Senior marketers have to learn to let go, let the tests flow, and not express an opinion unless it has its basis in hard fact. My recommendation would be to ignore any sentence that starts “I think” or “In my opinion” and encourage sentences that start “Why not” and “If we did…”.

Better still, crowd-source the ideas from outside the marketing department.

“Almost any question can be answered cheaply, quickly and finally, by a test campaign. And that’s the way to answer them – not by arguments around a table. Go to the court of last resort – buyers of your products.” – Claude Hopkins, Scientific Advertising, 1922

4. Embrace Trial and Error economics

There is no such thing as a failed test; all tests (done properly) will give you learning. Foster a culture where people across your organisation are encouraged to come up with lots and lots of ideas. Start a big pot of ideas and randomly go through the pot picking out ideas to test. You must establish your bankers (the current “winners”), but EVERY TIME you change or add something to your site you should test the new content/layout/copy against the banker, otherwise you will never really know if your new stuff is working. A simple way to run that split consistently is sketched after the quote below.

“Enlightened trial and error outperforms the planning of flawless execution” – David Kelley, founder of IDEO
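As a practical aside (my own sketch, not from the article): to test a challenger against the banker fairly, each visitor should be assigned to one variant and then consistently see that same variant on every visit. A minimal hash-based split, with hypothetical names, might look like this:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str,
                   variants=("banker", "challenger")) -> str:
    """Deterministically bucket a visitor so the same ID always
    sees the same variant for a given experiment."""
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("visitor-42", "cta-button-copy"))  # stable across visits
```

Hashing on both the experiment name and the visitor ID means the same visitor can fall into different buckets in different experiments, which keeps tests independent of each other.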

5. Do the math

Make sure you have a good understanding of random probability, or at least read about the null hypothesis. The winner in a test is not the first past the post but the higher performer at a given confidence level that the result is not just the effect of randomisation. The maths might look a little tricky at first and may seem counter-intuitive, but trust me on this: get the maths right.
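To make that concrete, here is a minimal sketch (my own illustration, with made-up numbers) of the standard two-proportion z-test commonly used to decide whether an A/B result is statistically significant:

```python
import math
from statistics import NormalDist

def p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for a two-proportion z-test (pooled variance)."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Control: 120 conversions from 5,000 visitors; variant: 150 from 5,000.
print(round(p_value(120, 5000, 150, 5000), 3))  # ~0.064
```

Note what happens here: the variant shows a 25% relative uplift, yet the p-value (~0.064) is still above the conventional 0.05 threshold, so at a 95% confidence level you cannot yet rule out random chance. This is exactly why “first past the post” is not the winner; keep the test running.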

What about your competition?

It’s unlikely they are doing this, and even if they are testing, there is a good chance they are going about it the wrong way. Testing is a great chance to put some clear space between you and your competitors, and the sooner you start the further in front you will be and the harder it will be for them to catch you up.

How much?

Not less than 20% of your online budget. A prolonged and systematic testing programme should be able to double your conversion rates and reduce your traffic spend by around 50%. Potentially you could reach a point of diminishing returns, but if that happens in the first couple of years then you’ve probably not widened the testing net far enough. Testing should pervade ALL you do online.

More reading?

If you are interested in a more in-depth look at A/B testing and how to conduct proper controlled experiments on the web then I highly recommend the seminal 2007 white paper “Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO”. Friend and colleague Bryan Eisenberg produced a great article in 2004, titled “A/B Testing for the Mathematically Disinclined”, and also wrote a number of ace books (Waiting for Your Cat to Bark?, Call to Action and Always Be Testing). “Putting A/B Testing in Its Place” by Jakob Nielsen is worth the 5 minutes it takes to read. Finally, “The (Ultimate) Guide to A/B Testing” is great skim reading – I would not call it ultimate by any stretch, but it is certainly worth 10 minutes.
