Split Testing

This section shows how web analytics and split-testing can be used to confirm whether your application of our website optimization recommendations has been effective, both for your customers and for your overall business performance.

The world of web analytics has been shaken up since Google Analytics became available for open sign-up in August 2006. This means that every e-commerce site can have some sort of web analytics system installed, and that is before counting enterprise solutions like Adobe SiteCatalyst or open source analytics platforms like Piwik and Open Web Analytics. As a consequence, a tipping point has been reached in e-commerce. Whereas previously a site with good analytics insights might have had a competitive advantage, now a site without analytics is at a substantial competitive disadvantage. And the analytics arms race is heating up among the market leaders as well. Good analytics enable more effective marketing, better conversion and higher order values, all of which mean more money to spend on further efficiency gains, including even better analytics insights.

While there are many remarkable claims made for the impact of split-testing (can a simple change to a single button really generate an extra 300% growth in revenue for one retailer?), split-testing has the potential to be as transformative for e-commerce as web analytics, yet it does not appear to have come anywhere close to realizing this potential. There are several likely reasons for this. Split-testing technology is relatively new and often requires changes to e-commerce backend code, and hence a software re-deployment for the launch of each new test. Also, the skills needed to identify substantial, insightful, evidence-based hypotheses for split-testing are significant and far from widespread in the e-commerce community. As a result, few tests are undertaken, and those that are either give statistically insignificant results or are trivial, with little financial benefit or limited generality beyond the specific circumstances of the test. Where split-testing does have a high likelihood of making substantial improvements is in checkout: the losses are large, easy to measure and confined within a short process over a few pages. We suggest, therefore, that while the hype over split-testing may be overblown for many aspects of e-commerce design, it holds true for checkout improvements and even lead generation.
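
To make "statistically insignificant" concrete, the standard check for a simple A/B split-test is a two-proportion z-test on conversion rates. The sketch below, in Python, shows the calculation; the visitor and conversion counts are hypothetical.

    from math import sqrt, erf

    def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
        """Two-sided p-value for the difference in conversion rate."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Standard normal CDF via erf, doubled for a two-sided test.
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    # Hypothetical result: 2.0% vs 2.4% conversion on 10,000 visitors each.
    print(two_proportion_p_value(200, 10_000, 240, 10_000))  # ~0.054

Note that even a 20% relative uplift on 10,000 visitors per variant comes out just above the conventional 0.05 threshold here, which is exactly how promising-looking tests end up inconclusive.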

Split-testing can vary from a test of a specific control on a single page all the way through to changes that apply to the entire checkout. At Soliber Net we undertake split-testing for our clients on very clear terms: the website must receive enough traffic, and start with a high enough conversion rate, to promise a real return from implementing and running the test (the sample-size sketch after the list below shows why traffic matters so much). As should be becoming clear by now, split-testing can get quite complex quite quickly. It is all too easy to get drawn into big multivariate experiments only to find that none of the combinations yield significant effects. This can often be avoided by starting with a simple and quick A/B test, such as those suggested below:

  • Button test – there are several ways to get button design wrong, and in most cases we don’t need to run a test to prove it. When we do, though, we suggest testing whether call-to-action buttons become more effective the more visually striking they are and the more compelling their call-to-action message is. This is a simple enough test to implement (see the variant-assignment sketch after this list) and has the advantage of giving a clear and quick indication of whether button design has the potential to increase clickthrough and hence revenue.
  • Banner test – this is a very effective and simple-to-implement split-test designed to select the best creative and callouts for banner retargeting within the website.
  • Checkout isolation test – any links from the checkout to anywhere else on the site are simply temptations to abandon checkout. So, a testable hypothesis for split-testing is that an isolated checkout, with all the links to other parts of the site removed, will have a higher rate of checkout completion than a checkout where such outbound links remain. This is already quite a demanding test: you need to test two different headers (one with and one without links) and two footer sections (one with all the site-wide links and another with only checkout-related links that open in a pop-up layer over the checkout). The test also needs to be run over all of the checkout pages.
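
On the traffic requirement mentioned above, a rough power calculation shows how many visitors a test needs before it can credibly detect an uplift. The sketch below uses the standard normal approximation; the baseline rate and target uplift are hypothetical, and the z-values assume 5% significance and 80% power.

    from math import ceil, sqrt

    def visitors_per_variant(base_rate, relative_uplift,
                             z_alpha=1.96, z_beta=0.84):
        """Approximate visitors needed per variant to detect an uplift."""
        p1 = base_rate
        p2 = base_rate * (1 + relative_uplift)
        p_bar = (p1 + p2) / 2
        n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
              + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p2 - p1) ** 2)
        return ceil(n)

    # Detecting a 10% relative uplift on a 2% conversion rate:
    print(visitors_per_variant(0.02, 0.10))  # roughly 80,000 per variant

Needing roughly 80,000 visitors per variant to detect a 10% uplift on a 2% conversion rate is precisely why low-traffic sites rarely see a real return from split-testing.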
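For the button test above, the most basic implementation detail is variant assignment. Hashing a visitor ID, as in the minimal Python sketch below, keeps each visitor in the same variant across page views without storing any state; the function and test names are illustrative.

    import hashlib

    def assign_variant(visitor_id, test_name="button-test"):
        """Deterministically bucket a visitor into variant 'A' or 'B'."""
        digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # The same visitor always sees the same button variant:
    print(assign_variant("visitor-42"))

Seeding the hash with the test name means the same visitor can land in different variants across different tests, which avoids correlated assignments when several tests run at once.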