Conversion rate optimisation is an essential part of any digital marketing strategy. But there are a lot of myths surrounding this crucial aspect of the industry. Here are the top ten myths, busted!
1. It’s all about the conversion rate
I don’t like the acronym CRO, or even the name conversion rate optimisation. It’s misleading in that it suggests it is all about the conversion rate. While the conversion rate (number of conversions / number of sessions) is commonly used and referred to in the context of website optimisation, it should by no means be the only metric used in an ongoing campaign.
That’s why I prefer calling it Conversion Optimisation… but the acronym CO is pretty lame.
Still, I prefer this name because it reminds us that there are other, equally important metrics, such as revenue, profit and average order value, which provide important context for the conversion rate.
For example, a website could run an experiment testing price points in which the winning variation sells the product considerably cheaper than all its competitors. The conversion rate is likely to go through the roof, but the business makes less money because profitability has been eroded. For this reason it is important to keep an eye on other financial metrics when running experiments: when you are optimising for conversion rate, also measure your average order value, and vice versa.
Or, even better, use metrics which combine all the important KPIs. A good metric to use would be revenue per visitor – this takes into account both the conversion rate and the average order value.
If profit margins can be applied to the measurement and reporting plan, you could go a step further and measure profit per visitor. You should also consider the impact on metrics which are indirectly linked to your experiment, such as returns and customer service emails or phone calls. All of these carry a cost to the business and must be considered and, ideally, measured.
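To make this concrete, here is a minimal Python sketch of these blended metrics, assuming you already have aggregated totals per variation; the function name and all the figures are invented for illustration:

```python
# A minimal sketch of the blended metrics discussed above, assuming you
# already have aggregated totals per variation (all numbers are invented).
def variation_metrics(sessions, orders, revenue, profit):
    """Headline metrics for one experiment variation."""
    return {
        "conversion_rate": orders / sessions,       # conversions / sessions
        "average_order_value": revenue / orders,    # AOV
        "revenue_per_visitor": revenue / sessions,  # combines CR and AOV
        "profit_per_visitor": profit / sessions,    # requires margin data
    }

# A cheaper price point can win on conversion rate yet lose per visitor.
original = variation_metrics(sessions=10_000, orders=200, revenue=20_000, profit=6_000)
cheaper = variation_metrics(sessions=10_000, orders=300, revenue=19_500, profit=4_500)

print(original["conversion_rate"], cheaper["conversion_rate"])          # 0.02 vs 0.03
print(original["revenue_per_visitor"], cheaper["revenue_per_visitor"])  # 2.0 vs 1.95
print(original["profit_per_visitor"], cheaper["profit_per_visitor"])    # 0.6 vs 0.45
```

Note how the cheaper variation converts 50% better yet earns less per visitor – exactly the price-point trap described above.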
2. CRO is a project or campaign
Conversion optimisation is not a project or even a campaign; it should be a mindset embedded within the culture of your organisation. Whether your business is an online retailer, a charity, a blue-chip organisation or a digital marketing agency – the desire to optimise the performance of a website must come from within and involve every department.
Our friends at Optimizely used the fitting analogy of the ‘conversion flywheel’ – a wheel that spins at the centre of the organisation which starts to transfer its momentum to its surroundings as it gains traction.
But what does that have to do with CRO best practice, you may wonder? I would argue that there is only one piece of CRO best practice advice you should follow, and that is ‘always be testing’. In a constantly evolving, fast-paced trading environment, best practice advice can become out of date overnight. More importantly, conversion optimisation is all about context – what works for one website probably won’t work in the same way for another.
There are myriad UX best practices and principles out there which every good designer will be well aware of, and I do not want to dismiss their value. A sound understanding of these principles will help us to form clearer hypotheses, build better experiments and ultimately improve the performance of our websites.
3. Statistical significance is the only metric you need to decide when a test is done
When running a content experiment, statistical significance is typically declared when the confidence that a variation will beat the original reaches at least 95%. Put simply, if the variation were actually no better than the original, there would be a 5% or smaller chance of seeing a difference this large in your sample purely by chance.
Hence, the higher that confidence, the less likely you are to declare a variation a winner when it is not. However, there are a number of other factors to be considered before declaring a winner and implementing the winning variation:
– Sample size – for every experiment there are three possible outcomes: accurate results, false negatives and false positives. While false positives and negatives cannot be completely avoided, using a larger sample group will always help to reduce them. However, running an experiment for a very long time means potentially losing out on testing other things and delaying the benefits of an uplift, so it is important to strike a balance that is right for your business.
– Sales cycles – an experiment should always run long enough to cover your sales cycles, such as a full week or a full month, to allow for the impact of external factors such as weekends; pay-days and bank holidays also need to be taken into consideration. Even if your website has very high traffic and you can reach statistical significance within three days, if those days were Tuesday to Thursday at the end of a month, or included a long bank holiday weekend, it is unlikely that the result will be sustainable in the long term. This is particularly important around Christmas time – experiments run in November and December may no longer be valid in January. The key here is to be aware of the sales cycles of your industry and to run your experiments accordingly.
– Inconclusive experiments – it is not always feasible to run an experiment until statistical significance is reached. We often see tests that do not produce a clear winner, and we usually stop them after a maximum running time of 4–6 weeks if there is no clear indicator of one variation performing better than another.
Running tests that have no clear impact for too long means wasting valuable time, as you could be running a test with greater impact instead. Know when to cut your losses and move on.
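For the statistically curious, here is a rough Python sketch of a two-proportion z-test, one common way testing tools arrive at the confidence figure described above; the function name and the traffic numbers are invented for illustration:

```python
# A rough sketch of a two-proportion z-test, one common way testing tools
# estimate the confidence that a variation beats the original.
# Traffic numbers below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def confidence_b_beats_a(sessions_a, conversions_a, sessions_b, conversions_b):
    """One-sided confidence (0..1) that variation B's true rate exceeds A's."""
    rate_a = conversions_a / sessions_a
    rate_b = conversions_b / sessions_b
    pooled = (conversions_a + conversions_b) / (sessions_a + sessions_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b))
    z_score = (rate_b - rate_a) / std_err
    return NormalDist().cdf(z_score)

conf = confidence_b_beats_a(12_000, 240, 12_000, 290)
print(f"{conf:.1%}")  # roughly 98.6% for this made-up data
```

Even a confidence of around 99% says nothing about sample size or sales cycles, so the caveats above still apply.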
4. An inconclusive or negative test is a wasted test
While a negative test result may leave a little dent in your ego, it is by no means to be seen as a defeat or a waste of time. Each experiment you run will teach you something about your website, your audience and their behaviour. These learnings form an important part of the optimisation process and will ultimately help you to formulate better hypotheses and run better, more successful experiments in the future.
Carefully segmenting your experiment audience once your experiment has concluded or been stopped can bring some valuable insights to the surface. For example, you might find that your variation out-performed the original for tablet users but actually performed worse than the original for desktop users.
These two results cancel each other out, leaving you with what appears to be an inconclusive result at first glance. Ensure you take the time to really understand what happened in an experiment, especially if the outcome was negative or inconclusive. There is no such thing as a wasted test.
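As an illustration, here is a short Python sketch of post-segmenting raw experiment data by device type, assuming one row per session; the column names and data are made up for the example:

```python
# An illustrative post-segmentation of raw experiment data by device type.
# One row per session; all column names and data are made up for the sketch.
import pandas as pd

sessions = pd.DataFrame({
    "variation": ["original", "variant", "original", "variant"] * 2,
    "device":    ["desktop"] * 4 + ["tablet"] * 4,
    "converted": [1, 0, 1, 0, 0, 1, 0, 1],
})

# Conversion rate per variation within each device segment.
rates = (
    sessions.groupby(["device", "variation"])["converted"]
    .mean()
    .unstack("variation")
)
print(rates)  # the variant wins on tablet while losing on desktop
```

In this toy data the two segments cancel each other out overall, which is exactly how a genuinely useful result can hide inside an apparently inconclusive one.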
5. Following industry best practice always works
Yellow buttons – we have all seen them. They are everywhere. From PayPal to Amazon and eBay, all the big players use yellow buttons, so surely yellow must be the best colour for a call to action (CTA). Well-primed by the first two ‘myths’ of this post, you are probably now thinking ‘I know, it’s all about the context’. That’s right – the best CTA button colour depends entirely on the colour scheme used on the site.
On a website that predominantly uses the colours blue and white, yellow stands out as the complementary colour. But what about a website where a yellow button would just blend into the background and become part of the wallpaper, effectively making it invisible to the scanning eye?
6. Rotating home page banners are bad for conversion
Anyone who has engaged with CRO literature will know that there is a common belief out there that rotating banners on a home page are bad for conversion.
In the same manner that I like to respond to all difficult CRO questions, my answer would be ‘it depends on the context’. Rotating home page banners are often accused of being a compromise, or a vehicle to please a number of internal stakeholders, but we have found that this depends entirely on the website, the industry and the context. In an experiment we conducted for Nationwide Vehicle Contracts, the winning variation after a series of experiments was indeed one dominated by a rotating banner.
The moral of that story is that in conversion optimisation, even expert opinion and ‘best practice’ can lead you down the wrong path. Follow your instinct and ‘always be testing’.
7. Removing the navigation from the checkout will improve your conversion rate
This sounds like a nice, straightforward piece of advice. But don’t just assume that removing the main navigation from the checkout funnel will improve your conversion rate every time. We have run many experiments on this and, while some of them have produced an uplift in conversions for our clients, some have done the opposite.
The idea behind removing the main navigation from the checkout is that it removes distraction for the visitor who is about to convert. But what if the visitor needs an important piece of information, such as shipping or returns details, half-way through the checkout? If that information is not easily available at this point and the navigation has been removed, we are forcing them to abandon the checkout entirely.
The best advice here is to analyse your conversion funnel in detail and look at the data to understand what users are doing. Use heatmap software and user recordings to get a better understanding of what they are – and are not – clicking. And whatever change you decide to make, don’t just implement it, test it.
8. The devil is always in the detail
There is a good chance you will have read many case studies about the amazing impact a moved button, a change in font or a new call to action has made. It is easy to get drawn into this level of detail when starting a conversion optimisation campaign on your own website, especially when you have been working with the site for a while and risk being too close to it.
But these small changes often only produce small uplifts, especially if the underlying issue is much more fundamental. We regularly come across websites that are unclear about what they offer, do not have a clear value proposition or are simply more expensive than their competitors.
It is therefore critical to take a step back during an initial CRO review to understand the bigger picture and the competitive environment of the website, and to assess how well it fits. Do your visitors immediately understand what your website is about and why they should buy from your site instead of any other? Does your website’s design represent your brand and service as well as they deserve? How does your website compare to your closest five online competitors? Is your website meeting your target audience’s needs?
Asking yourself simple, top-line questions such as these may highlight that the main hurdle to conversion is located at a much earlier stage in the buying cycle. Big improvements in conversion may require bold changes. The advice here is to ensure you understand the big picture before getting bogged down in the details of the conversion funnel.
9. An improvement you make now will last forever
A successfully run experiment provides a snapshot of which variation performed best at that time. All else remaining equal, the uplift generated by the winning variation should remain stable over time. But we know that almost nothing remains equal – especially in the digital space, change is the only constant.
There is seasonality; there are changes in technology and connectivity, changes in the competitive environment, changes in customer preference, changes in the economy and even changes in the weather, all of which can impact how visitors to your website behave and buy. It is therefore important never to rest on your laurels, but to make testing and improving your website an ongoing process rather than a one-off project.
Another way to keep an eye on how successful a variation is over time is to keep running a ‘stub’ – a small percentage of traffic which is still sent to the original version of the page. This provides an ongoing benchmark and some reassurance that your winner stands the test of time, but it can be tricky to maintain over a prolonged period.
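One way such a stub could be implemented is deterministic bucketing on a visitor identifier, sketched below in Python; the 5% share, the names and the hashing scheme are illustrative assumptions, not any specific tool’s method:

```python
# A minimal sketch of a 'stub': a small, sticky share of traffic keeps
# seeing the original page as a long-term benchmark (share is illustrative).
import hashlib

STUB_SHARE = 0.05  # 5% of visitors stay on the original

def assign_page(visitor_id: str) -> str:
    """Deterministically bucket a visitor so their experience stays stable."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # maps the visitor to 0..1
    return "original" if bucket < STUB_SHARE else "winning_variation"

print(assign_page("visitor-123"))  # same visitor always gets the same page
```

Hashing the visitor ID keeps the assignment sticky, so the same visitor always sees the same version – important for a benchmark intended to run for months.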
10. Your winning variation will be more satisfying for all your visitors
When split-testing a number of variations against the original page, you are looking for the variation that provides the best solution on average across your entire test sample. So while the new variation may work best for the majority of visitors, there is likely to be a sub-segment of your traffic who find the site less satisfactory than the original. Their needs, however, will be ignored because they have been ‘over-ruled’ by the majority of visitors.
The only way to make it right for everyone is to go down the personalisation route, deploying machine-learning algorithms that learn user behaviour and tailor the website content to each visitor’s unique needs – but that is an entirely different blog post altogether.
A sophisticated split-testing tool will allow you to take a few steps towards personalisation and thereby increase the overall number of visitors who have a satisfying experience on your website. The key here is segmentation, segmentation and more segmentation.
Start by running an experiment on the entire audience and post-segmenting the results, then run the next set of experiments on specific audience segments, post-segment again, and so on – using tagging, flags and targeting criteria to define each group.
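To give a flavour of what such targeting criteria might look like, here is a hypothetical Python sketch; the flags and the helper function are invented for illustration and will differ from tool to tool:

```python
# A hypothetical targeting rule for a follow-up, segment-specific test:
# only returning tablet visitors enter this experiment (all names invented).
def qualifies_for_tablet_test(visitor: dict) -> bool:
    """Simple flag-based targeting criteria for a segment-level experiment."""
    return visitor.get("device") == "tablet" and visitor.get("returning", False)

visitor = {"id": "visitor-123", "device": "tablet", "returning": True}
if qualifies_for_tablet_test(visitor):
    print("enter segment experiment")  # tag the session for post-test analysis
```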
Author: Julian Erbsloeh
Courtesy: www.freshegg.co.uk