Pretty much any website can benefit from conversion rate optimisation. Blending copy, content, layout, and usability in the right way will encourage visitors to complete the goal you want them to complete, but finding the right blend comes from testing.
Whether CRO is new to you and your organisation, or you’ve already taken the plunge (good for you!), there are some common errors that even the best CRO consultants are susceptible to making. Avoid these potentially costly split testing mistakes and enjoy the revenue increases that come with an improved conversion rate.
1. Testing too much at once
Once you’ve started testing your site, it’s easy to get carried away! There’s so much you can be testing – copy, images, layout, buttons, navigation, the list goes on – but it’s important to be methodical and pace yourself.
Often, when I’m introduced to a new CRO client, and I’m taking my first look through their site, I spot loads of things I’d want to test. Loads. Now, we have a pretty robust CRO methodology at Browser Media, and so rather than just relying on gut instinct, we look at web analytics, gather user feedback, and conduct market research to formulate a test plan. The reason this is so important is that it prevents making too many changes at once. That’s not to say you can’t be bold with your tests, but if you change your headline and add a testimonial at the same time, you can’t really be sure which, if either, is having an effect on conversion rate. One variable at a time.
2. Disregarding seasonality
Many businesses experience peaks and troughs in website traffic and conversions throughout the year. Christmas and New Year tend to be big ones, but there are many others: Mothering Sunday will be a busy time for florists, jewellers may see an increase in sales in the run-up to Valentine’s Day, and the beginning of the new academic year will be significant if you sell stationery. To really benefit from CRO testing, you need a decent number of visitors so you can gather enough data to reach a statistically significant result.
It can be tempting to run a test when you’re enjoying high levels of traffic, because you’d expect test results to materialise more quickly. Problem is, you risk gathering skewed results due to external variables that are beyond your control. Testing outside of these peak times means you’re more in control of your variables, so your results are far more reliable. Look back over your web traffic to predict when these spikes are due, and then avoid them.
3. Rushing to finish a split test
The more data we have, the more reliable the results and the smaller the margin of error. You can’t expect the behaviour of just a handful of visitors to accurately mirror every visitor to your site. We run tests at Browser Media until we reach statistical significance, that is, until we’re 95% confident that one variation will outperform another. But that’s only part of it. The rules of seasonality apply here too, because the visitors to your site on Monday are unlikely to be the same as the visitors to your site on Saturday. Running split tests for a minimum of seven days will account for these changes in visitor behaviour.
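If you’re curious what that 95% confidence figure means in practice, here’s a minimal sketch of the sort of calculation involved: a two-proportion z-test using only the Python standard library. The function name and the visitor numbers are my own illustrative assumptions, not Browser Media’s actual tooling.

```python
from statistics import NormalDist

def confidence_b_beats_a(conv_a, visitors_a, conv_b, visitors_b):
    """One-sided confidence that variation B's true conversion rate
    beats variation A's, via a two-proportion z-test (normal approx.)."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    # Pooled conversion rate under the "no difference" assumption
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = (pooled * (1 - pooled)
               * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (rate_b - rate_a) / std_err
    return NormalDist().cdf(z)

# Illustrative numbers: 1,000 visitors per variation,
# A converts 30 of them, B converts 45
print(confidence_b_beats_a(30, 1000, 45, 1000))
```

With those example numbers, the result works out at roughly 96% confidence that B genuinely outperforms A, just over the 95% threshold; 30 conversions versus, say, 38 would not be conclusive yet, which is exactly why you keep the test running.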
Depending on your traffic levels, you could also consider waiting until you have tested a minimum number of visitors. Again, this helps account for having too small a sample size. There are some decent calculators out there to help you do that. VWO has one, as does Optimizely, and I like this one from MyStatCalc.
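If you’d rather see the maths those calculators are based on, this is a sketch of the standard sample size formula for comparing two conversion rates (normal approximation, two-sided test). The function name, baseline rate, and lift figures are illustrative assumptions of mine, not taken from any particular calculator.

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_lift,
                              alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect the given
    relative lift with a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1)
                                      + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. 3% baseline conversion rate, hoping to detect a 20% relative lift
print(sample_size_per_variation(0.03, 0.20))
```

At a 3% baseline, detecting a 20% relative lift (3% to 3.6%) at 95% significance and 80% power needs roughly 14,000 visitors per variation, which is exactly why low-traffic sites need patience. Note how chasing a smaller lift inflates the requirement dramatically.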
4. Misreading conversions
The reason you’re split testing is ultimately to increase conversions for the main purpose of your site (sales, subscriptions, sign-ups, etc.), so keep that end goal in mind. Getting sidetracked with micro conversions can have a negative impact on the overall success of your CRO efforts. It’s great that more visitors are watching your service video, but how does that affect the conversion rate of the end goal? Watching the video doesn’t necessarily mean they’ll go on to sign up to your service. Maybe the video actually puts them off, so although they’re engaging, and effectively moving along the conversion funnel, they’re not completing it, which is what you really want.
I’m not saying improving micro conversions is a bad thing (small gains are still gains), I’m just saying it’s not the only thing. Increased micro conversions don’t necessarily mean an increase in your final goal, but more people moving on to the next step does mean more people moving further along your funnel. Measure both. In fact, measure all the steps in your funnel through to completion; if your split test doesn’t go the way you hoped, the funnel data may reveal something else worth testing next.
5. Not being fully committed to testing
Test everything. Everything. Swapping one image for another on that landing page might seem insignificant, but it could well have an impact on your conversions. You might be dealing with someone from management insisting on making a change to the homepage because they believe it’s what visitors want. With CRO on your side, you can offer to make that change after ensuring it will have the desired effect: by testing it.
Getting your whole organisation switched on to CRO and split testing is an essential step in improving conversion rate, so help them understand why. Without test results to back it up, any change management suggests is guesswork, which means you’re running the risk of lowering your conversion rate.
Data driven decision making
Although they may be tricky to interpret, numbers don’t lie, and by avoiding the potential pitfalls outlined above, you increase the likelihood of conducting valuable CRO tests with genuinely useful results. Split testing, when done correctly, generates real conversion data that will help you make informed decisions about changing your website, improve conversions, and remove the guesswork.