Conversion rate optimisation is the relatively new kid on the block when it comes to digital marketing. SEO and PPC are widely accepted as essential activities, and have been for ages, and with social media being such a huge part of many people’s lives, selling it in isn’t much of a challenge. Engaging in CRO, however, can take some persuading. If you’re embarking on a CRO journey, it’s tempting to rush into a/b testing to try and ‘prove’ its worth right away, but this ‘blind testing’ is a poor use of time and resources.
What is the goal of a/b testing?
Before you kick off any testing, you need to formulate a hypothesis. And a good hypothesis is based on research, data, facts, and findings. An a/b test based on a marketer’s hunch is one based on guesswork. The aim of conversion optimisation is to eliminate the guesswork. The goal of a/b testing is to offer a real solution to a genuine issue that you’ve uncovered through rigorous research of real users’ experiences.
And that takes time.
Identifying and prioritising areas for improvement on a website requires a full understanding not only of the website itself, but also of its users. By understanding each step a user must go through in order to complete a conversion, you’re better placed to reveal problematic touch points and weaknesses in the funnel to conversion.
As an example: from e-shot, to email copy, to landing page, to CTA, to conversion, there are multiple points at which a user can drop out of that funnel. Moving through that journey may reveal that it’s not your landing page that’s to blame, but the copy in your email. Effective campaign tracking will help uncover the leaks in the funnel, and once you know where the biggest leak is, it’s time to plug it!
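To make that concrete, here’s a minimal Python sketch of how you might hunt for the biggest leak once your campaign tracking is in place. The step names and counts are entirely hypothetical stand-ins for the numbers your own tracking (Google Analytics exports, email platform reports) would produce.

```python
# A minimal sketch of funnel leak analysis. The step names and counts
# below are hypothetical; in practice they'd come from your own
# campaign tracking.

funnel = [
    ("e-shot sent", 10000),
    ("email opened", 3200),
    ("clicked through to landing page", 640),
    ("followed the CTA", 96),
    ("converted", 48),
]

# Compare each step with the one before it to find the biggest leak.
worst_step, worst_rate = None, 1.0
for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    rate = count / prev_count
    print(f"{prev_name} -> {name}: {rate:.1%} carried through")
    if rate < worst_rate:
        worst_step, worst_rate = f"{prev_name} -> {name}", rate

print(f"Biggest leak: {worst_step} ({worst_rate:.1%} carried through)")
```

Whichever step sheds the largest share of users is your first candidate for research and, eventually, testing.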
Conversion rate optimisation requires quantitative and qualitative research
Diving into Google Analytics identifies where your drop-offs are, but it doesn’t explain why they’re occurring. Sure, if users are bouncing after clicking through to your landing page, you might have a problem with continuity between your email’s message and your landing page’s copy. And if users are navigating to other pages on your site instead of converting on that landing page, it could be that they need more information. But these are assumptions – and you know what they say about making assumptions.
User feedback surveys, a review of customer service/support enquiries, and speaking with the Sales team – the people who talk to users all the time – will offer real insight from real people. Maybe there’s email after email from disgruntled users who can’t complete their goal when coming to your site, maybe the feedback survey shows multiple users are struggling to find the same piece of information, maybe the Sales team are dealing with the same problem over and over… these are the foundations of your hypothesis.
What does a test hypothesis look like?
Obviously, it will depend on your business model, your website, and the research you’ve conducted, but here’s a basic formula:
Because our research showed <the findings>, we anticipate that <the variable> will cause <the predicted result>.
So, using the hypothetical email campaign above, a test hypothesis might look a little something like this:
Because our research showed, via Google Analytics data, that many users exit our landing page to visit the About Us page, we anticipate that including more company information on our landing page will cause an increase in the number of users staying on our landing page to follow the CTA.
Ok, now you can kick off your a/b test
With a data-driven hypothesis comes direction, and with direction comes a solid testing schedule, and with a solid testing schedule comes efficient, effective CRO. Now, conduct a/b testing with purpose, and optimise with confidence.
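And once a test has run, ‘with confidence’ can be taken literally: a two-proportion z-test is one common way to check whether a variant’s lift is statistically significant or just noise. Here’s a minimal Python sketch; the visitor and conversion counts are hypothetical.

```python
# A minimal sketch of checking an a/b test result for statistical
# significance with a two-proportion z-test. The counts below are
# hypothetical; plug in your own once the test has run its course.
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-tailed p-value for variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: original landing page; Variant B: with extra company info.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("The difference is statistically significant at the 5% level.")
else:
    print("Not significant yet; keep the test running or revisit the hypothesis.")
```

As a rule of thumb, decide your sample size up front and resist peeking at the p-value mid-test – stopping early the moment a result looks ‘significant’ puts you right back in guesswork territory.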