Deep down, we conversion experts carry a secret that we don’t want you to know. Even though there are proven conversion principles that apply to any website, as well as many best practices that could improve most sites, there is a large grey area where we experts are forced to say, “well, that depends” instead of giving a definitive answer. Issues such as the exact wording of text, the size and placement of calls-to-action, the length of testimonials, pricing, and a myriad of other minutiae can only really be settled by thorough experimentation.
This “confession” doesn’t invalidate the existence of conversion principles and best practices, nor devalue the role of consultants and experts. But at the end of the day, you don’t really care what the industry best practices are; you only care about what will work on your site. Let me illustrate how these principles, best practices, and grey areas all co-exist:
Principle: A page should create “instant affinity” with its target audience.
Best Practice: Utilize design and imagery to provide visual cues that the visitor has landed on the “right” page.
Grey Area: The specific images selected, and their size and placement on the page.
Principle: A page should immediately communicate the value to the visitor.
Best Practice: Utilize a headline that immediately communicates your place in the market, and the value you offer.
Grey Area: Precise wording of the headline, position on the page, use of color, font, etc.
As a consultant, I spend much of my time explaining and implementing these conversion principles and best practices. I also spend a great deal of time helping clients experiment with the grey areas. In my opinion, the best tool for that is split testing.
Split testing is a technique pioneered in the direct mail field. The premise is simple: create two versions of a mailer, print a unique offer code on each response card, and measure which version gets more responses.
The same principle applies on the web, and split testing can be conducted at various levels of sophistication. Let’s use experimenting with a product’s price point to illustrate the two main methods of testing:
1. Crude testing can be performed by offering a product at a certain price for a specific amount of time (say two weeks). Track the results carefully, and then change the product price for the following two weeks. Which price point created more sales?
This form of testing is better than nothing, but it has some inherent problems. Unmeasured differences between the two time periods (changes in the industry, in demand, in competitors’ pricing, or external factors such as major news events and even the weather) can influence the results.
It makes sense to repeat a crude test like this perhaps three or even four times to ensure your decision is based on valid data.
2. More sophisticated split testing can be conducted using web tools designed for that purpose. In most cases you create and publish two versions of the page (identical in all respects except for the element being tested), and a special splitter code divides traffic between them.
This type of testing is much more satisfactory from a validation point of view, but it is technically more involved to set up (a minimal sketch of the splitter logic follows just below). Splitter codes can also interfere with SEO effectiveness.
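To make the splitter idea concrete, here is a minimal sketch of how traffic might be divided between two page versions. It assumes a simple server-side handler; the page URLs, the visitor-ID value, and the function names are all hypothetical, and a commercial split-testing tool would handle this (and the reporting) for you.

```python
import hashlib

# Hypothetical URLs for the two page versions: identical in all respects
# except for the element being tested (here, the price point).
VARIANT_PAGES = {
    "A": "/product-price-a.html",   # current price
    "B": "/product-price-b.html",   # test price
}

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the visitor ID (e.g. a cookie value) splits traffic roughly
    50/50 while ensuring a returning visitor always sees the same version.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def page_for_visitor(visitor_id: str) -> str:
    """Return the page this visitor should be served (or redirected to)."""
    return VARIANT_PAGES[assign_variant(visitor_id)]

if __name__ == "__main__":
    # Roughly half of visitors land on each version.
    for vid in ("visitor-001", "visitor-002", "visitor-003", "visitor-004"):
        print(vid, "->", page_for_visitor(vid))
```

Because the assignment is deterministic, you can record the variant alongside each sale and simply compare the two conversion rates at the end of the test.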
If you’re using a paid search program such as Google AdWords, you can also split test your ads in many cases. You’d expect different ads to get differing click-through rates, but I’m often surprised at the extent to which different ads (for the same keywords, and with the same landing page) have drastically different conversion rates.
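Whichever method you use, you eventually have to judge whether the gap between two conversion rates is real or just noise. The sketch below compares two hypothetical ads (same keywords, same landing page) using a standard two-proportion z-test; the visitor and sale counts are invented for illustration, and most testing tools report this kind of significance figure for you.

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates.

    A small value (conventionally below ~0.05) suggests the gap is unlikely
    to be due to chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results for two ads pointing at the same landing page.
ad_a = {"visitors": 1200, "sales": 54}   # 4.5% conversion
ad_b = {"visitors": 1180, "sales": 31}   # roughly 2.6% conversion

print(f"Ad A converts at {ad_a['sales'] / ad_a['visitors']:.1%}")
print(f"Ad B converts at {ad_b['sales'] / ad_b['visitors']:.1%}")
print("p-value:", round(two_proportion_p_value(
    ad_a["sales"], ad_a["visitors"], ad_b["sales"], ad_b["visitors"]), 3))
```

The same arithmetic applies whether the two variants are ads, headlines, or price points.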
When it comes to conversion, experimentation is perhaps the key element that sets the superb apart from the mediocre. Remember that best practices are a starting point; implementing them is often highly subjective, and that’s where experimentation plays a critical role.