One of the most appealing aspects of web marketing is that it is so measurable; you can actually quantify what works. Because I spend a considerable portion of my work time studying web analytics and metrics, here is a distinction I have found useful over the years:
Some metrics are “indicators,” while others are “diagnostic.” The difference is important for the web marketer who wants to actually use web metrics for decision making.
Indicator metrics show the performance level for various areas of a site. They include:
A. Conversion rate – the best example of an indicator metric – indicates the overall effectiveness of the site, but does not explain why the number is at a particular level.
B. Number of visitors – indicates the popularity of a site and the growth of the visitor base, but gives little insight into why the site is popular.
C. Time spent on the site – indicates how compelling visitors find the site, but this metric alone doesn’t help us determine why visitors are behaving in a certain way.
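To make the distinction concrete, the indicator metrics above are simple ratios over site-wide totals. Here is a minimal sketch; the numbers are hypothetical, standing in for what an analytics export might report:

```python
# Hypothetical monthly totals from an analytics export (illustrative only).
visits = 12500                   # total visits this month
conversions = 340                # goal completions (purchases, sign-ups, etc.)
total_time_on_site = 2_812_500   # total seconds across all visits

# Indicator metrics: they show the performance level,
# but not *why* the numbers are where they are.
conversion_rate = conversions / visits           # share of visits that convert
avg_time_on_site = total_time_on_site / visits   # seconds per visit

print(f"Conversion rate: {conversion_rate:.2%}")        # 2.72%
print(f"Average time on site: {avg_time_on_site:.0f}s") # 225s
```

A 2.72% conversion rate tells you where the site stands, and a year of these monthly figures makes a useful trend line, but the number alone says nothing about what to change.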
Diagnostic metrics allow the web marketer to understand why things are happening and provide clues on how to change things or improve results. They include:
A. Bounce rates – show that visitors and content are somehow misaligned. That is, visitors didn’t find what they expected, whether in the content, the imagery, or the level of professionalism.
B. Form completion rates – if a form has few completions relative to views, the form and the offer are out of balance: either the form asks for too much data (or the wrong sort of data), or the offer itself isn’t compelling enough.
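The two diagnostic metrics above can be sketched the same way; again the counts are hypothetical, and the thresholds in the comments are only rough rules of thumb, not figures from the text:

```python
# Hypothetical page-level and form-level counts (illustrative only).
landing_page_visits = 4000
single_page_exits = 2600   # visitors who left without viewing a second page
form_views = 1500
form_submissions = 90

# Diagnostic metrics: they point at *why* something is underperforming.
bounce_rate = single_page_exits / landing_page_visits  # high -> visitor/content mismatch
form_completion_rate = form_submissions / form_views   # low  -> form/offer out of balance

print(f"Bounce rate: {bounce_rate:.0%}")                     # 65%
print(f"Form completion rate: {form_completion_rate:.0%}")   # 6%
```

Here a 65% bounce rate suggests the landing page isn’t showing visitors what they came for, and a 6% form completion rate suggests the form asks for too much relative to what the offer is worth; each number comes with a built-in hypothesis about what to fix.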
I’m not suggesting that diagnostic metrics are better than indicator metrics; both are valuable. A monthly senior-management report, for example, might consist mostly of indicator metrics, providing a good overview of the site and proving very interesting viewed over time (e.g., a graph showing the number of monthly visitors for an entire year). Diagnostic metrics, by contrast, are “behind-the-scenes” tools that web marketers can use daily to drive decision making, on their own or in combination with other techniques such as split-testing.