Cause, Effect, and Split Testing

One of the topics web analysts often deal with is analyzing the relationship between what a visitor sees and what they do.

For example, let’s say you have a product detail page on an ecommerce site with three tabs:

  • Product Overview (which is chosen when you first view the page)
  • Product Videos
  • User Reviews

Your boss says she’s debating whether to put more emphasis on the user reviews or the product videos, possibly integrating one of them into the product overview tab. She asks you to look into your crystal ball (Google Analytics, Omniture, etc.) and come back with an answer.

Your stats show the following conversion rates (for purchases):

  • 3.2% for visitors who viewed the Product Overview tab (default tab)
  • 4.7% for visitors who viewed the User Reviews tab
  • 3.9% for visitors who viewed the Product Videos tab

What do you tell your boss? Think about this for a minute before continuing this article.

I’ll wait :)

If you said that the user reviews are more valuable than product videos, think again.
If you said that the product videos are more valuable than user reviews, think again.

The correct answer is:

I can’t know for sure until we’ve done a split test between the actual variations we’re considering.

This is a classic case where we see an obvious correlation between two events but we’re not sure which is the cause and which is the effect. In our case, there is a correlation between the content a visitor views (product videos or user reviews) and the conversion rate for that group of visitors.

The problem is that there are actually three possible scenarios regarding the relationship between these two events:

  1. Viewing the content (cause) is causing more visitors to make a purchase (effect).
  2. Visitors who are ready to make a purchase (cause) are more likely to view the content (effect).
  3. There is no cause/effect relationship between the two, even though there is a correlation between them.

Let’s look at some examples that could explain each of the three scenarios.

Viewing the content (cause) is causing more visitors to purchase (effect)

This scenario seems to be the most intuitive, and many people assume it is always the case (it isn’t). A great product video or an awesome user review can indeed do wonders for your conversion rates.

I’ve seen plenty of split test case studies where testing a product video or user testimonial on the homepage caused a significant increase in conversion rate.

Visitors ready to purchase (cause) are more likely to view the content (effect)

In this scenario, we have what is called a “self-selecting group”. In other words, the fact that a visitor enters your site with a high likelihood of purchasing (they’re ready to buy) is causing them to view the content, not the other way around.

Here’s an actual example of a self-selecting group from when I was working on a web site for a medical device company.

Visitors had a form where they could request product information to be sent to them by mail. The form offered two options for the materials:

  1. The “standard package” with just consumer related information.
  2. The “doctor package”. This included everything in the standard package, plus additional information they could take to their doctor.

After analyzing the conversion rates based on which package the visitor chose, we found that people who received the doctor package were almost twice as likely to convert than those who received the standard package; 9% for the doctor package vs. 5% for the standard package.

The question was: is it the package itself that’s making the difference, or does the fact that the visitor chose the doctor package simply mean they are more serious about buying the product?

So, I did what any good web analyst would do: I tested it :)

We removed the option to choose a package from the form, and then randomly sent the standard package to half of the people and the doctor package to the other half.
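The random split described above can be sketched as deterministic bucketing on a visitor ID. This is a common way to implement a 50/50 assignment (the article doesn’t say how the split was actually done, so the function and ID format here are illustrative assumptions):

```python
import hashlib

def assign_package(visitor_id: str) -> str:
    """Deterministically assign a visitor to one of two packages.

    Hashing the ID gives a stable 50/50 split: the same visitor
    always lands in the same group, with no state to store.
    """
    digest = hashlib.sha256(visitor_id.encode()).digest()
    return "standard" if digest[0] % 2 == 0 else "doctor"

# The same ID always maps to the same group
assert assign_package("visitor-42") == assign_package("visitor-42")
```

Hash-based bucketing is preferable to a random coin flip per request because returning visitors keep seeing the same variation, which keeps the two groups cleanly separated.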

People who received the standard package were converting at 5.5%, compared to 5% when they could choose their own package.

People who received the doctor package were converting at 6.5%, compared to 9% when they chose their own package.
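Before drawing conclusions from numbers like these, it’s worth checking whether the observed difference could just be noise. Here is a minimal two-proportion z-test sketch; the sample sizes below are hypothetical, since the article doesn’t report them:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 5.5% of 2,000 standard-package recipients
# vs. 6.5% of 2,000 doctor-package recipients
z, p = two_proportion_ztest(110, 2000, 130, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these assumed sample sizes the difference would not reach conventional significance, which is exactly why you need enough traffic in each branch before trusting a split test result.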

This tells us that a visitor’s likelihood to purchase the product is determined not primarily by the version of the package they received, but by the fact that they wanted to take additional information to a doctor.

No direct relationship between viewing the content and conversion rate

While this is always a possibility from a theoretical perspective, it’s usually not the case when trying to measure the relationship between content on a web site and the conversion rate.

Here’s a possible scenario:

Super Blogger Jim just purchased one of your products. He really likes it and tells all of his readers that he did a ton of research before buying the product, that the average user review is 4.5 out of 5, and that the product video page (which he links to) shows just how useful your product is.

In this scenario, you’ll probably see a strong correlation between users who viewed a product video and the conversion rate, though both viewing the video and the purchase are effects, and the blog post is the cause.

So, what's a web analyst to do?

The goal of this article is not to discourage you from trying to measure the value of content.

On the contrary.

I like to think of the clickstream data you’re getting from your web analytics tool as clues. Lots of clues. These clues answer the “what” question: what are visitors doing on my web site?
The next step is to take the clues and create a hypothesis (or two) as to why visitors are behaving in a certain way.

In our case: why is there a correlation between viewing certain content on a web site and the conversion rate?

The only way to know for sure is to run a split test of your hypothesis.

From an academic perspective, everything should always be tested. In reality, most of us don’t have the resources to test everything. Sometimes we need to make judgment calls without testing. That’s just the reality of business.

The important thing is simply to be aware of the different possible scenarios.

Online Behavior © 2012