Tuesday, September 9, 2008

Testing for Marketing ROI

Today seems to be testing day in marketing newsletters. I read an article in today’s SearchBuzz newsletter about multivariate tests and thought I should check out the tools. When I wrote about testing earlier I emphasized A/B splits (simple two-version tests) because I think that’s an easier way to start. While I was working on that, along came this week’s chart from Marketing Sherpa that does a nice job of summarizing some of the strategy issues. But do you need a big budget to test?

In the earlier post I mentioned Offermatica (recently acquired by Omniture) as a testing tool. I really like their demos of both A/B splits and multivariate tests. They’re just a bit harder to find now. You have to look on Omniture’s product tours page and select Test & Target. But this is a hosted service, not DIY.

The tools I really wanted to look at, though, are Google Website Optimizer and Widemile's Page Optimizer. Google’s is free. Google, of course, has a video.

Widemile’s business model is to provide technology to agencies and marketing services firms so they can offer testing services to their clients. They have a simple summary comparing the efficiency of A/B split testing with multivariate testing. When you use experimental designs (what we refer to as multivariate testing), you can test many variables at once. That seems like stating the obvious. What may not be so obvious is that multivariate tests are much harder to “read.” The real question is, “Are these small differences statistically significant?” As the conceptual examples in my earlier post showed, that takes you right back to your statistics courses.

If you’re like me, you absorbed the fundamentals represented in the text box that starts “Choose the variables to be tested.” Still, you may share my reluctance to base serious marketing decisions on my own statistical skills, and my concern that the automated tools won't necessarily produce the right answers. Automated testing products need to take the necessary sample sizes into account before they can claim statistical significance. They don’t want to burden users with that kind of statistical detail (yes, I understand, it’s a bit scary), so you need to ask questions about statistical significance and confidence levels, even if it means dragging out an old stats textbook.
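To make the “are these small differences significant?” question concrete, here is a minimal sketch in Python of the textbook two-proportion z-test that sits behind most of these significance calls. Nothing here is specific to Widemile or Google, and the visitor and conversion counts are made up purely for illustration.

```python
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pool the rates under the null hypothesis that there is no real difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Made-up A/B split: control converts 200 of 10,000 visitors (2.0%),
# challenger converts 240 of 10,000 (2.4%), a 20% relative lift.
z, p = two_proportion_z_test(200, 10_000, 240, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 1.93, p = 0.054: not quite
                                    # significant at the 95% confidence level
```

That is exactly the trap: a lift that looks impressive on a dashboard can still be noise, which is why you have to ask about sample size and confidence before you celebrate.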

Since Google’s tool is DIY (and they don’t particularly want to talk about the significance issue either), I wondered how it was dealt with, if at all. They have a good, well-written blog that may help with some of your concerns. One of the entries I found was for a sample size estimation tool from one of their partners, a site called WebShare. If you followed my advice in the earlier post and read Paul Berger’s chapter on testing in our (free download) direct marketing text, you’ll find the tool follows the same basic steps, although there are still issues. Specifically, it’s a one-tailed test; see the Improvement call-out on the tool page. It only looks for better results, not for whether the results are better or worse. That’s probably realistic for most marketing tests. They give you a nice comparison chart showing various % improvements at various significance levels. Try it for yourself!
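If you want a feel for what such a calculator is doing, here is a back-of-the-envelope sketch in Python using the standard textbook formula for a one-tailed, two-proportion sample size. I don’t know the exact formula WebShare uses, and the base rate, lift, and power below are made-up assumptions, so treat the output as illustrative only.

```python
import math
from statistics import NormalDist

def sample_size_per_cell(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed in each cell of an A/B split to detect
    a given relative lift with a one-tailed test (textbook formula; the
    actual WebShare calculator may differ)."""
    p1 = base_rate                             # control conversion rate
    p2 = base_rate * (1 + relative_lift)       # challenger rate you hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # one-tailed critical value
    z_beta = NormalDist().inv_cdf(power)       # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Made-up example: 2% base conversion, hoping to detect a 20% relative lift
# at 95% significance with 80% power -> roughly 16,600 visitors per cell.
print(sample_size_per_cell(0.02, 0.20))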

Most of us have a lot of questions about what will work best in our marketing programs. A/B splits will answer a few of these questions for us. Multivariate tests will answer many more questions in the same time period. Whether you are using marketing services of one kind or another, or whether you’re going to try it DIY, do it with care. I am reminded of the Mark Twain quote:

“It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so.”

Don’t accept statistically insignificant results. And remember the old direct marketing rule to keep testing. If it’s an important marketing decision, one test, especially one with a small sample size, is just not enough.

2 comments:

Anonymous said...

At Widemile, we're addressing exactly what you're talking about: reaching statistical significance in testing and making it easy for users to understand. We have a very strong and strict methodology and are building new tools every day to help our partners and internal teams produce quality tests that create sustainable lifts.

I find the test information provided by Google Website Optimizer misleading; it makes users feel comfortable with their data even though it often has huge confidence intervals. This is something we want to prevent with our tool.

If you'd like to learn more about multivariate and a/b split testing in general, I talk about it exclusively on my blog. You can reach out to me directly from there as well.

Best,
Billy Shih
Optimization Analyst

Anonymous said...

Great post, very informative. Thanks, Mary!

We here at SiteSpect view Google Website Optimizer as a good tool to get online marketers started in website testing. Because it's free, it’s limited in scope, but that's where companies such as SiteSpect and others can help.

You raise an excellent point about statistical significance and sample size. One of the errors we see online marketers make is running tests for too long or too short a period. For those readers new to testing, consider running tests for at least two weeks or 2,000 visits.

Hope that's helpful and keep up the great work.

Jason O’Keefe (www.sitespect.com)