Benchmarking Your Campaign: Consider the Source

By Scot Riches of CMA's Customer Insights & Analytics Council

Does this scene sound familiar? Your campaign results are in, and your performance analysis is done. It looks like another successful campaign, and all the metrics are moving in the right direction. And then someone asks, “How are we doing relative to the competition?” or “How did we do versus the industry average?” Or another familiar scene for marketers: you’re planning a campaign of a kind you’ve never run before, and someone asks, “What can we reasonably expect as a response rate?”

The truth is, it’s impossible to know without benchmarking.

Wikipedia tells us that benchmarking is the act of comparing one’s business processes and performance metrics to industry bests and best practices from other companies.  And from a marketing analytics standpoint, benchmarking provides the competitive context that helps answer some of the tough questions clients often ask.  But beyond the numbers, benchmarking also provides a basis for larger insights. It allows us to start learning why a particular campaign sits either below or above industry norms.

But benchmarking also presents some challenges, and the biggest one is the source. 

Agencies that provide reporting and analytics may have experience across dozens of brands and several categories.  They might be able to say things like, “These are the highest response rates we’ve seen in a while.” But they provide a very limited perspective on how your campaign is performing relative to others, simply because they don’t work with all your competitors.  Most agencies are limited to working with one client per category at a time, further narrowing their perspective on success.

Some service providers or digital platforms will publish benchmark reports. Many of these are fantastic resources that show performance based on industry. Google provides benchmarks for its own services like AdWords and site traffic, and email deployment platforms such as Experian, HubSpot, Epsilon and IBM Marketing Cloud publish regular email metric benchmarks. All of these are worth a look because they are based on robust datasets, which certainly makes them more reliable than what many agencies provide. But a side-by-side comparison of some of these vendor-specific benchmarks can be revealing. Industry “norms” can show wild variations from one vendor to the next, as we can see from an example of key email marketing benchmark metrics, all published in Q1 2016. In this case, we were looking for Consumer Packaged Goods industry norms for click-to-open rates.

Clearly, the variance between sources is considerable. It could be that each vendor has a different category definition, or perhaps the sample sizes are not as robust as they need to be, or perhaps vendors are introducing bias into their benchmarks.  As an example, the campaigns from clients of an email deployment platform marketed primarily to SMBs may perform very differently from the campaigns from clients of an email deployment platform built for enterprises – even if we are looking at the same category, such as CPG.

So what does one do for reliable benchmarks?  How do we answer the tough questions about our performance vs. that of our competitors?

  • Try benchmark banding – my colleague Margaret Ngai pioneered this approach a few years ago to provide a more reliable (and realistic) context for success. By selecting several benchmarks built with a similar approach and methodology, and then examining the competitive benchmark corridor (the low vs. the high in the category), you develop a reasonable expectation band.
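The banding idea above can be sketched in a few lines of code. This is a hypothetical illustration only: the vendor names and click-to-open rates are placeholders, not figures from any published report.

```python
# Sketch of "benchmark banding": take several comparable vendor benchmarks,
# form the low/high corridor, and see where a campaign metric lands.
# Vendor names and values below are illustrative placeholders.

def benchmark_band(benchmarks):
    """Return the (low, high) corridor from a dict of vendor -> metric value."""
    values = list(benchmarks.values())
    return min(values), max(values)

def position_in_band(value, band):
    """Describe where a campaign metric sits relative to the band."""
    low, high = band
    if value < low:
        return "below band"
    if value > high:
        return "above band"
    return "within band"

# Hypothetical CPG click-to-open rates from three benchmark sources
vendor_ctor = {"Vendor A": 0.105, "Vendor B": 0.142, "Vendor C": 0.188}
band = benchmark_band(vendor_ctor)       # (0.105, 0.188)
print(position_in_band(0.16, band))      # a 16% CTOR falls within this band
```

Rather than judging a campaign against any single vendor's "norm", the band sets a realistic floor and ceiling, and results outside it are the ones worth investigating.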

Marketing performance analysis helps us make better decisions about a whole host of things, from how we invest our marketing budgets to the messages and creative we use to influence consumer behaviour. Benchmarking provides the competitive context for marketing performance analysis, and without it we are left blindfolded. But benchmarking isn’t easy. To answer the tough questions being asked of us, we need to ask tough questions about benchmarking itself: about the source, about the methodology, about the metric definitions, until we have realistic expectations of “industry norms”.

Real, unbiased third party benchmarks exist, but they are rare. The good news is that we, as marketers, can change that. The best industry benchmarks, such as those provided by Qoints, depend on us to participate – which means we have the power to improve the benchmarks available to us. As marketers, we tend to be a competitive bunch, not prone to sharing detailed campaign results, even when we know they’ll only be published as part of an aggregated study.  But the upside of sharing our performance data outweighs any cost, because we stand to gain something much more valuable than anything we give up: better, more usable benchmarks. And that benefits us all.

Scot Riches is the Chief CRM Officer at RI


Tags: benchmarking, hubspot, marketingcloud, comscore, qoints, data, analytics