Benchmarking Data: What to Use, What to Ignore

As email marketers, we naturally pay a lot of attention to data. It isn’t just our own data that is of interest, of course. Just as our own campaign analytics help us shape subsequent email messages, benchmarking data can help us identify initiatives we ought to be considering for our email program. And like analytics, benchmarking data is best used to change our behavior. Knowing a data point – whether it is your own open rate or the aggregate open rate of representative companies in your industry – does not make your email more effective. Understanding the data point – why it is that number, which direction it is trending, what factors have influenced it – is what makes the data valuable, and that value is only realized if we act on it. If we believe our open rates can go higher, we need to know what is suppressing them so we know what to change.

Not all benchmarking data is helpful in this regard. It is easy to get wrapped up in benchmarking data because it helps us “measure” our own program, but if it does not help us identify what to change in order to improve, it loses its value and its claim on our attention. Here are a few examples of benchmarking data we should use, and some we should ignore:

What to Use:

Competitive Spending: Seeing where other marketers are investing in the coming months or year is valuable because changes in marketing technology drive changes in consumer expectations. If the brands whose emails bookend yours in the inbox exhibit greater personalization and targeting, integrated multimedia, or interactive elements, your subscribers will grow to expect the same. It is not about keeping up with the Joneses so much as staying in sync with the constantly evolving technology landscape.

Data That Reflects Consumer Trends: Knowing the average open rate doesn’t tell us what we should do to improve, but learning, for example, that 40% of all opens now happen on a mobile device points to a consumer trend we should be prepared for. Another example is the recent report that 70% of messages marked as spam are legitimate marketing emails. This tells us that consumers are not distinguishing between unsolicited messages and those they simply no longer want to receive, which suppresses delivery rates through an increase in spam complaints. Data like this does not just tell us we need to improve; it clues us in to consumer sentiment in a way that allows us to address it specifically.

Data on Message Type Effectiveness: One revelation that came to the fore in 2012 is that triggered messages wildly outperform business-as-usual emails. Unlike trends in subject lines or time of day, this finding is replicable for all email marketers: send more triggered messages (confirming a subscription, purchase, survey, etc.) and you will create more engagement.

What to Ignore:

Industry-level Engagement Benchmarks: The first problem with an industry-level engagement benchmark is that it represents the industry average of a particular metric, like open rate or click rate, and we all aspire to be better than average. But the bigger shortcoming is that it lacks context. Maybe the average open rate for your industry is published at 18.5%. Does it change with list size? Message frequency? Subscriber tenure? How is each brand represented in that average segmenting and targeting its list? How many are using subject line tricks that move the needle on open rate but suppress engagement afterwards? If the only point of commonality we can be sure of between the average and our own email program is that we belong to the same industry, we are learning very little from this data. Without context, we do not know whether the comparison is valid, or what we should do to improve.

Micro-analytic Trends: For this example I’m going to pick on all the reports I’ve seen lately that promote the best time to send an email. No doubt there are some defensible studies showing that open rates are higher at 4pm on Tuesdays than at 10am on Mondays. Does that mean a message sent at 10am on a Monday will fail, or that one sent at 4pm on a Tuesday will break records? Again, data like this lacks context. We don’t know whether the brands aggregated in these studies were B-to-B or B-to-C, whether the messages were promotional, newsletters, or triggered, or whether sending later in the day or week simply means the marketer had more time to create a more engaging message. The data may be interesting, but it is not instructive.

Seeing how our program is performing relative to others can be very valuable, but not because it gives us a new level to aim for. We should already be trying to improve our email program with each message, even if we don’t know how our click rate compares to the rest of the industry. The real value of benchmarking data comes from telling us which route to take to improve.