Brendan Hodgson » PR measurement
At the intersection of yesterday & tomorrow
Wed, 14 Oct 2009 12:08:41 +0000

The Agreement Index… A Model for Measurement?
Thu, 10 May 2007 17:42:00 +0000 | Brendan Hodgson

Not only do I like the platform and premise on which the Telegraph’s new blogging community has been created (and which appears to have already drawn a fair number of folk into the My Telegraph fold), I really like the notion of the Agreement Index as a means to visually capture the general sentiment of readers around a specific blog post.

I wish every blog had one, as I believe it would add an entirely new level of qualitative analysis to the rather sludgy state of social media measurement that we currently reside in today.

Good job.

Hat tip to Martin via Simon (who likes the site too).

PR Measurement Wars?… No good can come of this
Fri, 12 Jan 2007 21:32:00 +0000 | Brendan Hodgson

A colleague recently forwarded me this, and I have to say I’m a tad disappointed. Sensationalism aside (and I’m as guilty as anybody, since I certainly don’t have any real intention, in case you wondered, of strapping on a mask and cape), how can this kind of discussion be even remotely helpful to our industry?

Too lazy to click the link? Here’s the gist…

PR Measurement Wars: Who’s Winning?

With so many ways to measure media relations impact, who’s got the winning formula?

Is the old measure of advertising equivalency on its way out, or is it stronger than ever? What about MRP (media relations rating points) and CPC (cost per contact)? Do either measure online impact? With so many sources of impact data and research, who are we to believe?

They’re all here, so decide for yourself!

To suggest that one system of measurement is better than another simply doesn’t make sense to me. It’s like comparing apples to oranges. Campaigns are different. Objectives are different. Media targets are different. If MRP makes sense, do it. If CPC is the measure that speaks to the bean counters, do it. If it’s hits to a website, downloads, leads generated, positive messages, increased share price, blog mentions, impressions, registrations, buzz (however you define that), or a combination of all of the above, then do it. Establish the objectives, set the benchmarks, identify the tactics and the metrics by which to measure and which satisfy the expectations of the client, then go for it…

Now before you go calling me a party pooper, I recognize that a degree of sensationalism drives registrations, and I don’t deny the AMA the right to do what it needs to do to get bums in seats. But I worry that this trend toward putting one formula for measurement above another is less than helpful to the credibility of our industry. I hope the discussion itself will be more enlightening than the advertising for it. 

Measuring the impact of Viral is more than a numbers game…
Wed, 01 Nov 2006 18:28:00 +0000 | Brendan Hodgson

Via Mathew Ingram’s blog, I read with interest Tony Hung’s interpretation of Dove’s viral Evolution campaign – which AdAge recently suggested achieved greater ROI than a Super Bowl ad – and specifically his thoughts around measurement. However, I tend to disagree with Tony’s assertions.

Without question, the ability to directly connect a PR or marketing initiative to a specific business outcome will always be the holy grail. But that isn’t always the full measure of success. Without having the insight of being involved in this campaign, I would suggest that there are multiple motivations to this exercise – donations to the Dove Self-Esteem Fund being only one, albeit perhaps the most important.

The value of viral – and particularly now with technologies such as blogs and YouTube – lies, in my view, as much in being able to capture, through a single activity, raw audience insights via feedback, comments and blog posts, all of which can serve to further enhance the overall perception of the brand.

Without these tools, we might only be able to share our collective admiration for the ad, and/or disgust at what it represents (if that’s how we, in fact, truly feel) with the person sitting next to us on the couch. It filters raw emotion and uncompromising feedback like no survey or focus group ever could, and becomes a powerful gauge – and potential influencer - for overall brand reputation (obviously, reinforced by the collective success of other “Campaign for Real Beauty” initiatives). It even feeds into those who would suggest that this campaign is, in fact, a subtle reverse psychological marketing ploy.

But I think that last point is important. Are we to measure this campaign on the basis of individual tactics, or do we need to look at it from the broader perspective of the overall campaign? According to AdAge (quoting Todd Tilleman of Unilever):

…the emotional response the “Campaign for Real Beauty” has evoked from women has substantially strengthened brand loyalty, noting that two-thirds of brand sales now come from people buying more than one product, up from one-third three years ago.

“If you stood only for function, people would assess the brand based only on one category,” he said. While cross-marketing, new-product performance and other tactical appeals have helped build that number too, he said, “I’m convinced the real driver of it is that the brand has increased awareness of this mantra, this mission.”

It hasn’t hurt sales, either. Dove has gained share in the past year in four of its five major categories: personal wash (body wash and bar soap), hair care, deodorant and hand-and-body lotion. 
Personally, I also wonder what impact the potential for repeat viewing has on a specific audience… This video continually fascinates me. I’ve watched it a number of times now and, as a father of six-year-old twin girls, it has undeniable impact – perhaps even more so than for other audience segments. 

So what, then – to use Tony’s phrase? The fact that the article makes no mention of a spike in donations shouldn’t take away from other potential metrics of success - the numbers game being only one… 

Connecting Media Relations & Blogs – A new measurement dimension
Tue, 24 Oct 2006 13:44:00 +0000 | Brendan Hodgson

I was intrigued by Tim Dyson’s recent post on “Blogs vs News”. Inspired by his findings, and by a recent post by Josh (which is a good reminder to us all), I conducted a similar (albeit cursory) experiment around the Garth Turner hullabaloo here in Canada – essentially, trying to identify which mainstream media articles received the most attention (ie. links) from bloggers.

What I like about this approach – and we need to be doing more of it than we are now – is that it offers a new dimension to how we as PR professionals can further measure the impact and influence of our traditional media relations activities. In this instance, it highlights an important linkage between a specific media relations activity and the reaction it incites.

It was also interesting to see which media articles ended up feeding through the blogosphere – and the outlets that people relied upon to provide context to their postings.

For example:

(It should be noted that some of these “blogs” are spam blogs – but I was too lazy to vet the results in any great depth)

Although Tim doesn’t appear, in his post, to connect the blog mentions to actual media hits, the 10:1 ratio of blog mentions to media articles highlighted in my cursory example (in addition to the 413 blog posts referencing Garth Turner and blogs) unequivocally reinforces the importance of traditional media relations, in addition to targeting (and tracking) non-traditional channels such as blogs – and then analyzing the linkages between the two.

Media Relations Rating Points (MRP) in Action… So What Next?
Tue, 20 Jun 2006 13:51:00 +0000 | Brendan Hodgson

H&K Canada’s marketing communications team recently completed a very successful product launch for one of our larger clients. For this program, the team opted to use the much talked-about MRP (Media Relations Rating Points) System to measure cost-per-contact and overall tone of coverage.

The results, in a nutshell, are as follows:

  • Total articles/stories: 624
  • Total impressions:  123,461,315
  • Budget: $149,050.00
  • Average Tone: 4.9 out of 5
  • Average Rating: 3.5 out of 5
  • Total Score: 84%
  • Cost-per-Contact:  $0.00121
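
As a quick sanity check, the cost-per-contact figure is simply the budget divided by total impressions, and the numbers above reproduce it exactly (a minimal sketch; the variable names are mine, not part of the MRP System):

```python
# Cost-per-contact: campaign budget divided by total audience impressions.
budget = 149_050.00        # total program budget (CAD)
impressions = 123_461_315  # total impressions across the 624 stories

cost_per_contact = budget / impressions
print(f"${cost_per_contact:.5f}")  # → $0.00121
```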

Some key take-aways:

Our definition of “success”? The client was thrilled by the result, primarily because the numbers above satisfied the requirements and expectations of the executives to whom that person reported. This is important, as it speaks to my earlier comments regarding mapping to the expectations of the clients themselves. If this is how they define success, then run with it.

The time and effort to load 624 articles into the system individually was considerable – days – and must be factored into how the measurement function is budgeted.

The 5 rating point criteria used for this client included:

  1. Company/brand mention
  2. Spokesperson Quote
  3. Call-to-action
  4. Key messages / product mention
  5. 50+ words in broadcast segment / print / online
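
To illustrate how a point-rating like this rolls up into campaign-level averages, here is a minimal sketch (the function and the example articles are my own illustration, not the official MRP scoring formula):

```python
# Hypothetical point-rating roll-up: each article earns one point per
# criterion it satisfies (a rating out of 5), and the campaign average
# is taken across all rated articles. This is an illustration only,
# not the official MRP scoring formula.
CRITERIA = [
    "company/brand mention",
    "spokesperson quote",
    "call-to-action",
    "key messages / product mention",
    "50+ words",
]

def rate_article(criteria_met: set) -> int:
    """Rating out of 5: one point per criterion satisfied."""
    return sum(1 for c in CRITERIA if c in criteria_met)

# Two example articles (made up for illustration).
articles = [
    {"company/brand mention", "spokesperson quote",
     "key messages / product mention"},
    {"company/brand mention", "call-to-action",
     "key messages / product mention", "50+ words"},
]
avg = sum(rate_article(a) for a in articles) / len(articles)
print(avg)  # → 3.5
```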

The ability to directly attribute the impact of PR on sales, although highly desired, is likely difficult given that this program was undertaken in partnership with a broader ad and online campaign.

So the next question is “what do we do with this?”… Is this now the benchmark against which future campaigns with this client are measured? One would hope not, as this organization launches a wide variety of products annually, some more prominent than others. In such cases, clear expectations must be established at the outset. But, overall, the positive response from the client is a clear indicator of the impact meaningful metrics can have on demonstrating the value (as determined by the client) of PR.

Giving Method to our Madness… More on Media Relations Measurement
Fri, 02 Jun 2006 13:23:00 +0000 | Brendan Hodgson

The Commission on Public Relations Measurement & Evaluation at the Institute for Public Relations recently published a new report, “Perspectives on the ROI of Media Relations Publicity Efforts”. (Thanks to the folks over at Corporate Engagement for highlighting it…)

It is an important read, as it outlines a number of approaches for measuring the ROI of media relations programs in the context of an independent activity, as well as within the context of a broader marketing campaign. Most importantly, it reinforces the importance of PR measurement to our business overall.

Why is demonstrating ROI so important today? The report’s authors – Fraser Likely, David Rockland and Mark Weiner – suggest (and I hope they do not mind me pulling directly from their report):

Resources are limited. In today’s economy, there is constant pressure on all marketing budgets, including media relations publicity. This means an organization will only invest in publicity activities that they know will make a direct contribution to increased revenues. Media relations publicity must prove it has an impact on the bottom line.

Scrutiny is increasing. Clients are increasingly holding their PR firms, departments, and consultants accountable for demonstrating public relations results. This accountability includes comparing those results against what was invested to obtain them. It is not enough to simply generate impressions through publicity; the quality of those impressions is equally important, as is their impact on target audience behaviors and the resultant financial consequences.

Marketing has become more sophisticated. Public relations is expected to contribute to the execution of business strategy and thus the results obtained from that execution – not just create “noise” or “buzz” or “image.” The head of marketing is now asking: “Other areas supporting marketing campaigns can measure ROI, why not the PR function?” “What’s the ROI of our media relations publicity efforts in our marketing campaigns?” “Should I buy more or less advertising or media publicity, or invest it all in store promotions?”

Some additional and notable asides:

ROI is not the same as Cost-Effectiveness:
In its report, the Committee defines “return” as the “financial benefit derived by the organization… from the public relations or communications program or campaign.”  On the other hand, “Cost-effectiveness” as defined here is the “use of programs or campaigns to avoid costs in the first place by mitigating risk factors such as negative legislative, regulatory or legal actions through changes in stakeholder and/or organizational behaviours.”

On Ad Value Equivalency (AVEs):
AVEs continue to be a much-debated measure, even among the Commission members: “…there are some on the Commission… who feel it is heresy. Other Commission members have done research to show the ability of AVEs to contribute to a ROI measure…” However, the conclusion is that “AVEs really are a cost-effectiveness measure and not a true ROI measure.”

On measurement budgets as a percentage of overall budget:
According to the Commission: “generally speaking, measurement should be between 2% and 10% of a media relations budget.” 

On the ability to link media relations to sales and other financial results:
“This paper has not found that magic answer, but we are confident that there are models in existence that will work in the right circumstances and with the appropriate caveats.”

And that last point deserves to be highlighted. It is critically important, in my view, that the client fully understand at the outset that effective ROI measurement may require as much effort on their part as ours and that, without that effort, only so much can be achieved.

An example of this would be a US-based client we worked with recently who was attempting to market to a specific Canadian demographic in a specific region. Throughout the process we sought to ensure that, in tandem with our media relations efforts, mechanisms were put in place at various points of contact within the client’s operations that could be used to identify linkages between our efforts and the impact of those efforts on the client. In this case, we encouraged them to include a simple question: ‘where did you hear about us?’ on the back-end of every phone call received by a Canadian inquiring about the client and its services. Unfortunately, there was no real discipline in implementing this. As a result, and while we could correlate actual registrations during various campaign phases, we weren’t able to determine how many inquiries or leads the campaign generated overall.

But I also think that what is missing here is the notion of “Return on Expectation”. In the end, if the client is satisfied and feels value-for-money has been received, is that not enough? Personally, I don’t believe it is. But it is an important consideration that we all need to examine more closely. Success to our clients may not always be what we think it should be. And understanding those factors can also play into the ROI/ROE equation.

Visibility = Insight = PR Value
Tue, 30 May 2006 14:25:00 +0000 | Brendan Hodgson

I wrote a piece a while back about the Media Relations Rating Points System which, now that I consider it, may have missed a few points. So I figured I’d better address those now.

Like virtually every other PR firm, and for many of our own clients, we compile relevant media clippings for our clients, package ‘em up nicely, and then fire them off, either daily or weekly or whenever they appear. Sometimes, we include a bit of top-line analysis along with those clippings — essentially, a preamble that highlights the consolidated good, bad and ugly of what went on during that specific coverage period. And, if it makes sense, we’ll counsel the client to either respond to a specific article, or we’ll exploit opportunities to drive additional coverage around a key issue or trend introduced by a specific journalist or publication.

And while this analytic capacity is inherent – to a small degree – in tools such as MRP, it is not comprehensive. On that point, I think we need to do better – to expand how we demonstrate value to our clients.

I believe a key determinant of PR’s value – and a vital criterion for how we are measured – is tied not only to the outputs (ie. the reach and impressions) and outcomes (ie. the business impacts) of what we do, but also to the “visibility” we provide into the channels, relationships and issues that impact our clients’ business.

I’m not talking about post-campaign surveys or polls. ‘Visibility’, in the context that I’m using it, is about making the ‘invisible visible’ – capturing meaningful intelligence that has the capacity to shape or re-shape what we do and enhance the value of our service to clients.

And while visibility does not translate directly into “results”… it provides something else that, I believe, is equally valuable. It makes us and our clients smarter. And it therefore makes our campaigns that much more successful. To quote a classic Rumsfeldianism: “…there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”

Capturing coverage is only the first step, and the easiest. More importantly, we need to be able to view that coverage within the context of our clients’ competitors, their issues or their brands and analyze it against a variety of key factors - region, publication, journalist, tone, share of voice or other categories as defined by the client.

Filters such as these I would classify as the known unknowns. We know the data exists, but the ability to capture and visualize that data has largely fallen to junior consultants spending gruelling hours sifting through masses of clippings. Either that, or the tools that exist today provide only a very basic top-line view of the metrics our clients really need (which are typically tied to specific outputs), or offer limited flexibility in terms of customizing the metrics to specific client needs.

Where the real value lies is when we can go deeper and pull even richer insights from that information – to discover the unknown unknowns – and share them, virtually in real time – be they the trends that we didn’t even know existed, or the relationships between reporters, issues or regions that we might not otherwise have captured.

With respect to blogs, we need to build on such tools as Technorati’s “Authority Filter” to be able to rank blogs (and specific posts) by such metrics as frequency, tone, relevance, and visibility or prominence, and to be able to view specific relationships between bloggers themselves, the media they rely on, and the issues or brands that they talk about.
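
A weighted composite is one simple way to sketch such a ranking (the metric names, weights, and scores below are my own placeholders for illustration; they are not anything Technorati or any existing tool provides):

```python
# Hypothetical sketch: rank blogs by a weighted composite of the kinds
# of metrics mentioned above. The weights and the example scores are
# placeholders; a real system would calibrate them per client.
WEIGHTS = {"frequency": 0.2, "tone": 0.3, "relevance": 0.3, "prominence": 0.2}

def composite_score(metrics: dict) -> float:
    """Weighted sum of normalized (0..1) metric values."""
    return sum(WEIGHTS[k] * metrics.get(k, 0.0) for k in WEIGHTS)

# Two example blogs with made-up normalized scores.
blogs = {
    "blog_a": {"frequency": 0.9, "tone": 0.4, "relevance": 0.8, "prominence": 0.5},
    "blog_b": {"frequency": 0.3, "tone": 0.9, "relevance": 0.9, "prominence": 0.7},
}
ranked = sorted(blogs, key=lambda b: composite_score(blogs[b]), reverse=True)
print(ranked)  # → ['blog_b', 'blog_a']
```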

The tools exist. We are using a number of them already, to analyze social and influencer networks (talk to Ted Graham over on the Networks blog) or to identify relevant ‘sweet spots’ in media coverage that can influence ongoing campaigns. And for our clients, the task is ensuring that we continue to meet their expectations in terms of delivering the insight they need to do their own jobs better without breaking the bank.

MRP… Let’s not lose sight of the end result
Fri, 05 May 2006 18:29:00 +0000 | Brendan Hodgson

So…. MRP… (otherwise known as the Media Relations Rating Points System)

First off, rather than me fill you in about what MRP is about (in case you didn’t already know), you can find out more about it here, and you can read their informative and extremely balanced blog here.

But back to me… 

I guess I’m of two minds about it (and I’ve now sat through two presentations). Yes, on the one hand, it adds an element of science to a profession that desperately needs it. And yes, smart people that the MRP folks are, they don’t position it as something it’s not – meaning the ‘Holy Grail’ of PR measurement. On the other hand, I’m slightly nervous that MRP could, in the wrong hands, diminish the perceived and/or real ‘value’ that good PR can deliver to clients – whether client-side or agency.

Why do I say this? (Note: these opinions are my own and may not reflect the opinions of everyone at H&K)

Don Bartholomew at CGI touches on similar concerns in his initial analysis of MRP, as does Jim Grunig of the University of Maryland in his summary outlined in KD Paine’s blog here. My point is that MRP – for all its good intentions – could easily become a crutch for those in the PR profession who have neither the time, budget, nor analytical capacity to recognize MRP for what it is, what role it plays, and where it fits within our profession.

Without question, it fits. But let’s not sugarcoat it unnecessarily. (I should make a point of crediting the MRP team for acknowledging specific shortfalls from the outset, and also for their enthusiasm in responding to both well-intentioned and not-so-well-intentioned criticism following the launch of the tool.)

In the parlance of marketing communications, cost-per-contact is a giant leap forward over traditional ad equivalency metrics and a useful ‘cost-oriented’, standards-based metric (to use Bartholomew’s definition) for comparing PR to, say, advertising in terms of being able to reach the largest number of consumers at the lowest possible cost.

But for many other areas of our business, cost-per-contact is virtually meaningless – and potentially damaging. For companies facing downsizing, facility closures, acquisitions, changes in leadership, proxy battles, or some combination of these and others, cost-per-contact plays no role in the measurement equation. Media is still a critical element in communicating through many of these issues. However, the real measurement is the ability to demonstrate how effective media relations helped neutralize detractors, mobilize allies, or educate stakeholders.

Many in the PR profession get this. Many others do not. As the MRP folks have themselves admitted, the primary features of the tool are its ability to accurately determine what it costs to reach a specific number of eyeballs, and to provide a system that gives a communications team an ability to provide qualitative analysis of that coverage. What it doesn’t do is provide any capacity to measure the real impact of that outreach and analysis – did it impact sales, did it change perception, did it drive an action or prevent an action, etc. 

And the MRP team know this. Nonetheless, this, in my view, is where the danger lies for our profession. That we can now provide more accurate information regarding specific activities is, without question, a good thing. However, we must constantly be striving to deliver on the bigger metrics and business outcomes. Let me be clear, this is less a criticism of the tool and more a call to action to our profession to avoid – at all costs – the desire to make this tool something that even its creators profess it isn’t.

Which leads us to the point-rating element. Personally, I think Bartholomew gets it wrong when he asks “Are all MRP’s the same?” Who cares? In the end, does it matter whether I use an MRP rating or any other kind of rating, so long as whatever I use is agreed upon in advance between the client and me, reflects their specific objectives, and makes sense to their business?

And that is a vitally important consideration, and one that H&K has – for the most part – recognized. So long as there is clarity in terms of expectations set at the outset of any program between the client and their communications team, and appropriately agreed-upon measures are put in place to validate those expectations, what more do you need?

So yes, to some degree Andrew Laing of Cormex is right when he claims that “MRPs are simply a way to let the Canadian PR firms continue to grade their own homework.” That’s true, but only if the client isn’t involved in defining the parameters of each of the rating criteria. H&K’s experience has demonstrated that criteria and systems for measurement cannot be created in isolation from the client.

Equally – and this is where we often fall short – we need to be more precise in setting tangible objectives. Let’s not just say that we’re going to “generate buzz” or “drive awareness”; let’s say that we’re going to “increase awareness by 10% within a specific target group over a specific period”, and then let’s tell them that we’re going to do this, in part, by achieving X million impressions and an MRP score of 85% or higher (in addition to all the other things we’re going to do).

Quite simply, we don’t all have the same start or finish line. Each client measures PR differently because their situations and objectives are different. Those who choose to measure by cost-per-contact will love this system. It’s cost-effective, easy to use, and standards-based in terms of audited numbers. And if the point-rating criteria allow us to establish a more qualitative benchmark against which to measure quality of coverage, then MRP is as good as any. But let us not forget that when we tell our client that we hit 1 million eyeballs at a cost of $0.02 per eyeball, and that we scored an MRP of 82%, and they ask what impact that had on their business… we can’t afford not to have those answers.

So, here we go….
Fri, 07 Oct 2005 18:39:00 +0000 | Brendan Hodgson

So here we go… Just last week I attended a conference on PR measurement… Now typically I show up at these things with two objectives in mind, and if I achieve one or both, then I feel I’ve gotten my money’s worth, or should I say the company’s money (let’s call a spade a spade). The first objective is to actually learn something during the conference – which can often be difficult depending on the quality of the speakers and their presentation skills, and dare I say the amount of booze consumed at the networking events held the night previously… you probably know what I’m talking about… these are the typical two-day sessions involving a bunch of speakers and several hundred Powerpoint slides… The second objective, which tends to happen more than the first, is to simply re-affirm that I’m in the same boat as everybody else with respect to where we all stand on a specific issue or topic… If I can accomplish both, then I can really claim that this was money well spent.

Modesty aside, I think I’m a pretty sharp guy when it comes to communications… I’ve played in the PR and communications space in a variety of capacities for about 10 years now – both agency and client-side… media relations, writing and editing, crisis and issues management, marketing and branding, and online communications… so I think I get it (others may argue). So when the opportunity came to actually learn a bit more about how we measure the value and impact of the “dark science”, I jumped at the chance. The speakers looked good. The attendees included senior communicators from major Fortune 500 companies, some of the other big PR agencies, and even the US military to name a few. The topics of the presentations seemed interesting, and… well… sometimes it’s just kinda nice to get away and have the opportunity to really think about subjects like this… because let’s face it, measurement is an issue that is becoming an increasingly bigger blob on the radar screen. We’ve got to figure this out… individually, collectively… whatever. If we want to play at the c-suite and justify our role in an increasingly commoditized market, we’re going to have to do better than simply track media hits, ad equivalencies, and bums on seats…

Now of course, there was no shortage of vendors showcasing various flavours of automated dashboards and online tracking tools to the conference attendees. A large debate into which I waded a number of times was the question of whether you can really automate the process of assigning sentiment to an article (me, I don’t think you can, though I am eager to be shown otherwise). In addition, much of the conference was spent discussing some pretty heavy statistical… um… stuff around demonstrating the impact of comms on bottom-line revenue. And while I’m sure all of that statistical… erm… stuff is good and meaty (it certainly kept us all pretty enraptured), and shows some nifty contours and graphs when all the data is sliced and diced… is it something that can truly be applied to the situations that each of us (or our clients) face, and the budgets we have to deal with – notwithstanding the fact that we are all likely victims of Finagle’s law of information in some way or other, which certainly doesn’t help?

I guess the question is really around what is truly achievable based on the information we have and the time and resources available to do it. The principles are there… I’m sure we’ve all read or been exposed to the Institute for PR’s Guidelines for Measurement… but it’s how we apply them (and other measures) that will distinguish the truly strategic from the wannabes.

So that’s my starting point on this issue… stay tuned. I’ll be adding some links on this topic and others in the coming days…
