
MEASURE FOR MEASURE
The reluctance of some internal communicators to devote serious time to evaluation is understandable – but not tenable, says Steve Doswell, chief executive of the Institute of Internal Communication
Internal communicators have been grappling with the challenge of measuring what they do for at least 20 years. That’s effectively a generation’s worth of practice and custom. Evaluation is our biggest weapon against the claims (they’re still made) that internal communication (IC) has little impact on the world of work – or it should be. Demonstrating the bangs we achieve is what earns us the right to the bucks we claim in terms of budgets, salaries and fees. Yet the evidence suggests there is far more lip-service paid to evaluation than IC can be comfortable with as an emerging profession.
There are several sophisticated evaluation tools. However, practitioners sometimes baulk at the time required to deploy these tools because they’re busy people, rarely more so than now. Faced with a dam-like wall of work and pressure, the reluctance of some to devote serious time to evaluation is understandable – but not tenable.
It doesn’t have to be all or nothing, though. One way to get an independent assessment of whether your measurement really stacks up is through the feedback given by judges in awards competitions. The Institute of Internal Communication has just held the awards night for this year’s competition for internal communication ‘work’ and has now opened the window for the newly named ICon awards, which showcase the people behind the work. In both cases we direct judges to look favourably on submissions where entrants have clearly taken evaluation seriously.
Evidence of evaluation divides the apparently good from the demonstrably excellent. Of course, some IC activities are harder to evaluate than others. Values-based campaigns are an example. It’s relatively easy to gather statistics that measure employee awareness of the ‘name’ of the values, but much harder to provide communicable evidence of those values in action. That raises the question of whether we are measuring output or impact. There can be value in measuring IC activity, but it’s the outcome that rocks the world.
We all know that distinguishing the unique contribution that IC makes to a particular outcome can be notoriously challenging. It’s perfectly reasonable to ask employees to comment on how well a major office move was communicated, although it’s the new office’s own qualities that will determine employee acceptance in the long term. Similarly, did 86% of employees sign up to those new pension arrangements because of the impact of the IC campaign or the benefits of the new scheme, or simply because there was no alternative? That’s a hard one to gauge.
What do awards entries tell us about the state of the art in IC evaluation within Britain’s shores? Our findings point to a spectrum of practice – at one end, a determination to measure what matters, and with some sophistication; at the other, a lingering superficiality. Too many entrants continue to rely on assertions that ‘feedback was good’ or that ‘the client was happy’, or provide vox pops lacking credibility. In quantitative terms, many entrants like to quote percentage scores but only a select few think to provide a sample size (90% of how many people?). The good news is that the bell curve does seem to be shifting towards measurement being taken seriously and measures that are themselves meaningful.
A saying often attributed to Albert Einstein holds that not everything that counts can be counted, and not everything that can be counted counts. Whatever its current state, the art of IC evaluation lies in measuring the things that are truly meaningful, and in ways that offer real insights to IC practitioners and their clients. When it comes to assessing our impact, nothing else counts.