Evaluation has been flagged as one of the major challenges facing communication professionals today. Assessing the impact and value of communication activities, gaining organizational support or simply learning how to work better are amongst the key benefits of evaluation. Evaluating communication activities built around programs or campaigns requires an adapted approach to evaluation. Most communication campaigns aim to change individual attitudes and behaviors or to mobilize public and decision-maker support for policy change or a combination of both.
Most communication evaluation focuses on outputs: measuring communication performance (e.g. number of press releases issued, events held, etc.). Although this can be useful initial feedback, it is far more important to measure outcomes: did communication activities result in any opinion, attitude and/or behavior change amongst targeted audiences? The aim of evaluation may not always be to prove that communication efforts definitively caused change, but to assess the assumptions and quality of the communication activities. Methods to evaluate communication campaigns vary according to the objectives set and the activities used.
What Is Research?
- Data collection that is controlled, objective, and systematic
- An attempt to explain, understand, predict, and manage social and corporate phenomena
- An attempt to provide answers to questions
- A reliable and valid means of acquiring data
- A systematic process of collecting and interpreting data
Main Objectives of Research
- Monitoring developments and trends
- Examining public relations position
- Assessing messages and campaigns
- Measuring communication effectiveness
- Tracking studies
- Gap studies
- Evaluation research
A number of models have been developed to explain how and when to apply research and evaluation in PR and corporate communication. Five leading models have been identified:
- The PII Model developed by Cutlip, Center and Broom (1985);
- The Macro Model of PR Evaluation, renamed the Pyramid Model of PR Research (Macnamara, 1992; 1999; 2002);
- The PR Effectiveness Yardstick developed by Dr Walter Lindenmann (1993);
- The Continuing Model of Evaluation developed by Tom Watson (1997);
- The Unified Model of Evaluation outlined by Paul Noble and Tom Watson (1999).
Theory vs. Applied Research
According to Tom Watson’s history of PR measurement, the hunt for tools to quantify and demonstrate the worth of public relations only became a major focus in the 1970s. The International Public Relations Association issued a “Gold Paper” in 1994, which became a rallying cry by industry leaders for communication practitioners to perform credible and rigorous measurement and evaluation of their efforts. In a 1987 article on the influence of technology on media, John Pavlik, an American professor and author, stated that measurement had become the “Holy Grail” of public relations. However, despite experimenting with many models, the industry has still not “cracked” the measurement and evaluation “nut,” in the words of Professor Jim Macnamara.
As recently as 2011, the renowned academics David Michaelson and Donald Stacks observed that “public relations practitioners have continuously failed to reach agreement on the fundamental evaluative measures or how to undertake the underlying research for evaluating and measuring public relations performance.”
Macnamara has highlighted three impediments to adopting PR measurement and evaluation that he feels are generating a stalemate.
- The first is a fixation with numbers. For example, the Institute for Public Relations (IPR)’s tagline is “the science underneath the art.” This reflects a belief that public relations should be supported by scientific information and quantitative research methodologies. However, such a viewpoint may be misguided: the communication sector is attempting to quantify results that are inherently difficult to quantify. “Human interactions, relationships, sentiments, attitudes, loyalties, impressions, and participation are not simply quantifiable,” says Macnamara.
- Second, measurement and evaluation are frequently conflated: the two are often undertaken concurrently or linearly, are based on a very limited range of data, and frequently concentrate only on metrics set by the entity concerned. Measurement and evaluation, according to Macnamara, are two separate procedures. Measurement is the taking and analysis of readings, such as counting things, collecting ratings on a scale, or capturing remarks in interviews; evaluation is the interpretation of those data to judge the value of the work.
- The third main impediment to establishing the effectiveness of public relations is that measurement and evaluation systems are based on what has been done in the past. As a result, measurement and evaluation often fail to give an organization anything other than a retrospective performance review of work done; sometimes it is seen as little more than an exercise in self-justification.