Part 2: Know your evaluation audience/s

For evaluation to live up to its potential to improve outcomes and ensure effective targeting of limited resources, evaluation communications must cut through.

The first step is to know your audience or audiences (as you will often have more than one). Knowing your audience will inform the engagement channels you use, how you frame your message and the language you use, among other things.

Get to the heart of it

There’s a large base of literature on the demand- and supply-side factors that affect evaluation use – that is, the decision setting and user characteristics on the one hand and evaluators’ approach on the other. (For more, see my research on evaluation use in the Australian context.) The demand-side factors affecting evaluation use include the characteristics of evaluation users, their commitment or receptiveness to evaluation, and the broad characteristics of the context in which the evaluation is being conducted – particularly the political climate, social climate, language and culture, values and interests – as well as the interactions between these.[1]

While you can’t do much about the political climate, you can get to know your evaluation audiences. This will help you to deliver content that is of value to them, in ways that they can take on board. It can also help you to overcome lack of commitment to or fear of evaluation.

Get into their shoes

Think like a market researcher. For each of your audiences, ask yourself these questions:

  • What is it that keeps them awake at night? What is most important to them and how is the evaluation helping with this?
  • How do they prefer to process new information? Do they respond to visuals, conversation or the written word? All three?
  • What evidence will they find credible? Will they only value the statistics, or will consumer stories be the language that speaks to them?
  • How much detail do they need? Do they need an audit trail, the headlines or something in between?
  • How often do they need information? Do they need regular updates to inform their practice and decision making?
  • What else is going on for them? Could this evaluation affect their job or the way they receive services? Is it high on their agenda or one priority among many?

Work out what this means for how you communicate

The questions above will give you a working profile of each of your audiences. You might like to keep track of this information in a table or matrix.

By now you may be asking yourself, ‘How could I possibly meet the needs of all of these audiences in one report?’ The answer might be that you can’t. You might need different communications tools for different audiences, such as video summaries, findings brochures, slide decks and full reports.

But, in some cases, you will be able to meet the needs of various audiences by layering information in a single communication. At its most basic, this means starting with a one-pager of key findings, followed by an executive summary and a full report, with technical detail relegated to appendices. At the next level, it means using different modes of communication, recognising that the stats tables will reach some, while the stories will reach others. You can also layer face-to-face findings discussions by providing visuals and written documents to complement your verbal communication, and using activities to enable people to engage with the implications. Stay tuned for more on structuring reports and telling evaluation stories.

The next article in our Communication for Evaluation series will cover evaluation as process rather than product.

[1] Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56, 331–364; Johnson, K., Greenseid, L.O., Toal, S.A., King, J.A., Lawrenz, F. & Volkov, B. (2006). Research on evaluation use: a review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377–410; Vo, A.T. & Christie, C.A. (2015). Advancing research on evaluation through the study of context. In P.R. Brandon (Ed.), Research on Evaluation. New Directions for Evaluation, 148, 43–55.


