Specify the key evaluation questions

Key Evaluation Questions (KEQs) are the high-level questions that an evaluation is designed to answer - not specific questions that are asked in an interview or a questionnaire.

Having an agreed set of Key Evaluation Questions (KEQs) makes it easier to decide what data to collect, how to analyze it, and how to report it.

KEQs usually need to be developed and agreed on at the beginning of evaluation planning; however, sometimes KEQs are already prescribed by an evaluation system or a previously developed evaluation framework.

Try not to have too many Key Evaluation Questions - a maximum of 5-7 main questions is usually sufficient. It can also be useful to have some more specific questions under the KEQs.

Key Evaluation Questions should be developed by considering the type of evaluation being done, its intended users, its intended uses (purposes), and the evaluative criteria being used. In particular, it can be helpful to imagine scenarios where the answers to the KEQs would be used - to check that the KEQs are likely to be relevant and useful, and that they cover the range of issues the evaluation is intended to address. (This process can also help to identify the types of data that might be feasible and credible to use to answer the KEQs.)

The following information has been taken from the New South Wales Government, Department of Premier and Cabinet Evaluation Toolkit, which BetterEvaluation helped to develop.

Key evaluation questions for the three main types of evaluation

Process evaluation

Outcome evaluation (or impact evaluation)

Economic evaluation (cost-effectiveness analysis and cost-benefit analysis)

Appropriateness, effectiveness and efficiency

Three broad categories of key evaluation questions are often used to assess whether the program is appropriate, effective and efficient.

Organising key evaluation questions under these categories allows an assessment of the degree to which a particular program in particular circumstances is appropriate, effective and efficient. Suitable questions under these categories will vary with the different types of evaluation (process, outcome or economic).

Appropriateness

Effectiveness

Efficiency

Example

The Evaluation of the Stronger Families and Communities Strategy used clear Key Evaluation Questions to ensure a coherent evaluation despite the scale and diversity of what was being evaluated – an evaluation over 3 years, covering more than 600 different projects funded through 5 different funding initiatives, and producing 7 issues papers and 11 case study reports (including studies of particular funding initiatives) as well as ongoing progress reports and a final report.

The Key Evaluation Questions were developed through an extensive consultative process to develop the evaluation framework, which was done before advertising the contract to conduct the actual evaluation.

  1. How is the Strategy contributing to family and community strength in the short-term, medium-term, and longer-term?
  2. To what extent has the Strategy produced unintended outcomes (positive and negative)?
  3. What were the costs and benefits of the Strategy relative to similar national and international interventions? (Given data limitations, this was revised to ask the question in ‘broad, qualitative terms’.)
  4. What were the particular features of the Strategy that made a difference?
  5. What is helping or hindering the initiatives to achieve their objectives? What explains why some initiatives work? In particular, does the interaction between different initiatives contribute to achieving better outcomes?
  6. How does the Strategy contribute to the achievement of outcomes in conjunction with other initiatives, programs or services in the area?
  7. What else is helping or hindering the Strategy to achieve its objectives and outcomes? What works best for whom, why and when?
  8. How can the Strategy achieve better outcomes?

The KEQs were used to structure progress reports and the final report, providing a clear framework for bringing together diverse evidence and an emerging narrative about the findings.

The Managers' Guide

Coming at this from a manager or commissioner's perspective? Step 2: Scope the evaluation in our Managers' Guide has some specific information geared towards making decisions about what the evaluation needs to do, including how to develop agreed key evaluation questions.

Resources

This guide from the Robert Wood Johnson Foundation was designed to support evaluators in engaging their stakeholders in the evaluation process.

This manual from the Swedish International Development Cooperation Agency (SIDA) is aimed at supporting staff in conducting evaluations of development interventions.

This site provides a step-by-step guide on how to identify appropriate questions for an evaluation.

This worksheet from Chapter 5 of the National Science Foundation's User-Friendly Handbook for Mixed Method Evaluations provides a template for developing evaluation questions which engage stakeholders' interest in the process.

This worksheet from Chapter 5 of the National Science Foundation's User-Friendly Handbook for Mixed Method Evaluations provides a template which allows the organisation and selection of possible evaluation questions.

This checklist, created by the Centers for Disease Control and Prevention (CDC), helps you to assess potential evaluation questions in terms of their relevance, feasibility, fit with the values, nature and theory of change of the program, and the level

Created by Lori Wingate and Daniela Schroeter, the purpose of this checklist is to aid in developing effective and appropriate evaluation questions and in assessing the quality of existing questions.

This document contains example questions, many of which are drawn from country, regional, sector or thematic global evaluations undertaken by the Evaluation Unit.