Designing an Evaluation: Issues to Consider

Have you been asked to do an evaluation or hire an evaluator? Wondering where to start? The following questions may serve as the foundation for your initial thinking whether you are an internal or external evaluator (please see How to Do an Internal or External Evaluation for more information).

  • What are the goals of your evaluation?
  • What are the evaluation requirements from the funder or other groups?
  • Who is your audience for the evaluation?
  • Who are the stakeholders or interest groups of your evaluation?
  • What impact do you hope the evaluation will have on your organization or program?
  • How much time do you have to do the evaluation?
  • What type of evaluation design do you want to create?
  • How much would the evaluation you envision be likely to cost?

These questions may seem overwhelming, but answering them creates the basis for the overall design of your evaluation. Understanding the organization's needs, circumstances, goals, and purpose will help you formulate effective data-gathering tools.

Format and Evaluation Design

What format do you want your evaluator to use? Two evaluation types are common: formative and summative. You can decide which would make the most sense for you once you have identified the goals, needs, and circumstances of this evaluation.

A formative evaluation focuses on providing information to staff to improve program functioning. A formative evaluation might include conducting a needs assessment, monitoring program implementation, and assessing the extent to which a program is being carried out in ways that will help it achieve its goals in the future. Formative evaluations are generally done during the program rather than after it is over. They can be very time-consuming because they require the evaluator to become familiar with multiple aspects of a program and to provide program personnel with information and insights that lead to improvements. Logic models are often used in formative evaluations. For more information on logic models, see the tip sheet "Understanding the Overall Program: Logic Models Make it Easier to Run Toward the Goal."

If you are considering a summative evaluation, you are more likely to be oriented toward collecting and presenting information needed for summary statements and judgments about the program and its value. You are focused on answering the question, "Did the program work?" Summative evaluations often start at the program's beginning and go on until after the program is completed. They are focused on identifying changes that have occurred as a result of the program. Not all programs lend themselves to a summative evaluation. The more a program has clear and measurable goals and consistent replicable materials, organization, and activities, the more suited it is for a summative evaluation.


Examples of tools for data gathering include pre- and posttests, interviews, and questionnaires (please see "Ways to Collect Evaluation Data"). You may also choose to do an experiment in which some participants are randomly assigned to receive your program (perhaps a workfare program that teaches new skills) while others do not receive this program or any other. You might measure participants' income before the program begins and after it is finished. You then see whether the people in the program group changed more in income than those in the control group. If greater change occurs in the program group, you can conclude that your program was likely successful.
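The comparison described above can be sketched in a few lines of code. This is a minimal illustration with made-up pre- and post-program income figures, not real data; the group names and numbers are assumptions for the example only.

```python
# Hypothetical (pre, post) income pairs for participants in the program
# group and the control group. All figures are illustrative.
program = [(18000, 24000), (20000, 26000), (15000, 19000)]
control = [(19000, 20000), (17000, 17500), (21000, 22000)]

def mean_change(pairs):
    """Average post-minus-pre change across participants."""
    return sum(post - pre for pre, post in pairs) / len(pairs)

program_gain = mean_change(program)  # average income gain, program group
control_gain = mean_change(control)  # average income gain, control group

print(f"Program group gain: {program_gain:.2f}")
print(f"Control group gain: {control_gain:.2f}")
```

A noticeably larger average gain in the program group suggests, though does not by itself prove, that the program had an effect; a real evaluation would also test whether the difference could be due to chance.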

Types of Data 

There are two types of data that your evaluator may collect: quantitative and qualitative. Try to make sure that your evaluator includes both. You will not want the evaluator to focus solely on quantitative, hard data, because doing so often fails to capture the essence or spirit of the program. Qualitative data will help you understand how people felt about the program.

For example, how do you measure the qualitative effects the program has on people's lives? You might create attitude-rating scales and administer them through interviews and questionnaires; you can then quantify feelings and what people said by categorizing responses into themes for measurement purposes.
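Categorizing responses into themes and counting them can be sketched as follows. The responses and theme labels here are invented for illustration; in practice, a reviewer assigns each response a theme before counting.

```python
from collections import Counter

# Hypothetical theme labels assigned to interview responses by a reviewer.
# The labels and their frequencies are illustrative only.
coded_responses = [
    "gained confidence", "learned new skills", "gained confidence",
    "better job prospects", "learned new skills", "gained confidence",
]

# Tally how often each theme appears, most frequent first.
theme_counts = Counter(coded_responses)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

This turns open-ended qualitative responses into simple counts that can be reported alongside quantitative measures.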

Quantitative designs focus primarily on measuring program results and comparing them to a standard. A quantitative standard does not always describe the whole life of a particular program; it does not capture how the people the program seeks to affect feel about it.

Qualitative designs focus on describing the program in depth and on building a better understanding of the meaning and nature of its operations and effects. You may want to use one or both.

In conclusion, an evaluation design is a plan that determines which individuals or groups will participate in the evaluation. You need to decide what types of data you want to collect, and when your evaluation instruments or measures will be administered and by whom.
by Chath pierSath