
Tag Archives: Program evaluation

In an age of elevated accountability for results, the Evaluation Plan is one of the most critical components of a competitive grant proposal.

 

For virtually every objective one might conceive, many types of thoroughly reviewed evaluation instruments are readily available. Often these instruments are widely used to generate and monitor data and to track and report on performance outcomes; yet, they may be new to any given applicant and its grant writing team.

 

Selecting Evaluation Instruments

In selecting one or more evaluation instruments to measure a specific objective in a proposal, a smart grant writing team will first locate and study relevant technical reviews found throughout the professional literature of program evaluation. The smart team is certain to look for the following (one simple way to tabulate these checks is sketched after the list):

  • Evidence for the technical review writer’s objectivity
  • Evidence for the instrument’s reliability
  • Evidence for the instrument’s validity
  • Limitations on the available evidence
  • Discussions of the instrument’s intended uses
  • Prerequisites for the instrument’s effective use
  • Required frequency and mode of use
  • Time required for administration and data analysis and reporting
  • Costs associated with using the instrument
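
The checklist above lends itself to a side-by-side comparison of candidate instruments. The sketch below is a minimal, hypothetical illustration of how a team might tabulate what it finds in technical reviews before making a choice; the instrument names, ratings, and scoring scale are all invented, and nothing beyond standard Python is assumed.

```python
# Minimal sketch: tabulating findings from technical reviews.
# Instrument names, criteria ratings, and the 0-3 scale are hypothetical.

CRITERIA = [
    "reviewer_objectivity", "reliability_evidence", "validity_evidence",
    "evidence_limitations", "intended_uses_fit", "prerequisites_met",
    "frequency_and_mode_fit", "admin_and_analysis_time", "cost",
]

# Ratings drawn from each technical review (higher is better).
candidates = {
    "Instrument A": dict(zip(CRITERIA, [3, 3, 2, 2, 3, 2, 2, 1, 1])),
    "Instrument B": dict(zip(CRITERIA, [2, 2, 3, 3, 2, 3, 3, 2, 2])),
}

def total_score(ratings: dict) -> int:
    """Sum the ratings across all criteria for one instrument."""
    return sum(ratings[c] for c in CRITERIA)

# List candidates from strongest to weakest overall profile.
for name, ratings in sorted(candidates.items(), key=lambda kv: -total_score(kv[1])):
    print(f"{name}: total {total_score(ratings)}")
```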

 

Finding Technical Reviews

There are many possible sources of technical reviews of evaluation instruments. One of the best and most comprehensive resources is the Mental Measurements Yearbooks, a series published both online and in print by the Buros Institute. A second resource, of more limited scope, is the ERIC Clearinghouse on Assessment and Evaluation. Nearly every specialized and science-driven discipline will have its own review repository as well.

 

Reasons for Using Technical Reviews

Applicants need to persuade skeptics that their Evaluation Plan will provide evidence of program effectiveness. One way to do so is to demonstrate to wary readers that the proposed evaluation instruments are judiciously selected and are appropriate for their proposed uses. The findings published in technical reviews furnish invaluable assets for accomplishing this task. The rest hinges upon how well an applicant uses these assets in describing and justifying its Evaluation Plan.


In the context of grants, evaluation is a systematic inquiry into project performance. In its formative mode, it looks at what is working and what is not; in its summative mode, it looks at what did work and what did not. In both modes, it identifies obstacles to things working well and suggests ways to overcome them. For evaluation to proceed, the events or conditions that it looks at must exist, must be describable and measurable, and must be taking place or have taken place. Its focus is actualities, not possibilities.

 

Data Collection:

Effective evaluation requires considerable planning. Its feasibility depends on access to data. Among the more important questions to consider in collecting data for evaluation are the following (one way to record the answers is sketched after the list):

  • What kinds of data need to be acquired?
  • What will be the sources of data?
  • How will sources of data be sampled?
  • How will data be collected?
  • When and how often will the data be collected?
  • How will outcomes with and without a project be compared?
  • How will the data be analyzed?
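
Answers to these questions need a home once the team settles on them. The sketch below is a hypothetical illustration, in plain Python, of a data collection plan recorded as a small structure; every field name and value is invented and would be replaced by the project's own decisions.

```python
from dataclasses import dataclass, field

@dataclass
class DataCollectionPlan:
    """Hypothetical container for the planning questions listed above."""
    kinds_of_data: list = field(default_factory=list)        # what kinds of data
    sources: list = field(default_factory=list)              # sources of data
    sampling: str = ""                                        # how sources are sampled
    collection_methods: list = field(default_factory=list)   # how data are collected
    schedule: str = ""                                        # when and how often
    comparison_strategy: str = ""                             # with vs. without the project
    analysis_methods: list = field(default_factory=list)     # how data are analyzed

# Invented example values for one small project.
plan = DataCollectionPlan(
    kinds_of_data=["attendance records", "pre/post test scores"],
    sources=["participating schools", "project sign-in logs"],
    sampling="all project participants; matched comparison classrooms",
    collection_methods=["district data extract", "survey administered on site"],
    schedule="baseline, then quarterly",
    comparison_strategy="non-equivalent comparison group in nearby schools",
    analysis_methods=["descriptive statistics", "pre/post gain comparison"],
)

print(plan)
```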

 

Problem Definition:

In developing an evaluation plan, it is wise to start from the problem definition and the assessment of needs and work forward through the objectives to the evaluation methods. After all, how a problem is defined has inevitable implications for what kinds of data one must collect, the sources of data, the analyses one must do to try to answer an evaluation question, and the conclusions one can draw from the evidence.

 

Evaluations pose three kinds of questions: descriptive, normative, and impact (or cause and effect). Descriptive evaluation states what is or what has been. Normative evaluation compares what is to what should be, or what was to what should have been. Impact evaluation states the extent to which observed outcomes are attributable to what is being done or has been done. The options available for developing an evaluation plan vary with each kind of question.
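
A small numerical illustration can make the three kinds of questions concrete. In the sketch below, all scores and the target are invented: the descriptive answer is a simple summary, the normative answer compares that summary to a benchmark, and the impact answer compares outcomes with and without the project.

```python
from statistics import mean

# Hypothetical post-test scores for project participants and a comparison group.
project_scores = [72, 78, 81, 69, 85, 77]
comparison_scores = [70, 71, 74, 68, 73, 72]
target = 75  # invented "what should be" benchmark

# Descriptive: what is?
project_mean = mean(project_scores)
print(f"Descriptive: mean project score = {project_mean:.1f}")

# Normative: how does what is compare to what should be?
print(f"Normative: gap to target = {project_mean - target:+.1f} points")

# Impact: how much of the outcome is attributable to the project?
# (A raw difference is only suggestive; attribution depends on the design.)
print(f"Impact: difference vs. comparison group = "
      f"{project_mean - mean(comparison_scores):+.1f} points")
```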

 

Power:

An evaluation plan does not need to be complex in order to provide useful answers to the questions it poses. The power of an evaluation should be equated neither with its complexity nor with the extent to which it manipulates data statistically. A powerful evaluation uses analytical methods that fit the question posed; offer evidence to support the answer reached; rule out competing explanations; and identify modes of analysis, methods, and assumptions. Its utility is a function of the context of each question, its cost and time constraints, its design, the technical merits of its data collection and analysis, and the quality of its reporting of findings.

 

Constraints:

Among the most common constraints on conducting evaluations are: time, costs, expertise, location, and facilities. Of these constraints, time, costs, and expertise in particular serve to delimit the scope and feasibility of various possible evaluation design options.

 

Design Options:

Most evaluation plans adopt one of three design options: experimental, quasi-experimental (or non-equivalent comparison group), or pre/post. In the context of observed outcomes, the experimental option, under random assignment of participants, is most able to attribute causes to outcomes; the pre/post option – even one featuring interrupted time series analyses – is least able to make such attributions.

 

The experimental option tends to be the most complex and costliest to implement; the pre/post option tends to be the simplest and least costly. Increasingly, Federal grant programs favor the experimental evaluation design, even in areas of inquiry where it is costly and difficult to implement at the necessary scale, such as education and social services.
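
One way to see the difference among the three options is to look at the simple estimators typically associated with each design. The sketch below uses invented numbers and illustrates only the logic of each comparison, not a full statistical analysis.

```python
from statistics import mean

# Hypothetical outcome data (all numbers invented).
treat_post   = [82, 79, 88, 84]  # project participants, after
treat_pre    = [70, 68, 75, 72]  # project participants, before
control_post = [74, 72, 77, 73]  # comparison group, after
control_pre  = [69, 67, 74, 71]  # comparison group, before

# Experimental (random assignment): difference in post-project means.
# Randomization is what supports attributing the difference to the project.
experimental_estimate = mean(treat_post) - mean(control_post)

# Quasi-experimental (non-equivalent comparison group): difference-in-differences,
# which adjusts for baseline differences between the two groups.
quasi_estimate = (mean(treat_post) - mean(treat_pre)) - (mean(control_post) - mean(control_pre))

# Pre/post only: change over time with no comparison group;
# other explanations for the change cannot be ruled out.
prepost_estimate = mean(treat_post) - mean(treat_pre)

print(f"Experimental estimate:    {experimental_estimate:+.1f}")
print(f"Quasi-experimental (DiD): {quasi_estimate:+.1f}")
print(f"Pre/post change only:     {prepost_estimate:+.1f}")
```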

Once an organization has won a multi-year grant, evaluation is essential to getting it renewed year to year. One way to share evaluation findings is the Annual Performance Report (APR).

 

Although the specific contents of an APR vary from funder to funder, reports tend to follow similar structures. What follows is one typical structure (a minimal fill-in skeleton based on it is sketched after the outline):

 

Face Page or Title Page: Should identify the grant recipient, the grant maker, and the grant program. Also may need to provide the submission date and unique identifiers: grant award number, employer identification number, grantee DUNS number, and others.

 

Table of Contents: Should always include whatever major topics, in whatever predetermined sequence, a specific funder may require.

 

Executive Summary or Abstract: Should offer an overview of findings and recommendations and be no longer than one page.

 

Overall Purpose of Evaluation: Should state: why the evaluation was done; what kinds of evaluation were performed; who performed them; what kinds of decisions the evaluation was intended to inform or support; and who has made, is making, or is going to make such decisions.

 

Background or Context: Should briefly describe the organization and its history. Should describe the goals and nature of the product or program or service being evaluated. Should state the problem or need that the product or program or service is addressing. Should specify the performance indicators and desired outcomes. Should describe how the product or program or service is developed and/or delivered. Also should characterize who is developing or delivering the product or program or service.

 

Evaluation Methods: Should state the questions the evaluation is intended to answer. Also should indicate the types of data collected, what instruments were used to collect the data, and how the data were analyzed.

 

Evaluation Outcomes: Should discuss how the findings and conclusions based on the data are to be used, along with any qualifying remarks about limits on using those findings and conclusions.

 

Interpretations and Conclusions: Should flow from analysis of the evaluation data. Should be responsive to the funder’s evaluation priorities (e.g., measuring GPRA or GPRMA performance indicators in Federal grants).

 

Recommendations: Should flow from the findings and conclusions. Also should address any necessary adjustments in the product or program or service and other decisions that need to be made in order to achieve desired outcomes and accomplish goals.

 

Appendices or Attachments: Should reflect the funder’s requirements and the purposes of the specific evaluation. Appendices may include, for example: the logic model governing the project; plans for management and evaluation included in the original proposal; detailed tables of evaluation data; samples of instruments used to collect data and descriptions of the technical merits of these instruments; case studies of, or sample statements by, users of the product or program or service.
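
Because most APRs follow a predetermined outline like the one above, some teams start each year's report from a skeleton. The sketch below simply renders that outline as a fill-in template; the grantee name and award number shown are placeholders, not any funder's required format.

```python
# Minimal sketch: render the APR outline above as a plain-text fill-in skeleton.
APR_SECTIONS = [
    "Face Page or Title Page",
    "Table of Contents",
    "Executive Summary or Abstract",
    "Overall Purpose of Evaluation",
    "Background or Context",
    "Evaluation Methods",
    "Evaluation Outcomes",
    "Interpretations and Conclusions",
    "Recommendations",
    "Appendices or Attachments",
]

def apr_skeleton(grantee: str, award_number: str, year: int) -> str:
    """Return a plain-text skeleton for one year's Annual Performance Report."""
    lines = [f"Annual Performance Report: {grantee} (Award {award_number}), Year {year}", ""]
    for section in APR_SECTIONS:
        lines += [section, "    [to be completed]", ""]
    return "\n".join(lines)

# Placeholder grantee and award number for illustration only.
print(apr_skeleton("Example Grantee", "A000-00-0000", 1))
```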

 

What follows is a sample Position Description for an External Evaluator. It is one in a series of posts offering sample descriptions of positions in a fictional STEM Partnership Project.

 

Position/Time Commitment: Project-paid External Evaluator (Per Contract)

 

Name: Dr. DE Falconer

 

Nature of Position:

Provides external formative and summative evaluation services to the project consistent with its program design, evaluation plan, and federal regulations; assists in eventual submission of the project for validation as a national model.

 

Accountability:

This position is directly responsible to the Project Director.

 

Duties and Responsibilities:

  1. Design an evaluation process compatible with the project’s objectives
  2. Provide interim and final evaluation reports for each project year
  3. Conduct on-site observations and consultations
  4. Review data collection, analysis, and recording processes; recommend needed modifications
  5. Assess and revise project evaluation implementation timeline and provide a schedule for conducting data gathering, analysis, and reporting
  6. Provide technical assistance as needed
  7. Prepare and submit final evaluation reports in consultation with the Project Director
  8. Attend at least one Partnership Advisory Team (PAT) meeting to outline the evaluation process
  9. Assist in assessing project participants’ training needs at the start of the project
  10. Design project questionnaires, interview protocols, checklists, rating scales, and all other project-developed instruments in consultation with project staff and consultants
  11. Assist in identifying and characterizing the non-project comparison group
  12. Communicate regularly with the Project Director concerning the evaluation process
  13. Attend and report on meetings convened by the funding program, as needed
  14. Assist in submitting the project for validation as a national model

 

Qualifications:

  1. Master’s degree in Education or a related field; a Doctorate is preferred
  2. Knowledge of and experience in assessing projects serving disadvantaged and minority high school students, evaluating partnership programs, and managing the evaluation process
  3. Technical background in program evaluation, data collection and analysis, and reporting
  4. Familiarity with national model STEM programs
  5. Familiarity with state and national academic standards for STEM subjects

 


In its many guises, “accountability” is a modern-day watchword, one bursting with consequences. And in grant stewardship, an organization’s accountability for results – among both grant makers and grant recipients – has become compulsory and universal.

 

Background:

These days, every applicant should expect to submit an evaluation report of some kind for every grant it wins – no matter its source, amount, or duration. The reports often will discuss both a grant-funded program and its finances. Some evaluation reports may be simple and short, possibly even perfunctory; others may be complex and long, and absolutely critical to future funding.

 

Many evaluation reports will be standard-format and template-based, even those from small foundations. Use of templates and other predetermined formats will make it easier to aggregate and analyze data across cohorts of grant recipients. Such practices also will let a grant maker gauge the impacts of its grant awards and report its findings to its core constituencies.
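
A standardized report format is precisely what makes cross-grantee analysis straightforward. The sketch below is a hypothetical illustration of how a grant maker might aggregate a few template fields across a small cohort; the grantee names, fields, and figures are invented, and the example assumes the pandas library is available.

```python
import pandas as pd

# Hypothetical template fields reported by every grantee in a cohort.
reports = pd.DataFrame(
    {
        "grantee": ["Org A", "Org B", "Org C", "Org A", "Org B", "Org C"],
        "year": [1, 1, 1, 2, 2, 2],
        "participants_served": [120, 85, 200, 140, 90, 210],
        "outcome_target_met": [True, False, True, True, True, True],
    }
)

# Cohort-level view by year: total reach and the share of grantees meeting targets.
summary = reports.groupby("year").agg(
    total_served=("participants_served", "sum"),
    share_meeting_targets=("outcome_target_met", "mean"),
)
print(summary)
```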

 

Government Performance and Results:

Federal grant programs may set the tone for current trends in accountability.

 

In preparing grant proposals, as well as subsequent evaluation reports, recipients of grants from Federal agencies particularly need to be aware of the Government Performance and Results Modernization Act of 2010 (GPRMA). Every Federal grant program has its own performance indicators, pursuant both to this Act and to its decades-old predecessor, the Government Performance and Results Act (GPRA) of 1993.

 

The GPRMA’s Title 31 [Subtitle II, Chapter 11, Section 1115(b)(6)-(8)] includes instructions to federal agencies to “establish a balanced set of performance indicators to be used in measuring or assessing progress toward each performance goal, including as appropriate, customer service, efficiency, outcome, and output indicators; provide a basis for comparing actual program results with the established performance goals; and a description of how the agency will ensure the accuracy and reliability of data used to measure progress towards its performance goals…”

 

As is commonplace in Federal legislation, the GPRMA also includes helpful statutory definitions of several key terms, among which are:

  1. Outcome measure
  2. Output measure
  3. Performance goal
  4. Performance indicator
  5. Program activity
  6. Program evaluation

 

Consistent with the GPRMA, Federal agencies must establish “agency priority goals,” post them on their websites, report progress toward their performance goals on a quarterly basis, and compare actual results to desired performance levels on a quarterly basis as well. The GPRMA also requires that all agencies “for agency priority goals at greatest risk of not meeting the planned level of performance, identify prospects and strategies for performance improvement, including any needed changes to agency program activities, regulations, policies, and other activities.”
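
For grant recipients, the practical upshot is the same habit in miniature: regular comparison of actual results against planned performance levels. The sketch below, with invented indicator names, targets, and quarterly figures, shows one simple way to total quarterly actuals against an annual target and flag indicators at risk of missing the planned level.

```python
# Hypothetical annual targets and quarterly actuals for a few performance indicators.
indicators = {
    "students_completing_program": {"annual_target": 100, "quarterly_actuals": [20, 22, 24, 21]},
    "teachers_trained":            {"annual_target": 40,  "quarterly_actuals": [12, 8, 9, 7]},
    "workshops_delivered":         {"annual_target": 16,  "quarterly_actuals": [4, 4, 3, 2]},
}

for name, d in indicators.items():
    achieved = sum(d["quarterly_actuals"])       # cumulative result to date
    pct = achieved / d["annual_target"]          # share of the planned level reached
    status = "on track" if pct >= 0.9 else "at risk"  # illustrative threshold only
    print(f"{name}: {achieved}/{d['annual_target']} ({pct:.0%}) - {status}")
```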

 

Later posts will explore what such comprehensive legislation means for winning and keeping grants from the 26 grant-making federal agencies during the 2010s.
