
Tag Archives: Evaluation

This post is one of a series about what goes into proposals that win grants. Its topic is evaluation plans. Its context is the United States of America.

 

The quality of an applicant’s evaluation plan is critical to its proposal’s chances of winning a grant. The same plan is also critical to success in implementing the funded project. The evaluation plan demonstrates the applicant’s willingness to report on the benefits and results of a grant. Its content and level of detail vary with the funder’s requirements and with the nature and scope of a project’s program design.

 

Tips

 

An applicant’s evaluation plan needs to answer essential questions, such as:

  1. How will it collect the data?
  2. Who will collect the data?
  3. When will it collect the data?
  4. How often will it collect the data?
  5. How will it analyze the data?
  6. How will it report the data?
  7. When will it report the data?
  8. How often will it report the data?
  9. To whom will it report the data?
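
For readers who track their planning in a spreadsheet or script, the nine questions above can also be captured as a simple structured checklist. The sketch below is purely illustrative; the field names and sample answers are hypothetical, not a required format.

```python
# A hypothetical, minimal structure for recording answers to the nine
# questions above. Field names and sample answers are illustrative only.
evaluation_plan = {
    "data_collection": {
        "how": "pre/post surveys and review of attendance records",
        "who": "program coordinator, assisted by an external evaluator",
        "when": "at enrollment, mid-year, and end of year",
        "how_often": "quarterly",
    },
    "analysis": {
        "how": "descriptive statistics plus year-over-year comparisons",
    },
    "reporting": {
        "how": "written interim and final reports",
        "when": "within 30 days after each quarter closes",
        "how_often": "quarterly interim reports, one annual final report",
        "to_whom": ["funder", "board of directors", "project staff"],
    },
}

# Quick completeness check: every question should have a non-empty answer.
for section, answers in evaluation_plan.items():
    for question, answer in answers.items():
        assert answer, f"Missing answer: {section}/{question}"
```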

 

An applicant can strengthen its evaluation plan if it:

  1. Describes its internal evaluation team
  2. Identifies and uses a highly qualified External Evaluator
  3. Presents its External Evaluator as one of its key personnel
  4. Defines and delivers what stakeholders need or want to know
  5. Defines its data collection needs and strategies
  6. Uses summative and formative evaluation methods
  7. Uses quantitative and qualitative evaluation methods
  8. Describes technical merits – reliability and validity – of its evaluation instruments
  9. Incorporates a grant program’s performance indicators (if any)
  10. Identifies target audiences for its evaluation reports
  11. Links monitoring and evaluation to its management plan
  12. Presents its evaluation processes in chart or table format
  13. Includes a timeline or a list of evaluation milestones

 

An evaluation plan serves many roles. Among other things, it can:

  1. Measure an applicant’s progress in achieving its objectives
  2. Provide accountability for outcomes to funders and other stakeholders
  3. Assure a grant maker of an organization’s effectiveness and capacity
  4. Improve the quality and extent of implementation of key activities
  5. Increase local support for a current initiative and for its sequels
  6. Inform decisions about what works and what to do after a grant ends

 


Introduction

Revised in mid-2016, this post covers the guidance about evaluation and evaluation plans found in common grant application (CGA) forms. Its context is the United States of America.

 

Context

At least 20 associations of grant makers or other organizations in the United States publish a common grant application (CGA) form online. This post explores the instructions and questions about Evaluation and Evaluation Plans that they pose to applicants.

 

Other posts will explore the CGA in terms of required elements of proposals, applicant revenue sources, budget expense categories, and proposal length and format requirements. The end of the post explains the abbreviations that it uses.

 

Significance

Taken as a whole, the common grant application (CGA) forms provide insight into the questions and considerations that interest hundreds of private grant makers. Among other things, they shed light on evaluation plans as elements of a complete proposal. And they differentiate evaluation plans submitted as attachments from evaluation plans required as parts of complete proposal narratives.

 

Role of Evaluation

Out of the 20 providers of common grant application forms, two (or 10%) of them — DC and NY — give no instructions to applicants about Evaluation or Evaluation Plans. In addition, of the 18 CGA providers that do pose evaluation questions, four (or 22%) of them — AZ, CT, ME, and WA — do not present Evaluation as a separate proposal element.

 

In the table below, a Y (Yes) means that the CGA provider does give some instructions about Evaluation or Evaluation Plans. A plus (+) means that the CGA provider both gives instructions and includes Evaluation or Evaluation Plans — as such — as a distinct selection criterion in its instructions for proposal narratives. An asterisk (*) means that a CGA provider gives no instructions about Evaluation or Evaluation Plans.

 

  Common Grant Application Forms

   #   Provider   Instructions   Criterion
   1   NNG        Y              +
   2   AZ         Y
   3   CA         Y              +
   4   CO         Y              +
   5   CT         Y
   6   DC         *
   7   IL         Y              +
   8   ME         Y
   9   MA         Y              +
  10   MI         Y              +
  11   MN         Y              +
  12   MO         Y              +
  13   NJ         Y              +
  14   NY         *
  15   OH         Y              +
  16   PA-1       Y              +
  17   PA-2       Y              +
  18   TX         Y              +
  19   WA         Y
  20   WI         Y              +

 

All CGA States

 

Analysis

Among more frequent topics of Evaluation questions found on common grant application forms are:

  • How evaluation results will be used – NNG, CA, MI, MN, MO, OH, and WI
  • How the organization measures effectiveness – IL, ME, MO, NJ, WA, WI, and PA-2
  • How the organization defines (criteria) and measures success – IL, MI, MN, NJ, WA, WI, and PA-2
  • Anticipated results (outputs and/or outcomes) – MD, MA, NJ, WI, and PA-2
  • What assessment tools or instruments will be used — AZ, MO, PA-2, and TX
  • How the organization evaluates outcomes and/or results – CT, MD, OH, and PA-2
  • Who will be involved in evaluation — NNG, MN, and OH
  • How constituents and/or clients will be involved actively in the evaluation – MI, MN, and OH
  • How the organization measures short- and long-term effects or outcomes – MN, OH, and PA-2

 

Among less frequent topics of Evaluation questions on common grant application forms are:

  • What questions will evaluation address — NNG and AZ
  • Overall approach to evaluation — CO and WI
  • How the organization measures impact — CO and PA-1
  • Timeframe for demonstrating impact — CO and OH
  • What process and/or impact information the organization will collect – MD and TX
  • How the organization assesses overall success and effectiveness – MD and MA
  • How evaluation results will be disseminated – CA, MI, and OH

 

Among infrequent topics of Evaluation questions on common grant application forms are:

  • The organization’s plans for assessing progress toward goals – ME
  • The organization’s plans for assessing what works – ME
  • How the organization evaluates its programs – PA-1
  • How the organization has applied what it has learned from past evaluations – PA-1
  • How the organization monitors its work – WA

 

Sources

Below is a list of abbreviations used in this post. The common grant application forms are found on their providers’ websites.

  • 1.   NNG: National Network of Grantmakers
  • 2.   AZ: Arizona Grantmakers Forum
  • 3.   CA: San Diego Grantmakers
  • 4.   CO: Colorado Nonprofit Association
  • 5.   CT: Connecticut Council for Philanthropy
  • 6.   DC: Washington Regional Association of Grantmakers
  • 7.   IL: Forefront (Chicago area)
  • 8.   ME: Maine Philanthropy Center
  • 9.  MA: Associated Grantmakers
  • 10. MI: Council of Michigan Foundations
  • 11. MN: Minnesota Community Foundation
  • 12. MO: Gateway Center for Giving
  • 13. NJ: Council of New Jersey Grantmakers/Philanthropy New York
  • 14. NY: Grantmakers Forum of New York
  • 15. OH: Ohio Grantmakers Forum
  • 16. PA-1: Philanthropy Network Greater Philadelphia
  • 17. PA-2: Grantmakers of Western Pennsylvania
  • 18. TX: Central Texas Education Funders
  • 19. WA: Philanthropy Northwest
  • 20. WI: Donors Forum of Wisconsin

 

 

Introduction

Data drive the lives of grant proposals from cradle to grave. For better or worse, they do so in assessing needs, again in articulating objectives, and yet again in developing evaluation plans.

 

Performance Indicators

One defining aspect of an objective is its performance indicator or criterion for success. For each indicator, a grant recipient must collect, analyze, and report data that measure the degree to which the criterion is being met or has been met.

 

In formulating objectives in a grant proposal, selection of any given performance indicator should occur only after a careful consideration of alternatives. Each alternative affects the entire evaluation and influences its usefulness to end-users.

 

Among the useful questions that proposal planners should ask about each performance indicator are the following, grouped by topic:

 

Sources

  • What data do you need in order to measure it?
  • Does it permit you to use existing sources of data?
  • Does it require you to generate new sources of data?
  • Is an appropriate instrument available to generate such data?
  • Must you create a new instrument to generate data?

 

Constraints

  • Is it practical to create a new instrument in terms of time?
  • Is it practical to do so in terms of cost?
  • Is it practical to do so in terms of available staff expertise?
  • Is it practical to do so in terms of logistics?

 

Time

  • When will you collect data?
  • Will you collect data only before and after the project?
  • How often will you collect data during the project?
  • How long after the project ends will you continue to collect data?

 

Design

  • How will you collect data?
  • Will you use an experimental or quasi-experimental design?
  • Will you use pre/post assessments?
  • Will you develop your own instruments (e.g., surveys, questionnaires)?
  • Will you conduct interviews?
  • How will you ensure each instrument is fit for its intended purpose?

 

Expertise

  • Who will gather the data?
  • Will you hire an external evaluator?
  • What skills and expertise must the evaluator have?
  • What will it cost to use an evaluator?
  • Who will analyze and interpret the data?

 

Audiences

  • Who are your audiences for the data?
  • What do these audiences need to know?
  • Who will report the data?
  • How will report recipients use the reported data?

 

In an age of elevated accountability for results, the Evaluation Plan is one of the most critical components of a competitive grant proposal.

 

For virtually every objective one might conceive, many types of thoroughly reviewed evaluation instruments are readily available. Often these instruments are widely used to generate and monitor data and to track and report on performance outcomes; yet, they may be new to any given applicant and its grant writing team.

 

Selecting Evaluation Instruments

In selecting one or more evaluation instruments to measure a specific objective in a proposal, a smart grant writing team will first locate and study relevant technical reviews found throughout the professional literature of program evaluation. The smart team is certain to look for:

  • Evidence for the technical review writer’s objectivity
  • Evidence for the instrument’s reliability
  • Evidence for the instrument’s validity
  • Limitations on the available evidence
  • Discussions of the instrument’s intended uses
  • Prerequisites for the instrument’s effective use
  • Required frequency and mode of use
  • Time required for administration and data analysis and reporting
  • Costs associated with using the instrument

 

Finding Technical Reviews

There are many possible sources of technical reviews of evaluation instruments. One of the best and most comprehensive resources is the Mental Measurements Yearbooks, a series published both online and in print by the Buros Institute. A second resource, of more limited scope, is the ERIC Clearinghouse on Assessment and Evaluation. Nearly every specialized and science-driven discipline has its own review repository as well.

 

Reasons for Using Technical Reviews

Applicants need to persuade skeptics that their Evaluation Plan will provide evidence of program effectiveness. One way to do so is to demonstrate to wary readers that the proposed evaluation instruments are judiciously selected and are appropriate for their proposed uses. The findings published in technical reviews furnish invaluable assets for accomplishing this task. The rest hinges upon how well an applicant uses these assets in describing and justifying its Evaluation Plan.

Every grant seeker must carefully consider its options when it plans how it will collect data throughout a proposed project or initiative. Each data collection method has immediate consequences for budgets, personnel, and other aspects of the undertaking.

 

An earlier post discussed three approaches to data collection: self-reports, observation checklists, and standardized tests. This post will discuss three more: interviews, surveys/questionnaires, and reviews of existing records.

 

Interviews:

  1. How important is it to know the service recipient’s perspective?
  2. Do the interview questions already exist or must they be created?
  3. Who will create the questions and vet their suitability to intended purposes?
  4. How will the applicant ensure accurate recording of responses to interview questions?
  5. Will interviews be used together with a survey or separately?
  6. Are enough persons available to conduct the interviews?
  7. How often will interviews occur and who will be interviewed?
  8. Will interviews be in English or in other languages as well?
  9. Who will translate the interviews and ensure accuracy of the translations?

 

Surveys or Questionnaires:

  1. How important is it to know the service recipient’s perspective?
  2. How will you control for inaccurate or misleading survey responses?
  3. Does the survey already exist or must it be created?
  4. Who will create the survey and vet its suitability to intended purposes?
  5. Will the survey be all forced-choice responses or will it include open-ended prompts?
  6. Will the survey be self-administered?
  7. Who will complete the surveys?
  8. Who will collect completed surveys?
  9. Will the survey be in English or in other languages as well?
  10. Who will translate the survey responses and ensure accuracy of the translations?

 

Reviews of Existing Records:

  1. Are the records internal to the applicant organization?
  2. Are the records external (i.e., found in other organizations, such as partners)?
  3. Will the external organizations (or partners) agree to the use of their records?
  4. Who will determine whether the records are timely and relevant?
  5. Are the records quickly, easily, and readily accessible?
  6. Are the records formal and official?
  7. Are the records maintained consistently and regularly?
  8. Are the records reliable?
  9. Are protocols in place to protect and preserve privacy and confidentiality?
  10. How will the applicant ensure that existing protocols are followed?

What is a ‘performance indicator’? By one definition (found in the GPRA Modernization Act of 2010) it is “a particular value or characteristic used to measure an output or an outcome.” As a value, an indicator may be quantitative. As a characteristic, it is often quantitative, but it may also be qualitative.

 

It is often prudent to use two or three performance indicators to measure each output or outcome that is proposed to be the focus of an objective. Using one indicator alone is sometimes all that’s needed, but using more may yield findings that just one might miss.

 

Purposes of Indicators:

Use of indicators makes it possible to determine the extent to which the intended beneficiaries of a project or initiative in fact experienced a desired benefit. In turn, such determinations contribute to decisions about necessary interim or midcourse corrections and about the ultimate effectiveness of the project or initiative in achieving its objectives and attaining its goals. These determinations, as culled from evaluation reports, then contribute to decisions about continuing appropriations or allocations for specific grant programs.

 

In order to be useful in gauging the success and continued funding-worthiness of a project or initiative, performance indicators should have several attributes:

  • Specific
  • Measurable
  • Observable
  • Valid
  • Reliable
  • Pertinent

 

Indicators measure how closely a performance target has been met. If a target has been met or exceeded, based on the indicators used, the finding either implies or demonstrates a benefit. The more fully an intended benefit can be reported, the more successful a grant program will appear to be.

 

Performance Targets:

A performance target defines a criterion for success for an output or outcome. It sets a threshold for deciding whether a project or initiative is doing well or poorly in a given aspect. A usefully constructed performance target has several attributes:

  • Quantitative (number or ratio) preferably
  • Realistic or feasible
  • Reflective of experience
  • Reflective of baseline data
  • Valid
  • Reliable
  • Pertinent

In a multi-cycle project or initiative, the data collected during the first funding cycle will play several roles. They will corroborate or correct the baseline data presented in the original proposal. They will furnish a new basis for comparisons at intervals (e.g., quarterly or yearly) during a multi-cycle funding period. And they will form a possible rationale for making midcourse corrections before the initial funding cycle ends.

 

Illustration:

  • Context – a high school physics science education project
  • Desired Outcome – that participants will demonstrate increased knowledge of the scientific method as implemented in a physics lab
  • Performance Indicator – that participants will list in correct sequence the contents by topic of a complete physics lab report
  • Performance Target – that 90% of participants submit a correctly sequenced physics lab report
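
To make the illustration concrete, here is a minimal sketch of how the performance target above might be checked once data are in hand. The participant counts are hypothetical.

```python
# Hypothetical data for the illustration above: one True/False value per
# participant, indicating whether that participant submitted a correctly
# sequenced physics lab report.
reports_correct = [True] * 46 + [False] * 4   # 46 of 50 participants

target = 0.90  # performance target: 90% of participants

actual = sum(reports_correct) / len(reports_correct)
print(f"Indicator value: {actual:.0%} (target: {target:.0%})")
print("Target met" if actual >= target else "Target not met")
```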

As a proposal writer sorting through the options for how an applicant will collect data during the lifespan of a proposed project or initiative, it will help to consider:

  1. What kinds of data does the applicant need?
  2. Must the data be quantitative?
  3. Must the data be standardized?
  4. Must the data be reliable?
  5. Must the data be valid?
  6. Do the data already exist? If so, where?
  7. Does a data collection instrument already exist? If so, is it usable as-is?

 

For a given type of data – to be analyzed with a given desired or required level of rigor – the best choices of data collection methods often prove also to be the simplest, least expensive, and most direct.

 

Among the most commonly used data collection methods are: self-reports, observation checklists, standardized tests, interviews, surveys, and reviews of existing records. This post will cover the first three methods. A later post will cover the others.

 

Self-Reports:

  1. Are there questions whose responses could be used to assess a given indicator?
  2. Does a self-report provide sufficiently objective and accurate data?
  3. Will a self-report be sufficiently reliable (e.g., stable over time)?
  4. Are adequate safeguards in place to protect privacy and confidentiality?

 

Observation Checklists:

  1. Is the expected change readily observed (such as a skill or a condition)?
  2. Will using the checklist require trained observers?
  3. Are enough already trained persons available to observe events and behaviors?
  4. Can volunteers be trained and deployed as observers?
  5. Can trained observers measure the indicator without also asking questions?
  6. Are adequate safeguards in place to protect privacy and confidentiality?

 

Standardized Tests:

  1. Is the expected change related to knowledge or a skill?
  2. Is the knowledge or skill something that is already tested?
  3. What are the technical attributes of the tests already used?
  4. Can a pre-existing test be used or must a new one be created?
  5. If a new one is needed, how will its validity and reliability be verified?
  6. Can the same test be used with all test-takers?
  7. Must special accommodations be made for some test-takers?
  8. Must the applicant administer the test or do others administer it?
  9. Do others already statistically analyze the test results or must the applicant?

Data are critical to the success of a grant proposal. In addition, data are critical to the success of a funded project or initiative. Although data may be qualitative as well as quantitative, major funders tend to look for plans to generate quantitative data. While designing a data collection plan, smart grant seekers will ask how (strategies) and how often (frequencies).

 

Data Collection Strategies:

In creating a plan, consider the best ways to capture each performance indicator. Since the changes in conditions or behaviors that each indicator will measure may be subtle, incremental, or gradual, each indicator will need to be sensitive enough both to detect the changes to be measured and to determine their significance.

 

Data Collection Frequency:

In addition, consider what frequency will furnish the most useful data for monitoring and measuring expected changes. Typical frequencies are daily, monthly, quarterly, and yearly. Be certain to differentiate between outputs and outcomes, since outputs often require considerably less time to be observable and measurable than do outcomes.
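
As a rough illustration of the point about frequencies, the sketch below lays out a hypothetical collection schedule in which output indicators are collected on shorter cycles than outcome indicators. The indicator names and frequencies are assumptions for the example only.

```python
# Hypothetical data collection schedule. Output indicators are typically
# observable sooner, and therefore collected more often, than outcomes.
collection_schedule = {
    "workshops_held":          {"type": "output",  "frequency": "monthly"},
    "staff_trained":           {"type": "output",  "frequency": "quarterly"},
    "knowledge_gain_pre_post": {"type": "outcome", "frequency": "yearly"},
    "behavior_change_survey":  {"type": "outcome", "frequency": "yearly"},
}

for indicator, plan in collection_schedule.items():
    print(f"{indicator:<26} {plan['type']:<8} collected {plan['frequency']}")
```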

 

In planning for data collection, smart grant seekers will ensure that:

  1. Collecting data neither usurps nor impedes delivery of direct services
  2. Staff rigorously protect and preserve the privacy and confidentiality of data
  3. Data collection methods are time-efficient and cost-effective
  4. Data collection activities strictly observe human research standards and protocols
  5. A neutral third party evaluates and reports the collected data

 

Data Collection Caveats:

In considering the nature and uses of the data to be collected, be mindful that:

  1. Data should be aggregated and analyzed to reflect the total population served
  2. Findings should be limited to a specific project or initiative
  3. Findings should be limited to a specific population of intended beneficiaries
  4. Cause and effect claims require much more rigorous evidence than associative claims

 

A later post in this series will discuss data collection methods.

 

For decades now, outcome evaluation has been a key aspect of planning proposals that win grants. It requires a systematic analysis of projects or initiatives as they unfold over time, from inputs and activities onward through outputs and outcomes. The net result is a logic model useful for virtually every element of an evaluation plan.

 

Input:

It’s helpful to think of an input as any resource necessary to do the work of a project. As such, an input can be human labor (paid or volunteer personnel), materials (equipment and supplies), finances (existing and future funding), and facilities (locations where project activities will occur). Most inputs must be in place and available for use before a project starts.

 

Activity:

An activity is a work task associated with a project. It may occur before a project starts (e.g., planning), during a project (e.g., implementing and monitoring), or after a project ends (e.g., continuing and close-out). Some activities are singular or discrete events (e.g., yearly conferences), others are continuous processes (e.g., classroom instruction), and yet others may be both (e.g., training). Rationales for selecting specific activities should reflect documented needs and proposed objectives; often they also must reflect research into best practices.

 

Output:

An output is a unit of production or a unit of service, stated as a number. It is often the focus of a process objective. As units of production, outputs include numbers of newsletters published, numbers of blog articles posted, numbers of curricular units developed, numbers of workshops held, numbers of library books purchased, and so on. As units of service, they include numbers of students taught, numbers of staff trained, numbers of parents contacted, numbers of patients treated, and so on. Outputs do not indicate what measurable changes occurred in the users of the products or in the recipients of the services.

 

Outcome:

An outcome is an observable and measurable change that occurs in a pre-defined population of intended beneficiaries either during or after a project. Often it is expressed in terms of a change in knowledge or skill (short-term outcomes), or a change in behavior (mid-term outcomes), or a change in affect, condition, or status (long-term outcomes). An outcome objective focuses on what is expected to happen as a consequence of staff undertaking a set of activities and of intended beneficiaries participating in them.

 

Thus, increased knowledge of how to teach engineering is an expected outcome of taking a course on the subject; the number of high school teachers who completed such a course is an output. Again, a reduced annual rate of middle school bullying is an expected outcome of implementing a school-wide model anti-bullying program; holding 12 hour-long sessions for all school staff on applying the model is an output. And creating a positive school climate is an expected outcome of a comprehensive school reform initiative; installing 20 posters about civic virtues throughout every school is an output.

 

Performance Target:

As the subject of a well-formulated project objective, each desired outcome is a performance target, which typically may be stated as a number or a ratio or both. The best target is both feasible and ambitious. Grant recipients use outcome indicators to observe, measure, monitor, and evaluate their progress toward attaining each performance target.
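
For readers who like to see the logic model laid out explicitly, here is a minimal sketch of the chain described above: inputs, activities, outputs, and outcomes, with each outcome paired with an indicator and a performance target. All names and values are hypothetical, echoing the physics education illustration used earlier in this series.

```python
from dataclasses import dataclass, field

# A minimal, hypothetical representation of a logic model.

@dataclass
class Outcome:
    description: str   # the expected, measurable change in beneficiaries
    indicator: str     # how the change will be observed and measured
    target: str        # criterion for success (a number or a ratio)

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)      # resources needed to do the work
    activities: list = field(default_factory=list)  # work tasks before, during, and after
    outputs: list = field(default_factory=list)     # units of production or service
    outcomes: list = field(default_factory=list)    # changes in beneficiaries

model = LogicModel(
    inputs=["two instructors", "lab equipment", "grant funds"],
    activities=["weekly physics lab sessions", "teacher training workshops"],
    outputs=["30 lab sessions held", "50 students taught"],
    outcomes=[Outcome(
        description="increased knowledge of the scientific method",
        indicator="correctly sequenced contents of a physics lab report",
        target="90% of participants",
    )],
)
print(model)
```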

In the context of grants, evaluation is a systematic inquiry into project performance. In its formative mode, it looks at what is working and what is not; in its summative mode, it looks at what did work and what did not. In both modes, it identifies obstacles to things working well and suggests ways to overcome them. For evaluation to proceed, the events or conditions that it looks at must exist, must be describable and measurable, and must be taking place or have taken place. Its focus is actualities, not possibilities.

 

Data Collection:

Effective evaluation requires considerable planning. Its feasibility depends on access to data. Among the more important questions to consider in collecting data for evaluation are:

  • What kinds of data need to be acquired?
  • What will be the sources of data?
  • How will sources of data be sampled?
  • How will data be collected?
  • When and how often will the data be collected?
  • How will outcomes with and without a project be compared?
  • How will the data be analyzed?

 

Problem Definition:

In developing an evaluation plan, it is wise to start from the problem definition and the assessment of needs and work forward through the objectives to the evaluation methods. After all, how a problem is defined has inevitable implications for what kinds of data one must collect, the sources of data, the analyses one must do to try to answer an evaluation question, and the conclusions one can draw from the evidence.

 

Evaluations pose three kinds of questions: descriptive, normative, and impact (or cause and effect). Descriptive evaluation states what is or what has been. Normative evaluation compares what is to what should be, or what was to what should have been. Impact evaluation estimates the extent to which observed outcomes are attributable to what is being done or has been done. The options available for developing an evaluation plan vary with each kind of question.

 

Power:

An evaluation plan does not need to be complex in order to provide useful answers to the questions it poses. The power of an evaluation should be equated neither with its complexity nor with the extent to which it manipulates data statistically. A powerful evaluation uses analytical methods that fit the question posed, offer evidence to support the answer reached, rule out competing evidence, and identify modes of analysis, methods, and assumptions. Its utility is a function of the context of each question, its cost and time constraints, its design, the technical merits of its data collection and analysis, and the quality of its reporting of findings.

 

Constraints:

Among the most common constraints on conducting evaluations are: time, costs, expertise, location, and facilities. Of these constraints, time, costs, and expertise in particular serve to delimit the scope and feasibility of various possible evaluation design options.

 

Design Options:

Most evaluation plans adopt one of three design options: experimental, quasi-experimental (or non-equivalent comparison group), or pre/post. In the context of observed outcomes, the experimental option, under random assignment of participants, is most able to attribute causes to outcomes; the pre/post option – even one featuring interrupted time series analyses – is least able to make such attributions.

 

The experimental option tends to be the most complex and costliest to implement; the pre/post option tends to be the simplest and least costly. Increasingly, Federal grant programs favor the experimental evaluation design, even in areas of inquiry where it is costly and difficult to implement at the necessary scale, such as education and social services.
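
As a simple illustration of the least complex option, the sketch below runs a pre/post comparison on hypothetical scores using a paired t-test (via SciPy). It shows the mechanics only; as noted above, a pre/post design by itself is the least able to support causal attribution.

```python
# Pre/post comparison on hypothetical participant scores using a paired
# t-test. Illustrative only; a pre/post design cannot by itself attribute
# observed gains to the project.
from scipy import stats

pre_scores  = [52, 61, 48, 70, 55, 63, 58, 66, 49, 60]   # before the project
post_scores = [60, 68, 55, 75, 62, 71, 64, 70, 58, 69]   # after the project

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"Mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```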
