
Tag Archives: Evaluation

Knowing the language of project evaluation is essential for writing a winning grant proposal. Entries here run from Meta-Analysis to Validity. Their context is North America.

 

[Image: Glossary Graphic 3]

 

Below is a list of the glossary terms in this post:

 

Meta-Analysis, Milestone, Objective, Outcome, Output, Post-Assessment, Practical Significance, Pre-Assessment, Process Evaluation, Product Evaluation, Program Evaluation, Qualitative Evaluation, Quantitative Evaluation, Reliability, Report, Result, Statistical Significance, Summative Evaluation, Theory of Change, Validity

 

META-ANALYSIS: A statistical analysis of a number of separate but similar scientific studies or experiments, as available in the existing literature, in order to ascertain the statistical significance and practical significance of the combined data. Used to evaluate evidence across multiple published and unpublished studies or experiments to determine the effect sizes of interventions and to ascertain which interventions work best. Methods place available data in context, and are useful for a proposal’s Problem Statement, Research Review, and Evaluation Plan. Also see: Effect Size, Practical Significance, Statistical Significance.
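
As a rough illustration of how separate studies can be combined, the sketch below pools effect sizes with fixed-effect, inverse-variance weighting, one common meta-analytic approach; the effect sizes, variances, and choice of model are assumptions for illustration only.

```python
# Minimal sketch: pooling effect sizes from separate studies with
# fixed-effect, inverse-variance weighting. All numbers are hypothetical.
import math

# (effect size, variance of the effect size) for each study
studies = [(0.30, 0.02), (0.45, 0.05), (0.10, 0.01)]

weights = [1.0 / variance for _, variance in studies]   # weight = 1 / variance
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
standard_error = math.sqrt(1.0 / sum(weights))
z = pooled / standard_error   # test statistic for "no overall effect"

print(f"Pooled effect size: {pooled:.2f} (SE = {standard_error:.3f}, z = {z:.2f})")
```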

 

MILESTONE: A discrete event or specific accomplishment used to measure the progress or momentum of a project or initiative towards implementing its activities, accomplishing its objectives, and attaining its goals. Also see: Benchmark.

 

OBJECTIVE: A time-bound statement, framed in specific, attainable, and measurable terms, of what an applicant is going to accomplish during a project or initiative; it advances the project or initiative towards attaining its goal or goals. Objectives are indispensable and critical elements in a Work Plan, a Plan of Action, or a Program Design. Example: Each project year, 90% or more of project participants will demonstrate statistically significant gains (p < .05) in English literacy, as measured by state-mandated assessments.

 

OUTCOME: The desired and intended quantitative or qualitative end result or consequence of a set of activities undertaken to accomplish one or more objectives. It is often used as a measurement of effect rather than of effort. Examples: 75% reduction in school expulsions; 25% reduction in high school dropout rates; 25% increase in Advanced Placement course enrollments; 20% decrease in body mass index; 50% reduction in motor vehicle thefts; 20% decrease in pedestrian crosswalk fatalities.

 

OUTPUT: A tangible or quantifiable direct product of an activity. It is often used as a measurement of effort rather than of effect. Examples: Four new geography units; ten bilingual education workshops; six program newsletters; 2400 home visits; four credit-awarding webinars; a new health sciences kit.

 

POST-ASSESSMENT: Measurement of knowledge, attitudes, behaviors, and/or other attributes of a population of beneficiaries or their context. Usually occurs either after or near the end of a project or initiative. Often called a Post-Test. Also see: Pre-Assessment.

 

PRACTICAL SIGNIFICANCE: The magnitude of an observed difference, or its effect size. It measures relationships among variables in real-world settings. Sample size does not directly impact it. When obtained results of studies or experiments are large enough to be useful or meaningful in such settings, they are said to be practically significant. Also see: Effect Size, Statistical Significance.

 

PRE-ASSESSMENT: Measurement of knowledge, attitudes, behaviors, and/or other attributes of a population of beneficiaries or their context. Usually occurs either before or shortly after the start of a project or initiative. Often called a Pre-Test. Also see: Post-Assessment.

 

PROCESS EVALUATION: A rigorous investigation, measurement, and description of the actions, strategies, structures, and processes enacted and adjusted during the implementation of a project or initiative. Also see: Product Evaluation.

 

PRODUCT EVALUATION: Use of rigorous and systematic methods to investigate and describe the results of the actions, strategies, structures, and processes enacted and adjusted during the implementation of a project or initiative. Also see: Process Evaluation.

 

PROGRAM EVALUATION: Use of rigorous and systematic methods to investigate and describe the effectiveness and efficiency of a program, and of its projects or initiatives, by one or more qualified and impartial evaluators. Also see: External Evaluation, Internal Evaluation.

 

QUALITATIVE EVALUATION: Use of rigorous and systematic methods to investigate and describe properties, state, and character of observed and self-reported phenomena associated with implementing a project or initiative. Also see: Quantitative Evaluation.

 

QUANTITATIVE EVALUATION: Use of rigorous and systematic statistical methods and data analysis to track, measure, analyze, rank, rate, compare, and report the extent to which a project or initiative accomplishes its objectives and attains its goals. Also see: Qualitative Evaluation.

 

RELIABILITY: The extent to which a scale yields consistent or stable results if the same measurement is repeated a number of times. Among types of statistical reliability are: internal, external, test-retest, inter-rater, parallel forms, and split-half. Reliability coefficients are used to determine the degree of similarity of results (i.e., the range of variance) using the same scale. The range of correlation for reliability coefficients is 0 to 1. The higher the correlation, the more reliable the scale; desirable levels start at 0.85+ for high-stakes measures and 0.70+ for low-stakes measures. The Buros Mental Measurements Yearbooks are a useful source of technical reviews for more than 3,000 scales. Also see: Validity.
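
As a rough illustration, the sketch below estimates test-retest reliability as the correlation between two administrations of the same scale and compares it against the thresholds mentioned above; the scores are hypothetical.

```python
# Minimal sketch: test-retest reliability estimated as the correlation
# between two administrations of the same scale. Scores are hypothetical.
import numpy as np

first_administration = np.array([12, 15, 9, 20, 17, 14, 11, 18])
second_administration = np.array([13, 14, 10, 19, 18, 13, 12, 17])

r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability coefficient: {r:.2f}")

# Thresholds cited above: 0.85+ for high-stakes uses, 0.70+ for low-stakes uses
if r >= 0.85:
    print("Meets the commonly cited threshold for high-stakes measures.")
elif r >= 0.70:
    print("Meets the commonly cited threshold for low-stakes measures.")
else:
    print("Falls below commonly cited reliability thresholds.")
```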

 

REPORT: A document, created for a specific target audience, that describes the context and methods used to monitor and evaluate a grant-funded project or initiative, and that presents the evaluator’s findings and conclusions. May propose midcourse adjustments. May suggest strategies to overcome obstacles to more effective evaluation. May suggest ways to use evaluation findings to guide subsequent plans, actions, and strategies.

 

RESULT: A measurable consequence of implementing a project or initiative, but not necessarily the intended and anticipated focus of an objective or a goal. Examples: Improved academic achievement in a science enrichment project; increased property values in a graffiti abatement project; reduced PRAMS-indicated risk behaviors in an infant mortality reduction project.

 

STATISTICAL SIGNIFICANCE: A determination, made against a probability threshold (usually p < 0.05), that an observed relationship between two or more variables is unlikely to have occurred by chance; it is computed from the means and standard deviations of data samples. The relationship may be strong or weak. Sample size directly impacts it, as does sampling error. When the observed difference in studies or experiments is large enough to conclude that it is unlikely to have occurred by chance, results are said to be statistically significant. Also see: Practical Significance.
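
As a rough illustration, the sketch below runs a two-sample t-test on two small hypothetical groups and judges the resulting p-value against the conventional 0.05 threshold; the scores, group labels, and choice of test are assumptions for illustration only.

```python
# Minimal sketch: a two-sample t-test comparing two hypothetical groups,
# judged against the conventional p < 0.05 threshold. All scores are hypothetical.
from scipy import stats

treatment_scores = [78, 85, 92, 88, 75, 91, 84, 79, 87, 90]
comparison_scores = [72, 80, 76, 83, 70, 78, 74, 81, 77, 75]

t_statistic, p_value = stats.ttest_ind(treatment_scores, comparison_scores)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("The observed difference is unlikely to be due to chance (statistically significant).")
else:
    print("The observed difference is not statistically significant at p < 0.05.")
```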

 

SUMMATIVE EVALUATION: The measurement of the extent or degree of success of a project or initiative; it offers conclusions about what worked (and what did not) and it makes recommendations about what to keep, what to change, and what to discontinue; it occurs at the end of each project year and after the grant-funded project ends. Also called Outcome Evaluation or Product Evaluation. Also see: Formative Evaluation.

 

THEORY OF CHANGE: A framework of assumptions, beliefs, and principles used to guide implementation of a project or initiative in pursuit of desirable, measurable, and observable changes among its beneficiaries. May encompass reviews of evidence of need, evidence for selection of effective and cost-efficient strategies, and evidence for anticipated outcomes.

 

VALIDITY: The extent to which a scale measures what it claims to measure. Among types of statistical validity are: predictive, postdictive, population, ecological, concurrent, face, criterion, internal, external, construct, content, and factorial. The range of correlation for the validity coefficient is 0 to 1. The higher the correlation, the more valid the scale; in practice, validity coefficients rarely exceed 0.50. The Buros Mental Measurements Yearbooks are a useful source of technical reviews for more than 3,000 scales. Also see: Reliability.

 

This post concludes a two-part Glossary for Evaluation Plans. It is a companion to a seven-part Glossary of Budget Development and a five-part Glossary of Proposal Development.

 


Knowing the language of project evaluation is essential for writing a winning grant proposal. Entries here cover Activity to Logic Model. Their context is North America.

 

[Image: Glossary Graphic 3]

 

Below is a list of the glossary terms in this post:

 

Activity, Assessment, Baseline, Benchmark, Data Analysis, Data Collection, Effect, Effect Size, Effectiveness, Effort, Evaluation, Evaluation Design, Evaluation Plan, Evaluation Team, Evaluator, Experimental Design, External Evaluation, Final Evaluation, Formative Evaluation, Goal, Impact, Implementation, Indicator, Input, Interim Evaluation, Internal Evaluation, Logic Model

 

ACTIVITY: A step or action taken to meet one or more objectives of a project or initiative. An activity may occur any number of times. It may be singular or it may be part of a series or sequence of related activities. Also see: Objective.

 

ASSESSMENT: A formal or informal measurement of the status of one or more issues of interest to an individual or organization. Alternatively, the means or instrument used to measure the status of one or more such issues. Often an assessment repeats at regular intervals, e.g., each month or each year. Also see: Post-Assessment, Pre-Assessment.

 

BASELINE: A starting point for a goal or objective, measured before a project or initiative begins. Provides the point of reference for interim and pre-post comparisons made at intervals during and after implementation.

 

BENCHMARK: (1) An external frame of reference or a state of affairs used as a source or basis of comparison and as a target towards which a grant-funded project or initiative aspires.  Example:  Becoming a nationally validated model program. (2) An internal periodic or interim target towards which a grant-funded project or initiative aspires. Example: Yearly gains in increments of 0.5 normal curve equivalents (NCE) over the baseline performance on some measure in a multiyear project.

 

DATA ANALYSIS: A methodical study of the structures, dynamics, relationships, and elements characterizing the inputs and outputs of a project or initiative in order to learn how they relate to each other, how they interact with each other, and how they contribute to each other.

 

DATA COLLECTION: Methodical accumulation and organization of statistical and empirical evidence about the structures, dynamics, inter-relationships, and elements characterizing the inputs and outputs of a project or initiative.

 

EFFECT: A change, established through observation and/or measurement, that is attributable to one or more efforts undertaken during a project or initiative; a consequence or outcome of those efforts. Also see: Effort, Outcome.

 

EFFECT SIZE: A standardized mean difference between two groups, which quantifies the size of the difference between them. It is calculated as the mean of the experimental group minus the mean of the control group, divided by the standard deviation of the population from which the two groups were drawn. Because it is expressed in standard deviation units, it can be interpreted much like a Z score on a standard normal distribution. Also see: Practical Significance.
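
Written as a formula, restating the definition above (where SD is the standard deviation of the population from which both groups were drawn):

```latex
% Standardized mean difference, restating the definition above
d = \frac{\bar{X}_{\mathrm{experimental}} - \bar{X}_{\mathrm{control}}}{SD_{\mathrm{population}}}
```

A value of d = 0.5, for example, means the experimental group's mean sits half a standard deviation above the control group's mean.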

 

EFFECTIVENESS: In terms of inputs and outputs, the extent to which a project or initiative has made progress towards its goals and objectives or the extent to which it has met them.

 

EFFORT: A step or action implemented in order to move a project or initiative towards reaching an outcome, objective, or goal. Also see: Effect, Outcome.

 

EVALUATION: Analysis of the degree to which an applicant, as a grant recipient, implements its activities, accomplishes its objectives, and attains its goals. Also an analysis of obstacles to progress and strategies used to overcome them. Often describes processes and products, inputs and outputs. May use qualitative and/or quantitative measures. Applies standards of evaluation design to identify outcomes of statistical and practical significance. May be formative and/or summative. Also see: Formative Evaluation, Summative Evaluation.

 

EVALUATION DESIGN: A time-bounded plan for evaluating a project. Plan components include goals, objectives, activities, timeline, and strategies. Also called an Evaluation Plan.

 

EVALUATION PLAN: An applicant’s proposed scheme, method, or program for collecting, measuring, analyzing, and reporting data about the progress and outcomes of a project or initiative, and for ascertaining, describing, and confirming the degree to which it has accomplished its objectives and attained its goals. Often encompasses using interim findings to contribute to ongoing improvement of a project or initiative during implementation. Indicates what will be done, who will do it with what and where, when and how often it will be done, at what cost, and (often) why it will be done. Usually forms a critical part of a proposal’s narrative and budget.

 

EVALUATION TEAM: A group of persons who work as a unit in evaluating a project or initiative. May include persons paid with non-grant funds and persons affiliated with organizations other than the applicant or grant recipient. Also see: Evaluator, External Evaluation, Internal Evaluation.

 

EVALUATOR: The person or firm hired or retained to evaluate a project or initiative. Also see: Evaluation Team, External Evaluation, Internal Evaluation.

 

EXPERIMENTAL DESIGN: A quantitative research method in which one group (the control group) is held constant while another group (the experimental or comparison group) receives the intervention being studied, so that the outcomes of the two groups can be compared.

 

EXTERNAL EVALUATION: Evaluation that uses persons who work outside the organization of the grant recipient or fiscal agent and/or its partnering agencies. Also called Third-Party Evaluation. Also see: Internal Evaluation.

 

FINAL EVALUATION: The last evaluation of a grant-funded project or initiative; it occurs after its last year or otherwise at its end. Also see: Interim Evaluation.

 

FORMATIVE EVALUATION: Monitoring that occurs at set intervals during a project or initiative; it yields feedback that often leads to adjustments and corrective action during the course of that project or initiative; also may be called Process Evaluation. Also see: Summative Evaluation.

 

GOAL: A desired long-term accomplishment or a general and desired direction of change, often stated in abstract or global terms. The goal normally reflects the mission of the applicant and/or the funding purposes of a specific grant maker. Also see: Objective.

 

IMPACT: A tangible or quantifiable long-term outcome of a grant-funded project or initiative, often framed in broad terms as a desirable or ideal condition or state of affairs and as a consequence or effect attributable to accomplishing one or more of its objectives.

 

IMPLEMENTATION: The process of carrying out the activities specifically described in a proposal, along with any others (e.g., fiscal management and performance monitoring) that a funder explicitly requires or that are deemed necessary, often implicitly and as a matter of course, to the success of a project or initiative.

 

INDICATOR: (1) A measure of the need for some aspect of a project or initiative. (2) A measure of the direct outcomes and results of a project or initiative for its participants and for its intended beneficiaries; in this latter sense, it also may be called a Performance Measure or a Performance Indicator. Also see: Need.

 

INPUT: A tangible or quantifiable resource invested in the pursuit of the specific outcomes and impacts sought in a grant-funded project or initiative. Examples: Time, expertise, funding, personnel, supplies, facilities, and technologies.

 

INTERIM EVALUATION: Evaluation at pre-defined intervals before the end of a grant-funded project or initiative. Also called Midcourse Evaluation. Also see: Final Evaluation.

 

INTERNAL EVALUATION: Evaluation that uses one or more persons who work within the organization of the grant recipient or fiscal agent and/or its partnering agencies. Also see: External Evaluation.

 

LOGIC MODEL: A schematic or graphical representation, often presented as a flow chart or as a table, which shows how inputs and activities interact and lead to outputs, outcomes, and impacts. Example: A table that presents goals, objectives, key activities, key personnel, evaluation measures, a timeline, and associated costs – all in one synoptic document.

 

A later post will conclude this two-part Glossary on Evaluation Plans.

This post is about evaluation plans. It is part of a series about what goes into proposals that win grants. Its context is the United States of America.

 

Evaluation Plans

 

The quality of an applicant’s evaluation plan is critical to whether its proposal wins a grant. The same plan is also critical to success in implementing a funded project. The evaluation plan demonstrates the applicant’s willingness to report on the benefits and results of a grant. Its content and level of detail vary with the funder’s requirements and with the nature and scope of a project’s program design.

 

[Image: Data Collection Graphic]

 

Essential Questions

 

An applicant’s evaluation plan needs to answer essential questions, such as:

  • How will it collect or gather data?
  • Who will collect the data?
  • When will it collect the data?
  • How often will it collect the data?
  • How will it analyze the data?
  • How will it report the data?
  • When will it report the data?
  • How often will it report the data?
  • To whom will it report the data?

 

Strategies

 

An applicant can strengthen its evaluation plan if it:

  • Describes its internal evaluation team
  • Identifies and uses a highly qualified External Evaluator
  • Presents its External Evaluator as one of its key personnel
  • Defines and delivers what stakeholders need or want to know
  • Defines its data collection needs and strategies
  • Uses summative and formative evaluation methods
  • Uses quantitative and qualitative evaluation methods
  • Describes technical merits – reliability and validity – of its evaluation instruments
  • Incorporates a grant program’s performance indicators (if any)
  • Identifies target audiences for its evaluation reports
  • Links monitoring and evaluation to its management plan
  • Presents its evaluation processes in chart or table format
  • Includes a timeline or a list of evaluation milestones

 

Roles

 

Among the many roles of an evaluation plan are to:

  • Measure an applicant’s progress in achieving its objectives
  • Provide accountability for outcomes to funders and other stakeholders
  • Assure a grant maker of an organization’s effectiveness and capacity
  • Improve the quality and extent of implementation of key activities
  • Increase local support for a current initiative and for its sequels
  • Inform decisions about what works and what to do after a grant ends

 

Introduction

Revised in mid-2016, this post covers guidance about the roles of evaluation in common grant application (CGA) forms. Its context is the United States of America.

 

Context

At least 20 associations of grant makers or other organizations in the United States publish a common grant application (CGA) form online. This post explores the instructions and questions about Evaluation and Evaluation Plans that they pose to applicants.

 

Other posts will explore the CGA in terms of required elements of proposals, applicant revenue sources, budget expense categories, and proposal length and format requirements. The end of the post explains the abbreviations that it uses.

 

Significance

Taken as a whole, the common grant application (CGA) forms provide insight into the questions and considerations that interest hundreds of private grant makers. Among other things, they shed light on evaluation plans as elements of a complete proposal. And they differentiate evaluation plans submitted as attachments from evaluation plans required as parts of complete proposal narratives.

 

Role of Evaluation

Out of the 20 providers of common grant application forms, two (or 10%) of them — DC and NY — give no instructions to applicants about Evaluation or Evaluation Plans. In addition, of the 18 CGA providers that do pose evaluation questions, four (or 22%) of them — AZ, CT, ME, and WA — do not present Evaluation as a separate proposal element.

 

In the table below, a Y (Yes) means that the CGA provider does give some instructions about Evaluation or Evaluation Plans. A plus (+) means that the CGA provider both gives instructions and includes Evaluation or Evaluation Plans — as such — as a distinct selection criterion in its instructions for proposal narratives. An asterisk (*) means that a CGA provider gives no instructions about Evaluation or Evaluation Plans.

 

  Common Grant Application Forms

  #    Provider   Instructions   Criterion
  1    NNG        Y              +
  2    AZ         Y
  3    CA         Y              +
  4    CO         Y              +
  5    CT         Y
  6    DC         *
  7    IL         Y              +
  8    ME         Y
  9    MA         Y              +
  10   MI         Y              +
  11   MN         Y              +
  12   MO         Y              +
  13   NJ         Y              +
  14   NY         *
  15   OH         Y              +
  16   PA-1       Y              +
  17   PA-2       Y              +
  18   TX         Y              +
  19   WA         Y
  20   WI         Y              +

 

[Image: All CGA States]

 

Analysis

Among more frequent topics of Evaluation questions found on common grant application forms are:

  • How evaluation results will be used — NNG, CA, MI, MN, MO, OH, and WI
  • How the organization measures effectiveness – IL, ME, MO, NJ, WA, WI, and PA-2
  • How the organization defines (criteria) and measures success – IL, MI, MN, NJ, WA, WI, and PA-2
  • Anticipated results (outputs and/or outcomes) – MD, MA, NJ, WI, and PA-2
  • What assessment tools or instruments will be used — AZ, MO, PA-2, and TX
  • How the organization evaluates outcomes and/or results – CT, MD, OH, and PA-2
  • Who will be involved in evaluation — NNG, MN, and OH
  • How constituents and/or clients will be involved actively in the evaluation – MI, MN, and OH
  • How the organization measures short- and long-term effects or outcomes – MN, OH, and PA-2

 

Among less frequent topics of Evaluation questions on common grant application forms are:

  • What questions will evaluation address — NNG and AZ
  • Overall approach to evaluation — CO and WI
  • How the organization measures impact — CO and PA-1
  • Timeframe for demonstrating impact — CO and OH
  • What process and/or impact information the organization will collect – MD and TX
  • How the organization assesses overall success and effectiveness – MD and MA
  • How evaluation results will be disseminated – CA, MI, and OH

 

Among infrequent topics of Evaluation questions on common grant application forms are:

  • The organization’s plans for assessing progress toward goals – ME
  • The organization’s plans for assessing what works – ME
  • How the organization evaluates its programs – PA-1
  • How the organization has applied what it has learned from past evaluations – PA-1
  • How the organization monitors its work – WA

 

Sources

Below is a list of abbreviations used in this post. The common grant application forms are found on their providers’ websites.

  • 1.   NNG: National Network of Grantmakers
  • 2.   AZ: Arizona Grantmakers Forum
  • 3.   CA: San Diego Grantmakers
  • 4.   CO: Colorado Nonprofit Association
  • 5.   CT: Connecticut Council for Philanthropy
  • 6.   DC: Washington Regional Association of Grantmakers
  • 7.   IL: Forefront (Chicago area)
  • 8.   ME: Maine Philanthropy Center
  • 9.  MA: Associated Grantmakers
  • 10. MI: Council of Michigan Foundations
  • 11. MN: Minnesota Community Foundation
  • 12. MO: Gateway Center for Giving
  • 13. NJ: Council of New Jersey Grantmakers/Philanthropy New York
  • 14. NY: Grantmakers Forum of New York
  • 15. OH: Ohio Grantmakers Forum
  • 16. PA-1: Philanthropy Network Greater Philadelphia
  • 17. PA-2: Grantmakers of Western Pennsylvania
  • 18. TX: Central Texas Education Funders
  • 19. WA: Philanthropy Northwest
  • 20. WI: Donors Forum of Wisconsin

 

 

Data drive grant proposals from cradle to grave. They drive needs assessments. They drive the articulation of performance objectives. And they drive the design of evaluation plans.

 

Performance Indicators 

One defining aspect of an objective is its performance indicator. The indicator is an objective’s criterion for success. For each performance indicator, a grant recipient must collect, analyze, and report data that measure the degree to which the criterion is being met or has been met.

 

Selecting strong indicators demands planning. In formulating every objective in a grant proposal, selection of any given performance indicator should occur only after a careful consideration of alternatives. Each alternative affects the entire evaluation and influences its usefulness to end-users.

 

[Image: Indicators Graphic]

 

Among useful questions that proposal planners should ask for each performance indicator are ones that address Sources, Constraints, Time, Design, Expertise, and Audiences:

 

Sources

  • What data do we need in order to measure it?
  • Does it permit us to use existing sources of data?
  • Does it require us to generate new sources of data?
  • Is an appropriate instrument available to generate such data?
  • Must we create a new instrument to generate data?

 

Constraints

  • Is it practical to create a new instrument in terms of time?
  • Is it practical to do so in terms of cost?
  • Is it practical to do so in terms of available staff expertise?
  • Is it practical to do so in terms of logistics?

 

Time

  • When will we collect data?
  • Will we collect data only before and after the project?
  • How often will we collect data during the project?
  • How long after the project ends will we continue to collect data?

 

Design

  • How will we collect data?
  • Will we use an experimental or quasi-experimental design?
  • Will we use pre/post assessments?
  • Will we develop our own instruments (e.g., surveys, questionnaires)?
  • Will we conduct interviews?
  • How will we ensure each instrument is fit for its intended purpose?

 

Expertise

  • Who will gather the data?
  • Will we hire an external evaluator?
  • What skills and expertise must the evaluator have?
  • What will it cost to use an evaluator?
  • Who will analyze and interpret the data?

 

Audiences

  • Who are our audiences for the data?
  • What do these audiences need to know?
  • Who will report the data?
  • How will report recipients use the reported data?

 

Often, the success or failure of a grant proposal hinges on how an applicant answers these questions.

 

 

 

In an age of elevated accountability for results, the Evaluation Plan is one of the most critical components of a competitive grant proposal.

 

For virtually every objective one might conceive, many types of thoroughly reviewed evaluation instruments are readily available. Often these instruments are widely used to generate and monitor data and to track and report on performance outcomes; yet, they may be new to any given applicant and its grant writing team.

 

Selecting Evaluation Instruments

 

In selecting one or more evaluation instruments to measure a specific objective in a proposal, an effective grant writing team will first locate and study relevant technical reviews found throughout the professional literature of program evaluation.

 

[Image: Evaluation Tools Graphic]

 

The effective team is certain to look for:

 

  • Evidence for the technical review writer’s objectivity
  • Evidence for the instrument’s reliability
  • Evidence for the instrument’s validity
  • Limitations on the available evidence
  • Discussions of the instrument’s intended uses
  • Prerequisites for the instrument’s effective use
  • Required frequency and mode of use
  • Time required for administration and data analysis and reporting
  • Costs associated with using the instrument

 

Finding Technical Reviews

 

There are many possible sources of technical reviews of evaluation instruments. One of the best and most comprehensive resources is the Mental Measurements Yearbooks, a series published both online and in print by the Buros Institute. A second resource, of more limited scope, is the ERIC Clearinghouse on Assessment and Evaluation. Nearly every specialized and science-driven discipline will have its own review repository as well.

 

Reasons for Using Technical Reviews

 

Applicants need to persuade skeptics that their Evaluation Plan will provide evidence of program effectiveness. One way to do so is to demonstrate to wary readers that the proposed evaluation instruments are judiciously selected and are appropriate for their proposed uses. The findings published in technical reviews furnish invaluable assets for accomplishing this task. The rest hinges upon how well an applicant uses these assets in describing and justifying its Evaluation Plan.

Every grant seeker must carefully consider its options when it plans how it will collect data throughout a proposed project or initiative. Each data collection method has immediate consequences for budgets, personnel, and other aspects of the undertaking.

 

An earlier post discussed three approaches to data collection: self-reports, observation checklists, and standardized tests. This post will discuss three more: interviews, surveys/questionnaires, and reviews of existing records.

 

[Image: Data Collection Graphic 2]

 

Interviews

 

  1. How important is it to know the service recipient’s perspective?
  2. Do the interview questions already exist or must they be created?
  3. Who will create the questions and vet their suitability to intended purposes?
  4. How will the applicant ensure accurate recording of responses to interview questions?
  5. Will interviews be used together with a survey or separately?
  6. Are enough persons available to conduct the interviews?
  7. How often will interviews occur and who will be interviewed?
  8. Will interviews be in English or in other languages as well?
  9. Who will translate the interviews and ensure accuracy of the translations?

 

Surveys or Questionnaires

 

  1. How important is it to know the service recipient’s perspective?
  2. How will you control for inaccurate or misleading survey responses?
  3. Does the survey already exist or must it be created?
  4. Who will create the survey and vet its suitability to intended purposes?
  5. Will the survey be all forced-choice responses or will it include open-ended prompts?
  6. Will the survey be self-administered?
  7. Who will complete the surveys?
  8. Who will collect completed surveys?
  9. Will the survey be in English or in other languages as well?
  10. Who will translate the survey responses and ensure accuracy of the translations?

 

Reviews of Existing Records

 

  1. Are the records internal to the applicant organization?
  2. Are the records external (i.e., found in other organizations, such as partners)?
  3. Will the external organizations (or partners) agree to the use of their records?
  4. Who will determine whether the records are timely and relevant?
  5. Are the records quickly, easily, and readily accessible?
  6. Are the records formal and official?
  7. Are the records maintained consistently and regularly?
  8. Are the records reliable?
  9. Are protocols in place to protect and preserve privacy and confidentiality?
  10. How will the applicant ensure that existing protocols are followed?

What is a ‘performance indicator’? By one definition (found in the GPRA Modernization Act of 2010) it is “a particular value or characteristic used to measure an output or an outcome.” As a value, an indicator is quantitative. As a characteristic, it is often quantitative, but it may also be qualitative.

 

It is often prudent to use two or three performance indicators to measure each output or outcome that is proposed to be the focus of an objective. Using one indicator alone is sometimes all that’s needed, but using more may yield findings that just one might miss.

 

Purposes of Indicators

 

Use of indicators makes it possible to determine the extent to which the intended beneficiaries of a project or initiative in fact experienced a desired benefit. In turn, such determinations contribute to decisions about necessary interim or midcourse corrections and about the ultimate effectiveness of the project or initiative in achieving its objectives and attaining its goals. These determinations, as culled from evaluation reports, then contribute to decisions about continuing appropriations or allocations for specific grant programs.

 

In order to be useful in gauging the success and continued funding-worthiness of a project or initiative, performance indicators should have several attributes:

 

  • Specific
  • Measurable
  • Observable
  • Valid
  • Reliable
  • Pertinent

 

Indicators measure how closely a performance target has been met. If a target has been met or exceeded, based on the indicators used, the finding either implies or demonstrates a benefit. The more fully an intended benefit can be reported, the more successful a grant program will appear to be.

 

[Image: Performance Indicators Graphic]

 

 

Performance Targets

 

 

A performance target defines a criterion for success for an output or outcome. It sets a threshold for deciding whether a project or initiative is doing well or poorly in a given aspect. A usefully constructed performance target has several attributes:

 

  • Preferably quantitative (a number or a ratio)
  • Realistic or feasible
  • Reflective of experience
  • Reflective of baseline data
  • Valid
  • Reliable
  • Pertinent

 

In a multi-cycle project or initiative, the data collected during the first funding cycle will play several roles. They will corroborate or correct the baseline data presented in the original proposal. They will furnish a new basis for comparisons at intervals (e.g., quarterly or yearly) during a multi-cycle funding period. And they will form a possible rationale for making midcourse corrections before the initial funding cycle ends.

 

[Image: Performance Targets Graphic]

 

Example

 

  • Context – a high school physics science education project
  • Desired Outcome – that participants will demonstrate increased knowledge of the scientific method as implemented in a physics lab
  • Performance Indicator – that participants will list in correct sequence the contents by topic of a complete physics lab report
  • Performance Target – that 90% of participants submit a correctly sequenced physics lab report (see the sketch below)
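
A minimal sketch of how such a target might be checked once data are collected; the participant counts are hypothetical.

```python
# Minimal sketch: checking the hypothetical 90% performance target above.
# The participant counts are hypothetical.
participants_total = 120
correctly_sequenced_reports = 110

attainment = correctly_sequenced_reports / participants_total
target = 0.90

print(f"Attainment: {attainment:.1%} (target: {target:.0%})")
print("Target met." if attainment >= target else "Target not met.")
```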

As a proposal writer sorting through the options for how an applicant will collect data during the lifespan of a proposed project or initiative, it will help to consider:

 

  1. What kinds of data does the applicant need?
  2. Must the data be quantitative?
  3. Must the data be standardized?
  4. Must the data be reliable?
  5. Must the data be valid?
  6. Do the data already exist? If so, where?
  7. Does a data collection instrument already exist? If so, is it usable as-is?

 

For a given type of data – to be analyzed with a given desired or required level of rigor – the best choices of data collection methods often prove also to be the simplest, least expensive, and most direct.

 

Among the most commonly used data collection methods are: self-reports, observation checklists, standardized tests, interviews, surveys, and reviews of existing records. This post will cover the first three methods. A later post will cover the others.

 

[Image: Data Collection Graphic 1]

 

Self-Reports

 

  1. Are there questions whose responses could be used to assess a given indicator?
  2. Does a self-report provide sufficiently objective and accurate data?
  3. Will a self-report be sufficiently reliable (e.g., stable over time)?
  4. Are adequate safeguards in place to protect privacy and confidentiality?

 

Observation Checklists

 

  1. Is the expected change readily observed (such as a skill or a condition)?
  2. Will using the checklist require trained observers?
  3. Are enough already trained persons available to observe events and behaviors?
  4. Can volunteers be trained and deployed as observers?
  5. Can trained observers measure the indicator without also asking questions?
  6. Are adequate safeguards in place to protect privacy and confidentiality?

 

Standardized Tests

 

  1. Is the expected change related to knowledge or a skill?
  2. Is the knowledge or skill something that is already tested?
  3. What are the technical attributes of the tests already used?
  4. Can a pre-existing test be used or must a new one be created?
  5. If a new one is needed, how will its validity and reliability be verified?
  6. Can the same test be used with all test-takers?
  7. Must special accommodations be made for some test-takers?
  8. Must the applicant administer the test or do others administer it?
  9. Do others already statistically analyze the test results or must the applicant?

Data are critical to the success of a grant proposal. In addition, data are critical to the success of a funded project or initiative. Although data may be qualitative as well as quantitative, major funders tend to look for plans to generate quantitative data. While designing a data collection plan, effective grant seekers will ask how (strategies) and how often (frequencies).

 

[Image: Data Collection Graphic]

 

Strategies

 

In planning for data collection, effective grant seekers will ensure that:

 

  1. Collecting data neither usurps nor impedes delivery of direct services
  2. Staff rigorously protect and preserve the privacy and confidentiality of data
  3. Data collection methods are time-efficient and cost-effective
  4. Data collection activities strictly observe human research standards and protocols
  5. A neutral third party evaluates and reports the collected data

 

In creating a plan, consider the best ways to capture each performance indicator. Since the changes in conditions or behaviors that each indicator will measure may be subtle, incremental, or gradual, each indicator will need to be sensitive enough both to detect the changes to be measured and to determine their practical and statistical significance.

 

Frequency

 

In addition, consider what frequency will furnish the most useful data for monitoring and measuring expected changes. Typical frequencies are daily, monthly, quarterly, and yearly. Be certain to differentiate between outputs and outcomes, since outputs often require considerably less time to be observable and measurable than do outcomes.

 

Caveats

 

In considering the nature and uses of the data to be collected, be mindful that:

  1. Data should be aggregated and analyzed to reflect the total population served
  2. Findings should be limited to a specific project or initiative
  3. Findings should be limited to a specific population of intended beneficiaries
  4. Cause and effect claims require much more rigorous evidence than associative claims

 

Two later posts in this series discuss data collection methods.

 
