
Monthly Archives: May 2012

Every grant seeker must carefully consider its options when it plans how it will collect data throughout a proposed project or initiative. Each data collection method has immediate consequences for budgets, personnel, and other aspects of the undertaking.

 

An earlier post discussed three approaches to data collection: self-reports, observation checklists, and standardized tests. This post will discuss three more: interviews, surveys/questionnaires, and reviews of existing records.

 

[Graphic: Data Collection]

 

Interviews

 

  1. How important is it to know the service recipient’s perspective?
  2. Do the interview questions already exist or must they be created?
  3. Who will create the questions and vet their suitability to intended purposes?
  4. How will the applicant ensure accurate recording of responses to interview questions?
  5. Will interviews be used together with a survey or separately?
  6. Are enough persons available to conduct the interviews?
  7. How often will interviews occur and who will be interviewed?
  8. Will interviews be in English or in other languages as well?
  9. Who will translate the interviews and ensure accuracy of the translations?

 

Surveys or Questionnaires

 

  1. How important is it to know the service recipient’s perspective?
  2. How will you control for inaccurate or misleading survey responses?
  3. Does the survey already exist or must it be created?
  4. Who will create the survey and vet its suitability to intended purposes?
  5. Will the survey be all forced-choice responses or will it include open-ended prompts?
  6. Will the survey be self-administered?
  7. Who will complete the surveys?
  8. Who will collect completed surveys?
  9. Will the survey be in English or in other languages as well?
  10. Who will translate the survey responses and ensure accuracy of the translations?

 

Reviews of Existing Records

 

  1. Are the records internal to the applicant organization?
  2. Are the records external (i.e., found in other organizations, such as partners)?
  3. Will the external organizations (or partners) agree to the use of their records?
  4. Who will determine whether the records are timely and relevant?
  5. Are the records quickly, easily, and readily accessible?
  6. Are the records formal and official?
  7. Are the records maintained consistently and regularly?
  8. Are the records reliable?
  9. Are protocols in place to protect and preserve privacy and confidentiality?
  10. How will the applicant ensure that existing protocols are followed?

What is a ‘performance indicator’? By one definition (found in the GPRA Modernization Act of 2010), it is “a particular value or characteristic used to measure an output or an outcome.” As a value, an indicator is quantitative; as a characteristic, it is often quantitative but may also be qualitative.

 

It is often prudent to use two or three performance indicators to measure each output or outcome that is proposed to be the focus of an objective. Using one indicator alone is sometimes all that’s needed, but using more may yield findings that just one might miss.

 

Purposes of Indicators

 

Use of indicators makes it possible to determine the extent to which the intended beneficiaries of a project or initiative in fact experienced a desired benefit. In turn, such determinations contribute to decisions about necessary interim or midcourse corrections and about the ultimate effectiveness of the project or initiative in achieving its objectives and attaining its goals. These determinations, as culled from evaluation reports, then contribute to decisions about continuing appropriations or allocations for specific grant programs.

 

In order to be useful in gauging the success and continued funding-worthiness of a project or initiative, performance indicators should have several attributes:

 

  • Specific
  • Measurable
  • Observable
  • Valid
  • Reliable
  • Pertinent

 

Indicators measure how closely a performance target has been met. If a target has been met or exceeded, based on the indicators used, the finding either implies or demonstrates a benefit. The more fully an intended benefit can be documented and reported, the more successful a grant program will appear to be.

 

[Graphic: Performance Indicators]

 

 

Performance Targets

 

 

A performance target defines a criterion for success for an output or outcome. It sets a threshold for deciding whether a project or initiative is doing well or poorly in a given aspect. A usefully constructed performance target has several attributes:

 

  • Preferably quantitative (a number or a ratio)
  • Realistic or feasible
  • Reflective of experience
  • Reflective of baseline data
  • Valid
  • Reliable
  • Pertinent

 

In a multi-cycle project or initiative, the data collected during the first funding cycle will play several roles. It will corroborate or correct the baseline data presented in the original proposal. It will furnish a new basis for comparisons at intervals (e.g., quarterly or yearly) during a multi-cycle funding period. It will form a possible rationale for making midcourse corrections before the initial funding cycle ends.

 

[Graphic: Performance Targets]

 

Example

 

  • Context – a high school physics education project
  • Desired Outcome – that participants will demonstrate increased knowledge of the scientific method as implemented in a physics lab
  • Performance Indicator – that participants will list, in correct sequence, the contents by topic of a complete physics lab report
  • Performance Target – that 90% of participants submit a correctly sequenced physics lab report (a sketch of checking such a target against indicator data follows this list)
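To make the example concrete, here is a minimal sketch, in Python, of how indicator data might be checked against the 90% performance target. The topic sequence, the participant submissions, and all figures are invented for illustration; they are not drawn from any actual project.

```python
# Hypothetical illustration only: the topic sequence and submissions below are
# invented to show how an indicator rolls up into a performance target check.

CORRECT_SEQUENCE = [
    "Title", "Question", "Hypothesis", "Materials",
    "Procedure", "Data", "Analysis", "Conclusion",
]
TARGET = 0.90  # 90% of participants submit a correctly sequenced report

# Each entry is one participant's submitted report outline, listed by topic.
submissions = [
    CORRECT_SEQUENCE[:],                              # correctly sequenced
    ["Title", "Hypothesis", "Question", "Materials",  # two topics swapped
     "Procedure", "Data", "Analysis", "Conclusion"],
    CORRECT_SEQUENCE[:],
    CORRECT_SEQUENCE[:],
]

correct = sum(1 for s in submissions if s == CORRECT_SEQUENCE)
rate = correct / len(submissions)   # the indicator value for this cycle

print(f"Indicator: {rate:.0%} of participants submitted a correctly sequenced report")
print("Target met" if rate >= TARGET else "Target not met")
```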

As a proposal writer sorting through the options for how an applicant will collect data during the lifespan of a proposed project or initiative, it will help to consider:

 

  1. What kinds of data does the applicant need?
  2. Must the data be quantitative?
  3. Must the data be standardized?
  4. Must the data be reliable?
  5. Must the data be valid?
  6. Do the data already exist? If so, where?
  7. Does a data collection instrument already exist? If so, is it usable as-is?

 

For a given type of data – to be analyzed with a given desired or required level of rigor – the best choices of data collection methods often prove also to be the simplest, least expensive, and most direct.

 

Among the most commonly used data collection methods are: self-reports, observation checklists, standardized tests, interviews, surveys, and reviews of existing records. This post will cover the first three methods. A later post will cover the others.

 

[Graphic: Data Collection]

 

Self-Reports

 

  1. Are there questions whose responses could be used to assess a given indicator?
  2. Does a self-report provide sufficiently objective and accurate data?
  3. Will a self-report be sufficiently reliable (e.g., stable over time)?
  4. Are adequate safeguards in place to protect privacy and confidentiality?

 

Observation Checklists

 

  1. Is the expected change readily observed (such as a skill or a condition)?
  2. Will using the checklist require trained observers?
  3. Are enough already trained persons available to observe events and behaviors?
  4. Can volunteers be trained and deployed as observers?
  5. Can trained observers measure the indicator without also asking questions?
  6. Are adequate safeguards in place to protect privacy and confidentiality?

 

Standardized Tests

 

  1. Is the expected change related to knowledge or a skill?
  2. Is the knowledge or skill something that is already tested?
  3. What are the technical attributes of the tests already used?
  4. Can a pre-existing test be used or must a new one be created?
  5. If a new one is needed, how will its validity and reliability be verified?
  6. Can the same test be used with all test-takers?
  7. Must special accommodations be made for some test-takers?
  8. Must the applicant administer the test or do others administer it?
  9. Do others already statistically analyze the test results or must the applicant?

Data are critical to the success of a grant proposal and, later, to the success of the funded project or initiative. Although data may be qualitative as well as quantitative, major funders tend to look for plans that generate quantitative data. In designing a data collection plan, effective grant seekers ask how the data will be collected (strategies) and how often (frequencies).

 

[Graphic: Data Collection]

 

Strategies

 

In planning for data collection, effective grant seekers will ensure that:

 

  1. Collecting data neither usurps nor impedes delivery of direct services
  2. Staff rigorously protect and preserve the privacy and confidentiality of data
  3. Data collection methods are time-efficient and cost-effective
  4. Data collection activities strictly observe human research standards and protocols
  5. A neutral third party evaluates and reports the collected data

 

In creating a plan, consider the best ways to capture each performance indicator. Since the changes in conditions or behaviors that each indicator will measure may be subtle, incremental, or gradual, each indicator must be sensitive enough both to detect the changes being measured and to support judgments about their practical and statistical significance.

 

Frequency

 

In addition, consider what frequency will furnish the most useful data for monitoring and measuring expected changes. Typical frequencies are daily, monthly, quarterly, and yearly. Be certain to differentiate between outputs and outcomes, since outputs often require considerably less time to be observable and measurable than do outcomes.
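As a minimal sketch of that frequency decision, the Python fragment below rolls hypothetical daily service records up into quarterly output counts; the record format, field names, and figures are assumptions made for illustration only.

```python
# Minimal sketch: aggregating hypothetical daily service records into quarterly
# output counts. Field names and values are invented for illustration.
from collections import Counter
from datetime import date

records = [
    {"date": date(2012, 1, 15), "students_served": 12},
    {"date": date(2012, 2, 3),  "students_served": 9},
    {"date": date(2012, 4, 20), "students_served": 15},
    {"date": date(2012, 5, 11), "students_served": 10},
]

def quarter(d: date) -> str:
    """Label a date with its calendar quarter, e.g. '2012 Q2'."""
    return f"{d.year} Q{(d.month - 1) // 3 + 1}"

quarterly_outputs = Counter()
for r in records:
    quarterly_outputs[quarter(r["date"])] += r["students_served"]

for q, total in sorted(quarterly_outputs.items()):
    print(q, "students served:", total)  # an output; outcomes take longer to emerge
```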

 

Caveats

 

In considering the nature and uses of the data to be collected, be mindful that:

  1. Data should be aggregated and analyzed to reflect the total population served
  2. Findings should be limited to a specific project or initiative
  3. Findings should be limited to a specific population of intended beneficiaries
  4. Cause and effect claims require much more rigorous evidence than associative claims

 

Two later posts in this series discuss data collection methods.

 

In the Federal Register, on 28 February 2012, the Office of Management and Budget (OMB) and the 26 Federal grant-making agencies issued an Advance Notice of Proposed Guidance (ANPG) seeking public comments. After a year of preparation, it came as a first shot across the bow in reforming Federal policies and procedures for managing grants so that they reflect more closely the priorities and realities of 21st-century grants management.

 

[Graphic: Federal Grant Reforms]

 

Scope of Reforms

 

The Federal initiative contained dozens of proposed reforms to the policies and procedures that govern Federal grants.

 

After an extension of the original deadline, the public comment period has ended.

 

Proposed Reforms

 

Among the many proposed reforms are several that promise significant changes:

  • Notifying potential applicants 90 days before proposals are due
  • Using project impact and fiscal agent risk analysis – as well as merit – as review factors
  • Updating regulations to reflect the use of the Internet in grants administration
  • Consolidating and repurposing the Catalog of Federal Domestic Assistance (CFDA) as the Catalog of Federal Financial Assistance (CFFA)
  • Promulgating a single circular for grants management

 

Such changes, if enacted, will be significant for all seekers and recipients of Federal grant awards. A later post will discuss their potential significance for writers of proposals for Federal funding.

The last decade has seen non-stop reform of the principles governing Federal grants management. Many of these principles are not new and should be familiar to experienced grant seekers. What is new is the effort to apply them, pervasively, across the 26 Federal agencies that make grants, and that effort poses significant challenges to many grant-seeking organizations.

 

Strategies

 

In such a reform milieu, what can an organization do to ensure that it continues to win grants? These possible action steps come to mind:

  1. Learn the principles that are intended to govern how the Federal agencies manage the grant making process
  2. Consider in what ways and to what degree such principles already operate or can be applied in the local context
  3. Adopt these principles and then build them into planning, proposing, and executing new projects and initiatives to Federal grant makers

 

Principles

 

At their core, many of the key principles guiding federal grant reform are principles of organizational management. Since virtually every proposal needs a work plan, an evaluation plan, a management plan, and a continuation plan, these are fine places for applicants to discuss how such principles either already guide or will guide local activities.

 

The table aligns current Federal management reform principles with the specific proposal narrative components where an applicant might choose to incorporate them. Although the phrasing derives from publications of the Office of Management and Budget (OMB), the concepts are intended to embrace all Federal agencies.

 

[Table graphic: Reform principles aligned with proposal narrative components]

 

This is one of a series of posts about Federal grant making.

 

For decades now, outcome evaluation has been a key aspect of planning proposals that win grants. It requires a systematic analysis of projects or initiatives as they unfold over time, from inputs and activities onward through outputs and outcomes. The net result is a logic model useful for virtually every element of an evaluation plan.

 

[Graphic: Evaluation Plans]

 

Input

 

It’s helpful to think of an input as any resource necessary to do the work of a project. As such, an input can be human labor (paid or volunteer personnel), materials (equipment and supplies), finances (existing and future funding), and facilities (locations where project activities will occur). Most inputs must be in place and available for use before a project starts.

 

Activity

 

An activity is a work task associated with a project. It may occur before a project starts (e.g., planning), during a project (e.g., implementing and monitoring), or after a project ends (e.g., continuing and close-out). Some activities are singular or discrete events (e.g., yearly conferences), others are continuous processes (e.g., classroom instruction), and yet others may be both (e.g., training). Rationales for selecting specific activities should reflect documented needs and proposed objectives; often they also must reflect research into best practices.

 

Output

 

An output is a unit of production or a unit of service, stated as a number. It is often the focus of a process objective. As units of production, outputs include numbers of newsletters published, numbers of blog articles posted, numbers of curricular units developed, numbers of workshops held, numbers of library books purchased, and so on. As units of service, they include numbers of students taught, numbers of staff trained, numbers of parents contacted, numbers of patients treated, and so on. Outputs do not indicate what measurable changes occurred in the users of the products or in the recipients of the services.

 

Outcome

 

An outcome is an observable and measurable change that occurs in a pre-defined population of intended beneficiaries either during or after a project. Often it is expressed in terms of a change in knowledge or skill (short-term outcomes), or a change in behavior (mid-term outcomes), or a change in affect, condition, or status (long-term outcomes). An outcome objective focuses on what is expected to happen as a consequence of staff undertaking a set of activities and of intended beneficiaries participating in them.

 

Thus, increased knowledge of how to teach engineering is an expected outcome of taking a course in it; the number of high school teachers who completed such a course is an output. Again, a reduced annual rate of middle school bullying is an expected outcome of implementing a school-wide model anti-bullying program; holding 12 hour-long sessions for all school staff on applying the model is an output. And creating a positive school climate is an expected outcome of a comprehensive school reform initiative; installing 20 posters about civic virtues throughout every school is an output.

 

Performance Target

 

As the subject of a well-formulated project objective, each desired outcome is a performance target, which typically may be stated as a number or a ratio or both. The best target is both feasible and ambitious. Grant recipients use outcome indicators to observe, measure, monitor, and evaluate their progress toward attaining each performance target.
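As a minimal sketch, the Python fragment below captures the logic-model elements described above (inputs, activities, outputs, outcomes, performance targets) as a simple data structure an evaluation planner might fill in. The field names are assumptions; the sample values echo the anti-bullying example from this post, with placeholders where the post gives no specific numbers.

```python
# Minimal sketch: the logic-model elements described above as a data structure.
# Field names are assumptions; sample values echo the anti-bullying example,
# with placeholders where the post gives no specific numbers.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list        # resources in place before the project starts
    activities: list    # work tasks before, during, or after the project
    outputs: list       # units of production or service, stated as numbers
    outcomes: list      # measurable changes in the intended beneficiaries
    targets: dict = field(default_factory=dict)  # outcome -> performance target

anti_bullying = LogicModel(
    inputs=["trained facilitators", "program materials", "meeting space"],
    activities=["implement a school-wide model anti-bullying program"],
    outputs=["12 hour-long sessions held for all school staff"],
    outcomes=["reduced annual rate of middle school bullying"],
    targets={"reduced annual rate of middle school bullying":
             "set a numeric target from baseline data"},
)

print(anti_bullying.outcomes)
```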

In the context of grants, evaluation is a systematic inquiry into project performance. In its formative mode, it looks at what is working and what is not; in its summative mode, it looks at what did work and what did not. In both modes, it identifies obstacles to things working well and suggests ways to overcome them. For evaluation to proceed, the events or conditions that it looks at must exist, must be describable and measurable, and must be taking place or have taken place. Its focus is actualities, not possibilities.

 

[Graphic: Design Options]

 

Data Collection

 

Effective evaluation requires considerable planning. Its feasibility depends on access to data. Among the more important questions to consider in collecting data for evaluation are:

  • What kinds of data need to be acquired?
  • What will be the sources of data?
  • How will sources of data be sampled?
  • How will data be collected?
  • When and how often will the data be collected?
  • How will outcomes with and without a project be compared?
  • How will the data be analyzed?

 

Problem Definition

 

In developing an evaluation plan, it is wise to start from the problem definition and the assessment of needs and work forward through the objectives to the evaluation methods. After all, how a problem is defined has inevitable implications for what kinds of data one must collect, the sources of data, the analyses one must do to try to answer an evaluation question, and the conclusions one can draw from the evidence.

 

Evaluations pose three kinds of questions: descriptive, normative, and impact (or cause and effect). Descriptive evaluation states what is or what has been. Normative evaluation compares what is to what should be, or what was to what should have been. Impact evaluation estimates the extent to which observed outcomes are attributable to what is being done or has been done. The options available for developing an evaluation plan vary with each kind of question.

 

Power

 

An evaluation plan does not need to be complex in order to provide useful answers to the questions it poses. The power of an evaluation should be equated neither with its complexity nor with the extent to which it manipulates data statistically. A powerful evaluation uses analytical methods that fit the question posed, offers evidence to support the answer reached, rules out competing explanations, and identifies its modes of analysis, methods, and assumptions. Its utility is a function of the context of each question, its cost and time constraints, its design, the technical merits of its data collection and analysis, and the quality of its reporting of findings.

 

Constraints

 

Among the most common constraints on conducting evaluations are: time, costs, expertise, location, and facilities. Of these constraints, time, costs, and expertise in particular serve to delimit the scope and feasibility of various possible evaluation design options.

 

Design Options

 

Most evaluation plans adopt one of three design options: experimental, quasi-experimental (or non-equivalent comparison group), or pre/post. In the context of observed outcomes, the experimental option, with random assignment of participants, is most able to attribute outcomes to causes; the pre/post option – even one featuring interrupted time series analyses – is least able to make such attributions.

 

The experimental option tends to be the most complex and costliest to implement; the pre/post option tends to be the simplest and least costly. Increasingly, Federal grant programs favor the experimental evaluation design, even in areas of inquiry where it is costly and difficult to implement at the necessary scale, such as education and social services.
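As a minimal sketch of the simplest of these options, the Python fragment below compares invented pre- and post-project scores for the same participants with a paired t-test (using SciPy). A real evaluation would also address sampling, attrition, and effect size, and, as noted above, a pre/post gain alone cannot attribute the change to the project.

```python
# Minimal sketch of a pre/post design: compare scores collected from the same
# participants before and after the project. All scores are invented.
from scipy import stats

pre  = [62, 55, 70, 58, 64, 61, 59, 66]
post = [71, 60, 78, 63, 70, 68, 64, 74]

mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
t_stat, p_value = stats.ttest_rel(post, pre)   # paired t-test on the gains

print(f"Mean gain: {mean_gain:.1f} points")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")
# Without a comparison group, a gain like this shows change but cannot, by
# itself, attribute that change to the project.
```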

 

This post is one of a series about designing and using Evaluation Plans in grant proposals.

Once an organization has won a multi-year grant, evaluation is essential to getting it renewed year to year. One way to share evaluation findings is the Annual Performance Report (APR).

 

[Graphic: Evaluation Reports]

 

Although the specific contents of an APR vary from funder to funder, they also tend to have similar structures from report to report. What follows is one typical structure:

 

Cover Sheet (or Title Page)

 

Identify the grant recipient, the grant maker, and the grant program. Also provide, as required, the submission date and unique identifiers: the grant award number, employer identification number, grantee DUNS number, and others. Like the rest of a report, a cover sheet is often an online or PDF form.

 

Table of Contents

 

Always include whatever major topics, in whatever predetermined sequence that a specific funder requires.

 

Abstract (or Executive Summary)

 

Offer an overview of findings and recommendations in no more than two pages. Also highlight project goals and significant accomplishments, and identify the population served.

 

Overall Purpose of Evaluation

 

Indicate: why the evaluation was done; what kinds of evaluation were performed; who performed them; what kinds of decisions the evaluation was intended to inform or support; and who has made, is making, or is going to make such decisions.

 

Background (or Context)

 

Briefly describe the organization and its history. Describe the goals and nature of the product or program or service being evaluated. State the problem or need that the product or program or service is addressing. Specify the performance indicators and desired outcomes. Describe how the product or program or service is developed and/or delivered. Characterize who is developing or delivering the product or program or service.

 

Evaluation Methods

 

State the questions the evaluation is intended to answer. Indicate the types of data collected, what instruments were used to collect the data, and how the data were analyzed.

 

Evaluation Outcomes

 

Discuss how the findings and conclusions based on the data are to be used, and note any limits on their use.

 

Interpretations and Conclusions

 

Flow from analysis of the evaluation data. Be responsive to the funder’s evaluation priorities (e.g., measuring GPRA or GPRAMA performance indicators in Federal grants).

 

Recommendations

 

Flow from the findings and conclusions. Address any necessary adjustments in the product or program or service and other decisions that need to be made in order to achieve desired outcomes and accomplish goals.

 

Appendices (or Attachments)

 

Reflect the funder’s requirements and the purposes of the specific evaluation. Appendices may include, for example: the logic model governing the project; plans for management and evaluation included in the original proposal; detailed tables of evaluation data; samples of instruments used to collect data and descriptions of the technical merits of these instruments; case studies of, or sample statements by, users of the product or program or service.

 

 

It’s far too late to fix defects in a grant proposal when it must be sent on its way before the end of the day. The best time to review it for quality, completeness, and internal consistency is well before its submission deadline. At least a week ahead of deadline is a reasonable target.

 

[Graphic: Review Checklists]

 

Red Team Reviews

 

Internal red team reviews, before submitting a proposal, improve the likelihood that a proposal will win a grant. The more eyes that see a proposal, the stronger it should become. The best eyes are those of educated and articulate persons who were not directly involved in creating it. Lack of prior involvement enhances the objectivity of their critiques. Ideally, they will see the entire proposal, but almost any objective pre-submittal review is useful.

 

Checklists

 

Checklists expedite the red team review process. In order to minimize half-point item ratings, the entire sample checklist’s maximum score is 200 points. This post covers two sample checklist sections; other posts in the series cover the rest.

 

Maximum sub-scores vary by section, as noted. To weight each item, divide a section’s maximum sub-score by the number of items in that section (5 or 10).
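As a minimal sketch of that arithmetic, the Python fragment below spreads each section’s maximum sub-score evenly across its items and rolls hypothetical yes/no ratings up into a total review score. The yes-counts are invented; only the section maximums and item counts come from the checklists that follow.

```python
# Minimal sketch: per-item weights and a total review score for the two
# checklist sections shown below. The yes-counts are hypothetical ratings.
sections = {
    "Continuation Plan":           {"max": 10, "items": 5},
    "Budget and Budget Narrative": {"max": 30, "items": 10},
}
yes_counts = {"Continuation Plan": 4, "Budget and Budget Narrative": 8}

total = 0.0
for name, spec in sections.items():
    per_item = spec["max"] / spec["items"]   # e.g., 10 / 5 = 2 points per item
    score = per_item * yes_counts[name]
    total += score
    print(f"{name}: {score:g} of {spec['max']} points")

possible = sum(s["max"] for s in sections.values())
print(f"Total review score: {total:g} of {possible}")
```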

 

CONTINUATION PLAN

YES/NO | SCORE | PROPOSAL ATTRIBUTE
______ | _____ | Presents a plan to obtain further funding
______ | _____ | Identifies potential and secured sources of future funding
______ | _____ | Minimizes reliance on future grant support
______ | _____ | Is supported with letters of commitment
______ | _____ | Letters of commitment state specific commitment amounts
Total: Maximum is 10 points.

 

BUDGET and BUDGET NARRATIVE

YES/NO | SCORE | PROPOSAL ATTRIBUTE
______ | _____ | Is consistent with the proposal narrative
______ | _____ | Provides sufficient detail for every line item
______ | _____ | Limits line items to within the proposed budget period
______ | _____ | Justifies and clearly explains all cost items
______ | _____ | Identifies sources of funding for all line items
______ | _____ | Breaks out fringe benefits from salaries
______ | _____ | Pay rates are consistent with staff roles and qualifications
______ | _____ | Includes indirect charges when appropriate
______ | _____ | Separates non-personnel cost items from personnel items
______ | _____ | Budget clearly relates to project’s proposed work plan
Total: Maximum is 30 points.

 

RATING A PROPOSAL’S READINESS FOR SUBMISSION:

CONTINUATION PLAN                

Reviewer:                  Maximum: 10

BUDGET and BUDGET NARRATIVE                                                     

Reviewer:                   Maximum: 30

Total Possible Score:  40

Total Review Score:

 
