
Monthly Archives: May 2012

Every grant seeker must carefully consider its options when it plans how it will collect data throughout a proposed project or initiative. Each data collection method has immediate consequences for budgets, personnel, and other aspects of the undertaking.

 

An earlier post discussed three approaches to data collection: self-reports, observation checklists, and standardized tests. This post will discuss three more: interviews, surveys/questionnaires, and reviews of existing records.

 

Interviews:

  1. How important is it to know the service recipient’s perspective?
  2. Do the interview questions already exist or must they be created?
  3. Who will create the questions and vet their suitability to intended purposes?
  4. How will the applicant ensure accurate recording of responses to interview questions?
  5. Will interviews be used together with a survey or separately?
  6. Are enough persons available to conduct the interviews?
  7. How often will interviews occur and who will be interviewed?
  8. Will interviews be in English or in other languages as well?
  9. Who will translate the interviews and ensure accuracy of the translations?

 

Surveys or Questionnaires:

  1. How important is it to know the service recipient’s perspective?
  2. How will you control for inaccurate or misleading survey responses?
  3. Does the survey already exist or must it be created?
  4. Who will create the survey and vet its suitability to intended purposes?
  5. Will the survey be all forced-choice responses or will it include open-ended prompts?
  6. Will the survey be self-administered?
  7. Who will complete the surveys?
  8. Who will collect completed surveys?
  9. Will the survey be in English or in other languages as well?
  10. Who will translate the survey responses and ensure accuracy of the translations?

 

Reviews of Existing Records:

  1. Are the records internal to the applicant organization?
  2. Are the records external (i.e., found in other organizations, such as partners)?
  3. Will the external organizations (or partners) agree to the use of their records?
  4. Who will determine whether the records are timely and relevant?
  5. Are the records quickly, easily, and readily accessible?
  6. Are the records formal and official?
  7. Are the records maintained consistently and regularly?
  8. Are the records reliable?
  9. Are protocols in place to protect and preserve privacy and confidentiality?
  10. How will the applicant ensure that existing protocols are followed?

What is a ‘performance indicator’? By one definition (found in the GPRA Modernization Act of 2010) it is “a particular value or characteristic used to measure an output or an outcome.” As a value, an indicator may be quantitative. As a characteristic, it is often quantitative, but it may also be qualitative.

 

It is often prudent to use two or three performance indicators to measure each output or outcome that is proposed to be the focus of an objective. Using one indicator alone is sometimes all that’s needed, but using more may yield findings that just one might miss.

 

Purposes of Indicators:

Use of indicators makes it possible to determine the extent to which the intended beneficiaries of a project or initiative in fact experienced a desired benefit. In turn, such determinations contribute to decisions about necessary interim or midcourse corrections and about the ultimate effectiveness of the project or initiative in achieving its objectives and attaining its goals. These determinations, as culled from evaluation reports, then contribute to decisions about continuing appropriations or allocations for specific grant programs.

 

In order to be useful in gauging the success and continued funding-worthiness of a project or initiative, performance indicators should have several attributes:

  • Specific
  • Measurable
  • Observable
  • Valid
  • Reliable
  • Pertinent

 

Indicators measure how closely a performance target has been met. If a target has been met or exceeded, based on the indicators used, the finding either implies or demonstrates a benefit. The more fully an intended benefit can be reported, the more successful a grant program will appear to be.

 

Performance Targets:

A performance target defines a criterion for success for an output or outcome. It sets a threshold for deciding whether a project or initiative is doing well or poorly in a given aspect. A usefully constructed performance target has several attributes:

  • Preferably quantitative (a number or ratio)
  • Realistic or feasible
  • Reflective of experience
  • Reflective of baseline data
  • Valid
  • Reliable
  • Pertinent

In a multi-cycle project or initiative, the data collected during the first funding cycle will play several roles. They will corroborate or correct the baseline data presented in the original proposal. They will furnish a new basis for comparisons at intervals (e.g., quarterly or yearly) during the multi-cycle funding period. And they will form a possible rationale for making midcourse corrections before the initial funding cycle ends.

 

Illustration:

  • Context – a high school physics education project
  • Desired Outcome – that participants will demonstrate increased knowledge of the scientific method as implemented in a physics lab
  • Performance Indicator – that participants will list in correct sequence the contents by topic of a complete physics lab report
  • Performance Target – that 90% of participants submit a correctly sequenced physics lab report
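
To make the arithmetic behind such a target concrete, here is a minimal sketch in Python of how a grant recipient might check collected data against it. The counts and variable names are hypothetical and used only for illustration.

    # Hypothetical illustration: did 90% of participants submit a
    # correctly sequenced physics lab report?
    TARGET = 0.90                 # performance target, stated as a ratio
    participants = 48             # number of participants observed (invented)
    correct_reports = 44          # reports with sections in correct sequence (invented)

    achieved = correct_reports / participants
    print(f"Indicator value: {achieved:.1%} (target {TARGET:.0%})")
    print("Target met" if achieved >= TARGET else "Target not met")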

As a proposal writer sorting through the options for how an applicant will collect data during the lifespan of a proposed project or initiative, it will help to consider:

  1. What kinds of data does the applicant need?
  2. Must the data be quantitative?
  3. Must the data be standardized?
  4. Must the data be reliable?
  5. Must the data be valid?
  6. Do the data already exist? If so, where?
  7. Does a data collection instrument already exist? If so, is it usable as-is?

 

For a given type of data – to be analyzed with a given desired or required level of rigor – the best choices of data collection methods often prove also to be the simplest, least expensive, and most direct.

 

Among the most commonly used data collection methods are: self-reports, observation checklists, standardized tests, interviews, surveys, and reviews of existing records. This post will cover the first three methods. A later post will cover the others.

 

Self-Reports:

  1. Are there questions whose responses could be used to assess a given indicator?
  2. Does a self-report provide sufficiently objective and accurate data?
  3. Will a self-report be sufficiently reliable (e.g., stable over time)?
  4. Are adequate safeguards in place to protect privacy and confidentiality?

 

Observation Checklists:

  1. Is the expected change readily observed (such as a skill or a condition)?
  2. Will using the checklist require trained observers?
  3. Are enough already trained persons available to observe events and behaviors?
  4. Can volunteers be trained and deployed as observers?
  5. Can trained observers measure the indicator without also asking questions?
  6. Are adequate safeguards in place to protect privacy and confidentiality?

 

Standardized Tests:

  1. Is the expected change related to knowledge or a skill?
  2. Is the knowledge or skill something that is already tested?
  3. What are the technical attributes of the tests already used?
  4. Can a pre-existing test be used or must a new one be created?
  5. If a new one is needed, how will its validity and reliability be verified?
  6. Can the same test be used with all test-takers?
  7. Must special accommodations be made for some test-takers?
  8. Must the applicant administer the test or do others administer it?
  9. Do others already statistically analyze the test results or must the applicant?

Data are critical to the success of a grant proposal. In addition, data are critical to the success of a funded project or initiative. Although data may be qualitative as well as quantitative, major funders tend to look for plans to generate quantitative data. While designing a data collection plan, smart grant seekers will ask how (strategies) and how often (frequencies).

 

Data Collection Strategies:

In creating a plan, consider the best ways to capture each performance indicator. Since the changes in conditions or behaviors that each indicator will measure may be subtle, incremental, or gradual, each indicator will need to be sensitive enough both to detect the changes to be measured and to determine their significance.
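
As a rough, hypothetical illustration of detecting a change and judging its significance, the sketch below compares invented pre- and post-program scores for the same participants using a paired t-test (via SciPy). The scores, sample size, and 0.05 significance level are assumptions for illustration, not recommendations.

    from scipy.stats import ttest_rel   # paired-samples t-test

    # Hypothetical pre/post scores for the same ten participants
    pre  = [52, 61, 47, 58, 66, 50, 55, 49, 63, 57]
    post = [58, 64, 55, 62, 70, 54, 61, 50, 69, 60]

    mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
    t_stat, p_value = ttest_rel(post, pre)

    print(f"Mean change: {mean_change:+.1f} points")
    print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Change is unlikely to be due to chance alone (at the 0.05 level)")
    else:
        print("Change could plausibly be due to chance; the indicator may be too coarse")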

 

Data Collection Frequency:

In addition, consider what frequency will furnish the most useful data for monitoring and measuring expected changes. Typical frequencies are daily, monthly, quarterly, and yearly. Be certain to differentiate between outputs and outcomes, since outputs often require considerably less time to be observable and measurable than do outcomes.

 

In planning for data collection, smart grant seekers will ensure that:

  1. Collecting data neither usurps nor impedes delivery of direct services
  2. Staff rigorously protect and preserve the privacy and confidentiality of data
  3. Data collection methods are time-efficient and cost-effective
  4. Data collection activities strictly observe human research standards and protocols
  5. A neutral third party evaluates and reports the collected data

 

Data Collection Caveats:

In considering the nature and uses of the data to be collected, be mindful that:

  1. Data should be aggregated and analyzed to reflect the total population served
  2. Findings should be limited to a specific project or initiative
  3. Findings should be limited to a specific population of intended beneficiaries
  4. Cause and effect claims require much more rigorous evidence than associative claims

 

A later post in this series will discuss data collection methods.

 

Sooner or later, many grant seekers visit a Foundation Center Cooperating Collection to do an online prospect search. After spending an hour or less, they often leave smiling broadly, having just sent long lists of leads to their e-mail accounts. But what do they do next?

 

Prospect Research:

How do experienced grant seekers make sense of their sometimes lengthy lists? How do they decide which leads are worth pursuing and which ones are dead-ends? As they study each grant maker profile, they pose and answer questions such as those presented here.

 

  • Physical Location – Is the foundation local? How near is it to the applicant?
  • Website – Is there one?
  • Limitations – Does the applicant fall within any one or more of them?
  • Type of Grantmaker – Is the grant maker an independent foundation? Is it a family foundation? Is it a corporate charitable giving program?
  • IRS 990-PF Forms – What years are available? What is the most current year available?
  • Deadline(s) – Is there one or more? When is it or when are they?
  • Purposes/Activities – Do they match the applicant’s purposes and intended activities?
  • Fields of Interest – Do they match the applicant’s interests?
  • Trustees/Directors – Does the applicant have a connection to any of them?
  • Financial Data – Are the asset amounts more than $100,000? Is total giving more than $50,000?
  • Selected Grants – Does the grant maker profile list any grant award selections? If so, for what amounts, and to what kinds of organizations were they made?
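
The same winnowing can also be imagined mechanically. The sketch below, using entirely invented grant maker records and cut-offs, filters a list of leads against a few of the descriptors above (limitations, assets, total giving, and fields of interest). It is a toy example, not a substitute for reading each profile.

    # Hypothetical prospect-research filter; records and thresholds are invented
    leads = [
        {"name": "Alpha Fund", "assets": 250_000, "total_giving": 80_000,
         "fields": {"education", "youth"}, "excludes_applicant": False},
        {"name": "Beta Trust", "assets": 40_000, "total_giving": 5_000,
         "fields": {"arts"}, "excludes_applicant": False},
        {"name": "Gamma Foundation", "assets": 900_000, "total_giving": 120_000,
         "fields": {"education"}, "excludes_applicant": True},
    ]

    applicant_fields = {"education"}

    strong_leads = [
        g for g in leads
        if not g["excludes_applicant"]        # applicant not ruled out by stated limitations
        and g["assets"] > 100_000             # asset amounts above $100,000
        and g["total_giving"] > 50_000        # total giving above $50,000
        and g["fields"] & applicant_fields    # overlapping fields of interest
    ]

    for g in strong_leads:
        print("Pursue:", g["name"])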

 

A later post will discuss how and why answers to these specific questions will help potential applicants to winnow the grains of strong leads from the chaff of weak ones.

 

 

In the Federal Register, on 28 February 2012, the Office of Management and Budget (OMB) and the 26 Federal grant-making agencies issued an Advance Notice of Proposed Guidance (ANPG) seeking public comments. After a year of preparation, it came as a first shot across the bow in reforming Federal grants-management policies and procedures so that they more closely reflect the priorities and realities of the 21st century.

 

Scope of Reform:

The Federal initiative contained dozens of proposed reforms to the administrative requirements, cost principles, and audit requirements that govern Federal grants.

 

After an extension of the original deadline, the public comment period has ended.

 

Proposed Reforms:

Among the many proposed reforms are several that promise significant changes:

  • Notifying potential applicants 90 days before proposals are due
  • Using project impact and fiscal agent risk analysis – as well as merit – as review factors
  • Updating regulations to reflect the use of the Internet in grants administration
  • Consolidating and repurposing the Catalog of Federal Domestic Assistance (CFDA) as the Catalog of Federal Financial Assistance (CFFA)
  • Promulgating a single circular for grants management

 

Such changes, if enacted, will be significant for all seekers and recipients of Federal grant awards. A later post will discuss their potential significance for writers of proposals for Federal funding.

The last decade has seen non-stop reform of the principles governing Federal grants management. Many of these principles are not new and should be familiar to experienced grant seekers. What is new is the effort to apply them pervasively across the 26 Federal agencies that make grants, and that effort poses significant challenges to many grant-seeking organizations.

 

Strategies:

In such a reform milieu, what can an organization do to ensure that it continues to win grants? These possible action steps come to mind:

  1. Learn the principles that are intended to govern how the Federal agencies manage the grant making process
  2. Consider in what ways and to what degree such principles already operate or can be applied in the local context
  3. Adopt these principles and then build them into planning, proposing, and executing new projects and initiatives to Federal grant makers

 

Principles:

At their core, many of the key principles guiding Federal grant reform are principles of organizational management. Since virtually every proposal needs a work plan, an evaluation plan, a management plan, and a continuation plan, these are fine places for applicants to discuss how such principles either already guide or will guide local activities.

 

The list below aligns current Federal management reform principles with specific proposal narrative components where an applicant might choose to incorporate them. Although the phrasing derives from publications of the Office of Management and Budget (OMB), the concepts are intended to embrace all Federal agencies.

 

  • Using data-driven analysis – Needs assessment, evaluation plan
  • Creating goal-oriented organizations – Program design or work plan
  • Presenting goal-driven action plans – Program design or work plan
  • Setting interim performance targets – Program design, timeline
  • Measuring at milestones – Timeline, evaluation plan
  • Regularly reviewing performance data – Evaluation plan
  • Improving performance continuously – Program design, evaluation plan
  • Being accountable for results – Evaluation plan
  • Reviewing outcome metrics – Evaluation plan
  • Reviewing leading performance indicators – Evaluation plan
  • Monitoring and tracking progress – Evaluation plan
  • Making midcourse adjustments – Management plan
  • Employing team-based leadership – Management plan
  • Achieving transparency of operations – Management plan
  • Sharing best practices – Program design, continuation plan

 

This is one of a series of posts on the Federal grant reforms and the future of Federal grant making.

For decades now, outcome evaluation has been a key aspect of planning proposals that win grants. It requires a systematic analysis of projects or initiatives as they unfold over time, from inputs and activities onward through outputs and outcomes. The net result is a logic model useful for virtually every element of an evaluation plan.

 

Input:

It’s helpful to think of an input as any resource necessary to do the work of a project. As such, an input can be human labor (paid or volunteer personnel), materials (equipment and supplies), finances (existing and future funding), and facilities (locations where project activities will occur). Most inputs must be in place and available for use before a project starts.

 

Activity:

An activity is a work task associated with a project. It may occur before a project starts (e.g., planning), during a project (e.g., implementing and monitoring), or after a project ends (e.g., continuing and close-out). Some activities are singular or discrete events (e.g., yearly conferences), others are continuous processes (e.g., classroom instruction), and yet others may be both (e.g., training). Rationales for selecting specific activities should reflect documented needs and proposed objectives; often they also must reflect research into best practices.

 

Output:

An output is a unit of production or a unit of service, and it is stated as a number. It is often the focus of a process objective. As units of production, outputs include numbers of newsletters published, numbers of blog articles posted, numbers of curricular units developed, numbers of workshops held, numbers of library books purchased, and so on. As units of service, they include numbers of students taught, numbers of staff trained, numbers of parents contacted, numbers of patients treated, and so on. Outputs do not indicate what measurable changes occurred in the users of the products or in the recipients of the services.

 

Outcome:

An outcome is an observable and measurable change that occurs in a pre-defined population of intended beneficiaries either during or after a project. Often it is expressed in terms of a change in knowledge or skill (short-term outcomes), or a change in behavior (mid-term outcomes), or a change in affect, condition, or status (long-term outcomes). An outcome objective focuses on what is expected to happen as a consequence of staff undertaking a set of activities and of intended beneficiaries participating in them.

 

Thus, increased knowledge in teaching engineering is an expected outcome of taking a course in it; being a high school teacher who completed such a course is an output. Again, a reduced annual rate of middle school bullying is an expected outcome of implementing a school-wide model anti-bullying program; holding 12 hour-long sessions for all school staff on applying the model is an output. And creating a positive school climate is an expected outcome of a comprehensive school reform initiative; installing 20 posters about civic virtues throughout every school is an output.
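
One informal way to keep these distinctions straight is to jot a logic model down as a simple data structure. The entries below are hypothetical placeholders that loosely echo the anti-bullying example above; they are an illustration, not a prescribed format.

    # A minimal, hypothetical logic model for an anti-bullying program
    logic_model = {
        "inputs":     ["trained facilitator", "session materials", "meeting space"],
        "activities": ["deliver 12 hour-long staff sessions on the model"],
        "outputs":    ["12 sessions held", "45 staff members trained"],
        "outcomes":   ["reduced annual rate of reported bullying incidents"],
    }

    for stage, items in logic_model.items():
        print(f"{stage.capitalize():<10}: " + "; ".join(items))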

 

Performance Target:

As the subject of a well-formulated project objective, each desired outcome is a performance target, which typically may be stated as a number or a ratio or both. The best target is both feasible and ambitious. Grant recipients use outcome indicators to observe, measure, monitor, and evaluate their progress toward attaining each performance target.

In the context of grants, evaluation is a systematic inquiry into project performance. In its formative mode, it looks at what is working and what is not; in its summative mode, it looks at what did work and what did not. In both modes, it identifies obstacles to things working well and suggests ways to overcome them. For evaluation to proceed, the events or conditions that it looks at must exist, must be describable and measurable, and must be taking place or have taken place. Its focus is actualities, not possibilities.

 

Data Collection:

Effective evaluation requires considerable planning. Its feasibility depends on access to data. Among the more important questions to consider in collecting data for evaluation are:

  • What kinds of data need to be acquired?
  • What will be the sources of data?
  • How will sources of data be sampled?
  • How will data be collected?
  • When and how often will the data be collected?
  • How will outcomes with and without a project be compared?
  • How will the data be analyzed?

 

Problem Definition:

In developing an evaluation plan, it is wise to start from the problem definition and the assessment of needs and work forward through the objectives to the evaluation methods. After all, how a problem is defined has inevitable implications for what kinds of data one must collect, the sources of data, the analyses one must do to try to answer an evaluation question, and the conclusions one can draw from the evidence.

 

Evaluations pose three kinds of questions: descriptive, normative, and impact (or cause and effect). Descriptive evaluation states what is or what has been. Normative evaluation compares what is (or what was) to what should be (or should have been). Impact evaluation assesses the extent to which observed outcomes are attributable to what is being done or has been done. The options available for developing an evaluation plan vary with each kind of question.

 

Power:

An evaluation plan does not need to be complex in order to provide useful answers to the questions it poses. The power of an evaluation should be equated neither with its complexity nor with the extent it manipulates data statistically. A powerful evaluation uses analytical methods that fit the question posed; offer evidence to support the answer reached; rule out competing evidence; and identify modes of analysis, methods, and assumptions. Its utility is a function of the context of each question, its cost and time constraints, its design, the technical merits of its data collection and analysis, and the quality of its reporting of findings.

 

Constraints:

Among the most common constraints on conducting evaluations are: time, costs, expertise, location, and facilities. Of these constraints, time, costs, and expertise in particular serve to delimit the scope and feasibility of various possible evaluation design options.

 

Design Options:

Most evaluation plans adopt one of three design options: experimental, quasi-experimental (or non-equivalent comparison group), or pre/post. In the context of observed outcomes, the experimental option, with random assignment of participants, is most able to attribute causes to outcomes; the pre/post option – even one featuring interrupted time series analyses – is least able to make such attributions.

 

The experimental option tends to be the most complex and costliest to implement; the pre/post option tends to be the simplest and least costly. Increasingly, Federal grant programs favor the experimental evaluation design, even in areas of inquiry where it is costly and difficult to implement at the necessary scale, such as education and social services.
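
To illustrate what the two simpler designs can support, the sketch below computes a plain pre/post change and a quasi-experimental difference-in-differences estimate from invented group means. The figures are hypothetical, and neither calculation by itself establishes cause and effect.

    # Hypothetical group means on some outcome measure
    treatment_pre, treatment_post = 60.0, 72.0    # project participants
    comparison_pre, comparison_post = 61.0, 66.0  # non-equivalent comparison group

    # Pre/post design: change observed in participants only
    pre_post_change = treatment_post - treatment_pre

    # Quasi-experimental design: change relative to the comparison group
    diff_in_diff = (treatment_post - treatment_pre) - (comparison_post - comparison_pre)

    print(f"Pre/post change:           {pre_post_change:+.1f}")
    print(f"Difference-in-differences: {diff_in_diff:+.1f}")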

The competition for grants is fierce these days. Foundation assets are stagnant or declining while governments pursue fiscal austerity measures. In 2012, nonprofits need all the keys they can find to unlock the grant makers’ strongboxes and win significant grant awards.

 

This post presents a few places where nonprofits can go to find potential grants from corporations and foundations – either for free or at a modest cost. An earlier post focuses on resources for improving organizational readiness to pursue grants.

 

Foundation Center:

The Foundation Center produces and sells many tools for grant seekers. It publishes online and print directories, books on writing proposals, and many other valuable materials. Among its resources for grant seekers are:

  • Searchable online databases of grant makers, grants, and IRS Form 990 filings
  • A catalog of nonprofit literature
  • On-call researchers available through an associates program
  • A short course on writing grant proposals
  • A collection of grant makers’ requests for proposals
  • A peer-to-peer philanthropy message board

 

In addition, the Center maintains an extensive searchable database of grant makers.

 

Fundsnet Services:

Fundsnet Services provides free access to resources about grants, fundraising, philanthropy, foundations, and nonprofits, including a database of corporate and foundation grant makers, which is both searchable and sorted by 20-plus topical areas. It lacks the depth and breadth of the Foundation Center, but it is a free and easily navigated place to start a search.

 

Chronicle of Philanthropy:

The Chronicle of Philanthropy presents information useful to seekers of private donations (gifts) and foundation grants on such topics as:

  • Grant seeking
  • Fundraising
  • Philanthropic giving
  • Management
  • Data, trends, and causes in philanthropy

 

The website also features a very useful current calendar of foundation application deadlines.

 

Grantsmanship Center:

The Grantsmanship Center offers a wide array of free and fee-based tools for seekers of grants from both federal and private sources, among which are:

  • Five different training programs
  • Proposal review and other consulting assistance
  • Publications on prospect research, proposal planning, and proposal research
  • The comprehensive, online, searchable GrantDomain grant maker database

 

May you search well and prosper!

 
