
Tag Archives: evaluation plans

This post is about project goals. It is one of a series about what goes into proposals that win grants. Its context is the United States of America.


Project Goals


Having clearly stated project goals is critical in competing for grants. Well-formulated goals drive project planning. The same goals also drive project implementation. Typically, the goal statements that help applicants to win grants are relatively long-term, abstract, ambitious, and ultimately attainable. They also clearly relate to and underpin a specific proposal’s declared objectives.




In formulating goals, an applicant should:

  • Use them as the ultimate rationale for its proposal-specific objectives and activities
  • Ensure they reflect its own proposal-specific needs assessment or problem statement
  • Verify that the funder’s program-specific goals are compatible with its own goals
  • Ensure they mirror or resonate with the grant maker’s funding priorities, goals, and long-term vision (Example: Environmental Education Local Grants Program RFP 2016)


A well-articulated goal should be compatible with the funder’s declared overarching goals and its program-specific review criteria. It should also be time-bound, quantifiable, abstract, and significant.


Typically, a goal will:

  • Include a time frame. Examples: ‘…by the end of the funding period…’ or ‘…by the end of five years…’ or ‘…after completing the proposed project…’ or ‘…by September 30, 2021…’
  • Define a performance criterion. Examples: a specific number: ‘…a total of (N)…’ or a specific percentage: ‘…a ratio of (%)…’
  • Suggest a final state of accomplishment. Examples: verbs like ‘…will have increased…’ or ‘…will have reduced…’ or ‘…will have created…’ or ‘…will have implemented…’
  • Allude to cause and effect. Example: start the goal with ‘As a result of…’




A goal statement having these four elements may have a structure resembling this:

…‘As a result of project activities, by September 30, 2025, 90% or more of participating middle school students each year will meet or exceed State proficiency standards in Environmental Education.’
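The four-element structure above can be sketched as a small template function. This is purely illustrative; the function and its parameter names are hypothetical, not part of any funder’s requirements.

```python
# Illustrative sketch: assembling a goal statement from the four elements
# described above (cause, time frame, performance criterion, final state).
# The function and parameter names are hypothetical.

def build_goal_statement(cause: str, time_frame: str,
                         criterion: str, accomplishment: str) -> str:
    """Combine the four recommended elements into one goal sentence."""
    return f"As a result of {cause}, {time_frame}, {criterion} {accomplishment}."

goal = build_goal_statement(
    cause="project activities",
    time_frame="by September 30, 2025",
    criterion="90% or more of participating middle school students each year",
    accomplishment=("will meet or exceed State proficiency standards "
                    "in Environmental Education"),
)
print(goal)
```

Swapping in a different time frame or criterion yields a goal statement with the same recommended shape.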




Goals are considerably more open-ended or general than objectives. Applicants are less likely to be expected to measure the attainment of goals, except by proxy through measuring the proposed objectives that should lead to accomplishing those goals.





Revised in mid-2016, this post covers guidance about the roles of evaluation in common grant application (CGA) forms. Its context is the United States of America.



At least 20 associations of grant makers or other organizations in the United States publish a common grant application (CGA) form online. This post explores the instructions and questions about Evaluation and Evaluation Plans that they pose to applicants.


Other posts will explore the CGA in terms of required elements of proposals, applicant revenue sources, budget expense categories, and proposal length and format requirements. The end of the post explains the abbreviations that it uses.



Taken as a whole, the common grant application (CGA) forms provide insight into the questions and considerations that interest hundreds of private grant makers. Among other things, they shed light on evaluation plans as elements of a complete proposal. They also differentiate evaluation plans submitted as attachments from evaluation plans required within the proposal narrative.


Role of Evaluation

Of the 20 providers of common grant application forms, two (10%) — DC and NY — give applicants no instructions about Evaluation or Evaluation Plans. Of the 18 CGA providers that do pose evaluation questions, four (22%) — AZ, CT, ME, and WA — do not present Evaluation as a separate proposal element.


In the table below, a Y (Yes) means that the CGA provider does give some instructions about Evaluation or Evaluation Plans. A plus (+) means that the CGA provider both gives instructions and includes Evaluation or Evaluation Plans — as such — as a distinct selection criterion in its instructions for proposal narratives. An asterisk (*) means that a CGA provider gives no instructions about Evaluation or Evaluation Plans.


  Common Grant Application Forms

  Form #             1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
  Instructions       Y  Y  Y  Y  Y     Y  Y  Y  Y  Y  Y  Y     Y  Y  Y  Y  Y  Y
  Criterion          +     +  +        +     +  +  +  +  +     +  +  +  +     +
  No Instructions                   *                       *


All CGA States



Among more frequent topics of Evaluation questions found on common grant application forms are:

  • How evaluation results will be used — NNG, CA, MI, MN, MO, OH, and WI
  • How the organization measures effectiveness — IL, ME, MO, NJ, WA, WI, and PA-2
  • How the organization defines (criteria) and measures success — IL, MI, MN, NJ, WA, WI, and PA-2
  • Anticipated results (outputs and/or outcomes) — MD, MA, NJ, WI, and PA-2
  • What assessment tools or instruments will be used — AZ, MO, PA-2, and TX
  • How the organization evaluates outcomes and/or results — CT, MD, OH, and PA-2
  • Who will be involved in evaluation — NNG, MN, and OH
  • How constituents and/or clients will be involved actively in the evaluation — MI, MN, and OH
  • How the organization measures short- and long-term effects or outcomes — MN, OH, and PA-2


Among less frequent topics of Evaluation questions on common grant application forms are:

  • What questions will evaluation address — NNG and AZ
  • Overall approach to evaluation — CO and WI
  • How the organization measures impact — CO and PA-1
  • Timeframe for demonstrating impact — CO and OH
  • What process and/or impact information the organization will collect — MD and TX
  • How the organization assesses overall success and effectiveness — MD and MA
  • How evaluation results will be disseminated — CA, MI, and OH


Among infrequent topics of Evaluation questions on common grant application forms are:

  • The organization’s plans for assessing progress toward goals — ME
  • The organization’s plans for assessing what works — ME
  • How the organization evaluates its programs — PA-1
  • How the organization has applied what it has learned from past evaluations — PA-1
  • How the organization monitors its work — WA



Below is a list of abbreviations used in this post. The common grant application forms are found on their providers’ websites.

  1. NNG: National Network of Grantmakers
  2. AZ: Arizona Grantmakers Forum
  3. CA: San Diego Grantmakers
  4. CO: Colorado Nonprofit Association
  5. CT: Connecticut Council for Philanthropy
  6. DC: Washington Regional Association of Grantmakers
  7. IL: Forefront (Chicago area)
  8. ME: Maine Philanthropy Center
  9. MA: Associated Grantmakers
  10. MI: Council of Michigan Foundations
  11. MN: Minnesota Community Foundation
  12. MO: Gateway Center for Giving
  13. NJ: Council of New Jersey Grantmakers/Philanthropy New York
  14. NY: Grantmakers Forum of New York
  15. OH: Ohio Grantmakers Forum
  16. PA-1: Philanthropy Network Greater Philadelphia
  17. PA-2: Grantmakers of Western Pennsylvania
  18. TX: Central Texas Education Funders
  19. WA: Philanthropy Northwest
  20. WI: Donors Forum of Wisconsin



In an age of elevated accountability for results, the Evaluation Plan is one of the most critical components of a competitive grant proposal.


For virtually every objective one might conceive, many types of thoroughly reviewed evaluation instruments are readily available. Often these instruments are widely used to generate and monitor data and to track and report on performance outcomes; yet, they may be new to any given applicant and its grant writing team.


Selecting Evaluation Instruments

In selecting one or more evaluation instruments to measure a specific objective in a proposal, a smart grant writing team will first locate and study relevant technical reviews found throughout the professional literature of program evaluation. The smart team is certain to look for:

  • Evidence for the technical review writer’s objectivity
  • Evidence for the instrument’s reliability
  • Evidence for the instrument’s validity
  • Limitations on the available evidence
  • Discussions of the instrument’s intended uses
  • Prerequisites for the instrument’s effective use
  • Required frequency and mode of use
  • Time required for administration and data analysis and reporting
  • Costs associated with using the instrument


Finding Technical Reviews

There are many possible sources of technical reviews of evaluation instruments. One of the best and most comprehensive resources is the Mental Measurements Yearbooks, a series published both online and in print by the Buros Institute. A second resource, of more limited scope, is the ERIC Clearinghouse on Assessment and Evaluation. Nearly every specialized and science-driven discipline will have its own review repository as well.


Reasons for Using Technical Reviews

Applicants need to persuade skeptics that their Evaluation Plan will provide evidence of program effectiveness. One way to do so is to demonstrate to wary readers that the proposed evaluation instruments are judiciously selected and are appropriate for their proposed uses. The findings published in technical reviews furnish invaluable assets for accomplishing this task. The rest hinges upon how well an applicant uses these assets in describing and justifying its Evaluation Plan.

In the context of grants, evaluation is a systematic inquiry into project performance. In its formative mode, it looks at what is working and what is not; in its summative mode, it looks at what did work and what did not. In both modes, it identifies obstacles to things working well and suggests ways to overcome them. For evaluation to proceed, the events or conditions that it looks at must exist, must be describable and measurable, and must be taking place or have taken place. Its focus is actualities, not possibilities.


Data Collection:

Effective evaluation requires considerable planning. Its feasibility depends on access to data. Among the more important questions to consider in collecting data for evaluation are:

  • What kinds of data need to be acquired?
  • What will be the sources of data?
  • How will sources of data be sampled?
  • How will data be collected?
  • When and how often will the data be collected?
  • How will outcomes with and without a project be compared?
  • How will the data be analyzed?
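The planning questions above might be captured in a simple structured record. This is a minimal sketch; the class and field names are hypothetical, not a standard evaluation-planning format.

```python
# A minimal sketch showing how the data-collection planning questions above
# might be recorded. Class and field names are hypothetical, for illustration.
from dataclasses import dataclass

@dataclass
class DataCollectionPlan:
    kinds_of_data: list       # What kinds of data need to be acquired?
    sources: list             # What will be the sources of data?
    sampling: str             # How will sources of data be sampled?
    collection_method: str    # How will data be collected?
    schedule: str             # When and how often will the data be collected?
    comparison: str           # How will outcomes with/without the project be compared?
    analysis: str             # How will the data be analyzed?

plan = DataCollectionPlan(
    kinds_of_data=["pre/post survey scores", "attendance logs"],
    sources=["participating teachers", "district records"],
    sampling="all participants (census, no sampling)",
    collection_method="online surveys administered by project staff",
    schedule="at project start and at the end of each school year",
    comparison="pre/post change versus a non-participating comparison group",
    analysis="descriptive statistics plus paired pre/post comparisons",
)
print(plan.schedule)
```

Writing the answers down in one place, whatever the format, makes gaps in the plan visible before the proposal is submitted.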


Problem Definition:

In developing an evaluation plan, it is wise to start from the problem definition and the assessment of needs and work forward through the objectives to the evaluation methods. After all, how a problem is defined has inevitable implications for what kinds of data one must collect, the sources of data, the analyses one must do to try to answer an evaluation question, and the conclusions one can draw from the evidence.


Evaluations pose three kinds of questions: descriptive, normative, and impact (or cause and effect). Descriptive evaluation describes what is or what has been. Normative evaluation compares what is to what should be, or what was to what should have been. Impact evaluation estimates the extent to which observed outcomes are attributable to what is being done or has been done. The options available for developing an evaluation plan vary with each kind of question.



An evaluation plan does not need to be complex in order to provide useful answers to the questions it poses. The power of an evaluation should be equated neither with its complexity nor with the extent to which it manipulates data statistically. A powerful evaluation uses analytical methods that fit the question posed, offer evidence to support the answer reached, rule out competing evidence, and identify modes of analysis, methods, and assumptions. Its utility is a function of the context of each question, its cost and time constraints, its design, the technical merits of its data collection and analysis, and the quality of its reporting of findings.



Among the most common constraints on conducting evaluations are: time, costs, expertise, location, and facilities. Of these constraints, time, costs, and expertise in particular serve to delimit the scope and feasibility of various possible evaluation design options.


Design Options:

Most evaluation plans adopt one of three design options: experimental, quasi-experimental (or non-equivalent comparison group), or pre/post. In the context of observed outcomes, the experimental option, under random assignment of participants, is most able to attribute causes to outcomes; the pre/post option – even one featuring interrupted time series analyses – is least able to make such attributions.


The experimental option tends to be the most complex and costliest to implement; the pre/post option tends to be the simplest and least costly. Increasingly, Federal grant programs favor the experimental evaluation design, even in areas of inquiry where it is costly and difficult to implement at the necessary scale, such as education and social services.
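To make the contrast concrete, here is a small numerical sketch of the quasi-experimental (non-equivalent comparison group) option. All scores are invented for illustration.

```python
# Illustrative sketch of a quasi-experimental design: pre/post measurement in
# a treatment group and a non-equivalent comparison group. Data are invented.
from statistics import mean

treatment_pre  = [52, 48, 61, 55, 50]
treatment_post = [68, 64, 75, 70, 66]
comparison_pre  = [53, 49, 60, 56, 51]
comparison_post = [58, 53, 66, 61, 55]

treatment_gain  = mean(treatment_post) - mean(treatment_pre)
comparison_gain = mean(comparison_post) - mean(comparison_pre)

# The difference in gains is a rough estimate of the program effect. Unlike a
# randomized experiment, this design cannot rule out pre-existing differences
# between the two groups as an alternative explanation.
effect_estimate = treatment_gain - comparison_gain
print(round(effect_estimate, 1))
```

A plain pre/post design would use only the treatment group’s gain, which is why it is least able to attribute outcomes to the project.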

Once an organization has won a multi-year grant, evaluation is essential to getting it renewed year to year. One way to share evaluation findings is the Annual Performance Report (APR).


Although the specific contents of an APR vary from funder to funder, most reports share a similar structure. What follows is one typical structure:


Face Page or Title Page: Should identify the grant recipient, the grant maker, and the grant program. May also need to provide the submission date and unique identifiers: grant award number, employer identification number, grantee DUNS number, and others.


Table of Contents: Should include whatever major topics a specific funder may require, in whatever sequence it prescribes.


Executive Summary or Abstract: Should offer an overview of findings and recommendations and be no longer than one page.


Overall Purpose of Evaluation: Should state: why the evaluation was done; what kinds of evaluation were performed; who performed them; what kinds of decisions the evaluation was intended to inform or support; and who has made, is making, or is going to make such decisions.


Background or Context: Should briefly describe the organization and its history. Should describe the goals and nature of the product or program or service being evaluated. Should state the problem or need that the product or program or service is addressing. Should specify the performance indicators and desired outcomes. Should describe how the product or program or service is developed and/or delivered. Also should characterize who is developing or delivering the product or program or service.


Evaluation Methods: Should state the questions the evaluation is intended to answer. Also should indicate the types of data collected, what instruments were used to collect the data, and how the data were analyzed.


Evaluation Outcomes: Should discuss how the findings and conclusions based on the data are to be used, along with any qualifying remarks about limits on their use.


Interpretations and Conclusions: Should flow from analysis of the evaluation data. Should be responsive to the funder’s evaluation priorities (e.g., measuring GPRA or GPRAMA performance indicators in Federal grants).


Recommendations: Should flow from the findings and conclusions. Also should address any necessary adjustments in the product or program or service and other decisions that need to be made in order to achieve desired outcomes and accomplish goals.


Appendices or Attachments: Should reflect the funder’s requirements and the purposes of the specific evaluation. Appendices may include, for example: the logic model governing the project; plans for management and evaluation included in the original proposal; detailed tables of evaluation data; samples of instruments used to collect data and descriptions of the technical merits of these instruments; case studies of, or sample statements by, users of the product or program or service.


This is one in a series of posts presenting sample elements of a possible proposal. In their illustrative details, its contents are both fictional and factual; however, its overall approach has won grants for similar purposes.


Project Evaluation. As local educational agencies, both YCESC and its school district partners continuously evaluate and report results of educational programs for all students and all staff. Several individual YCESC staff members have 10 or more years of experience in program management and evaluation. Their expertise and accomplishments in meeting and exceeding state and federal evaluation and fiscal accountability standards are widely recognized.


The project will use (a) pre-post teacher surveys, (b) observation checklists, and (c) pre-post participant surveys to measure: (1) the project’s effectiveness in increasing infusion of existing Watershed Education curricular resources into the Science curriculum (grades 6-12) in 12 districts; and (2) the project’s effectiveness in increasing teachers’, parents’, and other stakeholders’ critical-thinking, problem-solving, and decision-making skills in environmental issues of water resource management in the Saco River Watershed. Evaluation of project activities also will use testing instruments and performance measures that are packaged with the existing Watershed Education model curricular materials. The Project Coordinator will confer at least monthly with representatives of all project partners to review and discuss collection and analysis of evaluation data and to identify any needed changes in the project’s approach. Results will be reported to all partners and the EPA.

A goal is a general or overall direction for a project or initiative, its ultimate destination. It specifies and prioritizes the long-term direction and extent of desired change in meeting an applicant’s documented needs. A goal is neither an objective (a specific increment of change) nor a strategy (a means or approach for accomplishing an objective). Goal statements form part of a Work Plan or Plan of Action.

A goal should say in broad, abstract, and ideal terms what ultimate state of affairs an applicant intends to reach. It should say who is expected to achieve or experience it (such as individuals or populations), or what is to be achieved (such as an event or a product).

Every goal implies a set of costs for obtaining one or more desired results – its scope and magnitude directly impact the budget and its credibility. Every goal also entails proposing one or more objectives whose accomplishment should contribute towards achieving that goal.

Example: ‘Within five years of initial funding, 95% of all five-year olds in Houston, Texas will start the first day of Kindergarten ready to learn.’

Formulating Goals:

In planning to formulate a goal in a proposal, several questions prove useful:

Purposes: How does your goal relate to the grant maker’s aims or purposes in awarding grants? Does your goal mirror the grant maker’s language about its aims or purposes for funding?

Needs: How does your goal relate to your identified needs? How does it relate to your problem statement?

Results: How does your goal relate to your expected results? Does it account for all of them or only some subset of them?

Components: How does your goal relate to the components in your program design? Do you have a goal for each component? Are your goals independent of your program design components?

Activities: How does your goal relate to your proposed activities? Can these activities lead to achieving it within the desired timeframe?

Evaluation: How will you measure achievement of your goal? How will you measure and report progress in achieving it?

Priorities: If your goal is one of several, what priority does it have? Is this particular goal subordinate to other goals? Is it in fact the overarching goal for your project or initiative?

Costs: How much will it cost to accomplish your goal within the desired timeframe? Is it possible to achieve your goal within the total available budget?

This post is one in a series about questions useful in planning competitive grant proposals.

Sooner or later, nearly everyone who competes for grants has compiled his or her own list of what works and what doesn’t. This post is the third in a series about what works (do) and what does not work (do not do) in writing competitive grant proposals. It covers Capacity-Building Plans, Evaluation Plans, and Budgets and Cost-Effectiveness.


In writing a Capacity-Building Plan:


Do:

  1. Indicate an intention to continue key aspects of a program or project after a grant ends.
  2. Identify key program elements likely to be continued after a grant ends.
  3. Describe practical steps to be taken to identify key elements for continuation.
  4. Present a timeline for absorbing the costs of key elements before a grant ends.
  5. Discuss how program monitoring and evaluation will help to identify key elements.



Don’t:

  1. Try to postpone thinking about building capacity until after a grant ends.
  2. Overstate previous experiences or outcomes in building capacity.
  3. Confuse building capacity during a grant with sustainability after it ends.
  4. Neglect the role of leveraging applicant and partners’ resources in building capacity.
  5. Ignore the meaning each specific funder attaches to the concept of capacity.


In writing an Evaluation Plan:


Do:

  1. Present performance criteria for each objective to be evaluated.
  2. Align evaluation methods with every specific proposed objective.
  3. Describe a process for selecting an Evaluator.
  4. Identify an Evaluator by name and affiliation whenever possible.
  5. Include a timeline for key evaluation activities.
  6. Describe how you will use interim findings to make midcourse adjustments.



Don’t:

  1. Postpone creating an evaluation plan until after a grant award.
  2. Fail to describe the roles, responsibilities, and qualifications of an Evaluator.
  3. Rely exclusively on measures yet to be developed.
  4. Ignore the technical psychometric merits of proposed evaluation instruments.
  5. Omit the process for collecting, analyzing, and reporting performance data.
  6. Suggest that evaluation reports are a waste of time and scarce funds.


In writing Budget and Cost-Effectiveness:


Do:

  1. Align all budget items with the Work Plan.
  2. Be explicit about all assumptions used in calculating costs.
  3. Provide specific rates and amounts for all line items.
  4. Calculate costs per participant or avoided costs or similar measures of cost-effectiveness.
  5. Explain any unusual or questionable cost items.



Don’t:

  1. Propose disallowed or illegal uses of grant funds.
  2. Be arbitrary about rates for mileage, per diem, lodging, salaries, or fringe benefits.
  3. Expect to be able to introduce entirely new budget items after a grant award occurs.
  4. Omit related costs to be paid for using other funding sources.
  5. Pad the budget with unnecessary or irrelevant line items or cost figures.
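One of the cost-effectiveness measures named above, cost per participant, is simple arithmetic. A tiny sketch, with all figures invented for illustration:

```python
# Tiny sketch of one cost-effectiveness measure: cost per participant.
# All figures are invented for illustration.
total_request = 150_000   # total grant request, in dollars
participants = 600        # expected number of participants served

cost_per_participant = total_request / participants
print(cost_per_participant)  # 250.0
```

Avoided-cost measures work the same way: divide the dollar value of costs the project prevents by what the project costs to run.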


The last post in this series will cover Other Websites that discuss what works and doesn’t work in competing for grants.

Proposals that win grants for K-12 education have many predictable information needs. Applicants that have such information at the ready before the announcement of a grant opportunity greatly improve the likelihood of funding.


Applicants may not need every item listed here for every evaluation plan, but it is prudent to anticipate three groups of proposal narrative elements: evaluation instruments, evaluation methods, and evaluation personnel.


Evaluation Instruments:

  1. Types of instruments to be used (e.g., norm-referenced tests (NRTs), criterion-referenced tests (CRTs), rating scales, checklists, questionnaires, interviews, logs of use, reflective journals, and so on)
  2. Sources and technical merits of all measurement instruments mentioned in all objectives
  3. Sources and technical merits of measurement instruments in non-English languages, if any
  4. Sources and technical merits of specialized or ad hoc measurement instruments, if any


Evaluation Methods:

  1. Performance criteria for each objective
  2. Methods of quantitative analysis and criteria of statistical significance of results
  3. Methods of qualitative analysis
  4. Formative (or process) evaluation methods
  5. Summative (or outcomes) evaluation methods
  6. Description of comparison or control groups
  7. Type of evaluation design to be used (experimental, quasi-experimental, and so on)
  8. Data collection procedures
  9. Methods and audiences for reporting interim and final results
  10. Timeline for implementing evaluation plan


Evaluation Personnel:

  1. Evaluation responsibilities of all project-related personnel
  2. Name, qualifications, and roles of external evaluator, if any
  3. Name, qualifications, and roles of internal evaluator, if any
  4. Résumé or curriculum vitae for external evaluator, if any
  5. Résumé or curriculum vitae for internal evaluator, if any


Later posts will cover information needs for other aspects of educational grant proposals.

A checklist is a useful tool for writing proposals that will win competitive grants. This is the second half of a basic checklist.



Organizational Capacity:

  • Demonstrate your organization’s capacity for undertaking projects similar to what it now proposes.
  • Give evidence of financial commitment to pursue similar or related initiatives after the new grant period ends.
  • Describe past or current collaborations or partnerships with other organizations.



Organizational Support and Commitments:

  • Provide compelling evidence of strong organizational support for the project being proposed.
  • Identify sources and types of internal and external support for the project, as appropriate.
  • Present the local organizational context for the project.
  • Present how the project reflects or advances other existing plans in your organization.
  • Provide a plan to continue core project tasks or strategies after the grant period ends.
  • Obtain uniquely worded letters of commitment from each project partner.
  • Use each partner’s organizational letterhead for all letters of commitment.
  • Describe specific and measurable commitments, not generic ones.



Project Staff:

  • Offer evidence that proposed staff will be able to do the work required.
  • Connect staff experience to the project’s objectives and activities.
  • Present details in staff resumés that reflect specific project roles and tasks.
  • Describe the specific roles of all key project staff.
  • Provide basic demographic data about backgrounds of all key project staff.
  • Identify the specific time commitments of all key project staff.



Budget:

  • Use a budget narrative to defend and explain every line item.
  • Provide for broad sharing with key audiences of significant project processes, products, and outcomes.
  • Present a plan to share costs with partner organizations, if appropriate.
  • Link the budget to the project’s goals and objectives.
  • Itemize the budget in detail whenever possible and appropriate.
  • Include only eligible or allowed items in the budget request.
  • Ensure that the budget is realistic, appropriate, and well justified.



Evaluation:

  • Align evaluation measures with the objectives.
  • Propose instruments appropriate for measuring attainment of each objective.
  • Seek and obtain third party evaluation whenever possible.
  • Provide for making midcourse corrections as well as for interim and final evaluation of results.
  • Describe the roles of monitoring or process evaluation in guiding the project.
  • Describe a plan for collecting, analyzing, and reporting performance data.
  • Demonstrate a willingness to be accountable for results.
  • Enumerate potential local and broader benefits of completing the proposed project.



Dissemination and Marketing:

  • Describe a plan for reaching and informing key audiences about the project.
  • Explain how the marketing plan will use established and emerging social media.
  • Identify any tangible or electronic products the project will develop and deliver.
  • Describe a plan for ensuring high quality in creating and delivering project-developed products.