
Tag Archives: proposal writing

This post examines the potential usefulness of meta-analysis for winning grants. It looks at how grant seekers can apply available research in seeking funding for parental involvement programs designed to improve academic achievement in Grades K-12. The post’s few examples by no means exhaust the sources of evidence available to applicants seeking grants to improve educational outcomes – or outcomes in other domains for which grants may be awarded.

 

Effect Sizes of Overall Parental Involvement Programs

 

In a meta-analysis (2012), Dr. William Jeynes, using Hedges’ g, found that overall parental involvement programs yielded effect sizes of 0.19 to 0.31 standard deviations (SD) on academic achievement in urban elementary schools, and of 0.32 to 0.35 SD on academic achievement in urban secondary schools. Effect sizes varied by specific measures of academic achievement – e.g., standardized tests (0.31 and 0.33 SD) or non-standardized assessments (0.19 and 0.32 SD). The magnitudes of such effect sizes are small, but positive.

For urban Latino students in Grades K-5, in Reading, in the context of overall parental involvement programs in urban schools, an effect size of 0.31 SD on standardized tests equates to gains of 0.17 to 0.6 school years; an effect size of 0.19 SD on non-standardized assessments equates to gains of 0.1 to 0.4 school years.

 

[Image: Overall PI and ES Graphic 1]

 

For urban Latino students in Grades 6-12, in Reading, in the same context, an effect size of 0.33 SD on standardized tests equates to gains of 0.9 to 5.5 school years; an effect size of 0.32 SD on non-standardized assessments equates to gains of 0.83 to 5.3 school years.

 

[Image: Overall PI and ES Graphic 2]

 

Effect Sizes of General Parental Involvement

 

In a meta-analysis (2017), Dr. William Jeynes, using Hedges’ g, found that general parental involvement yielded overall effect sizes on academic achievement of urban Latino students of 0.30 standard deviations (SD) in Grades K-5, 0.29 SD in Grades 6-12, and 0.45 SD in Grades K-12. Overall effect sizes varied by specific measures of academic achievement among urban Latino students – standardized tests (0.24), non-standardized assessments (0.64), or behavior (0.16). The magnitudes of such effect sizes are small to moderate, and positive.

 

In the context of general parental involvement, an overall effect size of 0.30 SD equates to gains of 0.17 to 0.6 school years in Reading for urban Latino students in Grades K-5; and an effect size of 0.29 SD equates to gains of 0.87 to 5.0 school years in Reading for urban Latino students in Grades 6-12. The magnitudes of such effect sizes are small, but positive.

 

For urban Latino students in Grades K-5, in Reading, in the context of specific measures of academic achievement, a gain of 0.24 SD on standardized tests equates to gains of 0.13 to 0.5 school years; a gain of 0.64 SD on non-standardized assessments equates to gains of 0.4 to 1.3 school years; and a gain of 0.16 SD on behavior equates to gains of 0.08 to 0.32 school years.

 

[Image: General PI and ES Graphic 1]

 

For urban Latino students in Grades 6-12, in Reading, in the context of specific measures of academic achievement, a gain of 0.24 SD on standardized tests equates to gains of 0.67 to 4.1 school years; a gain of 0.64 SD on non-standardized assessments equates to gains of 1.85 to 10.9 school years; and a gain of 0.16 SD on behavior equates to gains of 0.48 to 2.7 school years.

 

[Image: General PI and ES Graphic 2]

 

Observations

 

Both overall and general programs of parental involvement appear to yield small to moderate – and positive – effect sizes on several measures of academic achievement among urban Latino students in Grades K-12. Such research is potentially useful as an element of an evidence-based rationale for a plan of action in a competitive grant proposal.

 

Available meta-analyses – such as the examples provided here – demonstrate the practical significance of overall and general parental involvement on measures of the academic achievement of urban students. Their findings are useful evidence of the likelihood that implementing such programs will contribute to improved educational outcomes. As such, these meta-analyses – and others similar to them – are welcome and useful resources for applicants wishing to persuade public and private funders to award grants to create or expand programs of parental involvement in Grades K-12.

 

 

 

 

 

 


This post is part of a series about the potential usefulness of meta-analysis for winning grants. It explores how grant seekers can apply available research in seeking funding for parental involvement programs designed to improve academic achievement in urban elementary and secondary schools. The post’s few examples by no means exhaust the sources of evidence available to applicants seeking grants to improve educational outcomes – or outcomes in other domains for which grants may be awarded.

 

Effect Sizes in Elementary Schools

 

In a meta-analysis (2005), Dr. William Jeynes, using Hedges’ g, found that general parental involvement yielded overall effect sizes of 0.37 to 0.85 standard deviations (SD) on academic achievement in urban elementary schools, and parental involvement programs yielded overall effect sizes of 0.27 to 0.40 SD on academic achievement in urban elementary schools. Overall effect sizes of general parental involvement varied by specific measures of academic achievement – e.g., overall (0.74), grades (0.85), standardized tests (0.37), or academic attitudes and behaviors (0.34). Overall effect sizes of specific programs of parental involvement also varied by specific measures of academic achievement – e.g., overall (0.27), grades (0.32), standardized tests (0.40), or academic attitudes and behaviors (0.30). These effect sizes range from small to large; all are positive.

 

In the context of general parental involvement, an overall effect size of 0.74 standard deviations (SD) on overall academic achievement equates to gains of 0.45 to 1.45 school years for Reading in Grades K-5. An overall effect size of 0.85 SD on grades equates to gains of 0.55 to 1.7 school years for Reading in Grades K-5. An overall effect size of 0.37 SD on standardized tests equates to gains of 0.2 to 0.7 school years for Reading in Grades K-5. An overall effect size of 0.34 SD on academic attitudes and behaviors equates to gains of 0.2 to 0.7 school years for Reading in Grades K-5.

 

[Image: General PI in ES]

 

In the context of specific programs of parental involvement, an overall effect size on overall academic achievement of 0.27 standard deviations (SD) equates to gains of 0.17 to 0.67 school years for Reading in Grades K-5. An overall effect size on grades of 0.32 SD equates to gains of 0.2 to 0.7 school years for Reading in Grades K-5. An overall effect size on standardized tests of 0.40 SD equates to gains of 0.23 to 0.73 school years for Reading in Grades K-5. An overall effect size on academic attitudes and behaviors of 0.30 SD equates to gains of 0.19 to 0.69 school years for Reading in Grades K-5.

 

[Image: Specific PI in ES]

 

Effect Sizes in Secondary Schools

 

In a meta-analysis (2007), Dr. William Jeynes, using Hedges’ g, found that general parental involvement yielded overall effect sizes of 0.40 to 0.47 standard deviations (SD) on academic achievement in urban secondary schools, and parental involvement programs yielded effect sizes of 0.25 to 0.36 SD on academic achievement in urban secondary schools. Overall effect sizes of general parental involvement varied by specific measures of academic achievement – e.g., overall (0.46), grades (0.40), standardized tests (0.47), or academic attitudes and behaviors (0.43). Overall effect sizes of specific programs of parental involvement also varied by specific measures of academic achievement – e.g., overall (0.36), grades (0.25), standardized tests (0.36), or academic attitudes and behaviors (0.25). These effect sizes range from small to moderate; all are positive.

 

In the context of general parental involvement, an overall effect size on overall academic achievement of 0.46 standard deviations (SD) equates to gains of 1.25 to 7.1 school years for Reading in Grades 6-12. An overall effect size on grades of 0.40 SD equates to gains of 1.1 to 6.9 school years for Reading in Grades 6-12. An overall effect size on standardized tests of 0.47 SD equates to gains of 1.3 to 7.2 school years for Reading in Grades 6-12. An overall effect size on academic attitudes and behaviors of 0.43 SD equates to gains of 1.15 to 7.0 school years for Reading in Grades 6-12.

 

[Image: General PI in SS]

 

In the context of specific programs of parental involvement, an overall effect size on overall academic achievement of 0.36 standard deviations (SD) equates to gains of 1.0 to 5.9 school years for Reading in Grades 6-12. An overall effect size on grades of 0.25 SD equates to gains of 0.67 to 4.2 school years for Reading in Grades 6-12. An overall effect size on standardized tests of 0.36 SD equates to gains of 1.0 to 5.9 school years for Reading in Grades 6-12. An overall effect size on academic attitudes and behaviors of 0.25 SD equates to gains of 0.67 to 4.2 school years for Reading in Grades 6-12.

 

[Image: Specific PI in SS]

 

Observations

 

Available meta-analyses – such as the examples provided here – demonstrate the practical significance of general and specific programs of parental involvement on measures of the academic achievement of urban elementary and secondary students. Their findings are useful evidence of the likelihood that implementing such programs will contribute to improved educational outcomes. As such, these meta-analyses – and others in the research literature similar to them – are welcome and useful resources for applicants wishing to persuade public and private funders to award grants to create or expand programs of parental involvement in Grades PK-12.

 

 

This post explores interpretations of effect sizes in the context of writing proposals for competitive grants in PK-12 education. It translates effect sizes into time-indexed measures of academic growth in Grades PK-11 for instruction in Reading. Such conversion helps to transform the unfamiliar into the familiar.

 

Time Indexed Effect Sizes and Academic Growth

 

Research has generated time-indexed effect sizes based on national norms of academic growth in Reading and Mathematics (Lee et al., 2012). It is now possible to convert Cohen’s d (a standardized difference between group means) into d′ (a gain expressed in school years of schooling).

 

Reading (Grades K-5)

 

In the context of the United States of America, a school year is commonly 180 instructional days (±5 days). Based on the results of research on time-indexed effect sizes – and assuming, for simplicity of calculation, a school year of 180 days – the list below summarizes those results for Reading in Grades K-5 (a brief conversion sketch in code follows the list):

 

  • In K, an effect size (d) of 0.2 equates to 0.1 of a school year (18 school days), and an effect size (d) of 0.5 equates to 0.3 of a school year (54 school days).
  • In Grade 1, an effect size (d) of 0.2 equates to 0.1 of a school year (18 school days), and an effect size (d) of 0.5 equates to 0.3 of a school year (54 school days).
  • In Grade 2, an effect size (d) of 0.2 equates to 0.2 of a school year (36 school days), and an effect size (d) of 0.5 equates to 0.4 of a school year (72 school days).
  • In Grade 3, an effect size (d) of 0.2 equates to 0.2 of a school year (36 school days), and an effect size (d) of 0.5 equates to 0.6 of a school year (108 school days).
  • In Grade 4, an effect size (d) of 0.2 equates to 0.4 of a school year (72 school days), and an effect size (d) of 0.5 equates to 0.9 of a school year (162 school days).
  • In Grade 5, an effect size (d) of 0.2 equates to 0.4 of a school year (72 school days), and an effect size (d) of 0.5 equates to 1.0 of a school year (180 school days).
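
To make these conversions concrete, below is a minimal Python sketch (the helper is illustrative and not part of the cited research): it stores the per-grade school-year equivalents listed above for effect sizes of 0.2 and 0.5 and converts them into school days under the 180-day assumption. Values for other effect sizes or grades would come from the full Lee, Finn, and Liu (2012) tables.

```python
# A minimal sketch, assuming a 180-day school year as stated above.
# YEAR_EQUIVALENTS maps grade -> {effect size d: gain in school years},
# copied from the list above; other effect sizes require the full
# Lee, Finn, and Liu (2012) tables.
DAYS_PER_YEAR = 180

YEAR_EQUIVALENTS = {
    "K": {0.2: 0.1, 0.5: 0.3},
    "1": {0.2: 0.1, 0.5: 0.3},
    "2": {0.2: 0.2, 0.5: 0.4},
    "3": {0.2: 0.2, 0.5: 0.6},
    "4": {0.2: 0.4, 0.5: 0.9},
    "5": {0.2: 0.4, 0.5: 1.0},
}

def gain_in_school_days(grade: str, effect_size: float) -> float:
    """Convert a tabulated time-indexed effect size into school days."""
    years = YEAR_EQUIVALENTS[grade][effect_size]
    return years * DAYS_PER_YEAR

# Example: an effect size of 0.5 in Grade 4 equates to 162 school days.
print(gain_in_school_days("4", 0.5))  # 162.0
```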

 

Examples

 

A meta-analysis of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.37 for general parental involvement on elementary students’ performance on standardized tests. For those standardized tests that measured performance in Reading, this equates to 36 school days (Grades K-1), 54 school days (Grade 2), 60 school days (Grade 3), 118 school days (Grade 4), and 126 school days (Grade 5).

 

 

The same meta-analysis of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.40 for programs of parental involvement on elementary students’ performance on standardized tests. For those standardized tests that measured performance in Reading, this equates to 42 school days (Grades K-1), 60 school days (Grade 2), 84 school days (Grade 3), 132 school days (Grade 4), and 144 school days (Grade 5).

 

 

Reading (Grades 6-12)

 

Based on the same research on time-indexed effect sizes, the list below summarizes the noteworthy results for Reading in Grades 6-12:

 

  • In Grade 6, an effect size (d) of 0.2 equates to 0.6 of a school year (108 school days), and an effect size (d) of 0.5 equates to 1.4 of a school year (252 school days).
  • In Grade 7, an effect size (d) of 0.2 equates to 0.8 of a school year (144 school days), and an effect size (d) of 0.5 equates to 1.9 school years (342 school days).
  • In Grade 8, an effect size (d) of 0.2 equates to 1.0 of a school year (180 school days), and an effect size (d) of 0.5 equates to 2.5 school years (450 school days).
  • In Grade 9, an effect size (d) of 0.2 equates to 0.8 of a school year (144 school days), and an effect size (d) of 0.5 equates to 1.9 school years (342 school days).
  • In Grade 10, an effect size (d) of 0.2 equates to 0.5 of a school year (90 school days), and an effect size (d) of 0.5 equates to 1.3 school years (234 school days).
  • In Grade 11, an effect size (d) of 0.2 equates to 0.5 of a school year (90 school days), and an effect size (d) of 0.5 equates to 1.3 school years (234 school days).

 

Examples

 

A meta-analysis of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.47 for general parental involvement on secondary students’ performance on standardized tests. For those standardized tests that measured performance in Reading, this equates to 238 school days (Grade 6), 320 school days (Grade 7), 420 school days (Grade 8), 320 school days (Grade 9), 200 school days (Grade 10), and 200 school days (Grade 11).

 

 

The same meta-analysis of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.36 for programs of parental involvement on secondary students’ performance on standardized tests. For those standardized tests that measured performance in Reading, this equates to 185 school days (Grade 6), 234 school days (Grade 7), 315 school days (Grade 8), 234 school days (Grade 9), 152 school days (Grade 10), and 152 school days (Grade 11).

 

 

Observations

 

Conversion of effect sizes into instructional day equivalents is one way that seekers of competitive grants can translate abstruse research findings into more concrete and familiar terms.

 

The meta-analyses cited here are by no means the only ones available to eligible applicants for competitive grants in PK-12 Education. They are purely illustrative of what’s available. Grant seekers may use such findings in Research Rationales or Reviews of Literature – and elsewhere in proposals – to persuade reviewers that a project is likely to yield results of practical significance (e.g., improved academic achievement through parental involvement), and thus worthy of an investment of a funder’s scarce resources.

 

Note

 

The conversions of effect sizes into instructional days, as represented in this post and its graphics, derive from: Jaekyung Lee, Jeremy Finn, and Xiaoyan Liu, “Time-indexed Effect Size for P-12 Reading and Math Program Evaluation.” Paper presented at the Society for Research on Educational Effectiveness (SREE) Spring 2012 Conference, Washington, DC on March 9, 2012. It is available here.

This post explores interpretations of effect sizes in the context of writing proposals for competitively awarded grants in PK-12 Education. It translates effect sizes of several selected magnitudes into changes in comparative percentile ranks. It also refers to meta-analyses that have reported effect sizes at or near these selected magnitudes.

 

Research cited in examples throughout this post reported its results using Hedges’ g. Hedges’ g is a more conservative measure than Cohen’s d; it is less likely to overstate effect sizes, especially when samples are small. The primary difference between the two is that Hedges’ g uses a pooled standard deviation weighted by sample size, while Cohen’s d uses a simple pooled standard deviation.

 

Effect Size of 0.2

 

Once one calculates an effect size, one can interpret it in terms of changes in percentile rank (or changes in relative position along a bell curve distribution).

 

With an effect size of 0.2, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 11th. With an effect size of 0.2, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 58%.
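
These rank and percentage figures follow from the normal-distribution assumption. Below is a minimal Python sketch that reproduces them (the function names are illustrative; it assumes normally distributed groups with equal standard deviations, as in Coe’s table):

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Cumulative probability of the standard normal distribution."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def percent_of_control_below(d: float) -> float:
    """Percentage of the control group scoring below the average
    member of the experimental group, for effect size d."""
    return 100.0 * normal_cdf(d)

def equivalent_rank_in_control(d: float, group_size: int = 25) -> int:
    """Approximate rank (1 = highest) in a control group of the given
    size that matches the average member of the experimental group."""
    return round(group_size * (1.0 - normal_cdf(d)))

print(percent_of_control_below(0.2))    # ~57.9, i.e., about 58%
print(equivalent_rank_in_control(0.2))  # 11, i.e., 11th of 25
```

The same two functions reproduce the figures quoted below for effect sizes of 0.3, 0.5, and 0.7.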

 

Examples: A meta-analysis of efforts to reduce the academic achievement gap across racial/ethnic subgroups (Jeynes, 2015) found an overall effect size of 0.22 for family factors as a variable in the reduction of the gap. An earlier meta-analysis of overall programs of urban parental involvement (by program type) in grades PK-12 (Jeynes, 2012) found an overall effect size of 0.21 on non-standardized measures of academic achievement.

 

Effect Size of 0.3

 

With an effect size of 0.3, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 10th. With an effect size of 0.3, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 62%.

 

Examples: A meta-analysis of specific programs of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.27 on students’ overall academic achievement. A second meta-analysis of specific programs of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.36 on students’ overall academic achievement. A third meta-analysis of general programs of parental involvement (Jeynes, 2017) found an overall effect size of 0.30 on Latino students’ overall academic achievement in grades K-5.

 

Effect Size of 0.5

 

With an effect size of 0.5, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 8th. With an effect size of 0.5, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 69%.

 

Examples: A meta-analysis of general programs of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.47 on students’ performance on standardized tests. A second meta-analysis of general programs of parental involvement for African American students (Jeynes, 2003) found an effect size of 0.48 on students’ overall academic achievement.

 

Effect Size of 0.7

 

With an effect size of 0.7, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 6th. With an effect size of 0.7, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 76%.

 

Example: A meta-analysis of general programs of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.74 on students’ overall academic achievement.

 

Observations

 

The meta-analyses cited here are by no means the only ones available to eligible applicants for competitive grants in PK-12 Education. Those selected are for illustration only.

 

 

 

Grant seekers may use such findings in Research-Based Rationales or Reviews of Literature – and elsewhere in proposals – to persuade review panels that a project is likely to yield results of practical significance (e.g., improved academic achievement through parental involvement), and thus worthy of an investment of a funder’s scarce resources.

 

Note

 

Data represented in both graphics in this post come from Coe, “It’s the effect size, stupid: what effect size is and why it is important.” Paper presented at the Annual Conference of the British Educational Research Association (2002), University of Exeter, England, 12-14 September 2002; it is available here.

Earlier posts here have described ways to use PESTLE Analysis, SWOT Analysis, Logic Models, and other tools for developing competitive grant proposals.

 

As a powerful project planning and evaluation tool, meta-analysis also belongs in every grant proposal writer’s repertoire. This post is the first of a series that explores the uses of meta-analysis in competitive proposals for grants.

 

Context

 

Grant awards are scarce; competition for them is intense. Applicants must persuade review panels that their proposals are worth an investment of finite funds. One means of persuasion is meta-analysis.

 

Some funders already require a meta-analysis of existing research as part of an application for a grant to support further research. Research-based rationales are also commonplace as review criteria for many programs that award grants for direct services. Since the utility of meta-analysis extends beyond research grants, applicants for grants for direct services might well take heed of meta-analysis and use it in their proposals.

 

Advantages

 

Meta-analysis reviews existing research literature. As one educational researcher put it: “…A meta-analysis statistically combines all the relevant existing studies on a given subject in order to deter­mine the aggregated results of said research…. [Jeynes, 2011, p. 10].”

 

One task of meta-analysis is to calculate effect sizes. As a second researcher has stated: “One of the main advantages of using effect size is that when a particular experiment has been replicated, the different effect size estimates from each study can easily be combined to give an overall best estimate of the size of the effect. This process of synthesizing experimental results into a single effect size estimate is known as ‘meta-analysis.’… [Coe, 2002, p. 8].”

 

Attributes

 

Effect size is a standardized, scale-free measure of the relative size of the practical difference that an intervention makes on some aspect of an experimental group. It is particularly useful for quantifying effects measured on unfamiliar or arbitrary scales. It is also useful for comparing the relative sizes of effects from different studies. Interpretation of effect sizes generally depends on the assumptions that “control” and experimental groups are “normally distributed” and have the same standard deviations.

 

In calculating effect sizes, “…it is often better to use a ‘pooled’ estimate of standard deviation. The pooled estimate is essentially an average of the standard deviations of the experimental and control groups…. [This] is not the same as the standard deviation of all the values in both groups ‘pooled’ together … The use of a pooled estimate of standard deviation depends on the assumption that the two calculated standard deviations are estimates of the same population value… [i.e.,] that the experimental and control group standard deviations differ only as a result of sampling variation [Coe, 2002, p. 9].”

 

Standard Deviation

 

Before calculating an effect size, one must calculate the standard deviation, which measures the dispersion within a dataset relative to its mean. The steps in calculating a standard deviation (SD) are listed below (a brief code sketch follows the list):

  1. Calculate the mean value – Add all of the data points and divide by the number of data points.
  2. Calculate the variance of the data – Subtract each data point from the mean, square each of the resulting differences, sum the squares, and divide the sum by the number of data points less one.
  3. Take the square root of the variance to find the standard deviation.
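
A minimal Python sketch of these three steps (a sample standard deviation using the n − 1 divisor described in step 2; the function name and the data are illustrative):

```python
from math import sqrt

def sample_standard_deviation(data: list[float]) -> float:
    """Steps 1-3 above: mean, variance (n - 1 divisor), square root."""
    n = len(data)
    mean = sum(data) / n                                      # Step 1
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)   # Step 2
    return sqrt(variance)                                     # Step 3

print(sample_standard_deviation([82, 91, 75, 88, 79]))  # ~6.52
```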

 

After having found the standard deviation, one can calculate effect sizes.

 

Effect Sizes

 

The formula for the often-used Cohen’s = [Mean of experimental group] – [Mean of control group] ÷ Standard Deviation.

 

For Cohen’s effect size, the magnitude generally ranges from -3.0 to +3.0. Different measures of effect sizes apply different thresholds of magnitude before one interprets them as ‘Small’ or ‘Medium’ or ‘Large’. The table presents standard interpretations of Cohen’s – only to be used as guidelines and interpreted in the context of the research.

 

[Image: Magnitudes of Effect Sizes Graphic]

 

Measurement of effect size varies by the context and the measure used. Available formulas allow researchers – and grant proposal writers – to convert one type of effect size to another.
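
As one illustration of such conversions, the sketch below uses a widely cited pair of formulas linking Cohen’s d and the correlation coefficient r; the formulas assume two groups of roughly equal size, and the function names are illustrative:

```python
from math import sqrt

def d_to_r(d: float) -> float:
    """Convert Cohen's d to a (point-biserial) correlation r,
    assuming two groups of roughly equal size."""
    return d / sqrt(d ** 2 + 4)

def r_to_d(r: float) -> float:
    """Convert a correlation r back to Cohen's d (same assumption)."""
    return 2 * r / sqrt(1 - r ** 2)

print(d_to_r(0.5))    # ~0.24
print(r_to_d(0.24))   # ~0.49
```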

 

After calculating effect sizes, many researchers commonly calculate their confidence intervals.

 

Confidence Intervals

 

A confidence interval (CI) is a range of values, computed from a sample, that is expected to contain the true value of a population parameter (such as the mean) at a stated level of confidence. In educational research meta-analyses, the 95% confidence level is most often selected. Calculation of a CI uses the sample’s mean and standard deviation, and it assumes a normal distribution (the familiar bell curve). The CI reflects the degree of certainty associated with a sampling method: when the level is set at 95%, intervals constructed this way will contain the true mean 95% of the time.

 

There are several steps in calculating confidence intervals:

  1. Find the number of observations (n), calculate their mean (X), and calculate their standard deviation (s).
  2. Select a confidence level, look up its Z value in a Z table, then use the Z value in the formula: X ± Z × s / √n

 

Where X = mean, Z = selected Z value, s = standard deviation, √ = square root, and n = number of observations
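
A minimal Python sketch of this calculation for the common 95% confidence level (Z ≈ 1.96; the function name and the example values are illustrative):

```python
from math import sqrt

def confidence_interval(mean: float, sd: float, n: int,
                        z: float = 1.96) -> tuple[float, float]:
    """Lower and upper bounds of X ± Z * s / sqrt(n);
    Z = 1.96 corresponds to the 95% confidence level."""
    margin = z * sd / sqrt(n)
    return mean - margin, mean + margin

# Example: a mean effect size of 0.31, standard deviation 0.40,
# across 30 observations (illustrative numbers only).
print(confidence_interval(0.31, 0.40, 30))  # ~(0.17, 0.45)
```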

 

Grant seekers can explore examples of the use of confidence intervals associated with effect sizes for educational interventions in [Jeynes, 2012, p. 726] and in other research cited there.

 

Limitations

 

Like any statistical method, the use of meta-analyses has its limitations. Among these are:

  1. Interpretations of effect sizes assume the normal distribution of values for “control groups” and “experimental groups” and that the groups have the same standard deviations.
  2. Confidence intervals for effect sizes are not always calculated and reported in published meta-analyses.
  3. Some situations render problematic the interpretation of standardized effect sizes, such as: (1) when a sample has restricted range, or (2) when a sample does not come from a normal distribution, or (3) when the measurement from which it was derived has unknown reliability.

 

The next post in the series explores interpretations of effect sizes in the context of writing proposals for competitive grants in PK-12 education.

Knowing the language of budget development is essential for writing proposals that win grants. Entries here span Salaries to Zero Funding. Their context is North America.

 


 

Below is a list of glossary terms in this post:

 

Salaries, Seed Money, Selection Criteria, Sequestration, Single Point of Contact (SPOC), Soft Funds, Standard Form, Sub-Grantee, Supplanting, Supplies, Sustainability, Travel, Unallowable Costs, Uniform Application Forms, Unrestricted Funds, Wages, and Zero Funding.

 

SALARIES: The compensation of professional and technical personnel – typically those holding a post-secondary degree – before the addition of fringe benefits.

 

SEED MONEY: A grant award intended to help start a new project or initiative or to launch a new non-profit organization. It also may be called a Start-Up Grant.

 

SELECTION CRITERIA: The formal set of factors a grant maker uses in scoring and ranking a set of competitive proposals to determine which ones it will select for funding. It also may be called Criteria or Review Criteria.

 

SEQUESTRATION: A mandatory spending cut in the federal budget, such as through a repeal of legislation authorizing a grant program, a reduction of the funding appropriated for a grant program, or an appropriation of no funding for a grant program.

 

SINGLE POINT OF CONTACT (SPOC): The person in state government whom an applicant must inform when it is applying for a federal grant. The federal Office of Management and Budget maintains a list of single points of contact. Some states have one; others do not.

 

SOFT FUNDS: A non-technical term for the funding of staff positions or other resources using grant funds rather than other means (e.g., revenues from tax levies); it reflects the premise that such assets are not as secure or stable, over the long term, as those funded using other means (e.g., annual tax levies). Also see: Hard Funds.

 

STANDARD FORM: A blank template that an applicant must complete and submit – as each specific program requires – with its application for a federal grant. The federal General Services Administration provides a comprehensive collection of standard forms. Example: SF424. Also see: Uniform Application Forms.

 

SUB-GRANTEE: A lower-tier recipient (e.g., a county agency) of grant funds from a higher-tier recipient of those funds (e.g., a state agency) and not directly from the grant maker; also called a sub-recipient. Also see: Grantee.

 

SUPPLANTING: A deliberate shifting or displacement in the source of funds (e.g., state or local) used to afford a given resource (e.g., personnel) in an organization because of the availability of federal grant funds after a new grant award. One caveat in many government grant programs is “Do not supplant.”

 

SUPPLIES: A cost category for consumable resources such as paper, pens, pencils, postage, folders, files, binders, paperclips, toner, blank data storage media, and similar office products. Definitions and thresholds for value of the discrete items vary widely across grant programs and funding agencies. Also see: Materials.

 

SUSTAINABILITY: A measure of the perceived likelihood that an applicant (and its partners, if any) will be able to obtain and use funding (and other resources) from itself and/or other sources to continue its proposed project or initiative after its initial grant funding ends. Grant makers of all types often favor proposals that exhibit a high potential for sustainability.

 

TRAVEL: A cost category for costs associated with going place-to-place, including fares (e.g., air, bus, train, taxi, or shuttle), vehicle rentals or leases, mileage, tolls, meals, tips, and lodging. Every item assigned to this category must be clearly defined and thoroughly justified.

 

UNALLOWABLE COSTS: The cost categories or discrete line items that a grant maker forbids or discourages an applicant from including as part of a proposed budget. Also see: Allowable Costs.

 

UNIFORM APPLICATION FORMS: The standard forms that applicants must complete and submit with applications for federal grants; several of them require specific or detailed budget information.  Example: SF424. In federal programs, these are associated with specific notices of grant opportunities posted on http://www.grants.gov.

 

UNRESTRICTED FUNDS: Funds from a grant or any other source that an organization may use for any legal purpose, such as general funds or operating funds. Also see: Restricted Funds.

 

WAGES: The hourly compensation of non-professional personnel – typically all of those who do not hold a post-secondary degree – before the addition of fringe benefits, if any.

 

ZERO FUNDING: The termination of a grant program authorized by law or regulation by ending all appropriations for funding it. Also see: Declining Funding, Level Funding.

 

This post concludes a revised seven-part series Glossary for Budget Development. A companion five-part series covers Proposal Development, and a companion two-part series covers Evaluation Plans.

 

Knowing the language of budget development is essential for writing proposals that win grants. Entries here span Operating Support Grant to Review Panel. Their context is North America.

 


 

Below is a list of glossary terms in this post:

 

Operating Support Grant, Other, Overhead, Pass-Through Funding, Personnel, Private Foundation, Program Office, Program Officer, Project, Project Income, Project Period, Proposal, Recipient, Regulations, Replicability, Request for Proposals (RFP), Research Grant, Restricted Funds, and Review Panel.

 

OPERATING SUPPORT GRANT: A grant that supports the general purpose or work of an applicant, and as such may be used as general revenue or unrestricted funding.

 

OTHER: A cost category commonly used in state and federal grant programs for budget items that do not fit other categories. Every item assigned to this category should be as clearly defined and as well justified as every other item in a proposed budget. Avoid using this cost category for proposing budget line items vaguely identified as “contingency” or as “miscellaneous.”

 

OVERHEAD: See Indirect Costs.

 

PASS-THROUGH FUNDING: A scheme for the distribution of funding in which a first tier of grant recipients (e.g., state agencies) administers a grant program, awards sub-grants to a second tier of eligible applicants (e.g., school districts or non-profit organizations), and performs a yearly program audit of the second-tier grant recipients. It also may be called Flow-Through Funding.

 

PERSONNEL: A cost category for the human resources or labor, internal to the applicant as an organization, who will be involved in implementing a project; it includes positions paid in salaries and those paid in wages, and it excludes all independent contractors (e.g., evaluators and other consultants). Personnel may or may not be paid out of a proposed grant budget. Also see: Key Personnel, Staff,  Staffing Plan.

 

PRIVATE FOUNDATION: A legally defined type of nonprofit organization whose directors or trustees conduct charitable programs for social, cultural, educational, religious, or other permissible purposes. Example: Sarkeys Foundation.

 

PROGRAM OFFICE: An administrative unit, within a grant-making organization, that implements or coordinates the details of conducting a grant program, including the review and ranking of applications.

 

PROGRAM OFFICER: An administrator on the staff of a grant-making organization who runs a specific grant program, manages grant competitions, and provides technical assistance either to potential grant applicants or to existing grant recipients or to both.

 

PROJECT: The entire proposed plan for which an applicant requests grant funds.

 

PROJECT INCOME: The revenue an applicant’s project is expected to generate during a given time-span; it may include products sold, membership dues, service fees, earned interest, and funds raised by other means. It may also be called Program Income or Revenue.

 

PROJECT PERIOD: The total time for which support of a discretionary project has been approved; it is usually a series of one-year budget periods. Most project periods last one to five years; some may be longer, others may be shorter. Also see: Budget Period, Grant Period.

 

PROPOSAL: A written application of vastly varying length and content, submitted to one or more grant makers, describing a plan or initiative to meet one or more identified needs, and requesting partial or full funding for its support. Some grant makers and grant programs require much more formal, detailed, and highly structured proposals – narratives and budgets – than do others. It also may be called an Application or a Funding Request.

 

RECIPIENT: An individual or organization that will receive a grant or has received a grant.

 

REGULATIONS: Administrative guidelines for government grants, issued after enabling legislation, which establish and define eligible applicants; eligible beneficiaries; the nature of activities to be funded; allowable costs; selection criteria for proposal review; and other requirements. Example: Education Department General Administrative Regulations (EDGAR).

 

REPLICABILITY: The proven or predicted ability of a project’s effective activities and strategies to be transportable to another setting and to generate similar results in it; it is a factor in considering the potential impact of an initial grant award and it is a criterion often associated with grant programs that fund demonstration projects. Also see: Demonstration Grant.

 

REQUEST FOR PROPOSALS (RFP): A formal invitation to apply for a grant that describes what types of applicants are eligible to apply; when proposals are due; the program selection criteria; the contents required in a complete proposal; anticipated levels and durations of funding; and other considerations. The specific length and contents of an RFP vary widely from one grant program and one solicitation to another. Also may be called a Request for Applications (RFA) or a Notice of Funding Availability (NOFA).

 

RESEARCH GRANT: A grant designed to support research rather than to support other purposes such as direct services or general operating costs. Also see: Direct Services Grant.

 

RESTRICTED FUNDS: Funds that a grant recipient may use only for predetermined purposes – such as those defined in the approved budget of a funded grant proposal – and that, consequently, it cannot expend as general funds. Also see: Unrestricted Funds.

 

REVIEW PANEL: A group of peers or experts retained by a grant maker to evaluate the merits of grant proposals in a grant competition and to recommend which ones should be funded. Sometimes the reviewers may include one or more directors or trustees of a foundation.

 

A later post will cover Budget Development Glossary entries from Salaries to Zero Funding.

 

Knowing the language of budget development is essential for writing proposals that win grants. Entries here span In-Kind Contribution to Novice Applicant. Their context is North America.

 


 

Below is a list of glossary terms in this post:

 

In-Kind Contribution, Indirect Cost Rate, Indirect Costs (IDC), Invitational Priority, Lead Agency (Applicant), Lead Agency (Grant Maker), Letter of Commitment, Letter of Inquiry (LOI), Letter of Intent, Letter of Support, Level Funding, Leveraging, Line Item, Market Value, Matching Funds, Matching Grant, Materials, Multiyear Budget, Non-Competitive Grant, and Novice Applicant.

 

IN-KIND CONTRIBUTION: A non-cash donation of labor (paid staff or unpaid volunteer), facilities, equipment, materials, or supplies to carry out a project. Applicants for grants must exercise extraordinary care in calculating the cash value of in-kind contributions and in identifying, tracking, and reporting the sources of such contributions. Also see: Matching Funds.

 

INDIRECT COST RATE: An annually revised percentage established by a unit of government for a grant recipient that the recipient uses in computing the amount it charges to a grant to reimburse itself for the indirect costs it incurs in doing the work of the grant-funded project. The rate may be a Final Rate, a Fixed Rate, a Predetermined Rate, or a Provisional Rate. A foundation grant maker also may solicit and approve an applicant’s proposed indirect cost rate before it considers a proposal from an applicant or awards a grant to it.

 

INDIRECT COSTS (IDC): A cost category for costs that are not readily allocable to or identifiable with operating a specific grant program; it is also often called Overhead. Indirect costs equal direct costs multiplied by the approved indirect cost rate (IDC = DC x rate). Such costs commonly relate to administration and facilities. Generally, a government agency, as a grant maker, reimburses indirect costs only after it has negotiated and approved an indirect cost rate with the grant recipient. As grant makers, foundations are less apt than units of government to allow full or partial recovery of an organization’s indirect costs. Also see: Direct Costs, Indirect Cost Rate.

 

INVITATIONAL PRIORITY: An area of special focus which a grant maker would prefer to see an applicant address in its proposal, but which does not affect the review, rating, or rank ordering of proposals. Also see: Absolute Priority, Competitive Priority.

 

LEAD AGENCY (Applicant): The organization that submits a proposal on behalf of a partnership of two or more organizations and that serves as the grant recipient. If funded, the lead agency is legally responsible for implementing and administering its funded project, for properly managing all grant funds, and for submitting all required reports.

 

LEAD AGENCY (Grant Maker): Particularly in federal grant making, the agency or program office with the primary responsibility for approving or funding a project. It reviews the proposals, coordinates with other involved agencies, and notifies the applicant of its funding outcome.

 

LETTER OF COMMITMENT: A brief official letter that conveys the willingness of a partner organization to commit cash or other resources to a proposed project. It specifies the terms and conditions of the commitment, the precise resources to be offered or delivered, and the actual or estimated values of those resources. Also see: Letter of Support.

 

LETTER OF INQUIRY (LOI): A brief, but formal, mode of grant application, typically one to five pages long, often used when an applicant seeks a grant from a foundation. Commonly includes: introduction; problem statement; objectives and activities; evaluation plan; organizational capacity statement; and budget. Often forms a basis for deciding whether the foundation will request a full proposal from an applicant. Informally, also known as an LOI.

 

LETTER OF INTENT: (1) A brief official letter or email (or other specified form of notification) from a potential applicant to a grant maker that conveys its intention to apply for funding. The grant maker may request or require the letter of intent in order to gauge the number of applicants likely to be competing for funding in a given grant program. (2) Some grant makers may use the term as a synonym for Letter of Inquiry.

 

LETTER OF SUPPORT: A brief official letter that conveys the enthusiasm, endorsement, and encouragement of an individual or an organization for an applicant’s proposed grant-funded project or initiative and for its request for funding, but does not explicitly commit resources to it. Also see: Letter of Commitment.

 

LEVEL FUNDING: An amount of grant funding that does not change from year to year during a multiyear grant period. Also see: Declining Funding.

 

LEVERAGING: A measure of the potential role that a given grant award is likely to have in attracting other funding or resources to a proposed project or initiative. As the specific grant maker requires, an applicant may present either a ratio of requested grant funds to total project funds or a ratio of requested grant funds to funds from other sources.

 

LINE ITEM: A single, discrete, allowable element of expenditure with an allocated cost within a specific cost category. Line items are parts of a detailed, itemized budget.

 

MARKET VALUE: The economic value of a resource (e.g., volunteer labor at minimum wage) as determined up to the date and time an applicant submits a proposal (e.g., the wage rate in effect on or before that date). Often, an applicant determines market value by checking an official government publication or website or by reviewing a grant program’s regulations.

 

MATCHING FUNDS: The share of the total costs of a project or initiative, as required by law or regulation, which comes from any source other than the specific grant being sought. Matching funds may consist of the fair market value of donated resources (in-kind contributions) or of actual cash to be spent (cash) or of both. See the table for examples when an applicant is requesting a $500,000 grant award. Also see: Cost Sharing.

 

[Image: Matching Funds Graphic – examples for a $500,000 grant request]

 

MATCHING GRANT: A grant awarded to an applicant with the intention of matching some of the funds (i.e., as a partial match) or all of the funds (i.e., as a total match) awarded to an applicant by another source. Also see: Challenge Grant.

 

MATERIALS: A cost category for consumable resources such as media (e.g., books, workbooks, digital videodisks, or software), references, and training products. The category may be conjoined with Supplies or it may be subsumed as a part of Supplies. Also see: Supplies.

 

MULTIYEAR BUDGET: A budget covering all or part of two or more consecutive fiscal or calendar years. Many grant makers require a budget for an entire multiyear project period at the time of the original application for a grant.

 

NON-COMPETITIVE GRANT: A funding program from which applicants are eligible for a grant award if they complete and submit required materials by a given deadline. Also may be called: a Budget Earmark, an Allocation Grant, an Entitlement Grant, a Formula Grant, or a Mandatory Grant.

 

NOVICE APPLICANT: An individual or an organization that has not obtained a discretionary grant directly from a specified unit or level of government (e.g., a federal agency) or from a specified grant program within a pre-defined time-span (e.g., the last five fiscal years).

 

A later post will cover Budget Development Glossary entries from Operating Support Grant to Review Panel.

 

 

Knowing the language of budget development is essential for writing proposals that win grants. Entries here span Floor Amount to Hard Funds. Their context is North America.

 


 

Below is a list of glossary terms in this post:

 

Floor Amount, Form 990-PF, Formula Grant, Fringe Benefits, Full-Time Equivalent (FTE), Funding Cycle, Funding Offer, Funding Priority, General Grant, Grant, Grantee, Grantor, Grant Agreement, Grant Period, Guidelines, and Hard Funds.

 

FLOOR AMOUNT: The minimum amount allowed as a grant request, often stated as the lower limit of an anticipated funding range. Also see: Ceiling Amount.

 

FORM 990-PF: A yearly Internal Revenue Service (IRS) form required of all private foundations (hence the -PF) that provides a public record of the financial status and grant-making activity of a private foundation. Accessible (for free) on GuideStar, the forms are useful for prospect research.

 

FORMULA GRANT: A non-competitive grant whose amount is established by applying a formula based upon criteria described in a law, amplified in that law’s subsequent regulations, and awarded after a yearly formal application process; it may also be called an Entitlement Grant or an Allocation Grant or a Mandatory Grant. Example: Indian Education Formula Grants to Local Education Agencies.

 

FRINGE BENEFITS: A cost category for non-salary and non-wage modes of staff compensation that accrue to those who qualify for them; usually stated as a percentage (%) of salaries or wages. Among examples are: health insurance; dental insurance; unemployment insurance; workers’ compensation; paid holidays; paid sick leave; paid personal leave; paid vacation days; and FICA (social security).

 

FULL-TIME EQUIVALENT (FTE): A unit of staffing equal to the workload – and the associated financial obligation – of one full-time staff member. Full-time often denotes a position that requires more than 30 hours per week. Two or more persons may split a position in the budget to add up to one full-time equivalent. FTE may be written as a percentage (stated as: % FTE) or as a specific number of hours per week.

 

FUNDING CYCLE: A sequence of events that starts with a formal public notice that funds are available, and includes the deadline for submission of applications, the review of applications, the award of grants, the completion of contractual documents, and the release of funds; the same sequence may recur in subsequent years if funds are available.

 

FUNDING OFFER: A proposal by a grant maker, in oral or written form, to award a successful applicant an amount of funding that is less than it had requested; such an offer may occur when the grant maker either does not allow certain proposed line items in an applicant’s budget or does not have enough funds to fund the project or initiative at the full amount requested.

 

FUNDING PRIORITY: Any one of the project-related factors that grant makers may use to award extra rating-points to otherwise-qualified applicants. Priorities may also include such non-project factors as geographic distribution of grant awards and the diversity of types of funded applicants. Many federal grant programs announce absolute priorities, competitive priorities, or invitational priorities in their requests for proposals. Also called a Funding Preference.

 

GENERAL GRANT: A grant designed to subsidize the organization-wide operating expenses of a worthy applicant rather than to provide support for a specific project or initiative. Also called an Operating Support Grant or an Operational Grant. Example: Facility rent and utilities.

 

GRANT: An award of funding for an eligible recipient to do pre-defined activities using pre-defined resources over a pre-defined time-span to achieve pre-defined objectives and advance towards one or more pre-defined goals, but whose outcomes are less certain than those expected from a contract.

 

GRANTEE: The organization or individual that receives the grant funds and is responsible for implementing and administering the project or initiative and managing the grant funds; it is also called a Grant Recipient. Also see: Grantor, Sub-grantee.

 

GRANTOR: The organization (e.g., corporation, foundation, or governmental unit) that awards grants; also called the Funder, the Funding Agency, or the Grant Maker. Also see: Grantee, Sub-grantee.

 

GRANT AGREEMENT: A legally binding and enforceable understanding entered into by a grant recipient with a grant maker; it is commonly based on an approved application made by the grant recipient and it commits the grant recipient to implement certain activities and pursue certain objectives, within a pre-defined time-span, for a specific amount of funding. By reference, it may incorporate other municipal or state or federal statutes and regulations beyond those enabling the grant program.

 

GRANT PERIOD: The total time-span for which a grant maker has committed to funding a grant recipient; it may or may not last exactly as long as a budget period or a project period. Also see: Budget Period, Project Period.

 

GUIDELINES: The instructions that describe what the grant maker wants to fund, what applications for funding must contain, how applications – including their budgets – must be prepared and submitted, and how proposals will be reviewed. Also see: Request for Proposals.

 

HARD FUNDS: A non-technical term for the funding of staff positions or other resources that support a program or initiative by using annual tax levies or similarly predictable and renewable revenues rather than by using grant funds; its character reflects the perception that such assets are more secure, over the long term, than those funded using grant funds. Also see: Soft Funds.

 

A later post will cover Budget Development Glossary entries from In-Kind Contribution to Novice Applicant.

 

 

 

 

Knowing the language of budget development is essential for writing proposals that win grants. Entries here span Demonstration Grant to Fiscal Year. Their context is North America.

 


 

Below is a list of glossary terms in this post:

 

Demonstration Grant, Direct Cost, Direct Services Grant, Discretionary Grant, DUNS Number, Earmark, Eligible Activity, Eligible Applicant, Eligibility Criterion, Enabling Legislation, Endowment Fund, Equipment, External Grant, Family Foundation, Federated Giving Program, Fiscal Agent, Fiscal Sponsor, and Fiscal Year (FY).

 

DEMONSTRATION GRANT: A grant designed to help an applicant to test, prove, or establish the feasibility or effectiveness of new approaches or new types of services in solving one or more defined problems, or in addressing one or more defined needs.

 

DIRECT COST: A cost directly associated with operating a project and borne using funds from a grant maker. In government grants, direct costs commonly include: personnel (salaries, wages, and fringe benefits), consultants or contractual services, supplies and materials, equipment, travel, construction and renovation, and other. Foundation and corporate categories for allowable direct costs are typically fewer than government categories. Also see: Indirect Costs.

 

DIRECT SERVICES GRANT: A grant designed to provide services directly to a pre-defined population of beneficiaries rather than to support other purposes such as research or general operating costs. Also see: Research Grant.

 

DISCRETIONARY GRANT: (1) A grant awarded to a recipient selected after a competitive review based upon the judgment of the grant maker or at the option of the grant maker. A discretionary grant program commonly involves a high ratio of applications to grant awards. (2) In the foundation context, a discretionary grant also may be a grant awarded to a recipient based upon the judgment of a member of its board of directors or trustees or at the option of a member of its board of directors or trustees.

 

DUNS (DATA UNIVERSAL NUMBERING SYSTEM) NUMBER: A unique nine-digit identification number provided by Dun & Bradstreet (registration is free; at the time of writing, the registration site required Internet Explorer); it is required as an identifier for every applicant before it applies for a grant from the federal government.

 

EARMARK: A grant appropriated by a legislative body without a competitive peer review. It specifies the applicant’s name, the activity, and the grant amount.

 

ELIGIBLE ACTIVITY: One of a circumscribed set of activities for which applicants can propose to spend available grant funds. Enabling state or federal legislation often explicitly defines eligible activities. Many public and private grant makers also often define them in their application guidelines or on their websites.

 

ELIGIBLE APPLICANT: One of several specific and defined types of organizations that may apply for funding from a specific grant program at a specific time. Types commonly include: Non-Profit Organizations, Community-Based Organizations, Faith-Based Organizations, Institutions of Higher Education, State Educational Agencies, Local Educational Agencies, and Federally Recognized American Indian Tribes, among others. Depending upon the specific grant maker and the specific grant program, individuals also are often eligible to apply for grants.

 

ELIGIBILITY CRITERION: One of several qualifying factors that a potential applicant must satisfy before it seeks a grant; often it pertains to the type of individual or organization as an applicant or to the population or geographic area to benefit from a grant.

 

ENABLING LEGISLATION: A law, enacted at any level of government (e.g., city, borough, county, parish, state, federal), which creates and defines one or more grant programs.

 

ENDOWMENT FUND: An account of funds set up to be invested in perpetuity to provide income for the continuous support of a non-profit organization. Some foundations will award grants for endowments.

 

EQUIPMENT: A cost category for durable resources requested in a budget; generally, each discrete item of equipment lasts more than a defined period of time (e.g., one year or three years) and costs more than a defined minimum amount (e.g., $500 or $5,000). Definitions of equipment in terms of durability and minimum cost vary widely among grant makers. Also see: Supplies.

 

EXTERNAL GRANT: A grant awarded to the applicant by any source other than the applicant itself. Example: For a school district, sources of such external grants include the local community foundation, the state educational agency, and the federal educational agency.

 

FAMILY FOUNDATION: An independent, private foundation that the members of a single family both fund and maintain. Example: Walton Family Foundation.

 

FEDERATED GIVING PROGRAM: A collaborative fundraising effort usually administered by a supervising nonprofit organization that in turn distributes the funds generated through that effort as grants to other nonprofit organizations. Example: Tulsa Area United Way.

 

FISCAL AGENT: An organization that has legal accountability for managing a grant award, for expending its funds, and for reporting on grant expenditures.

 

FISCAL SPONSOR: A third-party organization that agrees to serve as the fiscal agent for a grant on behalf of an applicant or a consortium of applicants; some grant makers will permit use of a fiscal sponsor, others will not.

 

FISCAL YEAR (FY): A 12-month period at the end of which the financial accounts are closed for the organization in question. Common fiscal years are: October 1 through September 30 (federal), July 1 through June 30 (states), and January 1 through December 31 (foundations). Organization-wide financial audits commonly occur after the end of each fiscal year.

 

A later post will cover Budget Development Glossary entries from Floor Amount to Hard Funds.
