
This is the last of three posts on using foundation directories in prospect research. It covers: trustees/directors, financial data, and selected grants.

 

[Directories Graphic 3]

 

Trustees/Directors

 

Sometimes, as popular wisdom has it, it’s not what you know, it’s whom you know. If someone connected to an applicant organization (e.g., an executive director or a member of a board of directors) personally knows someone connected to a foundation (e.g., a director or a trustee), it may improve the applicant’s odds of getting a grant.

 

A well-placed connection on a foundation’s board of directors may be willing to advocate on behalf of a grant for a specific applicant. If not actual advocacy, the same connection may be willing to share deeper insights into what the foundation’s decision-makers favor in a grant proposal. Although the absence of a well-placed advocate is not a reason to forgo a grant opportunity, its presence can prove helpful.

 

Financial Data

 

Smaller foundations tend to hold fewer financial assets and to award fewer grants than larger ones. They also tend to award smaller amounts per grant or to award grants only to pre-selected (or invited) applicants.

 

No matter who does the work, preparing a proposal costs an applicant time and money. It may get a greater return on its investment if it seeks a single grant of $50,000 rather than using the same proposal to seek ten grants each for a tenth as much. However, if all an applicant needs is a grant of $5,000, it should not request one for $50,000.
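
As a rough illustration of that return-on-effort arithmetic, the short sketch below compares the two approaches; the $2,000 preparation cost per submission is entirely hypothetical, and the point is the ratio rather than the figures.

    # Hypothetical comparison: one $50,000 request versus ten $5,000 requests,
    # assuming each submission costs about $2,000 of staff time to prepare.
    cost_per_proposal = 2_000

    one_large_request = 50_000 / cost_per_proposal                # 25.0 dollars sought per dollar spent
    ten_small_requests = (10 * 5_000) / (10 * cost_per_proposal)  # 2.5 dollars sought per dollar spent

    print(one_large_request, ten_small_requests)                  # 25.0 2.5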

 

Selected Grants

 

Entries in a foundation directory may list sample recent grant awards. Such lists seldom present every grant award a foundation has made in a recent year.

 

Look at the amounts awarded and the nature of the recipients. If at least one recipient is similar to the applicant and if the amounts are similar to what the applicant needs, then add the funder to a list of possible grant makers. Look at the funder’s website and/or its annual 990-PF or 990 filings. Both sources will list every grant it made in a given reporting period and will confirm (or rule out) the foundation as a possible funder.

 


This is the second of three posts on using foundation directories in prospect research. It covers: deadlines, purposes and activities, and fields of interest.

 

[Directories Graphic 2]

 

Deadlines

 

Some foundations award grants yearly, others semi-yearly, others quarterly, and still others on a rolling basis. Semi-yearly means there are two opportunities to apply per year; quarterly means there are four. A rolling basis means there is no fixed deadline and applications can be submitted at virtually any time.

 

In some cases, foundations use two-step deadlines: one for a pre-proposal (or a letter of inquiry or a concept paper) and a later one for a full proposal (if invited). Only if a pre-proposal is persuasive will a subsequent full proposal be invited.

 

Grant award notices may lag a month or longer after a board meeting where proposals are reviewed and grant awards are approved. After a proposal is rejected, an applicant may need to wait a year before it submits another; if its proposal is funded, it may need to wait two years.

 

Purposes and Activities

 

A specific foundation may have many purposes or few; it also may fund many types of activities or few. A corporate charitable giving program may favor opportunities for its personnel to volunteer in the community and to raise public awareness of its brand through product donations; it will not fund the purchase of similar products made by other companies. Some foundations fund activities but not paid labor (usually termed personnel).

 

These varied grantor-specific funding purposes and allowable activities constrain the options available to potential applicants. A poor match here is not a match worth pursuing.

 

Fields of Interest

 

A specific foundation or corporate charity may have many fields of interest or few. It may fund strictly within its proclaimed interests or it may stray outside them from time to time. Directories list fields of interest only in general terms. By studying a funder’s recent grant-making history, an applicant can verify what the declared interests mean for its specific prospects.

 

 

Grant seekers use the grant maker profiles found in foundation directories to sort out strong leads from weak ones. This is the first of three posts on using foundation directories in prospect research. It covers: physical location, websites, limitations, types of grant makers, and 990-PF forms.

 

[Directories Graphic 1]

 

Physical Location

 

In general, the more distant a private foundation is from a grant seeker, the less likely it is to award a grant. Its address on a map is merely a first small clue in the search for potential funders. A local grant maker is often somewhat more familiar with local needs (or problems) and local priorities. Its directors and benefactors often have resolved to try to meet (or solve) them. Thus, there is good reason to look locally first, but that by itself is no reason not to look farther afield later.

 

Websites

 

The contents of a foundation website are often much more current and more extensive than those of even the best print or online directory. A grant seeker can search a funder’s website to verify or qualify the information found in a directory entry. It is often possible to use a foundation’s website to confirm deadlines, retrieve application forms and instructions, review grant history, identify current directors and trustees, and perform other tasks helpful in prospect research and in preparing grant applications.

 

Limitations

 

A limitation is a restriction on grant making. As a pre-condition, it shrinks the pool of applicants. A limitation may have to do with where, or what, or for what, or how many, or when, or how often, or how, or any other aspect of seeking a grant from a given funder. Often limitations pertain to geography, or purposes, or activities. If an applicant falls within – or, sometimes, outside – the scope of one or more limitations, it may need to look elsewhere for funding.

 

Types of Grant Makers

 

The type of grant maker (e.g., community foundation, family foundation, corporate charitable giving program) affects the entire solicitation process and the likelihood of funding. In a corporate charity, for example, decision-making will follow different paths and obey different logics than in foundations. A corporate charity may donate labor and products rather than cash grants. A family foundation may make less predictable funding decisions than a corporate one. A non-family independent foundation may require a more rigorous evaluation plan than a family one. And a community foundation may gather and manage many distinct grant programs under its roof.

 

990-PF Forms

 

The 990-PF is a yearly financial statement that private foundations file with the United States Internal Revenue Service (IRS). The more recent the year of the form on file the more it should reflect the foundation’s present priorities and practices. Comparisons of several years of filings may disclose patterns and trends in grant making.

 

On each year’s filing, look for the ranges and amounts of grant awards. Look also at the types and locations of applicants winning them. How to analyze a 990-PF is an art in itself, one in which the Foundation Center offers some basic assistance.

 

Only private foundations must file the 990-PF. Grant-making public charities and community foundations file Form 990 in the same manner as other non-profit organizations. Corporate charitable giving programs do not file yearly reports with the IRS.

 

In examining a 990-PF filing, if grant seekers know what to look for and how to interpret what they see, they may improve the results of their prospect research.

 

This post covers useful aspects of 990-PF filings, such as contact information, application procedures, and grants awarded.

 

[990-PF 2018]

 

Reporting Period

 

On Page 1, near its top, are blanks for the period a Form 990-PF filing is to cover. By definition, a calendar year starts January 1 and ends December 31. Many foundations use it as their fiscal year. If a foundation’s fiscal year is not a calendar year, the blanks will state different start and end dates. The fiscal year governs the timing of a foundation’s grant-making activities and thus may affect the timing of an applicant’s proposals.

 

Contact Information

 

Page 1 asks for the foundation’s current name and address (Section G), and its telephone number (if it has one) (Section B). Potential applicants should crosscheck the specifics with the foundation’s website, if any, since the information may not be up to the moment.

 

Foundation Assets

 

On Page 1, Section I states the fair market value of the foundation’s assets as of year-end. This figure is one indicator of the foundation’s size. In general, each year, by rule, foundations must expend 5% or more of their assets in making qualified contributions, gifts, and grants. Consequently, at a bare minimum, a foundation’s assets should be at least 20 times greater in value than the applicant’s possible grant request.
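
As a back-of-the-envelope check (a minimal sketch, not a rule the IRS states in these terms), the 5% minimum-distribution requirement implies that a request should be no more than about one-twentieth of a foundation’s assets:

    # Rough screen derived from the ~5% annual payout requirement: a foundation
    # would need assets of at least request / 0.05 (i.e., 20x the request) for a
    # single grant of that size to be plausible. Figures are illustrative only.
    def minimum_assets_for_request(request: float, payout_rate: float = 0.05) -> float:
        return request / payout_rate

    print(minimum_assets_for_request(25_000))   # 500000.0 -> assets of roughly $500,000 or more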

 

Foundation Staff

 

Part I, Lines 14-15, reports employee salaries, wages, and benefits. Sums significantly larger than zero imply that the foundation has at least part-time staff (one or more) to handle applicant queries.

 

Grants Awarded

 

Part I, Line 25, Column D gives the total contributions, gifts, and grants the foundation paid during the year of filing. This amount reflects the foundation’s recent actual grant-making activity. It should be several multiples larger than the applicant’s possible grant request.
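
A simple prospect screen might combine Line 25, Column D with the asset figure discussed above; the multiples used below (10x grants paid, 20x assets) are illustrative assumptions, not published thresholds.

    # Illustrative screen: compare a contemplated request with total grants paid
    # (Part I, Line 25, Column D) and with year-end fair market value of assets.
    def plausible_prospect(request, grants_paid, assets,
                           grants_multiple=10, assets_multiple=20):
        return (grants_paid >= grants_multiple * request
                and assets >= assets_multiple * request)

    print(plausible_prospect(request=25_000, grants_paid=400_000, assets=6_000_000))  # True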

 

Foundation Management

 

Part VIII, Section 1 names the officers, directors, trustees, and foundation managers, among others. The list shows who manages the foundation and who makes decisions about grant proposals. Researching their biographies may reveal possible connections between the applicant’s Board or staff and the foundation’s Board or staff; it may also disclose possible leads for initial contact and/or proposal selling points.

 

Charitable Activities

 

Part IX-A lists the foundation’s four largest direct charitable activities during the tax year. The list is one source of possible insights into the foundation’s priority beneficiaries and program areas. Review of the foundation’s website and publications, if any, may verify whether these priorities remain in effect after the reporting period ended.

 

Application Procedures

 

Part XV, Section 2 summarizes the foundation’s application submission procedures: to whom to address the application (Line A), what type and content of application are required (Line B), submission deadlines (Line C), and restrictions and limitations (Line D). A checkbox, if left blank, indicates that the foundation accepts unsolicited requests for funds. Again, applicants should crosscheck the particulars by reviewing the foundation’s website, if any.

 

Grant-Making History

 

Part XV, Section 3 lists the recipients – and amounts awarded – of grants and contributions paid during the year or approved for future payment. The more the details (e.g., type of recipient, location of recipient, amount awarded) match those describing the applicant and its contemplated grant proposal, the stronger the foundation should be as a lead for future funding.

 

 

Finding good leads for grant funding (or doing prospect research) can be one of the most difficult and time-consuming aspects of grant seeking. A very helpful step in this search – in the American context at least – is to look up a grant maker’s recent filings of Form 990-PF or Form 990. Such forms, and others like them, are the detailed yearly information returns grant makers submit to the Internal Revenue Service (IRS).

 

This post discusses where grant seekers can find and view these returns. A later post discusses how to extract useful information from a Form 990-PF. Its context is the United States of America.

 

Grant Maker Information Returns

 

Each year, private foundations must file a Form 990-PF with the IRS. The 990-PF is a public document that provides the filer’s address and contact information, identifies the filer’s executive officers, board of directors, and trustees, and presents financial data about the filer. It also describes the filer’s grant application procedures and deadlines and presents a complete list of grants awarded in the reporting period. Most such filings are available in Portable Document Format (PDF) and may be viewed with any PDF reader, such as the freely available Adobe Reader.

 

[990-PF 2018]

 

Only private foundations must file Form 990-PF. Community foundations and grant-making public charities must file Form 990. Direct corporate giving programs are not required to file any annual information returns with the IRS.

 

Foundation Center Finder Tools

 

The Foundation Center is an invaluable resource for researching grant makers’ filings of Forms 990 and 990-PF. They are accessible on its website via the 990 Finder, a free searchable database of 990-PFs and 990s filed by private foundations and charities. In addition, researchers may look up private grant makers in the United States using the Foundation Finder. This free tool provides basic information as well as access to 990-PFs and 990s.

 

Other Options

 

Researchers have options other than the Foundation Center. Registered users of GuideStar may use a free feature to examine or retrieve 990-PFs in its searchable database. They may also subscribe to more extensive and specialized premium services that provide access to information from Forms 990 and 990-PF. In addition, the Economic Research Institute maintains an extensive database of Forms 990, which is searchable for free and without pre-registration.

 

Beyond these large searchable databases, many individual grant makers now post their three most recent Form 990 or Form 990-PF filings on their websites.

 

This post examines the potential usefulness of meta-analysis for winning grants. It looks at how grant seekers can apply available research in seeking funding for parental involvement programs designed to improve academic achievement in Grades K-12. The post’s few examples by no means exhaust the sources of evidence available to applicants seeking grants to improve educational outcomes – or outcomes in other domains for which grants may be awarded.

 

Effect Sizes of Overall Parental Involvement Programs

 

In a meta-analysis (2012), Dr. William Jeynes, using Hedges’ g, found that overall parental involvement programs yielded effect sizes of 0.19 to 0.31 standard deviations (SD) on academic achievement in urban elementary schools, and of 0.32 to 0.35 SD on academic achievement in urban secondary schools. Effect sizes varied by specific measures of academic achievement – e.g., standardized tests (0.31 and 0.33 SD) or non-standardized assessments (0.19 and 0.32 SD). The magnitudes of such effect sizes are small, but positive.

For urban Latino students in Grades K-5, in Reading, in the context of overall parental involvement programs in urban schools, an effect size of 0.31 SD on standardized tests equates to gains of 0.17 to 0.6 school years; an effect size of 0.19 SD on non-standardized assessments equates to gains of 0.1 to 0.4 school years.

 

[Overall PI and ES Graphic 1]

 

For urban Latino students in Grades 6-12, in Reading, in the same context, an effect size of 0.33 SD on standardized tests equates to gains of 0.9 to 5.5 school years; an effect size of 0.32 SD on non-standardized assessments equates to gains of 0.83 to 5.3 school years.

 

[Overall PI and ES Graphic 2]

 

Effect Sizes of General Parental Involvement

 

In a meta-analysis (2017), Dr. William Jeynes, using Hedges’ g, found that general parental involvement yielded overall effect sizes on academic achievement of urban Latino students of 0.30 standard deviations (SD) in Grades K-5, 0.29 SD in Grades 6-12, and 0.45 SD in Grades K-12. Overall effect sizes varied by specific measures of academic achievement among urban Latino students – standardized tests (0.24), non-standardized assessments (0.64), or behavior (0.16). The magnitudes of such effect sizes are small to moderate, and positive.

 

In the context of general parental involvement, an overall effect size of 0.30 SD equates to gains of 0.17 to 0.6 school years in Reading for urban Latino students in Grades K-5; and an effect size of 0.29 SD equates to gains of 0.87 to 5.0 school years in Reading for urban Latino students in Grades 6-12. The magnitudes of such effect sizes are small, but positive.

 

For urban Latino students in Grades K-5, in Reading, in the context of specific measures of academic achievement, a gain of 0.24 SD on standardized tests equates to gains of 0.13 to 0.5 school years; a gain of 0.64 SD on non-standardized assessments equates to a gain of 0.4 to 1.3 school years; and a gain of 0.16 SD on behavior equates to gains of 0.08 to 0.32 school years.

 

[General PI and ES Graphic 1]

 

For urban Latino students in Grades 6-12, in Reading, in the context of specific measures of academic achievement, a gain of 0.24 SD on standardized tests equates to gains of 0.67 to 4.1 school years; a gain of 0.64 SD on non-standardized assessments equates to a gain of 1.85 to 10.9 school years; and a gain of 0.16 SD on behavior equates to gains of 0.48 to 2.7 school years.

 

[General PI and ES Graphic 2]

 

Observations

 

Both overall and general programs of parental involvement appear to yield small to moderate – and positive – effect sizes on several measures of academic achievement among urban Latino students in Grades K-12. Such research is potentially useful as an element of an evidence-based rationale for a plan of action in a competitive grant proposal.

 

Available meta-analyses – such as the examples provided here – demonstrate the practical significance of overall and general parental involvement on measures of the academic achievement of urban students. Their findings are useful evidence of the likelihood that implementing such programs will contribute to improved educational outcomes. As such, these meta-analyses – and others similar to them – are welcome and useful resources for applicants wishing to persuade public and private funders to award grants to create or expand programs of parental involvement in Grades K-12.

 

This post is part of a series about the potential usefulness of meta-analysis for winning grants. It explores how grant seekers can apply available research in seeking funding for parental involvement programs designed to improve academic achievement in urban elementary and secondary schools. The post’s few examples by no means exhaust the sources of evidence available to applicants seeking grants to improve educational outcomes – or outcomes in other domains for which grants may be awarded.

 

Effect Sizes in Elementary Schools

 

In a meta-analysis (2005), Dr. William Jeynes, using Hedges’ g, found that general parental involvement yielded overall effect sizes of 0.37 to 0.85 standard deviations (SD) on academic achievement in urban elementary schools, and parental involvement programs yielded overall effect sizes of 0.27 to 0.40 SD on academic achievement in urban elementary schools. Overall effect sizes of general parental involvement varied by specific measures of academic achievement – e.g., overall (0.74), grades (0.85), standardized tests (0.37) or academic attitudes and behaviors (0.34). Overall effect sizes of specific programs of parental involvement also varied by specific measures of academic achievement – e.g., overall (0.27), grades (0.32), standardized tests (0.40) or academic attitudes and behaviors (0.30).

 

In the context of general parental involvement, an overall effect size of 0.74 standard deviations (SD) on overall academic achievement equates to gains of from 0.45 to 1.45 school years for Reading in Grades K-5. An overall effect size of 0.85 SD on grades equates to gains of from 0.55 to 1.7 school years for Reading in Grades K-5. An overall effect size of 0.37 SD on standardized tests equates to gains of from 0.2 to 0.7 school years for Reading in Grades K-5. An overall effect size of 0.34 SD on academic attitudes and behaviors equates to gains of from 0.2 to 0.7 school years for Reading in Grades K-5.

 

[General PI in ES]

 

In the context of specific programs of parental involvement, an overall effect size on overall academic achievement of 0.27 standard deviations (SD) equates to a gain of from 0.17 to 0.67 school years for Reading in Grades K-5. An overall effect size on grades of 0.32 SD equates to gains of from 0.2 to 0.7 school years for Reading in Grades K-5. An overall effect size on standardized tests of 0.40 SD equates to gains of from 0.23 to 0.73 school years for Reading in Grades K-5. An overall effect size on academic attitudes and behaviors of 0.30 SD equates to gains of from 0.19 to 0.69 school years for Reading in Grades K-5.

 

[Specific PI in ES]

 

Effect Sizes in Secondary Schools

 

In a meta-analysis (2007), Dr. William Jeynes, using Hedges’ g, found that general parental involvement yielded overall effect sizes of 0.40 to 0.47 standard deviations (SD) on academic achievement in urban secondary schools, and parental involvement programs yielded effect sizes of 0.25 to 0.36 SD on academic achievement in urban secondary schools. Overall effect sizes of general parental involvement varied by specific measures of academic achievement – e.g., overall (0.46), grades (0.40), standardized tests (0.47) or academic attitudes and behaviors (0.43). Overall effect sizes of specific programs of parental involvement also varied by specific measures of academic achievement – e.g., overall (0.36), grades (0.25), standardized tests (0.36) or academic attitudes and behaviors (0.25).

 

In the context of general parental involvement, an overall effect size on overall academic achievement of 0.46 standard deviations (SD) equates to gains of from 1.25 to 7.1 school years for Reading in Grades 6-12. An overall effect size on grades of 0.40 SD equates to gains of from 1.1 to 6.9 school years for Reading in Grades 6-12. An overall effect size on standardized tests of 0.47 SD equates to gains of from 1.3 to 7.2 school years for Reading in Grades 6-12. An overall effect size on academic attitudes and behaviors of 0.43 SD equates to gains of from 1.15 to 7.0 school years for Reading in Grades 6-12.

 

[General PI in SS]

 

In the context of specific programs of parental involvement, an overall effect size on overall academic achievement of 0.36 standard deviations (SD) equates to gains of from 1.0 to 5.9 school years for Reading in Grades 6-12. An overall effect size on grades of 0.25 SD equates to gains of from 0.67 to 4.2 school years for Reading in Grades 6-12. An overall effect size on standardized tests of 0.36 SD equates to gains of from 1.0 to 5.9 school years for Reading in Grades 6-12. An overall effect size on academic attitudes and behaviors of 0.43 SD equates to gains of from 1.15 to 7.0 school years for Reading in Grades 6-12.

 

[Specific PI in SS]

 

Observations

 

Available meta-analyses – such as the examples provided here – demonstrate the practical significance of general and specific programs of parental involvement on measures of the academic achievement of urban elementary and secondary students. Their findings are useful evidence of the likelihood that implementing such programs will contribute to improved educational outcomes. As such, these meta-analyses – and others in the research literature similar to them – are welcome and useful resources for applicants wishing to persuade public and private funders to award grants to create or expand programs of parental involvement in Grades PK-12.

 

 

This post is the first in a series illustrating the value of meta-analysis in grant seeking. It explores local findings concerning the effect sizes associated with implementing the HIPPY (Home Instruction for Parents of Preschool Youngsters) Program. Its purpose is to illustrate how grant applicants might utilize the results of available meta-analyses in the Research-Based Rationales of their competitive grant proposals for public and private funding.

 

Bracken School Readiness Assessment

 

The Bracken School Readiness Assessment, 3rd edition (or BSRA-3), is an individual, standardized, cognitive test. It appears to be a reliable and valid assessment of school readiness. It has a moderate to high test-retest reliability of 0.76 to 0.92; its split-half reliability is high (0.95) for its overall normative sample of 750 children. Its content validity, based on the correlation of the BSRA-3 with the BSRA, is high (0.85). It is used on a pre-post basis to assess children’s conceptual knowledge at a school year’s start and end.

 

The research-based HIPPY Program, as implemented in Houston Independent School District (HISD), uses results on the BSRA-3 “to assess school readiness, considering children’s knowledge of concepts that kindergarten teachers traditionally teach to prepare children for formal education (HISD, 2018, p. 13).” In Texas, the University of North Texas administers the BSRA-3, designed for children in grades PK-2, on a pre-post, fall-spring basis, to children ages 3-5 years whose families are enrolled in HIPPY (HISD, 2018, p. 9).

 

The BSRA-3 measures six basic skills: sizes, shapes, numbers/counting, letters, colors, and comparisons (HISD, 2018, p. 9). “Bracken data are based on parents’ perceptions of their child’s abilities in the targeted areas (HISD, 2018, p.13).” HISD calculates and reports descriptive data after each 30-week HIPPY program year (HISD, 2018, p. 18; HISD, 2017, p. 27; HISD, 2016, p. 22).

 

Effect Sizes

 

During the past three school years, HISD has served 637 children (in 2015-16), 762 children (in 2016-17), and 499 children (in 2017-18) (HISD, 2016, p. 11; HISD, 2017, p. 12; HISD, 2018, p. 13). For each program year, it reports its means, standard deviations, and effect sizes, as well as abundant demographic data about children served.

 

For 2017-18, HISD reports effect sizes using Hedge’s g; for 2015-16 and 2016-17, it reports effect sizes using Cohen’s d. “Hedge’s follows similar criteria to Cohen’s for determining the strength of an intervention with an effect size of 0.2 = small effect, 0.5 = moderate effect, and 0.8 = large effect (HISD, 2018, p. 9; HISD, 2017, p. 15; and HISD, 2016, p. 15).” The What Works Clearinghouse considers effect sizes of 0.25 standard deviations or larger to be “substantively important (HISD, 2018, p. 9).”
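
As a small illustration of those interpretive benchmarks, the sketch below labels an effect size using the 0.2 / 0.5 / 0.8 criteria quoted above and flags the What Works Clearinghouse 0.25 threshold; the function name, the cutpoint handling, and the "negligible" label are my own framing, not part of the quoted criteria.

    # Interpretive benchmarks quoted above: 0.2 = small, 0.5 = moderate, 0.8 = large;
    # the What Works Clearinghouse treats >= 0.25 SD as "substantively important."
    def describe_effect_size(g):
        if g >= 0.8:
            label = "large"
        elif g >= 0.5:
            label = "moderate"
        elif g >= 0.2:
            label = "small"
        else:
            label = "negligible"   # below the smallest benchmark quoted above
        wwc = "substantively important" if g >= 0.25 else "below the WWC threshold"
        return f"{label} ({wwc})"

    print(describe_effect_size(0.72))   # moderate (substantively important)
    print(describe_effect_size(0.99))   # large (substantively important)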

 

  • In 2015-16, effect sizes range from 0.57 to 0.63. These are moderate effect sizes. HISD reports the magnitude of ‘the effect size on the overall school readiness composite’ as 0.72 (p < .001) (HISD, 2016, p. 7 and p. 22).

 

  • In 2016-17, effect sizes range from 0.78 to 0.90. These are large effect sizes. HISD reports the magnitude of ‘the total effect of HIPPY on school readiness’ as 0.99 (p < .001), a large effect size (HISD, 2017, pp. 3-4 and p.27).

 

  • In 2017-18, effect sizes range from 0.65 to 0.90. These are moderate to large effect sizes. HISD does not report an overall effect size or statistical significance for 2017-18 (HISD, 2018, p. 9).

 

 

Observations

 

The findings of annual meta-analyses, as reported by a large urban school district, present promising evidence that adoption or expansion of the HIPPY Program would be a worthwhile investment of scarce grant funding. As Lee et al. (2012) indicate, in Pre-Kindergarten, in Reading, an effect size  (Cohen’s d) of 0.50 equates to 0.5 years (or 90 days) of instruction for children in Pre-Kindergarten, and to 0.3 years (or 54 days) of instruction for children in Kindergarten. Further, an effect size  (Cohen’s d) of 0.80 equates to 0.8 years (or 144 days) of instruction for children in Pre-Kindergarten, and to 0.5 years (or 90 days) of instruction for children in Kindergarten. In Mathematics, for Pre-Kindergarten and Kindergarten, Lee et al. (2012) calculate that the payoffs are almost as large as for Reading. HISD reports effect sizes for its HIPPY Program that are consistently larger than 0.50 – and often larger than 0.80 – both for sub-scale results and for composite results.
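
To make those equivalences concrete, the brief sketch below converts the year-of-instruction fractions cited above into instructional days, assuming a 180-day school year (a common U.S. figure used here only for simplicity).

    # Year-of-instruction equivalents in Reading from Lee et al. (2012), as cited
    # above; the 180-day school year is a simplifying assumption.
    DAYS_PER_YEAR = 180
    year_equivalents = {          # (grade level, effect size d) -> fraction of a school year
        ("Pre-K", 0.50): 0.5,
        ("K",     0.50): 0.3,
        ("Pre-K", 0.80): 0.8,
        ("K",     0.80): 0.5,
    }

    for (grade, d), years in year_equivalents.items():
        print(f"{grade}: d = {d:.2f} is roughly {years} school years, or {round(years * DAYS_PER_YEAR)} days")
    # Pre-K: d = 0.50 is roughly 0.5 school years, or 90 days (and so on).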

 

When combined with the findings of other studies of the practical significance for school readiness of families participating in HIPPY (e.g., Brown and Johnson, 2014, available here), applicants can make a strong case to grant makers (and other potential funders, arguably) for the potential short-term (and long-term) value of offering HIPPY in their communities.

 

This post explores interpretations of effect sizes in the context of writing proposals for competitive grants in PK-12 education. It translates effect sizes into time-indexed measures of academic growth in Grades PK-11 for instruction in Reading. Such conversion helps to transform the unfamiliar into the familiar.

 

Time Indexed Effect Sizes and Academic Growth

 

Research has generated time-indexed effect sizes based on national norms of academic growth in Reading and Mathematics (Lee et al., 2012). It is now possible to convert Cohen’s d (a standardized difference between group means) into d′ (an equivalent amount of schooling, expressed in school years).

 

Reading (Grades K-5)

 

In the context of the United States of America, a school year is commonly 180 instructional days (±5 days). Based on the research on time-indexed effect sizes – and assuming, for simplicity of calculation, a school year of exactly 180 days – the list below summarizes time-indexed effect sizes in Reading in Grades K-5 (a short sketch after the list shows the day arithmetic):

 

  • In K, an effect size (d) of 0.2 equates to 0.1 of a school year (18 school days), and an effect size (d) of 0.5 equates to 0.3 of a school year (54 school days).
  • In Grade 1, an effect size (d) of 0.2 equates to 0.1 of a school year (18 school days), and an effect size (d) of 0.5 equates to 0.3 of a school year (54 school days).
  • In Grade 2, an effect size (d) of 0.2 equates to 0.2 of a school year (36 school days), and an effect size (d) of 0.5 equates to 0.4 of a school year (72 school days).
  • In Grade 3, an effect size (d) of 0.2 equates to 0.2 of a school year (36 school days), and an effect size (d) of 0.5 equates to 0.6 of a school year (108 school days).
  • In Grade 4, an effect size (d) of 0.2 equates to 0.4 of a school year (72 school days), and an effect size (d) of 0.5 equates to 0.9 of a school year (162 school days).
  • In Grade 5, an effect size (d) of 0.2 equates to 0.4 of a school year (72 school days), and an effect size (d) of 0.5 equates to 1.0 of a school year (180 school days).
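
Under the 180-day assumption stated above, each day figure in the list is simply the school-year fraction multiplied by 180; a minimal sketch of that arithmetic (the fractions are copied from the list):

    # School-year fractions for d = 0.2 and d = 0.5 in Reading, Grades K-5, as
    # listed above (Lee et al., 2012); day counts assume a 180-day school year.
    SCHOOL_YEAR_DAYS = 180
    year_fractions = {            # grade -> (fraction at d = 0.2, fraction at d = 0.5)
        "K": (0.1, 0.3), "1": (0.1, 0.3), "2": (0.2, 0.4),
        "3": (0.2, 0.6), "4": (0.4, 0.9), "5": (0.4, 1.0),
    }

    for grade, (small, moderate) in year_fractions.items():
        print(f"Grade {grade}: d=0.2 -> {small * SCHOOL_YEAR_DAYS:.0f} days, "
              f"d=0.5 -> {moderate * SCHOOL_YEAR_DAYS:.0f} days")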

 

Examples

 

A meta-analysis of parental involvement in urban elementary schools (Jeynes, 2005) found overall effect sizes of 0.37 for general parental involvement on elementary students’ performance on standardized tests. For those standardized tests that measured performance in Reading, this equates to: 36 school days (in Grades K-1), to 54 school days (Grade 2), to 60 school days (Grade 3), to 118 school days (Grade 4), and to 126 school days (Grade 5).

 

 

The same meta-analysis of parental involvement in urban elementary schools (Jeynes, 2005) found overall effect sizes of 0.40 for programs of parental involvement on elementary students’ performance on standardized tests. For those standardized tests that measured performance in Reading, this equates to: 42 school days (in Grades K-1), to 60 school days (Grade 2), to 84 school days (Grade 3), to 132 school days (Grade 4), and to 144 school days (Grade 5).

 

 

Reading (Grades 6-12)

 

Based on the same research on time-indexed effect sizes, the list below summarizes the noteworthy results in Reading in Grades 6-12:

 

  • In Grade 6, an effect size (d) of 0.2 equates to 0.6 of a school year (108 school days), and an effect size (d) of 0.5 equates to 1.4 of a school year (252 school days).
  • In Grade 7, an effect size (d) of 0.2 equates to 0.8 of a school year (144 school days), and an effect size (d) of 0.5 equates to 1.9 school years (342 school days).
  • In Grade 8, an effect size (d) of 0.2 equates to 1.0 of a school year (180 school days), and an effect size (d) of 0.5 equates to 2.5 school years (450 school days).
  • In Grade 9, an effect size (d) of 0.2 equates to 0.8 of a school year (144 school days), and an effect size (d) of 0.5 equates to 1.9 school years (342 school days).
  • In Grade 10, an effect size (d) of 0.2 equates to 0.5 of a school year (90 school days), and an effect size (d) of 0.5 equates to 1.3 school years (214 school days).
  • In Grade 11, an effect size (d) of 0.2 equates to 0.5 of a school year (90 school days), and an effect size (d) of 0.5 equates to 1.3 school years (214 school days).

 

Examples

 

A meta-analysis of parental involvement in urban secondary schools (Jeynes, 2007) found overall effect sizes of 0.47 for general parental involvement on secondary students’ performance on standardized tests. For those standardized tests that measured performance in Reading, this equates to: 238 school days (Grade 6), to 320 school days (Grade 7), to 420 school days (Grade 8), to 320 school days (Grade 9), to 200 school days (Grade 10), and to 200 school days (Grade 11).

 

 

The same meta-analysis of parental involvement in urban secondary schools (Jeynes, 2007) found overall effect sizes of 0.36 for programs of parental involvement on secondary students’ performance on standardized tests. For those standardized tests that measured performance in Reading, this equates to: 185 school days (Grade 6), to 234 school days (Grade 7), to 315 school days (Grade 8), to 234 school days (Grade 9), to 152 school days (Grade 10), and to 152 school days (Grade 11).

 

 

Observations

 

Conversion of effect sizes into instructional day equivalents is one way that seekers of competitive grants can translate abstruse research findings into more concrete and familiar terms.

 

The meta-analyses cited here are by no means the only ones available to eligible applicants for competitive grants in PK-12 Education. They are purely illustrative of what’s available. Grant seekers may use such findings in Research Rationales or Reviews of Literature – and elsewhere in proposals – to persuade reviewers that a project is likely to yield results of practical significance (e.g., improved academic achievement through parental involvement), and thus worthy of an investment of a funder’s scarce resources.

 

Note

 

The conversions of effect sizes into instructional days, as represented in this post and its graphics, derive from: Jaekyung Lee, Jeremy Finn, and Xiaoyan Liu, “Time-indexed Effect Size for P-12 Reading and Math Program Evaluation.” Paper presented at the Society for Research on Educational Effectiveness (SREE) Spring 2012 Conference, Washington, DC on March 9, 2012. It is available here.

This post explores interpretations of effect sizes in the context of writing proposals for competitively awarded grants in PK-12 Education. It translates effect sizes of several selected magnitudes into changes in comparative percentile ranks. It also refers to meta-analyses that have reported effect sizes at or near these selected magnitudes.

 

Research cited in examples throughout this post reported its results using Hedges’ g. Hedges’ g is a more conservative measure than Cohen’s d; it is less likely to overstate effect sizes. The primary difference between the two is that Hedges’ g uses a pooled standard deviation weighted by sample size, while Cohen’s d uses a simple pooled standard deviation.
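
One common way to express that distinction is sketched below: Cohen’s d computed with a simple (unweighted) pooled standard deviation, and Hedges’ g computed with a pooled standard deviation weighted by each group’s degrees of freedom. This is an illustrative formulation with made-up numbers, not necessarily the exact formula used in any study cited here.

    from math import sqrt

    def cohens_d(m1, m2, s1, s2):
        # Simple pooled SD, ignoring group sizes (one common form of Cohen's d).
        pooled = sqrt((s1 ** 2 + s2 ** 2) / 2)
        return (m1 - m2) / pooled

    def hedges_g(m1, m2, s1, s2, n1, n2):
        # Pooled SD weighted by each group's degrees of freedom (Hedges' g).
        pooled = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled

    # With unequal group sizes the two measures diverge; here g is slightly smaller.
    print(cohens_d(105, 100, 14, 16))            # ~0.33
    print(hedges_g(105, 100, 14, 16, 40, 120))   # ~0.32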

 

Effect Size of 0.2

 

Once one calculates an effect size, one can interpret it in terms of changes in percentile rank (or changes in relative position along a bell curve distribution).

 

With an effect size of 0.2, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 11th. With an effect size of 0.2, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 58%.
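
The figures in this and the following sections follow from the bell-curve interpretation described above; the minimal sketch below (for a comparison group of 25, as in the text) reproduces them using the standard normal distribution.

    from math import erf, sqrt

    def percent_below(d):
        # Proportion of the control group expected to fall below the average
        # member of the experimental group, assuming normal distributions.
        return 0.5 * (1 + erf(d / sqrt(2)))

    def equivalent_rank(d, group_size=25):
        # Rank (1st = highest) of the control-group member who is equivalent
        # to the average member of the experimental group.
        return int(group_size * (1 - percent_below(d)) + 0.5)

    for d in (0.0, 0.2, 0.3, 0.5, 0.7):
        print(d, round(percent_below(d) * 100), equivalent_rank(d))
    # 0.0 -> 50%, 13th; 0.2 -> 58%, 11th; 0.3 -> 62%, 10th;
    # 0.5 -> 69%, 8th;  0.7 -> 76%, 6th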

 

Examples: A meta-analysis of efforts to reduce the academic achievement gap across racial/ethnic subgroups (Jeynes, 2015) found an overall effect size of 0.22 for family factors as a variable in the reduction of the gap. An earlier meta-analysis of overall programs of urban parental involvement (by program type) in grades PK-12 (Jeynes, 2012) found an overall effect size of 0.21 on non-standardized measures of academic achievement.

 

Effect Size of 0.3

 

With an effect size of 0.3, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 10th. With an effect size of 0.3, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 62%.

 

Examples: A meta-analysis of specific programs of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.27 on students’ overall academic achievement. A second meta-analysis of specific programs of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.36 on students’ overall academic achievement. A third meta-analysis of general programs of parental involvement (Jeynes, 2017) found an overall effect size of 0.30 on Latino students’ overall academic achievement in grades K-5.

 

Effect Size of 0.5

 

With an effect size of 0.5, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 8th. With an effect size of 0.5, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 69%.

 

Examples: A meta-analysis of general programs of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.47 on students’ performance on standardized tests. A second meta-analysis of general programs of parental involvement for African American students (Jeynes, 2003) found an effect size of 0.48 on students’ overall academic achievement.

 

Effect Size of 0.7

 

With an effect size of 0.7, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 6th. With an effect size of 0.7, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 76%.

 

Example: A meta-analysis of general programs of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.74 on students’ overall academic achievement.

 

Observations

 

The meta-analyses cited here are by no means the only ones available to eligible applicants for competitive grants in PK-12 Education. Those selected are for illustration only.

 

Grant seekers may use such findings in Research-Based Rationales or Reviews of Literature – and elsewhere in proposals – to persuade review panels that a project is likely to yield results of practical significance (e.g., improved academic achievement through parental involvement), and thus worthy of an investment of a funder’s scarce resources.

 

Note

 

Data represented in both graphics in this post come from Coe, “It’s the effect size, stupid: what effect size is and why it is important.” Paper presented at the Annual Conference of the British Educational Research Association (2002), University of Exeter, England, 12-14 September 2002; it is available here.
