

This post explores interpretations of effect sizes in the context of writing proposals for competitive grants in PK-12 education. It translates effect sizes into time-indexed measures of academic growth in Reading for Grades K-11. Such conversion helps to transform the unfamiliar into the familiar.

 

Time Indexed Effect Sizes and Academic Growth

 

Research has generated time-indexed effect sizes based on national norms of academic growth in Reading and Mathematics (Lee et al., 2012). It is now possible to convert Cohen’s d (a standardized difference between group means) into d′ (school years of schooling).

 

Reading (Grades K-5)

 

In the United States, a school year is commonly 180 instructional days (±5 days). Assuming, for simplicity of calculation, a school year of exactly 180 days, the list below summarizes the research on time-indexed effect sizes in Reading in Grades K-5:

 

  • In K, an effect size (d) of 0.2 equates to 0.1 of a school year (18 school days), and an effect size (d) of 0.5 equates to 0.3 of a school year (54 school days).
  • In Grade 1, an effect size (d) of 0.2 equates to 0.1 of a school year (18 school days), and an effect size (d) of 0.5 equates to 0.3 of a school year (54 school days).
  • In Grade 2, an effect size (d) of 0.2 equates to 0.2 of a school year (36 school days), and an effect size (d) of 0.5 equates to 0.4 of a school year (72 school days).
  • In Grade 3, an effect size (d) of 0.2 equates to 0.2 of a school year (36 school days), and an effect size (d) of 0.5 equates to 0.6 of a school year (108 school days).
  • In Grade 4, an effect size (d) of 0.2 equates to 0.4 of a school year (72 school days), and an effect size (d) of 0.5 equates to 0.9 of a school year (162 school days).
  • In Grade 5, an effect size (d) of 0.2 equates to 0.4 of a school year (72 school days), and an effect size (d) of 0.5 equates to 1.0 of a school year (180 school days).

 

Examples

 

A meta-analysis of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.37 for general parental involvement on elementary students’ performance on standardized tests. For standardized tests that measured performance in Reading, this equates to 36 school days in Grades K-1, 54 school days in Grade 2, 60 school days in Grade 3, 118 school days in Grade 4, and 126 school days in Grade 5.

 

 

The same meta-analysis (Jeynes, 2005) found an overall effect size of 0.40 for programs of parental involvement on elementary students’ performance on standardized tests. For standardized tests that measured performance in Reading, this equates to 42 school days in Grades K-1, 60 school days in Grade 2, 84 school days in Grade 3, 132 school days in Grade 4, and 144 school days in Grade 5.
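For grant seekers who want to reproduce such conversions, here is a minimal sketch in Python. It assumes straight-line interpolation between the tabulated values for d = 0.2 and d = 0.5 above, which is only an approximation of the paper’s norm tables; run for d = 0.40, it reproduces the second example exactly (the d = 0.37 example does not interpolate exactly, so those figures likely come from the paper itself):

    # School-day equivalents of an effect size d in Reading, Grades K-5.
    # Table values are the school days at d = 0.2 and d = 0.5 listed above.
    DAYS_K5 = {
        "Grades K-1": (18, 54),
        "Grade 2":    (36, 72),
        "Grade 3":    (36, 108),
        "Grade 4":    (72, 162),
        "Grade 5":    (72, 180),
    }

    def effect_size_to_days(d, days_at_02, days_at_05):
        """Linearly interpolate school days for effect size d."""
        slope = (days_at_05 - days_at_02) / (0.5 - 0.2)
        return days_at_02 + (d - 0.2) * slope

    for grade, (lo, hi) in DAYS_K5.items():
        days = effect_size_to_days(0.40, lo, hi)
        print(f"d = 0.40 in {grade}: {days:.0f} school days")

Run as written, this prints 42, 60, 84, 132, and 144 school days, matching the example above.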

 

 

Reading (Grades 6-11)

 

Based on the same research on time-indexed effect sizes, the list below summarizes the noteworthy results in Reading in Grades 6-11:

 

  • In Grade 6, an effect size (d) of 0.2 equates to 0.6 of a school year (108 school days), and an effect size (d) of 0.5 equates to 1.4 school years (252 school days).
  • In Grade 7, an effect size (d) of 0.2 equates to 0.8 of a school year (144 school days), and an effect size (d) of 0.5 equates to 1.9 school years (342 school days).
  • In Grade 8, an effect size (d) of 0.2 equates to 1.0 school year (180 school days), and an effect size (d) of 0.5 equates to 2.5 school years (450 school days).
  • In Grade 9, an effect size (d) of 0.2 equates to 0.8 of a school year (144 school days), and an effect size (d) of 0.5 equates to 1.9 school years (342 school days).
  • In Grade 10, an effect size (d) of 0.2 equates to 0.5 of a school year (90 school days), and an effect size (d) of 0.5 equates to 1.2 school years (214 school days).
  • In Grade 11, an effect size (d) of 0.2 equates to 0.5 of a school year (90 school days), and an effect size (d) of 0.5 equates to 1.2 school years (214 school days).

 

Examples

 

A meta-analysis of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.47 for general parental involvement on secondary students’ performance on standardized tests. For standardized tests that measured performance in Reading, this equates to 238 school days in Grade 6, 320 school days in Grade 7, 420 school days in Grade 8, 320 school days in Grade 9, 200 school days in Grade 10, and 200 school days in Grade 11.

 

 

The same meta-analysis (Jeynes, 2007) found an overall effect size of 0.36 for programs of parental involvement on secondary students’ performance on standardized tests. For standardized tests that measured performance in Reading, this equates to 185 school days in Grade 6, 234 school days in Grade 7, 315 school days in Grade 8, 234 school days in Grade 9, 152 school days in Grade 10, and 152 school days in Grade 11.

 

 

Observations

 

Conversion of effect sizes into instructional day equivalents is one way that seekers of competitive grants can translate abstruse research findings into more concrete and familiar terms.

 

The meta-analyses cited here are by no means the only ones available to eligible applicants for competitive grants in PK-12 Education. They are purely illustrative of what’s available. Grant seekers may use such findings in Research Rationales or Reviews of Literature – and elsewhere in proposals – to persuade reviewers that a project is likely to yield results of practical significance (e.g., improved academic achievement through parental involvement), and thus worthy of an investment of a funder’s scarce resources.

 

Note

 

The conversions of effect sizes into instructional days, as represented in this post and its graphics, derive from Jaekyung Lee, Jeremy Finn, and Xiaoyan Liu, “Time-indexed Effect Size for P-12 Reading and Math Program Evaluation,” a paper presented at the Society for Research on Educational Effectiveness (SREE) Spring 2012 Conference, Washington, DC, March 9, 2012.

This post explores interpretations of effect sizes in the context of writing proposals for competitively awarded grants in PK-12 Education. It translates effect sizes of several selected magnitudes into changes in comparative percentile ranks. It also refers to meta-analyses that have reported effect sizes at or near these selected magnitudes.

 

Research cited in the examples throughout this post reported its results using Hedges’ g. Hedges’ g is a more conservative measure than Cohen’s d; it is less likely to overstate effect sizes. The primary difference between the two is that Hedges’ g uses a pooled standard deviation weighted by sample size, while Cohen’s d uses a simple pooled standard deviation.
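For readers who want to see the distinction in code, here is a minimal sketch of Hedges’ g computed from group summary statistics. The post gives no formulas, so the particular formulation below – a degrees-of-freedom-weighted pooled standard deviation plus the common small-sample correction factor J = 1 − 3/(4·df − 1) – is an assumption drawn from standard meta-analysis references:

    from math import sqrt

    def hedges_g(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
        """Hedges' g from summary statistics (a standard formulation)."""
        df = n_e + n_c - 2
        # Pooled SD, weighting each group's variance by its degrees of freedom
        pooled_sd = sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2) / df)
        d = (mean_e - mean_c) / pooled_sd  # Cohen's d
        j = 1 - 3 / (4 * df - 1)           # small-sample correction factor (J < 1)
        return j * d

Because J is always less than 1, g is always slightly smaller than d, which is the sense in which it is the more conservative measure.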

 

Effect Size of 0.2

 

Once one calculates an effect size, one can interpret it in terms of changes in percentile rank (or changes in relative position along a bell curve distribution).

 

With an effect size of 0.2, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 11th. With an effect size of 0.2, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 58%.
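Under the normality assumption, these conversions can be reproduced with the standard normal cumulative distribution function: the share of the control group scoring below the experimental average is Φ(d). The sketch below is a reconstruction, not a formula from the post; in particular, the rank calculation – rounding (1 − Φ(d)) × 26 – is an assumption that happens to match the figures quoted here:

    from math import erf, sqrt

    def normal_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1 + erf(x / sqrt(2)))

    for d in (0.2, 0.3, 0.5, 0.7):
        pct_below = normal_cdf(d)           # share of control group below experimental mean
        rank = round((1 - pct_below) * 26)  # equivalent rank in a control group of 25
        print(f"d = {d}: {pct_below:.0%} below, rank {rank}th of 25")

This prints 58% and 11th for d = 0.2, and 62%/10th, 69%/8th, and 76%/6th for the effect sizes discussed in the sections that follow.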

 

Examples: A meta-analysis of efforts to reduce the academic achievement gap across racial/ethnic subgroups (Jeynes, 2015) found an overall effect size of 0.22 for family factors as a variable in the reduction of the gap. An earlier meta-analysis of overall programs of urban parental involvement (by program type) in grades PK-12 (Jeynes, 2012) found an overall effect size of 0.21 on non-standardized measures of academic achievement.

 

Effect Size of 0.3

 

With an effect size of 0.3, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 10th. With an effect size of 0.3, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 62%.

 

Examples: A meta-analysis of specific programs of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.27 on students’ overall academic achievement. A second meta-analysis of specific programs of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.36 on students’ overall academic achievement. A third meta-analysis of general programs of parental involvement (Jeynes, 2017) found an overall effect size of 0.30 on Latino students’ overall academic achievement in grades K-5.

 

Effect Size of 0.5

 

With an effect size of 0.5, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 8th. With an effect size of 0.5, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 69%.

 

Examples: A meta-analysis of general programs of parental involvement in urban secondary schools (Jeynes, 2007) found an overall effect size of 0.47 on students’ performance on standardized tests. A second meta-analysis of general programs of parental involvement for African American students (Jeynes, 2003) found an effect size of 0.48 on students’ overall academic achievement.

 

Effect Size of 0.7

 

With an effect size of 0.7, the rank of a person in a control group of 25 who would be equivalent to the average person in the experimental group would change from 13th to 6th. With an effect size of 0.7, the percentage of the control group of 25 who would be below the average person in the experimental group would change from 50% to 76%.

 

Example: A meta-analysis of general programs of parental involvement in urban elementary schools (Jeynes, 2005) found an overall effect size of 0.74 on students’ overall academic achievement.

 

Observations

 

The meta-analyses cited here are by no means the only ones available to eligible applicants for competitive grants in PK-12 Education. Those selected are for illustration only.

 

 

 

Grant seekers may use such findings in Research-Based Rationales or Reviews of Literature—and elsewhere in proposals—to persuade review panels that a project is likely to yield results of practical significance (e.g., improved academic achievement through parental involvement), and thus worthy of an investment of a funder’s scarce resources.

 

Note

 

Data represented in both graphics in this post come from Robert Coe, “It’s the Effect Size, Stupid: What Effect Size Is and Why It Is Important,” a paper presented at the Annual Conference of the British Educational Research Association, University of Exeter, England, September 12-14, 2002.

Earlier posts here have described ways to use PESTLE Analysis, SWOT Analysis, Logic Models, and other tools for developing competitive grant proposals.

 

As a powerful project planning and evaluation tool, meta-analysis also belongs in every grant proposal writer’s repertoire. This post is the first of a series that explores the uses of meta-analysis in competitive proposals for grants.

 

Context

 

Grant awards are scarce; competition for them is intense. Applicants must persuade review panels that their proposals are worth an investment of finite funds. One means of persuasion is meta-analysis.

 

Some funders already require a meta-analysis of existing research as part of an application for a grant to support further research. Research-based rationales are also commonplace as review criteria for many programs that award grants for direct services. Since the utility of meta-analysis extends beyond research grants, applicants for grants for direct services might well take heed of meta-analysis and use it in their proposals.

 

Advantages

 

Meta-analysis reviews existing research literature. As one educational researcher put it: “…A meta-analysis statistically combines all the relevant existing studies on a given subject in order to determine the aggregated results of said research…. [Jeynes, 2011, p. 10].”

 

One task of meta-analysis is to calculate effect sizes. As a second researcher has stated: “One of the main advantages of using effect size is that when a particular experiment has been replicated, the different effect size estimates from each study can easily be combined to give an overall best estimate of the size of the effect. This process of synthesizing experimental results into a single effect size estimate is known as ‘meta-analysis.’… [Coe, 2002, p. 8].”

 

Attributes

 

Effect size is a standardized, scale-free measure of the relative size of the practical difference that an intervention makes on some aspect of an experimental group. It is particularly useful for quantifying effects measured on unfamiliar or arbitrary scales. It is also useful for comparing the relative sizes of effects from different studies. Interpretation of effect sizes generally depends on the assumptions that “control” and experimental groups are “normally distributed” and have the same standard deviations.

 

In calculating effect sizes, “…it is often better to use a ‘pooled’ estimate of standard deviation. The pooled estimate is essentially an average of the standard deviations of the experimental and control groups…. [This] is not the same as the standard deviation of all the values in both groups ‘pooled’ together … The use of a pooled estimate of standard deviation depends on the assumption that the two calculated standard deviations are estimates of the same population value… [i.e.,] that the experimental and control group standard deviations differ only as a result of sampling variation [Coe, 2002, p. 9].”

 

Standard Deviation

 

Before calculating an effect size, one must calculate the standard deviation, which measures the dispersion of a dataset relative to its mean. The steps in calculating a sample standard deviation (SD), sketched in code after the list, are:

  1. Calculate the mean: add all of the data points and divide by the number of data points.
  2. Calculate the variance: subtract the mean from each data point, square each of the resulting values, sum the squares, and divide the sum by the number of data points less one.
  3. Take the square root of the variance to find the standard deviation.
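A minimal sketch of these three steps in Python (the example data are arbitrary):

    from math import sqrt

    def sample_sd(values):
        """Sample standard deviation, following the three steps above."""
        n = len(values)
        mean = sum(values) / n                                     # Step 1: the mean
        variance = sum((x - mean) ** 2 for x in values) / (n - 1)  # Step 2: the variance
        return sqrt(variance)                                      # Step 3: the square root

    print(sample_sd([2, 4, 4, 4, 5, 5, 7, 9]))  # about 2.14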

 

After having found the standard deviation, one can calculate effect sizes.

 

Effect Sizes

 

The formula for the often-used Cohen’s d is: d = ([mean of the experimental group] − [mean of the control group]) ÷ [standard deviation].
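A minimal sketch in Python, combining this formula with the pooled standard deviation described in the Coe excerpt above (each group’s sample variance weighted by its degrees of freedom); the example data are arbitrary:

    from math import sqrt

    def cohens_d(experimental, control):
        """Cohen's d: difference in group means divided by the pooled SD."""
        def mean(xs):
            return sum(xs) / len(xs)
        def var(xs):  # sample variance, with n - 1 in the denominator
            m = mean(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        n_e, n_c = len(experimental), len(control)
        pooled_sd = sqrt(((n_e - 1) * var(experimental) +
                          (n_c - 1) * var(control)) / (n_e + n_c - 2))
        return (mean(experimental) - mean(control)) / pooled_sd

    print(cohens_d([85, 90, 78, 92, 88], [80, 84, 75, 86, 82]))  # about 1.07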

 

For Cohen’s d, the magnitude generally ranges from −3.0 to +3.0. Different measures of effect size apply different thresholds of magnitude before one interprets them as ‘Small’, ‘Medium’, or ‘Large’. The table presents standard interpretations of Cohen’s d – to be used only as guidelines and interpreted in the context of the research.

 

[Graphic: Magnitudes of Effect Sizes]

 

Measurement of effect size varies by the context and the measure used. Available formulas allow researchers – and grant proposal writers – to convert one type of effect size to another.

 

After calculating effect sizes, many researchers commonly calculate their confidence intervals.

 

Confidence Intervals

 

A confidence interval (CI) is a range of values that is likely to contain the true value of a population parameter, such as a mean. In educational research meta-analyses, the 95% confidence level is selected most often. Calculation of a CI uses the sample’s mean and standard deviation, and it assumes a normal distribution (the familiar bell curve). The CI reflects the degree of certainty associated with a sampling method: when the level is set at 95%, intervals constructed in this way will contain the true mean 95% of the time.

 

There are several steps in calculating confidence intervals:

  1. Find the number of observations (n), calculate their mean (X̄), and calculate their standard deviation (s).
  2. Select a confidence level, look up its Z value in a Z table, then use the Z value in the formula (sketched in code below): X̄ ± Z × s/√n

 

Where X̄ = the sample mean, Z = the selected Z value, s = the standard deviation, √ = the square root, and n = the number of observations.
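A minimal sketch of these steps in Python, assuming the 95% level (Z = 1.96) and arbitrary example data:

    from math import sqrt

    def confidence_interval(values, z=1.96):
        """CI for the mean: X-bar ± Z * s / sqrt(n); z = 1.96 gives 95%."""
        n = len(values)
        mean = sum(values) / n
        sd = sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))  # sample SD
        margin = z * sd / sqrt(n)
        return mean - margin, mean + margin

    low, high = confidence_interval([82, 90, 75, 88, 95, 79, 84, 91])
    print(f"95% CI: ({low:.1f}, {high:.1f})")  # roughly (80.9, 90.1)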

 

Grant seekers can explore examples of the use of confidence intervals associated with effect sizes for educational interventions in Jeynes (2012, p. 726) and in other research cited there.

 

Limitations

 

Like any statistical method, meta-analysis has its limitations. Among these are:

  1. Interpretations of effect sizes assume the normal distribution of values for “control groups” and “experimental groups” and that the groups have the same standard deviations.
  2. Confidence intervals for effect sizes are not always calculated and reported in published meta-analyses.
  3. Some situations make the interpretation of standardized effect sizes problematic, such as: (1) when a sample has a restricted range, (2) when a sample does not come from a normal distribution, or (3) when the measure from which the effect size was derived has unknown reliability.

 

The next post in the series explores interpretations of effect sizes in the context of writing proposals for competitive grants in PK-12 education.

Proposals that win grants for K-12 education have many predictable information needs. Those for professional development projects or with professional development components are no exception! Applicants that have information ready at hand before responding to a grant opportunity greatly improve their likelihood of funding.

 

[Graphic: Professional Development in Education Grants]

 

Although applicants may not need every item listed here for every proposal, among the budget elements it is prudent to anticipate are salaries and fringe benefits, travel, consultants, and indirect costs.

 

Scientifically Based Research Rationale

  1. Models of effective (or best) practices
  2. Educational theory of effective (or best) practices
  3. Evaluation studies of effective (or best) practices
  4. Team and interdisciplinary teaching
  5. Integrating technology and telecommunications in instruction
  6. Teaching to diverse learning styles
  7. Teaching in cooperative and multiage settings
  8. Teaching language minority students
  9. Educational methods (pedagogy)
  10. Effective content area instruction (all subjects)
  11. Technology integration and library/instructional media
  12. Effective teaching in inclusive settings

 

Educational Standards

  1. State/national professional standards for teachers and administrators
  2. State content area and student academic performance standards

 

Activities

  1. List of areas/topics in need of professional development
  2. Locations and venues for all key professional development activities
  3. Local resources for technology-related activities
  4. Types of activity to be conducted (e.g., retreats, seminars, courses, institutes)
  5. Locations, dates, and costs for all conferences to attend (local, state, regional, national)
  6. Locations, types, and costs of available courses
  7. Description of plans to utilize internal resources
  8. Description of plans to utilize cost-free resources
  9. List of universities that can provide credit-awarding proposal-related courses
  10. Agreements with universities to provide proposal-related courses
  11. Letters of commitment from each source of professional development
  12. Sample credit-awarding course syllabi or course outlines and course schedules
  13. Arrangements covering tuition reimbursements and staff release time
  14. Locations and roles of proposal-related technical assistance providers

 

Later posts will cover information needs for other aspects of educational grant proposals.