This post is the first in a series about integrating evidence-based practices (EBPs) and root-cause analysis (RCA) into proposals for competitively awarded state and federal educational grants. Its context is the United States of America.

Evidence-Based Practices Graphic

Tiers of Evidence

By one definition, EBPs are “instructional practices, strategies, programs, and interventions that have been shown through rigorous evaluation to be effective at improving outcomes” (AIR, 2019, p. 1). As the new decade opens, applications on behalf of low-performing schools must propose to implement at least one EBP. Each selected Evidence-Based Practice must demonstrate a statistically significant effect on student outcomes or other relevant outcomes, and it must fall into one of three tiers of evidence, which, in descending order of rigor, require:

  1. Strong evidence from at least one well-designed and well-implemented experimental study, or
  2. Moderate evidence from at least one well-designed and well-implemented quasi-experimental study, or
  3. Promising evidence from at least one well-designed and well-implemented correlational study with statistical controls for selection bias

Selection Process

Lead time is essential. Identifying and adopting each Evidence-Based Practice may require up to eight weeks before a proposal is due. While readily available state and national data may provide key benchmarks, “…Education leaders should ensure that they select EBPs based on a data-driven needs assessment, rigorous evidence, and an appropriate fit with other regional or local criteria” (AIR, 2019, p. 14).

During the EBP selection period, the various stakeholders in improving the academic outcomes of low-performing schools can expect to engage in a multi-step process. Each step will take time (e.g., several meetings) to execute and will require options for stakeholder engagement (e.g., in-person meetings and/or online surveys). Few steps, if any, afford shortcuts.

Decision-Making Cycle

The use of EBPs for school improvement involves a five-phase decision-making cycle (USDE, 2016):

  1. Identify local needs
  2. Select relevant evidence-based interventions
  3. Plan for implementation
  4. Implement interventions
  5. Examine and reflect on interventions

Using this schema, all five phases can surface as topics in a grant proposal: the Needs Assessment (Phase 1), Research-Based Rationale (Phase 2), Work Plan (Phases 3, 4, and 5), and Evaluation Plan (Phase 5).

Action Steps

Among the steps that grant applicants should anticipate in selecting each EBP are:

Action 1 – Review the data and practices to identify improvement areas

  1. Review and interpret data to identify the student outcomes that EBPs should address
  2. Conduct a root-cause analysis to identify strategies for improvement
  3. Create an inventory of current practices and interventions

Action 2 – Explore key sources to flag EBPs that meet evidence requirements

  1. Review existing online clearinghouses for potential practices
  2. Conduct a local review of research (e.g., district evaluation reports, area university studies)

Action 3 – Apply other criteria to identify EBPs that meet local priorities

  1. Apply local EBP selection factors (e.g., available infrastructure, fit with priorities, alignment with goals, ability to replicate and bring to scale, and ability to measure progress formatively and summatively)

Each action step and its subordinate steps can appear, if succinctly stated, under one or several review criteria in a grant proposal narrative, e.g., in the Needs Assessment (Action 1), Research-Based Rationale (Actions 1, 2, and 3), Program Design (Action 3), and Evaluation Plan (Action 3).

Resources

For further exploration of using Evidence-Based Practices in the context of education, see: