Last March, the What Works Clearinghouse (WWC) of the Institute of Education Sciences (IES) released its review of the portion of a Mathematica study showing that students attending KIPP middle schools scored higher than matched non-KIPP students. That portion of the study used a quasi-experimental, matched-student research design, and the WWC found that it meets WWC evidence standards with reservations (see definitions below).
In its recently released final report on the KIPP study, the WWC determined that the lottery-based, randomized controlled trial (RCT) portion of the same study meets WWC evidence standards without reservations for the one-year follow-up and meets standards with reservations for the later-year follow-ups because of high sample attrition in those years. In the RCT portion of the study, students who entered the admissions lottery and won were compared with students who entered the lottery but did not win. While the WWC has reviewed other studies of the charter sector, KIPP remains the only charter model it has examined, both in this review and in previous ones.
Specifically, the experimental portion of the study found that students who were offered admission to 13 KIPP middle schools scored significantly higher on mathematics assessments in the first and second years after the lottery, as well as in the fall of the third year after the lottery, than students who entered the lottery but did not win admission to KIPP charters. For the comparisons of reading assessments between KIPP and non-KIPP students, however, there were no statistically significant differences in any of the years.
For the quasi-experimental portion of the study, in all years examined, students enrolled in a larger sample from 41 KIPP middle schools scored significantly higher on state assessments in mathematics and reading achievement than their matched peers who attended non-KIPP public middle schools.
Lessons beyond the Mathematica study comparisons
While it is encouraging that KIPP is having these successes, the more critical question is: What characteristics of the KIPP model contribute to its success, and what can others in the charter sector learn from this to improve outcomes for their students?
Although Mathematica researchers were not able to identify all characteristics of the KIPP model that contributed to its success, they did find two key strategies that had an impact:
- Student achievement impacts were greater in KIPP schools with a more comprehensive school-wide behavior system. In the context of the Supportive School Discipline Initiative, jointly led by the U.S. Department of Education (ED) and the Department of Justice, and of the updated school discipline guidance recently released by the two agencies, these results may speak to the impact that school climate can have on student achievement.
- Student achievement impacts were found to be larger in KIPP schools that spent relatively more time on core academic activities, defined in the study as English language arts, math, science, and history. Those schools in which the school day was longer, but in which more time was spent on non-core academic activities, had smaller impacts on student achievement. One of the “Five Pillars” of the KIPP model focuses on more time spent learning, and this seems to be yielding positive results.
The CREDO 2013 National Charter School Study, released last fall, found that charter school students in 26 states and New York City, in the aggregate, had greater learning gains in reading than their counterparts in traditional public schools and similar learning gains in math. Taken together, these studies add to the body of findings showing that high-quality charter schools do improve achievement. Other researchers, such as Roland Fryer and Will Dobbie, have tried to identify the characteristics that make certain charter schools more successful. Knowing why schools succeed is key to scaling up these models to reach more students.
New EDGAR evidence of effectiveness definitions
New Education Department General Administrative Regulations (EDGAR), published in August 2013, provide the following ways for ED grant competitions to incorporate evidence of effectiveness:
- establishing four levels of evidence — strong theory, evidence of promise, moderate evidence of effectiveness, and strong evidence of effectiveness;
- creating a mechanism that allows discretionary grant programs to require that applications be supported by moderate or strong evidence of effectiveness by establishing a separate competition for such applications;
- giving a competitive preference to such applications; and
- creating selection criteria for strong theory and evidence of promise that competitions can use to evaluate applications during the review process.
For applicants to meet the requirements for moderate or strong evidence of effectiveness, their proposed activities must be supported by research that meets WWC evidence standards, although each definition includes parameters for meeting those standards with and without reservations. More ED programs are incorporating evidence of effectiveness into their grant competitions in an effort to support evidence-based practices and to build a knowledge base of successful practices for the field. As this continues, it will change the dynamics for applicants whose proposed activities are not supported by evidence of effectiveness. With this shift in ED’s review criteria, the KIPP Mathematica study can serve as a model for others in the charter sector, as well as the education sector overall, for evaluating their programs in alignment with WWC requirements.
The notice of this past August’s EDGAR revisions is available in the Federal Register.
KIPP’s OII-supported evaluation efforts
KIPP currently has a 2010 Investing in Innovation (i3) Scale-up grant and three grants under the Charter Schools Program (CSP) competition for Replication and Expansion of High-Quality Charter Schools (2010, 2011, and 2012). The i3 Scale-up competition required that applicants meet the requirements for strong evidence, the precursor to the strong evidence of effectiveness definition. In addition to requiring evidence of effectiveness as an entry requirement, the i3 program requires all grantees to build evidence through independent evaluations of their projects. Similarly, the CSP Replication and Expansion program requires applicants to demonstrate their success in increasing student achievement and closing achievement gaps in order to be eligible to apply. While the KIPP middle school study is not a component of KIPP’s OII grants, Mathematica is conducting a similar evaluation, including both an RCT and a quasi-experimental component, as part of KIPP’s i3 grant activities.
Soumya Sathya is a program specialist with the Charter Schools Program division of the Office of Innovation and Improvement.