Evaluation Resources Definitions

The fields used in the database “Evaluation Resources: Supporting Effective Educator Development” are defined below. You can use these fields to narrow your search results and find resources that address specific areas of interest.

Evaluation Lifecycle

Plan and Design

Randomized Controlled Trial (RCT) study designs are evaluations that randomly assign study participants either to a “control group” that does not receive the intervention or to a “treatment group” that does receive the intervention.
Quasi-Experimental Designs (QED) cover a wide variety of research designs. Unlike randomized controlled trials, participants are not randomly assigned to a treatment group or control group prior to the intervention. Instead, in a Quasi-Experimental Design, the treatment group includes individuals who received the intervention. The comparison group is made up of individuals who did not receive the intervention but had similar characteristics to the treatment group. Note that a special type of Quasi-Experimental Group Design, Regression Discontinuity Design, is in a separate category below.
Regression Discontinuity Designs are a special type of Quasi-Experimental Group Design. They can be applied when participants are accepted into a program intervention based on a cut-off score: individuals just below the cut-off serve as a good comparison group for those just above it, or vice versa. (A sketch contrasting these three assignment mechanisms follows this list.)
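
The three designs above differ mainly in how participants end up in the treatment group. The following Python sketch contrasts the three assignment mechanisms; the sample, the self-selection rule in the QED case, and the cut-off score of 60 are all invented for illustration.

    import random

    participants = [{"id": i, "score": random.uniform(0, 100)} for i in range(200)]

    # RCT: a coin flip, not participant characteristics, decides who is treated.
    rct_treatment = [p for p in participants if random.random() < 0.5]

    # QED: the treatment group is whoever actually received the intervention
    # (here, a made-up self-selection rule); the researcher must then assemble
    # a comparison group of similar non-recipients.
    qed_treatment = [p for p in participants
                     if p["score"] > 50 and random.random() < 0.5]

    # RDD: a cut-off score alone determines assignment, so people scoring just
    # below the cut-off are a natural comparison for those scoring just above it.
    CUTOFF = 60
    rdd_treatment = [p for p in participants if p["score"] >= CUTOFF]
    rdd_comparison = [p for p in participants if CUTOFF - 5 <= p["score"] < CUTOFF]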


Identify and Follow Participants

Guidance on how to identify or select a comparison group for your study. A comparison group should be as similar as possible to the treatment group; “baseline equivalence” is the WWC’s technical term for how similar the two groups are (see the first sketch after this list).
Guidance on how to determine the appropriate sample size for your study (a power-analysis sketch follows this list).
Consider how participants leaving or entering your study will affect its integrity, and find guidance on how to respond if this occurs (a worked attrition example follows this list).
Guidance on how to recruit sites and participants, including gaining “buy-in” for your study.
Guidance on how to identify and avoid bias in your research design. Bias exists when it is unclear whether the intervention or some other factor is responsible for the difference in outcomes between the treatment and comparison groups; “confounding factors” is the WWC’s term for these other factors.
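
A common way to check baseline equivalence is to compute the standardized difference between the groups’ means on a baseline (pre-intervention) measure. The pretest scores below are hypothetical; the formula is the usual pooled-standard-deviation version of the standardized mean difference.

    from statistics import mean, stdev

    treatment_pretest = [72, 68, 75, 70, 74, 69]    # hypothetical baseline scores
    comparison_pretest = [70, 66, 73, 71, 68, 67]

    # Pooled standard deviation across the two groups, then the
    # standardized difference between the group means.
    n_t, n_c = len(treatment_pretest), len(comparison_pretest)
    pooled_sd = (((n_t - 1) * stdev(treatment_pretest) ** 2 +
                  (n_c - 1) * stdev(comparison_pretest) ** 2) /
                 (n_t + n_c - 2)) ** 0.5
    smd = (mean(treatment_pretest) - mean(comparison_pretest)) / pooled_sd
    print(f"Baseline difference: {smd:.2f} standard deviations")

The closer this value is to zero, the more plausible it is that the comparison group is a fair stand-in for the treatment group.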
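
Sample-size decisions usually come down to a power analysis: how many participants are needed to reliably detect an effect of a given size? A minimal sketch, assuming the statsmodels package and a hypothetical target effect of 0.30 standard deviations:

    from statsmodels.stats.power import TTestIndPower

    # Per-group sample size needed to detect a 0.30 standard-deviation effect
    # with 80% power at the conventional 5% significance level.
    n_per_group = TTestIndPower().solve_power(effect_size=0.30, alpha=0.05, power=0.80)
    print(f"About {n_per_group:.0f} participants per group")

Smaller expected effects drive the required sample size up quickly, which is why this calculation belongs in the planning stage.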
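
Attrition is typically summarized two ways: overall attrition (the share of the original sample lost by follow-up) and differential attrition (the gap in loss rates between the two groups). A worked example with invented counts:

    randomized = {"treatment": 150, "comparison": 150}
    analyzed = {"treatment": 120, "comparison": 135}   # hypothetical follow-up counts

    loss = {g: 1 - analyzed[g] / randomized[g] for g in randomized}
    overall = 1 - sum(analyzed.values()) / sum(randomized.values())
    differential = abs(loss["treatment"] - loss["comparison"])
    print(f"Overall attrition: {overall:.0%}; differential attrition: {differential:.0%}")

Here 15% of the sample is lost overall, but the treatment group loses twice the share the comparison group does (20% vs. 10%), which is the more worrying pattern.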


Collect and Store Data

Ensure that your outcome measures are appropriate for your research design. The WWC refers to these issues as Outcome Eligibility.
Consider issues related to identifying, getting permission to use, and obtaining existing administrative data for a research study. This may be from states, districts, schools, or other entities.
Guidance related to original data collection for a research study, such as designing and fielding surveys.


Analyze Data

Consider issues in combining data systems for analysis. For example, this includes issues such as combining data from different sources on similar variables or pooling data across districts or states (a pooling sketch follows this list).
Models used to analyze the data to determine whether the program had a causal impact on the outcome of interest. Analytic models such as Hierarchical Linear Modeling, Propensity Score Matching, and Difference-in-Differences are addressed; a difference-in-differences example follows this list.
Consider adjustments that researchers make to ensure that the results of their analytic models are valid. For example, this can include methods to deal with missing data, testing multiple outcomes, and cluster corrections (a multiple-comparisons example follows this list).
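
Pooling data across sites often starts with harmonizing variable names before stacking the rows. A minimal sketch using pandas; the district extracts and column names are invented:

    import pandas as pd

    # Hypothetical extracts: two districts report the same test score
    # under different column names.
    district_a = pd.DataFrame({"student_id": [1, 2], "math_scale_score": [310, 295]})
    district_b = pd.DataFrame({"student_id": [3, 4], "math_score": [301, 288]})

    # Harmonize the variable name, tag each row with its source, then pool.
    district_b = district_b.rename(columns={"math_score": "math_scale_score"})
    pooled = pd.concat(
        [district_a.assign(district="A"), district_b.assign(district="B")],
        ignore_index=True,
    )

Keeping a source column (“district” here) preserves the ability to check whether results are driven by any single site.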
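
Of the models listed above, difference-in-differences has the most transparent arithmetic: the change over time in the treatment group minus the change over time in the comparison group, with the comparison group’s change standing in for what would have happened without the program. A sketch with hypothetical group means:

    # Hypothetical mean outcomes before and after the program.
    means = {
        ("treatment", "pre"): 70.0, ("treatment", "post"): 78.0,
        ("comparison", "pre"): 71.0, ("comparison", "post"): 74.0,
    }

    treatment_change = means[("treatment", "post")] - means[("treatment", "pre")]    # +8.0
    comparison_change = means[("comparison", "post")] - means[("comparison", "pre")] # +3.0
    did_estimate = treatment_change - comparison_change
    print(f"Difference-in-differences estimate: {did_estimate:+.1f} points")

In practice the same logic is usually run as a regression so that standard errors and covariates can be handled properly.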
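
When several outcomes are tested, the chance of a false positive grows with each test, so researchers adjust. The Bonferroni correction below is one simple option (less conservative procedures such as Benjamini-Hochberg are also common); the p-values are invented:

    raw_p_values = {"reading": 0.013, "math": 0.048, "attendance": 0.210}  # hypothetical
    alpha = 0.05

    # Bonferroni: divide the significance level by the number of outcomes tested.
    adjusted_alpha = alpha / len(raw_p_values)
    for outcome, p in sorted(raw_p_values.items(), key=lambda kv: kv[1]):
        verdict = "significant" if p < adjusted_alpha else "not significant"
        print(f"{outcome}: p = {p:.3f} -> {verdict} at adjusted alpha = {adjusted_alpha:.4f}")

Note that math (p = 0.048) clears the unadjusted 0.05 threshold but not the adjusted one, which is exactly the situation the correction exists to catch.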


Report Findings

Explore how to report findings. For example, this includes issues such as interpreting effect sizes, the improvement index, statistical significance of results, and/or interaction effects (an improvement-index calculation follows this list).
Consider methods to visually depict data so that its meaning is easier to interpret (a charting sketch follows this list).
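
The improvement index mentioned above re-expresses an effect size as the expected percentile-rank change for an average comparison-group member had they received the intervention. A short calculation, using a hypothetical effect size of 0.25:

    from statistics import NormalDist

    effect_size = 0.25   # hypothetical Hedges' g

    # Percentile of the comparison-group mean within the treatment
    # distribution, minus the 50th percentile it starts at.
    improvement_index = NormalDist().cdf(effect_size) * 100 - 50
    print(f"Effect size {effect_size:.2f} -> improvement index of about "
          f"{improvement_index:.0f} percentile points")

So an effect of 0.25 standard deviations moves the average comparison-group member from the 50th to roughly the 60th percentile.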
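
As a small example of visual depiction, a bar chart of group means with error bars lets readers judge uncertainty at a glance. A minimal matplotlib sketch with invented values:

    import matplotlib.pyplot as plt

    groups = ["Treatment", "Comparison"]
    means = [78.0, 74.0]         # hypothetical post-test means
    std_errors = [1.8, 1.7]      # hypothetical standard errors

    # Bars show the group means; error bars show +/- one standard error,
    # so overlap (or its absence) is visible without reading a table.
    fig, ax = plt.subplots()
    ax.bar(groups, means, yerr=std_errors, capsize=6)
    ax.set_ylabel("Mean post-test score")
    ax.set_title("Outcomes by study group (illustrative data)")
    plt.show()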


Additional Focus

Special Evaluation Topics

Explore low-cost approaches to evaluation. For example, this includes naturally occurring experiments and use of administrative data or publicly available data sources.
Explore short-cycle approaches to evaluation. For example, this includes rapid-cycle evaluation, continuous improvement methods, and Plan-Do-Study-Act cycles.
Consider evaluation strategies in the context of educator development. This includes teacher or principal recruitment, selection, and preparation; professional development for teachers of academic subjects; and/or advanced certification and advanced credentialing. 
Explore What Works Clearinghouse (WWC) standards and view documents that reference WWC standards.


Examples of Outcome Measures

View examples of outcome measures related to student achievement outcomes. Measures may include: graduation rates, reading and/or writing, math, science, college entrance exams, grade progression, or civic knowledge & skills.
View examples of outcome measures related to student behavior. Measures may include: student attitudes, student attendance, student retention, discipline and behavior outcomes, student dispositions, self-efficacy, or classroom climate perception.
View examples of outcome measures related to teachers. Measures may include: teacher quality/effectiveness measures (e.g., value-added measures), classroom observations, lesson plans, surveys of teacher knowledge and/or attitudes, teacher retention, or teacher evaluation scores.
View examples of outcome measures related to principals and/or schools. Measures may include: school instructional climate surveys, principal evaluation scores, school performance measures (e.g., test scores), or principal retention.
View examples of outcome measures related to districts. Measures may include: district value-added scores, district leadership participation, or training ratings.