Evaluation Resources

Below, we offer a set of resource documents to help you understand, design, and implement evaluation methods for educator development programs in a way that is consistent with the U.S. Department of Education's What Works Clearinghouse (WWC) standards.

How to use this page: To find the resources most relevant to your interests, use the filters on the right side of the page. Hover over a field name to see its definition. Click here for more detailed definitions of these fields (or click here to download a PDF version). To learn more about each resource, click on the resource title. Click here to download a User Guide for this database.

Resource # | Title | Author(s) | Hosting/Publishing Organization | Short Description | Year of Publication | Weblink | Type 1 | Type 2 | Length | Audience | References WWC Standards | Short Cycle | Low Cost | Educator Development | Other Education Topics | RCT | QED | RDD | SCD | Identifying Comparison Groups | Determining Sample Sizes | Recruiting Study Participants | Addressing Changes in your Sample | Reducing Bias in Comparison Groups | Acquiring Administrative Data | Selecting Appropriate Outcome Measures | Collecting New Data | Combining Data Systems | Understanding Data Analytic Models | Addressing Analysis Challenges | Reporting Findings | Visualizing Data | Student Achievement (measure) | Student Behavior (measure) | Teacher (measure) | Principal/School (measure) | District (measure) | Associated Keywords | Length Value Group | Any Examples of Outcome Measures | Is Brief or Summary? | Is Guide? | Is Tool? | Is Methods Report? | Is Video? | Is Webinar? | Is Slide Presentation? | Initial Planning
1 5-minute evaluation resource series None Office of Innovation and Improvement, U.S. Department of Education This series is designed to provide non-technical audiences with information on evaluation design in short, informative, and easy-to-read resources. Each resource is meant to be read or watched in 5 minutes or less. The series is organized into suites of resources that focus on a specific evaluation component. The first suite focuses on the evaluation life cycle and the What Works Clearinghouse, the U.S. Department of Education’s reliable source for rigorous research. The suites that follow focus on different stages within the evaluation life cycle. 2019 Link to Resource Brief or summary guide 5 P True False False True True True True True False True True True True True True True False False False True True False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 1 False True True False False False False False True
2 Pathway to Effective Evaluation: Stage 1–Plan and Design Armstrong, K.R.; Brandt, C. Office of Innovation and Improvement, U.S. Department of Education This guide describes the first stage of the evaluation life cycle, the plan and design stage. The steps include 1. Determine your research question; 2. Create your logic model; 3. Carefully describe the people and context of your program; and 4. Select your research design. Each step is explained in further detail on the second page of the resource, with suggestions and links for resources that provide readers with more in-depth information about each step. 2018 Link to Resource Guide Brief or Summary 2 P True False False True True True True True False True False False False False False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 1 False True True False False False False False True
3 Pathway to Effective Evaluation: Stage 2–Identify and Recruit Participants Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This guide describes the second stage of the evaluation life cycle, identify and recruit your participants. The steps include 5. Identify comparison groups; 6. Determine sample size; 7. Recruit people to participate in your study; 8. Address changes in your sample; and 9. Reduce bias in your comparison groups. Each step is explained in further detail on the second page of the resource, with suggestions and links for resources that provide readers with more in-depth information about each step. 2019 Link to Resource Guide Brief or Summary 2 P True False False True True True True False False True True True True True False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, sample size; confounding factors, attrition, baseline equivalence, reducing bias, changes in your sample, 1 False True True False False False False False True
4 Evaluation Life Cycle Armstrong, K.R.; Brandt, C. Office of Innovation and Improvement, U.S. Department of Education This infographic provides an overview of the stages of the evaluation life cycle: plan and design; identify and follow participants; collect and store data; analyze data; and report and use findings. 2017 Link to Resource Guide Brief or Summary 1 P True False False True True True True True False True False False False False False False False False False False False False False False False False False Evaluation Life Cycle; Evaluation Design; 1 False True True False False False False False True
5 Group Evaluation Comparison Chart Armstrong, K.R.; Brandt, C. Office of Innovation and Improvement, U.S. Department of Education This infographic provides an overview of three group designs for evaluation: randomized controlled trials, regression discontinuity design, and matched comparison design. 2017 Link to Resource Guide Brief or Summary 1 P False False False True True True True True False False False False False False False False False False False False False False False False False False False Evaluation Design; QED; RDD; RCT 1 False True True False False False False False False
6 5-Minute Evaluation Resource Series: Illustrated script for Video #5, The Power of Sample Size, Part 5 Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This illustrated script provides the text and screen shots of the video of the same name. The short video builds on the ‘Power of Sample Size, Part 1’ video and discusses how decisions around cluster designs and pre-intervention data affect the number of people you need in your study to detect a meaningful difference. The video is designed for a non-technical audience. 2019 Link to Resource Video 5 P False False False True True True False False False True True False False False False False False False False False False False False False False False False Comparison Groups; sample size; power analysis; MDE; Minimum Detectable effect; confidence intervals; simple experimental design 1 False False False False False True False False True
7 5-Minute Evaluation Resource Series: Video #1, Evaluation for Educators Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This introductory video begins a series designed to help practitioners understand the basic concepts behind program evaluation, particularly educator effectiveness programs (although the concepts can be applied to other areas of education as well). The introductory video explains the goal of the series, which is to help practitioners engage with their evaluators, be more involved in the evaluation process, and understand WWC standards. The video also introduces the suite of resources that accompany it. 2018 Link to Resource Video guide 5 P True False False True True True True True False True False False False False False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 1 False False True False False True False False True
8 5-Minute Evaluation Resource Series: Video #2, Life cycle of an evaluation Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education The life cycle evaluation video features an overview of the stages of the life cycle: plan and design, identify and follow participants, collect and store data, analyze data, report and use findings. This is the second video in a series designed to help practitioners understand the basic concepts behind program evaluation, particularly educator effectiveness programs (although the concepts can be applied to other areas of education as well). 2018 Link to Resource Video guide 5 P True False False True True True True True False True False False False False False True False False False False True False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 1 False False True False False True False False True
9 5-Minute Evaluation Resource Series: Video #3, The case for comparison groups Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education The case for comparison groups video explains the concept of comparison groups, why they are necessary, and how they are chosen to ensure a valid evaluation that can meet WWC standards. This is the third video in the 5-Minute Evaluation Resource Series, designed to help practitioners understand the basic concepts behind program evaluation, particularly educator effectiveness programs (although the concepts can be applied to other areas of education as well). 2018 Link to Resource Video guide 5 P True False False True True False False False False True False False False False False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design; 1 False False True False False True False False True
10 5-Minute Evaluation Resource Series: Video #4, The Power of Sample Size, Part 1 Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This short video illustrates why having a large enough sample size is essential for identifying whether your program made a meaningful difference. It also addresses some of the decisions that are made in conducting a power analysis, including what a minimum detectable effect (MDE) is and how it affects the number of people you need in your study. The video is designed for a non-technical audience. 2019 Link to Resource Video 5 P True False False True True True False False False True True False False False False False False False False False False False False False False False False Comparison Groups; sample size; power analysis; MDE; Minimum Detectable effect; confidence intervals; simple experimental design 1 False False False False False True False False True
11 5-Minute Evaluation Resource Series: Video #5, The Power of Sample Size, Part 5 Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This short video builds on the ‘Power of Sample Size, Part 1’ video and discusses how decisions around cluster designs and pre-intervention data affect the number of people you need in your study to detect a meaningful difference. The video is designed for a non-technical audience. 2019 Link to Resource Video 5 P True False False True True True False False False True True False False False False False False False False False False False False False False False False WWC; Comparison Groups; sample size; power analysis; MDE; Minimum Detectable effect; confidence intervals; simple experimental design 1 False False False False False True False False True
12 5-Minute Evaluation Resource Series: Video Script #1, Evaluation for Educators Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This pdf is an illustrated script of the introductory video that begins a series designed to help practitioners understand the basic concepts behind program evaluation, particularly educator effectiveness programs (although the concepts can be applied to other areas of education as well). The introductory video explains the goal of the series, which is to help practitioners engage with their evaluators, be more involved in the evaluation process, and understand WWC standards. The video also introduces the suite of resources that accompany it. 2018 Link to Resource Guide Brief or Summary 8 P True False False True True True True True False True False False False False False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 2 False True True False False False False False True
13 5-Minute Evaluation Resource Series: Video Script #2, Life cycle of an evaluation Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This pdf provides an illustrated script to the life cycle evaluation video which features an overview of the stages of the life cycle: plan and design, identify and follow participants, collect and store data, analyze data, report and use findings. This is the second video in a series designed to help practitioners understand the basic concepts behind program evaluation, particularly educator effectiveness programs (although the concepts can be applied to other areas of education as well), and WWC standards. 2018 Link to Resource Guide Brief or Summary 8 P True False False True True True True True False True False False False False False True False False False False True False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 2 False True True False False False False False True
14 5-Minute Evaluation Resource Series: Video Script #3, The case for comparison groups Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This pdf is an illustrated script for the case for comparison groups video. It explains the concept of comparison groups, why they are necessary, and how they are chosen to ensure a valid evaluation that can meet WWC standards. This is the script for the third video in the 5-Minute Evaluation Resource Series, designed to help practitioners understand the basic concepts behind program evaluation, particularly educator effectiveness programs (although the concepts can be applied to other areas of education as well). 2018 Link to Resource Guide Brief or Summary 9 P True False False True True False False False False True False False False False False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design; 2 False True True False False False False False True
15 5-minute evaluation resource series–Suite 1: The evaluation life cycle None Office of Innovation and Improvement, U.S. Department of Education The first suite of the 5-minute evaluation series focuses on the evaluation life cycle and the What Works Clearinghouse, the U.S. Department of Education’s reliable source for rigorous research. This series is designed to provide non-technical audiences with information on evaluation design and WWC standards in short, informative, and easy-to-read resources. Each resource is meant to be read or watched in 5 minutes or less. The series is organized into suites of resources that focus on a specific evaluation issue. 2019 Link to Resource Brief or summary guide 5 P True False False True True True True True False True True True True True True True False False False True True False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 1 False True True False False False False False True
16 5-minute evaluation resource series–Suite 2: The Planning and Design Stage None Office of Innovation and Improvement, U.S. Department of Education The second suite of the 5-minute evaluation series focuses on the first stage of the evaluation life cycle – the planning and design stage. This stage includes four steps: 1. Determine your research questions; 2. Create your logic models; 3. Carefully describe the people and context of your program; and 4. Select your research design. This series is designed to provide non-technical audiences with information on evaluation design and WWC standards in short, informative, and easy-to-read resources. Each resource is meant to be read or watched in 5 minutes or less. The series is organized into suites of resources that focus on a specific evaluation issue. 2019 Link to Resource Brief or summary guide 5 P True False False True True True True True False True False False False False False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 1 False True True False False False False False True
17 5-minute evaluation resource series–Suite 3: Identify and Follow Participants. None Office of Innovation and Improvement, U.S. Department of Education The third suite of the 5-minute evaluation series focuses on the second stage of the evaluation life cycle – identify and follow participants. This stage includes five steps: 1. Identify comparison groups; 2. Determine sample sizes; 3. Recruit study participants; 4. Address changes in your sample; and 5. Reduce bias in comparison groups. This series is designed to provide non-technical audiences with information on evaluation design and WWC standards in short, informative, and easy-to-read resources. Each resource is meant to be read or watched in 5 minutes or less. The series is organized into suites of resources that focus on a specific evaluation issue. 2019 Link to Resource Brief or summary guide 5 P True False False True True True True True False True False False False False False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, sample size; confounding factors, attrition, baseline equivalence, reducing bias, changes in your sample, 1 False True True False False False False False True
18 Identifying a Valid Comparison Group Armstrong, K.R.; Brandt, C. Office of Innovation and Improvement, U.S. Department of Education This guide defines what the counterfactual condition is and why it is important. The second page of the guide describes the factors that need to be considered when selecting appropriate comparison groups. 2018 Link to Resource Guide Brief or Summary 2 P True False False True True True True True False True False False False False False False False False False False False False False False False False False WWC, Comparison groups, Evaluation life cycle, standards, evaluation design, RCT, QED, RDD; 1 False True True False False False False False True
19 The Power of Sample Size Armstrong, Karen R. Office of Innovation and Improvement, U.S. Department of Education This 2-page brief addresses why it is important to have enough people in your sample to detect that your program made a meaningful difference. The second page of the brief identifies some of the evaluation decisions that will increase or decrease the number of people you will need. It provides a quick resource for seeing how sample size needs change depending on (1) the size of the minimum detectable effect; (2) whether you are conducting a simple or cluster evaluation design; (3) how many individuals are in each cluster; (4) whether students within a cluster represent a homogeneous or heterogeneous group; and (5) the predictive strength of pre-intervention data that you have available. 2019 Link to Resource Brief or summary guide 2 P False False False True True True False False False True True False False False False False False False False False False False False False False False False Comparison Groups; sample size; power analysis; MDE; Minimum Detectable effect; confidence intervals; simple experimental design 1 False True True False False False False False True
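To make those factors concrete, the following minimal sketch (in Python, using a standard normal-approximation power formula rather than the brief's own worked example; the cluster counts, intraclass correlation, and pre-test R-squared values are hypothetical) shows how clustering inflates the minimum detectable effect and how a predictive pre-intervention covariate shrinks it.

from math import sqrt
from statistics import NormalDist

def mde(clusters_per_arm, cluster_size, icc=0.15, r2_pretest=0.0, alpha=0.05, power=0.80):
    # Minimum detectable effect (in standard-deviation units) for a two-arm,
    # equal-size cluster design, ignoring small-sample degree-of-freedom corrections.
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    design_effect = 1 + (cluster_size - 1) * icc          # penalty for homogeneity within clusters
    n_per_arm = clusters_per_arm * cluster_size
    return z * sqrt(2 * design_effect * (1 - r2_pretest) / n_per_arm)

print(round(mde(20, 25), 2))                   # clustered design, no covariate
print(round(mde(20, 25, r2_pretest=0.5), 2))   # a strong pre-test covariate lowers the MDE
print(round(mde(20, 25, icc=0.0), 2))          # a simple (non-clustered) design of the same size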
20 Definitions for Database Fields: Evaluation Resources for Supporting Effective Educator Development None Office of Innovation and Improvement, U.S. Department of Education This document provides definitions for each field used in the SEED Evaluation Resources Database. 2017 Link to Resource Guide 3 P True False False True True True True True False True True True True True True True True True True True True True False False False False False None 1 False False True False False False False False True
21 Recruiting participants: Excerpt from “Recognizing and conducting opportunistic experiments in education; a guide for policymakers and researchers” Resch, A., Berk, J., & Akers, L. REL This excerpt of the guide for policymakers and researchers focuses on strategies to consider when recruiting participants for a rigorous education evaluation. 2014 Link to Resource Brief or summary guide 1 P False False False False True False False False False True False True False False False False False False False False False False False False False False False Recruiting participants 1 False True True False False False False False True
22 Technical Terms for a Non-Technical Audience None Office of Innovation and Improvement, U.S. Department of Education This glossary lists evaluation terms that may not be familiar to non-technical readers and gives brief definitions for each term. 2017 Link to Resource Guide Brief or Summary 3 P True False False True True True True True True False False False False False False False False False False False False False False False False False False Attrition, Confounding Factors; Evaluation design; Evaluation terms; baseline equivalence, effect size, WWC 1 False True True False False False False False True
23 5-Minute Evaluation Resource Series: User’s Guide: Getting the most out of this series IMPAQ International, U.S. Dept of Education Office of Innovation and Improvement, U.S. Department of Education This user’s guide accompanies the 5-Minute Evaluation Resource Series. The guide explains what is contained in each suite within the series and offers suggestions for the easiest way to use the suites. It also points readers to the Evaluation Resource Database that accompanies the series if they want more in-depth information about a particular topic. 2017 Link to Resource Guide 1 P True False False True True False False False False False False False False False False False False False False False False False False False False False False None 1 False False True False False False False False True
24 User’s Guide: Database of Evaluation Resources for Supporting Effective Educator Development None Office of Innovation and Improvement, U.S. Department of Education This guide provides the reader with instruction on the use of the SEED Evaluation Resource Database, including how to use the filter and keyword search features. 2017 Link to Resource Guide 2 P True False False True True False False False False False False False False False False False False False False False False False False False False False False None 1 False False True False False False False False True
25 Theory of Change: Excerpt from ‘Recognizing and conducting opportunistic experiments in education’ Resch, A., Berk, J., & Akers, L. None This short excerpt illustrates how to use a theory of change as you develop your research design. 2014 Link to Resource None None None False False False False False False False False False False False False False False False False False False False False False False False False False False False None 0 False False False False False False False False False
26 Evaluation Life Cycle Armstrong, K.R.; Brandt, C. Office of Innovation and Improvement, U.S. Department of Education This infographic provides an overview of the stages of the evaluation life cycle: plan and design; identify and follow participants; collect and store data; analyze data; and report and use findings. 2017 Link to Resource Guide Brief or Summary 1 P True False False True True True True True False True False False False False False False False False False False False False False False False False False Evaluation Life Cycle; Evaluation Design; 1 False True True False False False False False True
50 90-Day Cycle Handbook Park, S.; & Takahashi, S. Carnegie Foundation for the Advancement of Teaching The Carnegie Foundation for the Advancement of Teaching developed this handbook as an introduction to 90-day, short-duration research cycles. This handbook provides a broad overview of the purpose and essential characteristics of a 90-day cycle, followed by a discussion of the processes and steps involved in conducting a 90-day cycle (post-cycle). In addition, the handbook includes tips and examples to facilitate their execution. The handbook concludes with a description of the roles and responsibilities of participants in a 90-day cycle. 2013 Link to Resource Guide 32 pages P False True False False False False False False False False False False False False False False False False False False False False False False False False False SANDRA PARK SOLA TAKAHASHI driver diagrams literature short list big picture picture goal broad overview cycle day team product phase scan process topic 3 False False True False False False False False False
51 A Composite Estimator of Effective Teaching Mihaly, K.; McCaffrey, D.; Staiger, D.; and Lockwood, J. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this report to help educator evaluation program designers select weights to place on multiple measures of effectiveness in an effort to most accurately assess teaching quality. The report focuses on a statistical method to select optimal weights and then compares those weights to others, such as ones used in state educator evaluation systems. It also examines the extent to which optimal weights vary depending on how much data are available. This guide is a valuable resource for program designers who want to understand and replicate the statistical analyses that underlie the non-technical “Ensuring Fair and Reliable Measures of Effective Teaching” (Resource # 205). 2013 Link to Resource Methods Report 51 pages R False False False True True False True False False False False False False False True True False False True True True False True False True False False None 3 True False False False False False False False False
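As a simple illustration of the weighting idea (not the report's optimal-weight estimation; the teacher scores and fixed weights below are hypothetical), a composite can be formed by standardizing each measure and combining the measures with weights that sum to one:

import numpy as np

def composite(measures, weights):
    # measures: dict of measure name -> array of teacher scores
    # weights:  dict of measure name -> relative weight (normalized to sum to 1)
    total = sum(weights.values())
    standardized = {name: (x - x.mean()) / x.std(ddof=1) for name, x in measures.items()}
    return sum((weights[name] / total) * standardized[name] for name in measures)

rng = np.random.default_rng(0)
measures = {
    "value_added": rng.normal(0.0, 1.0, 100),      # student growth measure
    "observations": rng.normal(3.0, 0.5, 100),     # classroom observation ratings
    "student_surveys": rng.normal(3.5, 0.4, 100),  # student perception survey scores
}
weights = {"value_added": 0.50, "observations": 0.25, "student_surveys": 0.25}
print(composite(measures, weights)[:5])            # composite scores for the first five teachers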
52 A Conceptual Framework for Studying the Sources of Variation in Program Effects Weiss, M.J.; Bloom, H.S.; & Brock, T. MDRC This MDRC paper presents a conceptual framework for designing and interpreting research on variation in program effects and the sources of this variation. The goals of the framework are to enable researchers to offer better guidance to policymakers and program operators on the conditions and practices that are associated with larger and more positive results. Throughout the paper, the authors include concrete empirical examples to illustrate points. 2013 Link to Resource Methods Report 59 pages R False False False False False True True False False False False False False False False False False False False False False False True False False False False Latino males eleventh graders prima faci Head Start Construct Possible Home Visiting Scholarship Demonstration Thomas Brock cognitive behavioral program treatment effects implementation services group contrast example variation research 3 True False False False True False False False False
53 A Guide for Monitoring District Implementation of Educator Evaluation Systems Cherasaro, T.; Yanoski, D.; & Swackhamer, L. Regional Education Laboratory Program, Institute of Education Sciences This Regional Educational Laboratory (REL) Central-developed guide walks users through a three-step process that states can use to monitor district implementation of educator evaluation systems to comply with the requirements of the Elementary and Secondary Education Act flexibility requests. The guide also offers example tools developed together with the Missouri Department of Elementary and Secondary Education that states can adapt to provide districts with greater clarity on expectations about educator evaluation systems. 2015 Link to Resource Guide 38 pages P False False False True True False False False False False False False False False False False True False False False False False False False False False False Description Early childhood Educational Laboratory NEE/MU Regional Educational adequate duration either formally goals aimed starting point Authors’ creation checkbox evaluation district teacher practice criteria student system principal 3 False False True False False False False False False
54 A Guide for Monitoring District Implementation of Educator Evaluation Systems Cherasaro, T. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Central developed this video to help professionals determine whether the associated report, The Examining Evaluator Feedback Survey, would be useful to them. The video provides an overview of the iterative process used to develop the survey, a three-step process that includes developing state guidelines that align district systems to state expectations, developing data collection methods for both policy and practice data (primarily through surveys), and developing adherence criteria and reviewing data against those criteria. The video includes a description of the contents of the survey and how states and districts can use the survey. Finally, there is a short discussion about resources that are available to help those interested in using the survey. This video will be useful to program staff interested in developing their own monitoring systems or adapting the tools provided. 2016 Link to Resource Video Brief or Summary 4min P False False False True False False False False False False False False False False False False False False False False False False False False False False False None 1 False True False False False True False False False
55 A Guide to Finding Information on Studies Reviewed by the What Works Clearinghouse What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this video to help researchers, program evaluators, and program staff effectively use the WWC website. The WWC evaluates research studies that look at the effectiveness of education programs, products, policies, and practices, and generally employ randomized or quasi-experimental designs. The video goes through screen shots to help the viewer find whether a particular study of interest has been reviewed by the WWC and why (e.g., grant competition) or what studies the WWC has reviewed on a topic of interest. It provides information on study ratings, sample characteristics, statistically significant positive findings, protocols under which studies are reviewed (which can help with outcome selection), and associated products such as practice guides that describe evidence-based interventions. It describes search strategies with examples, starting at 1:05, and types of search terms that can be used (4:54). 2017 Link to Resource Video Guide 6:54 P True False False False True True True False False False False False False False False False False False False False True False False False False False False References WWC Standards, Any Education Topic, Initial Planning, RCT, QED, Reporting, Interpreting, and Using Findings 2 False False True False False True False False True
56 A Practical Approach to Continuous Improvement in Education Rodriguez, S.; Shakman, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed this webinar to help professionals who are considering implementing a continuous improvement process, potentially as part of a network. Presenters describe continuous improvement as a way of thinking about systems change and quality improvement and explain its six underlying principles. Listeners for whom this approach is appropriate then receive a set of implementation tools they can use in their own settings: how to define a problem, select key questions, set goals, and leverage tools such as Plan-Do-Study-Act cycles, aim statements, and fishbone and driver diagrams. The webinar provides a hands-on opportunity to practice as well as examples from school districts. This webinar will be useful for those interested in leveraging tools to help in the planning of a continuous monitoring process. 2016 Link to Resource Webinar Guide 1h21min P False True False False True False False False False False False False False False False False False False False False False False False False False False False None 3 False False True False False False True False True
57 A Practical Guide to Regression Discontinuity Jacob, R., Zhu, R., Somers, M-A., & Bloom, H. MDRC This MDRC methodology paper is intended to serve as a practitioners’ guide to implementing regression discontinuity (RD) designs. It uses approachable language and offers best practices and general guidance to those attempting an RD analysis. This paper discusses in detail the following: 1) approaches to estimation, 2) assessing the internal validity of the design, 3) assessing the precision of an RD design, and 4) determining the generalizability of the findings. In addition, the paper illustrates the various RD techniques available to researchers and explores their strengths and weaknesses, using a simulated dataset. 2012 Link to Resource Methods Report 100 pages R False False False False False False True True False True False False False True False False False False True True True True False False False False False Internal Validity decision makers independently academic warning broadly generalizable difference observations Web site gender regression rating discontinuity data simulated ratings treatment distribution plot bin 3 False False False False True False False False False
58 A Primer for Continuous Improvement in Schools and Districts Shakman, K.; Bailey, J; Breslow, N. Teacher Incentive Fund and Teacher & School Leader Incentive Program The Teacher Incentive Fund and Teacher & School Leader Incentive Program developed this guide to orient educational practitioners to the continuous improvement process in educational settings. It discusses implementing and studying small changes and making revisions in iterative cycles with the goal of making lasting improvement. The brief offers a model for improvement that consists of three essential questions: What problem are we trying to solve? What changes might we introduce and why? How will we know that a change is actually an improvement? It shares six principles of improvement the Carnegie Foundation for the Advancement of Teaching developed for education-focused audiences (page 4). It then describes in depth the components of the Plan-Do-Study-Act (PDSA) model for improvement. The PDSA cycle provides a structure for testing a change and guides rapid learning through four steps that repeat as part of an ongoing cycle of improvement: 1) Plan: defining the problem (using the Fishbone Diagram tool in Appendix A) and establishing an aim (using the Driver Diagram tool in Appendix B); 2) Do: implementing an initiative and measuring success using “practical measures,” i.e., those that practitioners can collect, analyze, and use within their daily work lives; 3) Study: investigating the data; and 4) Act: determining next steps. The brief presents an example in the context of teacher turnover. Finally, it points the reader to additional resources. 2017 Link to Resource Guide Tool 14 P False True False True True False False False False False False False False False False True True False False False False False False False True False False None 2 True False True True False False False False True
59 Addressing Attrition Bias in Randomized Controlled Trials: Considerations for Systematic Evidence Reviews Deke, J.; Sama-Miller, E.; & Hershey, A. U.S. Department of Health and Human Services This U.S. Department of Health and Human Services paper discusses whether an attrition standard based on information from education research is appropriate for use with research that the Home Visiting Evidence of Effectiveness Review (HomVEE) examines. The paper also provides an example of how to assess whether the attrition standard for one systematic evidence review fits other systematic reviews, along with considerations for adopting or modifying the standard for alternative contexts. 2015 Link to Resource Methods Report 21 pages R True False False False False True False False False False True False True False False False False False False False False False False False False False False Controlled Trials Primary caregiver’s Randomized Controlled What Works Clearinghouse follow-up Evidence Reviews Human Services Systematic Evidence attrition bias homvee boundary standard effect program rate outcomes 3 False False False False True False False False False
60 Addressing Challenges and Concerns about Opportunistic Experiments Hartog, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) developed a video series that explains how schools, districts, states, and their research partners can use a cost-effective approach, known as “opportunistic experiments,” to test the effectiveness of programs. Opportunistic experiments take advantage of a planned intervention or policy change. This video describes four common concerns that come up before starting an opportunistic experiment (fairness, access, expense, and disruption) and three key challenges that are common once the experiment is underway (study-exempt participants, building support, and observing long-term outcomes). This video will be useful to professionals considering opportunistic experiments and using research to make decisions and improve programs. 2016 Link to Resource Video 11min P False True True False True True False False False True False False False False False False False False False False False False False False False False False None 2 False False False False False True False False False
61 An educator’s guide to questionnaire development Harlacher, J. Institute of Education Sciences This Regional Educational Laboratory Central guide, designed for education administrators, offers a five-step questionnaire-development process: determining the goal(s) of the questionnaire, defining the information needed to address each goal, writing the questions, reviewing the questionnaire for alignment with goals and adherence to research-based guidelines for writing questions, and organizing and formatting the questionnaire. 2016 Link to Resource Tool 22 pages P False False False True False False False False False False False False False False False True True False False False False False False True True True False New York emotionally charged released publicly Authors compilation Public Instruction Retrieved September Sage Publications Thousand Oaks personality traits mutually exclusive questions questionnaire information respondents example question response responses data may 3 True False False True False False False False False
62 An Environmental Scan of Tools and Strategies that Measure Progress in School Reform Griffin, P.; Woods, K.; & Nguyen, C. Victorian (AU) Department of Education and Training The Victorian (AU) Department of Education and Training developed this guide to help professionals evaluate school reform. The guide begins with an overview of tools and methods for measuring the progress and impact of school reform initiatives in the U.S. and other countries. It reviews the trend for “evidence-based” reform in education and discusses the importance of rigor in the design of evaluation studies, sampling and data collection procedures, and the analysis and interpretation of evidence. After focusing on what not to do, the guide switches to providing detailed guidelines for evaluation that can be applied to a range of reform programs. One section discusses the evaluation of the implementation, processes, and outcomes of learning standards in Victoria. The final section briefly describes three case studies in which measurement and monitoring of student outcomes have been incorporated into program evaluation. This guide is a valuable resource for evaluators who need to identify appropriate indicators of successful implementation, including student academic outcomes. 2005 Link to Resource Guide 87 Pages P False True False False False False False False False False False False False False False True False False True True True False False False False False False Manpower Bureau Environmental Scan General Accounting Primary Native Flagship Strategy Family Trust Fischer Family Quality Assurance Development Branch Native English-speaking reform outcomes evaluation data indicators section factors implementation students report 3 False False True False False False False False False
63 An Evaluation of Bias in the 2007 National Households Education Surveys Program: Results From a Special Data Collection Effort Van de Kerckhove, W.; Montaquila, J.M.; Carver, P. R.; Brick, J.M. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to examine bias in estimates from the 2007 National Household Education Surveys Program due to nonresponse from both refusals and noncontact cases, as well as bias due to non-coverage of households that only had cell phones and households without any telephones. The most relevant sections for this database are appendices A through F, which provide advance, refusal, and community letters, as well as the main tools for conducting in-person follow-ups: a household folder, an interviewer observation form, a “sorry I missed you” card, an appointment card, and a non-interview report form. 2008 Link to Resource Tool 53 pages P False False False False True False False False False False False True False False False False True False False False False False False False False False False Bias Analysis; Interviews by Telephone; Random Digit Dialing; Response Rates 3 False False False True False False False False False
64 An Evaluation of the Data From the Teacher Compensation Survey: School Year 2006–07 Cornman, S.Q.; Johnson, F.; Zhou, L.; Honegger, S.; Noel, A.M. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to provide researchers with compensation, teacher status, and demographic data about all public school teachers from multiple states. Three sections are of use. Section 2 describes the data elements the survey collects. Section 4 describes data quality, and section 6 describes survey limitations. The link to access the data is at the bottom of page 55. This survey is a valuable resource for researchers who want to analyze teacher compensation data and are concerned with the sampling error and self-reporting bias present in sample surveys, as this dataset contains universe data at the teacher level for multiple states. Sections 4 and 6, which provide examples of data quality checks and red flags, are also useful for program directors and evaluators who want to assess the quality and completeness of data. 2009 Link to Resource Guide Tool 10 pages P False False False False True False False False False False False False False False False False True False False True False False False False True False False None 2 True False True True False False False False False
65 An SEA Guide for Identifying Evidence-Based Interventions for School Improvement Lee, L.; Hughes, J.; Smith, K.; Foorman, B. Florida Center for Reading Research, Florida State University The Florida Center for Reading Research developed this guide to help state education agencies consider the evidence supporting intervention options. Of particular use for evaluation purposes, the guide describes levels of evidence as defined by ESSA and informed by WWC on pages 11-18. The guide’s main focus is to describe the steps of a collaborative self-study process to identify interventions. Self-study is a process that facilitates thoughtful investigation and discussion of an issue or topic so that decisions can be made collaboratively. The guide also provides a wealth of tools, including role definitions, an intervention scoring template, a scoring guide, a consensus rating form, a planning form for providing guidance to districts, and a sample logic model. Individual scoring guides are provided in a range of areas — systemic change, leadership, instruction, staff development and retention, and school climate. 2016 Link to Resource Guide Tool 89 pages P True False False True True False False False False False False False False False False False False False False False False False False False False False False FLORIDA CENTER, Florida Department of Education, Grant Foundation, Casey Foundation, Self-Study Guide, FLORIDA STATE UNIVERSITY, Overdeck Family Foundation, SEA Guide, Kevin Smith, READING RESEARCH, School Improvement, Self-Study Process, Tennette Smith, Barbara Foorman, John Hughes, Laurie Lee, Identifying Evidence-Based Interventions, UPDATED NOVEMBER, Funding Foundations, Sandra Dilger, Holly Edenfield, Jason Graham, Shannon Houston, Eileen McDaniel, Sonya Morris, Melissa Ramsey, Michael Stowell, Jennifer Morrison, Kim Benton, Robin Lemonis, Roy Stehle, Nathan Oakley, Sonja Robertson, Table of Contents, Mississippi Department of Education, Responsibility, Roles, South Carolina Department of Education, Annie, Flexibility, Acknowledgments, iii, Introduction 3 False False True True False False False False True
66 Analysis Models and Baseline Equivalence For Impact Studies Price, C.; Wolf, A. Abt Associates This video provides guidance on the selection of analysis models for estimating impacts and assessing baseline equivalence in evaluations of education interventions. It uses practical examples from K-12 and postsecondary education. Specific analysis models are presented for common study designs along with a discussion of individual vs. cluster assignment, blocking, and inferences to a sample vs. generalization to a population. The video is also an orientation to a set of materials including a set of example data sets and programs that can be modified for use with SAS, Stata, SPSS, and R to meet the specific needs of an evaluation. See the “Analysis Models and Baseline Equivalence for Impact Studies: Guidance, Example Data Sets and Programs for SAS, Stata, SPSS, and R Users” resource in this database for these materials. This video is a valuable resource for evaluators who plan, conduct, and report results from rigorous impact evaluations of the sort that can meet WWC standards with or without reservations. 2018 Link to Resource Video Guide 47:40 R True False False False True True True False False True False False False False False False False False True True False False True False False False False None 3 True False True False False True False False True
67 Analysis Models and Baseline Equivalence For Impact Studies: Guidance, Example Data Sets and Programs for SAS, Stata, SPSS, and R Users Boulay, B.; Miller, H.; Price, C.; Wolf, A. Abt Associates This link provides a package of materials to support the selection of analysis models for estimating impacts and assessing baseline equivalence in evaluations of education interventions. It includes: a two-page overview of the contents of the zip file (000_Read Me First.pdf); 56-page PowerPoint and PDF versions of a video (called a webinar) that discusses analysis models for estimating impacts and assessing baseline equivalence, specifically using examples within postsecondary education (the video can be downloaded from the same link); a 28-page practical guide for evaluators on establishing baseline equivalence (or the similarity of treatment and comparison groups at the beginning of an evaluation so intervention effects can be attributed to the intervention); a 32-page guide to help choose an appropriate analysis model for evaluations using a group design—either a randomized controlled trial (RCT) or a quasi-experimental design (QED)—to evaluate the effectiveness of an intervention; and five data sets and programs that can be modified for use with SAS, Stata, SPSS, and R to meet the specific needs of an evaluation. This packet of materials is a valuable resource for evaluators who plan, conduct, and report results from rigorous impact evaluations that can meet WWC standards with or without reservations. (Note: In case of difficulty opening a file, rename the downloaded zip file with a shorter title e.g., Analysis Models.zip) 2017 Link to Resource Guide Tool 176 R True False False False True True True False False True False False False False False False False False True True False False True False False False False General Design Guidance, QED Tools and Resources, RCT Tools and Resources, Guidance 3 True False True True False False False False True
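To illustrate the baseline-equivalence check these materials walk through, the minimal sketch below (hypothetical pre-test data, not the packet's SAS/Stata/SPSS/R programs) computes the standardized baseline difference between groups; under the WWC group-design standards, a difference of at most 0.05 standard deviations satisfies equivalence, a difference between 0.05 and 0.25 requires statistical adjustment, and a larger difference does not satisfy equivalence.

import numpy as np

def baseline_difference(treatment_pretest, comparison_pretest):
    # Standardized difference in baseline means, using the pooled standard deviation.
    t = np.asarray(treatment_pretest, dtype=float)
    c = np.asarray(comparison_pretest, dtype=float)
    pooled_sd = np.sqrt(((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1))
                        / (len(t) + len(c) - 2))
    return (t.mean() - c.mean()) / pooled_sd

rng = np.random.default_rng(0)
diff = baseline_difference(rng.normal(0.10, 1, 200), rng.normal(0.00, 1, 200))
print(f"standardized baseline difference: {diff:.3f}")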
68 Analyzing Teacher Retention by Performance Level and School Need: Examples From Maricopa County Nicotera, A.; Pepper, M.; Springer, J.; Milanowski, A. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to provide district staff with simple analyses to help identify teacher retention challenges. The body of the guide goes through various measures and presents results for one school district in user-friendly graphics. The appendices describe the data, measures, and analyses. This guide is a valuable resource for district research and human resources staff who want to understand and address teacher retention patterns. 2017 Link to Resource Guide 13 pages P False False False True True False False False False False False False False False True True False False False False True False False False True True False None 2 True False True False False False False False False
69 Applying the WWC Standards to Postsecondary Research What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this webinar to help researchers conduct effectiveness evaluations in postsecondary settings. The facilitators provide an overview of the WWC and discuss how the WWC determines study ratings, with special considerations for postsecondary settings. They review eligible designs (randomized controlled trials and some quasi-experimental designs) and their potential to meet WWC standards. They also explain how certain aspects of a study, such as attrition, group comparability, and outcome measurements, can impact the WWC’s rating of a study. This webinar provides guidance on common, but frequently avoidable, challenges in postsecondary research that prevent studies from meeting standards. This webinar will be helpful for researchers who want to know what to look for in strong studies in postsecondary research. 2016 Link to Resource Video Guide 37 min R True False False False False True True True False True False False True True False True False False False False True False False False False False False None 3 False False True False False True False False False
70 Approaches to Evaluating Teacher Preparation Programs in Seven States Meyer, S.J.; Brodersen, R.M.; & Linick, M.A. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory Central conducted this qualitative study to inform policymakers, researchers, educators, and others who are concerned with the quality of preparing teachers for their profession. The report focuses on how seven central states evaluate their teacher-preparation programs and the changes they are making to improve the states’ approaches to evaluation. Most changes involve paying more attention to the performance of program graduates, developing common data-collection tools and data systems, and developing new ways to report evaluation data. This report is a valuable resource for policymakers and practitioners and provides a good example of a qualitative research study that can inform decision-making. 2014 Link to Resource Guide 40 pages P False True False True False False False False False False False True False False False False True False False True True False False False True False False Colorado Public Instruction American Progress Washington standards bachelors degree on-site visit lack formal count multiple teacher programs state preparation program education evaluation data states department 3 True False True False False False False False False
71 Approaches to TIF Program Evaluation: Cost-Effectiveness Analysis Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors and evaluators understand cost effectiveness analysis. The one-page guide focuses on a simple, concrete example; then lists the question this type of analysis answers, the data required, how it works, and how to use results. This guide is a valuable resource for practitioners who must choose among different programs. It is also useful for evaluators in the early stages of planning an evaluation. 2016 Link to Resource Guide 1 page P False False False True True False False False False False False False False False False False False False True False False False False False False False False None 1 False False True False False False False False True
72 Approaches to TIF Program Evaluation: Difference in Differences Design Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors and evaluators understand difference-in-differences designs. The one-page guide focuses on a simple, concrete example; then lists the question this type of analysis answers, the data required, how it works, and how to use results. This guide is a valuable resource for program directors who want to measure the impact of programs by comparing the difference in outcomes between the intervention and comparison groups before and after the intervention group experienced the intervention. It is also useful for evaluators in the early stages of planning an evaluation. 2016 Link to Resource Guide 1 page P False False False True True False True False False False False False False False False False False False True False False False False False False False False None 1 False False True False False False False False True
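The comparison this guide describes can be illustrated with a minimal sketch (hypothetical group means): the estimate is the change in the intervention group's outcome minus the change in the comparison group's outcome.

pre_treat, post_treat = 50.0, 58.0   # mean outcome for the intervention group, before and after
pre_comp, post_comp = 49.0, 53.0     # mean outcome for the comparison group, before and after

did = (post_treat - pre_treat) - (post_comp - pre_comp)
print(f"difference-in-differences estimate: {did:.1f}")   # 8.0 - 4.0 = 4.0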
73 Approaches to TIF Program Evaluation: Interrupted Time-Series (ITS) Designs Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors and evaluators understand interrupted time series designs. The one-page guide focuses on a simple, concrete example; then lists the question this type of analysis answers, the data required, how it works, and how to use results. This guide is a valuable resource for program directors who want to measure the impact of programs by comparing multiple observations on an outcome before and after an intervention. It is also useful for evaluators in the early stages of planning an evaluation. 2016 Link to Resource Guide 1 page P False False False True True False True False False False False False False False False False False False True False False False False False False False False None 1 False False True False False False False False True
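The logic of an interrupted time series can be sketched as follows (simulated monthly data, not the guide's example): project the pre-intervention trend forward and compare it with the outcomes observed after the intervention.

import numpy as np

rng = np.random.default_rng(2)
months = np.arange(24)
start = 12                                               # intervention begins at month 12
outcome = 60 + 0.2 * months + 4 * (months >= start) + rng.normal(0, 1, 24)

# Fit the pre-intervention trend, project it forward, and compare with observed outcomes.
slope, intercept = np.polyfit(months[:start], outcome[:start], 1)
projected = slope * months[start:] + intercept
shift = (outcome[start:] - projected).mean()
print(f"average post-intervention departure from the pre-intervention trend: {shift:.1f}")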
74 Approaches to TIF Program Evaluation: Regression Discontinuity Design Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors and evaluators understand regression discontinuity designs. The one-page guide focuses on a simple, concrete example; then lists the question this type of analysis answers, the data required, how it works, and how to use results. This guide is a valuable resource for program directors who want to measure the impact of programs by comparing outcomes of similar participants on either side of a cut-off point that determines whether the participant participates. It is also useful for evaluators in the early stages of planning an evaluation. 2016 Link to Resource Guide 1 page P False False False True True False False True False True False False False False False False False False True False False False False False False False False None 1 False False True False False False False False True
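The cut-off comparison this guide describes can be sketched as follows (simulated ratings and outcomes with a hypothetical cut-off of 50, not the guide's example): fit a simple line on each side of the cut-off and take the jump in predicted outcomes at the cut-off as the impact estimate.

import numpy as np

rng = np.random.default_rng(1)
cutoff = 50.0
rating = rng.uniform(0, 100, 1000)                       # assignment variable
treated = rating < cutoff                                # program offered below the cut-off
outcome = 0.3 * rating + 5.0 * treated + rng.normal(0, 3, 1000)

def predicted_at_cutoff(x, y):
    # Simple linear fit on one side of the cut-off, evaluated at the cut-off.
    slope, intercept = np.polyfit(x, y, 1)
    return slope * cutoff + intercept

impact = (predicted_at_cutoff(rating[treated], outcome[treated])
          - predicted_at_cutoff(rating[~treated], outcome[~treated]))
print(f"estimated impact at the cut-off: {impact:.2f}")   # close to the simulated effect of 5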
75 Asking Students About Teaching: Student Perception Surveys and Their Implementation – Policy and Practice Brief Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help program directors use student perception surveys as part of educator evaluations. The guide describes how to measure what matters — what teachers do, the learning environment they create, the theory of instruction that defines expectations for teachers in a system; how to ensure accuracy in student responses; how to ensure reliability with adequate sampling and a sufficient number of items; how to support improvement in teaching; and how to engage stakeholders. The guide provides real-life examples, tools, and graphics. This guide is a valuable resource for program directors and evaluators who want to leverage student perceptions as part of program evaluations. 2012 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False True False False False False False None 1 False False True False False False False False False
76 Attrition Knowledge Checks: Module 2, Chapter 5, WWC Training What Works Clearinghouse Institute of Education Sciences This video provides examples of study design decisions related to IES standards on attrition. Viewers are given time to choose the best answer before the correct answer is revealed, and the reasoning behind each answer is clearly explained. 2016 Link to Resource Video 6 min P True False False False True True False False False False False False True False False False False False False False False False False False False False False None 2 False False False False False True False False False
77 Attrition Thresholds: Module 2, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences The video discusses WWC’s attrition thresholds, distinguishing between liberal threshold and conservative threshold. The video also points the viewer to additional WWC resources to calculate and understand the attrition thresholds. 2016 Link to Resource Video 8 min P True False False False True True False False False False False False True False False False False False False False False False False False False False False None 2 False False False False False True False False False
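As context for the video, the sketch below computes the two quantities the WWC attrition thresholds are based on, overall and differential attrition; the sample counts are hypothetical, and the actual liberal and conservative boundaries should be read from the WWC attrition table rather than from this example.

    # Hypothetical randomized sample and analytic sample counts.
    randomized = {"treatment": 200, "control": 200}
    analyzed = {"treatment": 172, "control": 188}

    lost = {g: randomized[g] - analyzed[g] for g in randomized}

    # Overall attrition: share of the full randomized sample lost from the analysis.
    overall = sum(lost.values()) / sum(randomized.values())

    # Differential attrition: gap between the two groups' attrition rates.
    rates = {g: lost[g] / randomized[g] for g in randomized}
    differential = abs(rates["treatment"] - rates["control"])

    print(f"overall attrition: {overall:.1%}, differential attrition: {differential:.1%}")
    # The combination of these two rates is compared against the WWC boundary
    # (liberal or conservative) to judge whether attrition is high or low.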
78 Baccalaureate and Beyond Longitudinal Study (B&B:08/09) Field Test Methodology Report – Working Paper Series Wine, J.; Cominole, M.; Janson, N.; Socha, T. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to describe the methodology and findings of the Baccalaureate and Beyond Longitudinal Study. The relevant section for this database is appendix C, which provides examples of materials related to recruiting and maintaining participants, including parent letters, a data collection announcement letter and flyer, and thank-you letters. 2015 Link to Resource Guide Tool 284 pages P False False False False True False False False False False False True False False False False True False False False False False True False False False False None 3 True False True True False False False False False
79 Baseline Equivalence Knowledge Checks: Module 3, Chapter 5, WWC Training What Works Clearinghouse Institute of Education Sciences This video includes a knowledge check on how to define baseline equivalence, when it is appropriate to use statistical adjustments, and under what baseline conditions studies can receive the ‘meets standards with reservations’ rating. 2016 Link to Resource Video 10 min R True False False False True True True False False True False False False True False False False False False False False False False False False False False None 2 False False False False False True False False False
80 Benchmarking Education Management Information Systems Across the Federated States of Micronesia Cicchinelli, L. F.; Kendall, J.; Dandapani, N. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Pacific developed this report to provide information on the current quality of the education management information system (EMIS) in Yap, Federated States of Micronesia, so that data specialists, administrators, and policy makers might identify areas for improvement. The relevant sections of the report are in the appendices: data sources and methodology (appendix B) and data collection and instruments (appendix C). These sections are a valuable resource for program directors who want to assess the quality of their education management information system (EMIS) or who want to see an example of a data collection effort that attempts to quantify answers from interviews. In particular, this helps ensure that education policy, planning, and strategy decisions are grounded in accurate information. 2016 Link to Resource Guide Tool 34 pages P False False False False True False False False False False False False False False False False True False False False False False False False False False False education management information system, system quality, accessibility of education data, reliability of education statistics, integrity of education statistics, prerequisites of quality, data reporting, data specialists, state of Yap, Federated States of Micronesia, benchmark levels, relevance, timeliness, serviceability, consistency, accuracy, indicators, rubric, institutional frameworks, process of implementation, supporting resources, stakeholders, meeting standards 3 False False True True False False False False False
81 Better Feedback for Better Teaching: A Practical Guide to Improving Classroom Observations Archer, J.; Cantrell, S.; Holtzman, S.; Joe, J.; Tocci, C.; Wood, J. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide focuses on the components of high-quality evaluator training, including priorities, delivery methods, the creation of training videos, the effective use of an observation rubric, and data use for continuous improvement. It also provides a series of tools from states and districts to support each step of the process. This guide is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. It is also useful for evaluators of educator evaluation systems. 2016 Link to Resource Guide Tool 364 pages P False False False True True False False False False False False False False False False False True False False False True False False False True False False None 3 True False True True False False False False True
82 Building Trust in Observations: A Blueprint for Improving Systems to Support Great Teaching [Policy and Practice Brief] Wood, J.; Tocci, C.; Joe, K.; Holtzman, S.; Cantrell, S; Archer, J. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide focuses on the components of observation systems that help build trust – observation rubric, evaluator training, observer assessment, and monitoring. Each component comes with a specific set of action steps in a user-friendly table format. This guide is a valuable resource for program directors who want to design and implement a high quality observation system with an eye towards building trust and buy-in. 2014 Link to Resource Guide Tool 28 pages P False False False True True False False False False False False False False False False False True False False False True False False False True False False None 3 True False True True False False False False True
83 Causal Inference and the Comparative Interrupted Time Series Design: Findings from Within-Study Comparisons St. Clair, T.; Hallberg, K.; Cook, T.D. Society for Research on Educational Effectiveness This resource is designed to help researchers understand the circumstances under which a comparative interrupted time series (CITS) design produces estimates that are comparable to those from a randomized controlled trial (RCT). In a simple interrupted time series design, researchers compare the pre-treatment values of a treatment group time series to post-treatment values in order to assess the impact of a treatment, without any comparison group to account for confounding factors. In the CITS design, a treatment and a comparison group are evaluated before and after the onset of a treatment. The authors use two datasets to determine whether CITS estimates based on the treatment and comparison groups’ pretreatment time series are comparable to those from an RCT, and they find very different results with the two datasets. They consider multiple approaches to modeling pre-treatment trends, because the treatment and comparison groups may have different slopes in the pretreatment period; which approach to select may depend on how much data are available. 2013 Link to Resource Brief or summary Methods report 14 pages R False False False False True True True False False False False False False False False False False False True True False False True False False False False Measurement Techniques, Time, Randomized Controlled Trials, Comparative Analysis, Performance, Benchmarking, Grade 4, Grade 5, Grade 6, Scores, Achievement Tests, English, Language Arts, Mathematics 2 True True False False True False False False False
84 Challenges and Strategies for Assessing Specialized Knowledge for Teaching Orrill, C.H.; Kim, O.K.; Peters, S.A.; Lischka, A.E.; Jong, C.; Sanchez, W.B.; & Eli, J.A. Mathematics Teacher Education and Development Journal The authors of this Mathematics Teacher Education and Development article wrote it to help policy makers, grant-funding agencies, mathematics teachers, and others understand the knowledge necessary for effective teaching of mathematics and how to assess it. The guide begins with an overview of what is known about measuring teachers’ knowledge. It then highlights the challenges inherent in creating assessment items that focus specifically on measuring teachers’ specialized knowledge for teaching: creating items with appropriate difficulty levels, creating items for the target constructs, using precise language, incorporating pedagogical concerns appropriately, and writing clear stems and distracters. The article also offers insights into three practices that the authors have found valuable in their own work: creating an interdisciplinary team to write the assessment, using interviews to understand whether an item is measuring the appropriate construct and whether it is at an appropriate level of difficulty by providing insight into participants’ thinking, and adopting an iterative process to item writing. This guide is a valuable resource for those who create assessment items for measuring teachers’ knowledge for teaching. 2015 Link to Resource Guide 18 pages P False False False True False False False False False False False False False False False True False False False False False False False False True False False Teacher Knowledge, Mathematics Teacher Knowledge, Measurement, Item Development, TEDS-M 2 True False True False False False False False False
85 Checklist for Assessing USAID Evaluation Reports USAID USAID The U.S. Agency for International Development Evaluation Report Checklist helps evaluators review and strengthen draft evaluation reports. The report details 15 critical factors (noted in boldface type) that should be addressed in early drafts of the evaluation report. The authors recommend assessing the final report against all 76 factors to ensure high technical quality, a strong executive summary, and the targeting of recommendations for decision-making purposes. 2013 Link to Resource Tool 7 pages P False False False False False False False False False False False False False False False False False False False False False False False False False False False evaluation report project data information questions include evaluation report evaluation questions data collection report identify evaluation team collection instruments cost structure evaluation design 2 False False False True False False False False False
86 Checklist For Reviewing an RCT of a Social Program or Project, To Assess Whether It Produced Valid Evidence Coalition for Evidence-Based Policy Coalition for Evidence-Based Policy The Coalition for Evidence-Based Policy developed this tool to identify key items to help researchers assess whether the results of a randomized controlled trial of a social program, project, or strategy (“intervention”) produced valid evidence on the intervention’s effectiveness. The tool is a (non-exhaustive) checklist that addresses designing a study, gauging the equivalence of the intervention and control groups, selecting outcome measures, reporting the intervention’s effects, and determining whether there is enough evidence to assert that the intervention is effective. This checklist is a valuable aid, but good judgment may still be needed to gauge, for example, whether a deviation from one or more checklist items is serious enough to undermine the study’s findings. 2010 Link to Resource Tool 8 pages R True False False False False True False False False True True False False True False True False False False False True False False False False False False intervention study group groups program effects randomized sample control outcomes illustrative examples controlled trial random assignment randomized controlled randomly assign housing projects WWC standards 2 False False False True False False False False False
87 City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design Walsh, M.; Raczek, A.; Sibley, E.; Lee-St. John, T.; An, C.; Akbayin, B.; Dearing, E.; Foley, C. Society for Research on Educational Effectiveness This Society for Research on Educational Effectiveness (SREE) report shares the results from three analyses that provide different pieces of evidence for a causal relationship between a City Connects intervention and student achievement. The authors provide an overview of a multi-year effort for the evaluation of a large-scale, school-based intervention for which there are rich, longitudinal data and many opportunities to exploit design features to ask: does the intervention promote the achievement of children in high-poverty, urban schools? The authors recommended that other school-based interventions that cannot feasibly use a randomized control design instead employ a range of methods aimed at reducing endogeneity and getting closer to understanding whether a causal relationship might exist between the intervention and its outcomes. 2015 Link to Resource Brief or Summary 6 pages R False False False False False False True True False True False False False True False False False False False True False False True False False False False Academic Achievement, Quasiexperimental Design, Intervention, Elementary School Students, Urban Schools, Low Income Students, Longitudinal Studies, Program Effectiveness, Hierarchical Linear Modeling, Regression (Statistics), Student Support Not included causal inference interrupted time 2015 Conference SREE Spring Spring 2015 report card Abstract Template city connects students school intervention student schools comparison effects outcomes 2 True True False False False False False False False
88 Classroom Assessment for Student Learning: Impact on Elementary School Mathematics in the Central Region Randel, B.; Beesley, A. D.; Apthorp, H.; Clark, T.F.; Wang, X.; Cicchinelli, L. F.; Williams, J. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Central developed this report to provide evaluators with a detailed description of an implementation and impact evaluation of a professional development program in classroom and formative assessment. The report describes the intervention, study design and implementation, implementation fidelity, and impact on student and teacher outcomes. In the process, it raises a number of challenges and limitations such as attrition, non-response, missing data, inability to detect very small effects, and generalizability. Appendices provide a rich level of detail as well as data collection instruments. This report is a valuable resource for program evaluators who seek an example of a program evaluation that addresses both implementation and impact and that examines a range of threats to the reliability and validity of results. 2010 Link to Resource Methods report Tool 153 pages R False False False True True True False False False True True True True True False False True False True True True False True False True False False None 3 True False False True True False False False False
89 Cluster Assignment Studies: Data Collection and Reporting Issues Price, C. Abt Associates This slide presentation introduces researchers to What Works Clearinghouse revised cluster standards. It explains the concepts of risk of bias from individuals who enter the intervention after the study has begun. The PPT also addresses: (1) how to anticipate what data elements to consider collecting and reporting on for impact studies; (2) clusters and individuals as units of assignment; (3) causal inferences regarding impacts on individuals and clusters; (4) clear sequencing of questions and conditions that lead to determining whether revised standards make a difference; (5) stayers and early and late joiners and the risk of bias from individuals entering clusters after randomization. This slide presentation is a valuable resource for researchers planning and implementing randomized controlled trials or quasi-experimental studies. 2017 Link to Resource Slide Presentation 64 R True False False False True True True False False True False False True True False False False False False False False False True False False False False Implementation Study Tools and Resources 3 True False False False False False False True True
90 College Enrollment Patterns for Rural Indiana High School Graduates Burke, M. R.; Davis, E.; Stephan, J. L. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Midwest developed this report to describe the findings from a descriptive study of rural and non-rural differences in college enrollment patterns among Indiana’s 2010 public high school graduates enrolling in Indiana public colleges. The relevant sections of the report are the limitations section on pages 18-19 and Appendix B on data and methodology. Limitations include incomplete data, restricted ability to generalize, and inability to make causal inferences. Appendix B describes data sources, analytic sample description, missing data handling, variable creation, descriptive statistics, statistical tests, mapping analyses, a rubric on college selectivity, and regression models. These sections are a valuable resource for program directors and evaluators who want to learn or review a range of analyses that can be conducted with administrative data. 2016 Link to Resource Methods report 16 pages R False False False False True False True False False False False False False False True True False False True True True False True False False True False None 2 True False False False True False False False False
91 Common Circumstances in WWC Reviews: Module 4, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video explains that for a study to be judged as having a confounding factor, all three conditions of a confounding factor must hold: (1) the study component must be observed; (2) the component must be aligned completely with only one of the study conditions; and (3) the component must not be part of the intervention that the study is testing. If any of these conditions does not apply, then WWC defines it as a ‘non-confounding factor’. 2016 Link to Resource Video 6 min P True False False False True False False False False False False False False True False False False False False False False False False False False False False None 2 False False False False False True False False False
92 Compliance-Effect Correlation Bias in Instrumental Variables Estimators Reardon, S. F. SREE Spring 2010 Conference Abstract The study author developed this brief (SREE abstract) to help researchers appropriately use instrumental variables estimators to identify the effects of mediators in multi-site randomized trials under conditions of heterogeneous compliance. The brief describes how and when instrumental variables can be used under these conditions, provides a stylized example of a randomized trial investigating the impact of teacher professional development on student achievement, presents the structural model and findings, and concludes with the potentially substantial bias that misuse of these estimators can yield. 2010 Link to Resource Methods report Brief 8 pages R False False False False False True False False False False False False False True False False False False True True False False False False False False False Social Science Research, Least Squares Statistics, Computation, Correlation, Educational Research, Statistical Bias, Sampling, Research Design, Evaluation Methods, Intervention, Research Methodology, Research Problems, Measurement 2 False True False False True False False False False
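As background for the brief, the basic instrumental variables logic it examines can be illustrated with a simple Wald-style calculation in which random assignment serves as the instrument for receipt of professional development; the numbers are hypothetical, and the brief's central point is that this simple ratio can be biased when compliance and effects are correlated across sites.

    # Hypothetical multi-site trial: assignment is the instrument, PD receipt is the mediator.
    # Effect of assignment on the mediator (difference in take-up rates).
    takeup_assigned, takeup_control = 0.70, 0.10
    first_stage = takeup_assigned - takeup_control          # 0.60

    # Effect of assignment on the student outcome (in test-score SD units).
    outcome_assigned, outcome_control = 0.12, 0.03
    reduced_form = outcome_assigned - outcome_control       # 0.09

    # Wald/IV estimate of the mediator's effect: reduced form divided by first stage.
    iv_estimate = reduced_form / first_stage
    print(f"IV estimate of the effect of receiving the PD: {iv_estimate:.2f} SD")  # 0.15
    # When site-level compliance rates and site-level effects are correlated, this
    # ratio can be a biased estimate of the average effect, the issue the brief examines.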
93 Conducting Implementation-Informed Evaluations: Practical Applications and Lessons from Implementation Science REL Mid-Atlantic Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Mid-Atlantic developed this webinar to help districts and schools define new initiatives and measure changes needed to ensure that new skills are learned and used. It begins with a reminder of why implementation is difficult and which three components are required for educational outcomes to change. The focus then turns to two active-implementation frameworks to change a system and effectively implement evidence-based practices. The first is implementation drivers: the critical supports and infrastructure needed to make change happen. The second is improvement cycles: how we can create more hospitable environments, efficiently solve problems, and get better. The webinar also explores practical applications and examples of an implementation-informed, decision-support-data system, including fidelity assessments. It is a valuable resource for educators who want to identify, implement, and evaluate initiatives with a high level of fidelity and meet the goal of improving student outcomes. The webinar is useful for identifying how improvement cycles (rapid-cycle problem solving and usability testing) can be used to put effective practices into operation. 2015 Link to Resource Webinar Slide Presentation 71 slides, 87 minutes P False True False False False False False False False False False False False False False True True False False False False False False False False False False Administration Decision Support Data System Change Integrated Improved Student Infrastructure Improved Intervention Facilitative Administration Decision Support Selection Systems Intervention Facilitative Coaching Training Selection Systems Regional Educational next set Division H 3 False False False False False False True True False
94 Confounding Factors Knowledge Check: Module 4, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences The video provides scenarios of different studies, and the viewer is asked to determine if the scenario illustrates a study with confounding factors. The video carefully explains each correct answer. 2016 Link to Resource Video 6 min P True False False False True False False False False False False False False True False False False False False False False False False False False False False None 2 False False False False False True False False False
95 Content Knowledge for Teaching and the MET Project – Report Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement educator evaluation systems that use multiple measures of effectiveness. The guide focuses on pedagogical content knowledge, a subject-specific professional knowledge that bridges content knowledge and knowledge about the practice of teaching. The guide introduces assessments of content knowledge for teaching mathematics and English language arts. These assessments can be administered to teachers in order to establish the links among content knowledge for teaching, instructional practice, and student achievement. This guide is a valuable resource for program directors who want to design and implement a high quality educator evaluation system. 2010 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
96 Continuous Improvement in Education Excerpt 1: The Model for Improvement REL program Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to implement and continually monitor programs. This first excerpt introduces the three essential questions that guide the model for improvement: what problem are we trying to solve, what changes may we introduce and why, and how will we know that the change is actual improvement. It provides an example in an education setting. This video will be helpful to program staff interested in implementing a continuous improvement process and wanting a very basic introduction. 2016 Link to Resource Video Webinar 2 min P False True False False True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False False False True True False False
97 Continuous Improvement in Education Excerpt 2: The Plan Do Study Act (PDSA) Cycle REL Northeast & Islands Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to implement and continually monitor programs. This second excerpt provides a brief overview of the Plan-Do-Study-Act process, a four-step approach to systematically implement, monitor, and improve programs. It mentions tools that help in its implementation and provides an example in an education setting. This video will be helpful to program staff interested in implementing a continuous improvement process and wanting a very basic introduction. 2016 Link to Resource Video Tool 3 min P False True False False True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False True False True False False True
98 Continuous Improvement in Education Excerpt 3: Fishbone Diagram REL Northeast & Islands Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to continually monitor programs. This third excerpt introduces the participants to the fishbone diagram, a tool that helps identify the causes of a problem. It illustrates its use with an example. This video will be helpful to program staff interested in implementing a continuous improvement process and anyone looking to explore several layers of causes of an issue. 2016 Link to Resource Video Tool 3 min P False True False False False False False False False False False False False False False False False False False False False False False False False False False None 1 False False False True False True False False False
99 Continuous Improvement in Education Excerpt 4: Practical Measures and the Driver Diagram REL Northeast & Islands Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to implement and continually monitor programs. This fourth excerpt provides guidance on selecting process and outcome measures with a focus on measures that are useful to practitioners and embedded in existing practice. It introduces a tool that can be used to identify measures – the driver diagram – and illustrates its use with examples. This video will be helpful to program staff interested in implementing a continuous improvement process and anyone looking to identify a sequence of testable items that have an effect on outcomes. 2016 Link to Resource Video Tool 5 min P False True False False False False False False False False False False False False False True False False False False False False False False False False False None 1 False False False True False True False False True
100 Continuous Improvement in Education Excerpt 5: Practitioner Testimonial REL Northeast & Islands Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a webinar series to help professionals put in place a process to implement and continually monitor programs. This fifth excerpt features a principal who discusses her school’s use of continuous improvement to improve math discourse. She notes the types of data teachers collected, the challenges they encountered, and how they addressed them through multiple Plan-Do-Study-Act cycles. This video will be helpful to program staff considering implementing a continuous improvement process and who could benefit from a real-life example and how the experience was valuable. 2016 Link to Resource Video Webinar 4 min P False True False False True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False False False True True False False
101 Contrast Tool Goodson, B.; Price, C.; Wolf, A.; Boulay, B. Abt Associates Abt Associates developed this tool to help researchers design program evaluations. This “contrast tool” is an Excel spreadsheet designed to support and supplement the development of an evaluation plan. It lists each impact that an evaluation will estimate to test program effectiveness. Individual worksheets can be used to enter information on research questions, outcome measures, baseline measures, samples, and contrasts (i.e., impacts) to be tested. Four worksheets can be used for reporting results: impacts, attrition (for RCTs only), baseline equivalence (for RCTs with high attrition and quasi-experimental designs), and representativeness at baseline. There is an instructions tab for each worksheet. This tool is a valuable resource for evaluators who want to align planned analyses with research questions and to outline key information about the plan’s outcome measures, baseline measures, and analytic samples. The database also includes two examples (https://ies.ed.gov/ncee/projects/xls/ExContrastToolStudentRCT.xlsm and https://ies.ed.gov/ncee/projects/xls/ExContrastToolQED.xlsm) that can help complete the template. This resource can be used in combination with the evaluation plan template also included in this database (https://ies.ed.gov/ncee/projects/pdf/EvaluationPlanTemplate.pdf). 2017 Link to Resource Tool 24 R True False False False False True True False False False False False False False False False False False False False False False False False False False False None 3 False False False True False False False False True
102 Cost-Analysis Methods: What Are These Methods & How Can School Districts Benefit From Using Them? DeCesare, D.; Fermanich, M. Institute of Education Sciences The Institute of Education Sciences developed this webinar to introduce professionals to cost analyses in education research. The webinar focuses on three key types of cost analysis, each of which is valid and answers different questions: cost-effectiveness, cost-feasibility, and cost-benefit. The first eleven minutes of the webinar provide introductions and an agenda for the session; note that education research has focused on the effectiveness of interventions more than on their costs and benefits and explain why it is important to consider costs in tight fiscal environments; point interested participants to the Center for Benefit-Cost Studies at Columbia University for resources and training; and mention cost-utility analysis, an approach this webinar does not cover. At about 11:00, the webinar goes in depth into cost-effectiveness analyses, which are used to compare alternative programs. A discussion of cost-feasibility analysis begins at about 22:25; this is the method employed when cost is the only or primary decision factor. At 24:10, the webinar moves on to a discussion of cost-benefit analysis, which is used to identify the cost of a program and the range of its benefits. Pros and cons of each type of analysis are discussed and hypothetical examples are provided (a series begins at about 32:00). At about 50:00, the webinar introduces the ingredients method for identifying all components of a program and their cost. At 58:46, the webinar goes into three methods to measure benefits: experimental, quasi-experimental, or correlational studies (58:56), self-reported willingness to pay (59:48), and comparable marketplace services (60:08). More examples begin at about 61:00. A summary starts just before 64:00 before the webinar moves on to a real-world example at 67:20. The webinar concludes starting at 86:48 with a short Q&A. 2017 Link to Resource Webinar Guide 1:29:33 P False False False False True True True False False False False False False False False False True False False False False False True False True False False None 3 True False True False False False True False True
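Once costs and effects have been measured, the comparisons discussed in the webinar reduce to simple ratios; the sketch below uses entirely hypothetical per-student costs and effects to illustrate a cost-effectiveness comparison of two programs.

    # Hypothetical per-student costs and effects (in test-score SD units) for two programs.
    programs = {
        "Program A": {"cost": 300.0, "effect": 0.15},
        "Program B": {"cost": 500.0, "effect": 0.20},
    }

    # Cost-effectiveness ratio: dollars per unit of effect (lower is more cost-effective).
    for name, p in programs.items():
        cer = p["cost"] / p["effect"]
        print(f"{name}: ${cer:,.0f} per SD of achievement gain")
    # Program A: $2,000 per SD; Program B: $2,500 per SD. A cost-benefit analysis would
    # instead convert the effects into dollar-valued benefits and compare them with costs.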
103 Danielson’s Framework for Teaching for Classroom Observations Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide presents the elements that constitute the Danielson framework, a research-based protocol to evaluate math and English language arts lessons in all grades. This guide is a valuable resource for program directors who want to select a teacher observation tool. 2010 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
104 Data Collection and Use in Early Childhood Education Programs: Evidence from the Northeast Region Zweig, J.; Irwin, C. W.; Kook, J. F.; Cox, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed this report to describe preschools’ collection and use of data on early learning outcomes, the amount of time children spend in early childhood education, and classroom quality. The relevant sections of the report are the limitations section on page 18 and the appendices. Limitations include selection bias, small sample size, and missing data. Appendix A describes the sample, recruitment strategy, interview protocol, and procedures for analyzing the interviews. Appendix B describes the sample, variables, and methodology used to analyze the data (descriptive statistics and statistical tests), including how missing data were handled. Figure C1 in Appendix C illustrates the process used to combine data from multiple sources. Appendices D and E provide interview protocols. These sections are a valuable resource for program directors and evaluators who want to learn as much as possible from a limited sample. 2015 Link to Resource Methods report Tool 18 pages R False False False False True False False False False False True True False False True True True True False True False False True False True False False None 2 True False False True True False False False False
105 Data Quality Essentials – Guide to Implementation: Resources for Applied Practice Watson, J.; Kraemer, S.; Thorn, C. Center for Educator Compensation Reform The Center for Educator Compensation Reform published this article to help school systems identify, address, and plan for data quality problems before performance decisions come under the scrutiny of system stakeholders. The first goal of the article is to identify the dimensions of data quality from a compensation reform perspective, provide Teacher Incentive Fund (TIF) project leaders with a data quality focus, and describe common data quality problems and solutions. The second goal is to help TIF leaders focus their attention on the quality of student-teacher linkage data. The article concludes with a brief discussion of continuous improvement methods through follow-up and feedback cycles for teachers. 2016 Link to Resource Guide 18 pages P False False False False True False False False False False False False False False True False False True False True False False True False True False False None 2 True False True False False False False False False
106 Defining and Measuring Baseline Equivalence: Module 3, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video defines and describes the importance of baseline equivalence between intervention and comparison groups in the analytic sample. It presents WWC’s standard for equivalence: if the baseline difference between groups is greater than 0.25 effect size units, the study does not satisfy baseline equivalence. The video also describes the use of Hedges’ g and Cox’s Index to calculate effect sizes. 2016 Link to Resource Video 10 min R True False False False True True True False False True False False True True False False False False False False False False False False False False False None 2 False False False False False True False False False
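Because the video refers to Hedges' g for continuous outcomes, the sketch below shows one way such a baseline effect size might be computed and checked against the 0.05 and 0.25 boundaries used in WWC reviews; the group statistics are hypothetical.

    import math

    # Hypothetical baseline means, standard deviations, and sample sizes.
    m_t, s_t, n_t = 51.0, 9.5, 120   # intervention group
    m_c, s_c, n_c = 49.8, 10.0, 115  # comparison group

    # Pooled standard deviation and Hedges' g (with the small-sample correction).
    pooled_sd = math.sqrt(((n_t - 1) * s_t**2 + (n_c - 1) * s_c**2) / (n_t + n_c - 2))
    g = (m_t - m_c) / pooled_sd
    g *= 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample correction factor

    if abs(g) <= 0.05:
        status = "satisfies baseline equivalence"
    elif abs(g) <= 0.25:
        status = "requires statistical adjustment"
    else:
        status = "does not satisfy baseline equivalence"
    print(f"baseline difference g = {g:.3f}: {status}")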
107 Defining Confounding Factors: Module 4, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences Confounding factors are components of a study that make it difficult or impossible to isolate the effect of the intervention. To be identified as a confounding factor, the study component must be observed, be aligned completely with only one of the study conditions, and not be part of the intervention that the study is testing. The video provides examples of these conditions. 2016 Link to Resource Video 8 min P True False False False True False False False False False False False False True False False False False False False False False False False False False False None 2 False False False False False True False False False
108 Demystifying the What Works Clearinghouse: A Webinar for Developers and Researchers Constantine, J.; Cody, S.; Seftor, N.; Lesnick, J.; and McCallum, D. Institute of Education Sciences: What Works Clearinghouse at the U.S. Department of Education The What Works Clearinghouse (WWC) developed this webinar with materials to increase researchers’ knowledge of the key features of the WWC review process, including how they identify research studies and evaluate them against WWC standards. This discussion also provided details on: the role of the WWC, why it is important to have standards for research, how WWC reviews are conducted, what is done to ensure the quality and accuracy of WWC reviews and reports, how to submit questions to the WWC, how to submit a study for review, and WWC resources for researchers and developers. This webinar is a valuable resource for researchers who want to understand the features of high quality research, especially if they intend to submit their work for WWC review. 2013 Link to Resource Webinar 15 slides, 1 hour R True False False False False False False False False False False False False False False False False False False False False False False False False False False None 3 False False False False False False True False False
109 Designing and Conducting Strong Quasi-Experiments in Education. Version 2 Scher, L.; Kisker, E.; & Dynarski, M. Institute of Education Sciences The Institute of Education Sciences (IES) developed this guide to help researchers design and implement strong quasi-experimental designs (QED) when assessing the effectiveness of policies, programs, or practices. The guide first discusses the issues researchers face when choosing to conduct a QED, as opposed to a more rigorous randomized controlled trial design. Next, it documents four sets of best practices in designing and implementing QEDs, including: 1) considering unobserved variables, 2) selecting appropriate matching strategies for creating comparison groups, 3) following general guidelines for sound research, and 4) addressing the What Works Clearinghouse (WWC) standards. The guide then presents 31 frequently asked questions related to QEDs meeting WWC standards with reservations, and discusses common pitfalls that cause studies not to meet WWC standards. Topics covered in the frequently-asked questions section include: study design/group formation, outcomes, confounding factors, baseline equivalence, sample loss and power, and analytic techniques. A detailed checklist provides valuable items for researchers to consider when designing a strong QED study, and a table clarifies why each issue is important and how it relates to WWC standards. The guide is also useful for researchers looking for additional detail, as it provides links to related IES resources. 2015 Link to Resource Guide Tool 27 pages R True False False False False False True False False True True False True True False True False False True True False False False False False False False Educational Research, Quasiexperimental Design, Best Practices, Predictor Variables, Check Lists, Comparative Analysis, Standards, Clearinghouses, treatment administrative records random assignment Design Characteristics developmental math mirror image completely aligned field settings best practices wwc study treatment standards comparison group groups outcomes researchers intervention 3 False False True True False False False False False
110 Designing Quasi-Experiments: Meeting What Works Clearinghouse Standards Without Random Assignment: What Works Clearinghouse Lesnick, J.; Seftor, N.; Knab, J. Institute of Education Sciences: What Works Clearinghouse at the U.S. Department of Education The webinar responds to questions about What Works Clearinghouse standards and procedures for studies that do not use random assignment. The intent is to inform high-quality non-experimental research design. Within a QED, baseline equivalence must be established for the treatment and control groups. Under WWC standards, a baseline difference of 0 to 0.05 effect size (ES) units satisfies baseline equivalence, a difference between 0.05 and 0.25 ES satisfies it only with statistical adjustment, and a difference greater than 0.25 ES does not satisfy baseline equivalence. The webinar covers the What Works Clearinghouse and its standards; the characteristics of quasi-experimental designs (QEDs), along with tips and cautions; resources; and ways to stay informed. It closes with questions and answers. 2015 Link to Resource Webinar 1 hour, 13 slides R True False False False True True True False False True False False False False False True True False True True False False False False False False False None 3 False False False False False False True False False
111 Designing Strong Studies: Developing Studies Consistent with What Works Clearinghouse Evidence Standards Agodini, R.; Constantine, J.; Seftor, N.; & Neild, R.C. Institute of Education Sciences An Institute of Education Sciences webinar to help researchers design and execute studies more likely to meet What Works Clearinghouse’s (WWC) standards for research studies. The webinar focuses on several key aspects of successfully conducting randomized controlled trials and quasi-experimental study designs in an educational setting, including such issues as attrition. The webinar draws on WWC’s resources to explain how to identify and maintain a research sample, how to collect the necessary data, how to conduct analysis consistent with WWC standards, and strategies to address some of the common pitfalls in effectiveness research. 2014 Link to Resource Webinar 88 min., 34 slides R True False False False False True True False False True True True True True False True False False False True True False True False False False False you study what standards attrition intervention treatment wwc assignment comparison quasi experimental confounding factors qualitative conclusion causal statements 3 True False False False False False True False False
112 Designing Teacher Evaluation Systems: New Guidance from the Measures of Effective Teaching Project Kane, T.; Kerr, K.; Pianta, R. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation published this book to share the lessons learned from its multi-year Measures of Effective Teaching project. The book is a compilation of research papers that use a large database the Foundation created in the course of the project. It focuses on three aspects of the study: using data for feedback and evaluation, connecting evaluation measures with student learning, and the properties of evaluation systems (quality, frameworks, and design decisions). This guide is a valuable resource for professionals designing and evaluating educator evaluation systems. The guide is also useful for program designers and evaluators who want to anticipate issues that are likely to arise due to their design decisions. 2015 Link to Resource Methods Report 617 pages R False False False True True False True False False False False True False False True True True False True True True True True False True False False None 3 True False False False False False False False False
113 Developing Surveys Using the Collaborative Survey Development Process Irwin, C.; Stafford, E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a video series to help professionals develop, administer, and analyze surveys in practical education contexts. The series builds on a set of guides available online. The presentation (from about 0:08:55 to 1:15:00) addresses collaborative survey development. It defines collaborative survey development as a process that includes educators, researchers, and content experts, which leverages expertise, reduces the burden on any one individual, and promotes survey quality. The video describes the establishment of a development team and the five steps in the process: identifying topics of interest, identifying survey items, drafting new items and adapting existing ones, reviewing draft items with stakeholders and content experts, and refining the draft survey with pretesting using cognitive interviewing. It provides practical examples, tools, and activities for each step. This video will be helpful to those interested in developing and administering surveys as a means of collecting data to inform education policy and practice. 2016 Link to Resource Video Guide 1 hour 32 min P False False False False True False False False False False False False False False False False True False False False False False False False False False False None 3 False False True False False True False False False
114 Developing Surveys Using the Collaborative Survey Development Process Excerpt 2: Rewriting Items Irwin, C.; Stafford, E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a video series to help professionals collaboratively develop and administer surveys. The series briefly summarizes information from a related guide. This video proposes three ways to phrase a survey question and asks participants to identify the weaknesses of each phrasing. This video will be helpful to those wanting to build their awareness of the complexity of item writing. 2016 Link to Resource Video Brief 4 minutes P False False False False True False False False False False False False False False False False True False False False False False False False False False False None 1 False True False False False True False False False
115 Developing Surveys Using the Collaborative Survey Development Process Excerpt 1: Survey Research Process and Collaborative Survey Development Irwin, C.; Stafford, E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a video series to help professionals collaboratively develop and administer surveys. The series very briefly summarizes information from a related guide. This video mentions the three stages of the survey research process: survey development, sample selection and survey administration, and data analysis and reporting. It then asks participants why they think the development process should be collaborative. Next, it defines collaborative survey development as a process that includes educators, researchers, and content experts, which leverages expertise, reduces the burden on any one individual, and promotes survey quality. This video will be helpful to those considering the utility of developing surveys as part of a collaboration. 2016 Link to Resource Video 5 minutes P False False False False True False False False False False False False False False False False True False False False False False False False False False False None 1 False False False False False True False False False
116 Developing Surveys Using the Collaborative Survey Development Process Excerpt 3: Q&A. Irwin, C.; Stafford, E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Northeast & Islands developed a video series to help professionals collaboratively develop and administer surveys. The series briefly summarizes information from a related guide. This video defines cognitive interviewing as a process to identify and correct problems with surveys by watching respondents take one. This can reveal issues such as confusing items, selection bias, and respondents who open the survey and close it almost immediately. The authors have used this process as a way to pilot or pre-test a survey and recommend conducting it before a full pilot. This video will be helpful to those who have no prior knowledge of cognitive interviewing. 2016 Link to Resource Video 4 min P False False False False True False False False False False False False False False False False True False False False False False False False False False False None 1 False False False False False True False False False
117 Dimensions of Dosage: Evaluation Brief for TIF Grantees Bailey, J.; Shakman, K.; Ansel, D.; Greller, S. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to introduce evaluators to dosage as a tool that can be used when it is not possible to identify a good comparison group for evaluating the effects of a program. Dosage refers to the amount of the intervention that is delivered—for example, the number of hours teachers spend with a teacher leader. The guide describes dosage, its dimensions, associated data requirements, and considerations to keep in mind when thinking about measuring dosage. It provides an example from a school district and a list of questions to guide discussions around dosage. This guide is a valuable resource for program directors planning a high quality evaluation. 2016 Link to Resource Guide 8 pages P False False False True True False False False False True False False False False False False False False False False False False False False False False False None 2 False False True False False False False False False
118 Dynamic Effects of Teacher Turnover on the Quality of Instruction. Working Paper 170 Hanushek, E.; Rivkin, S.; Schiman, J. National Center for Analysis of Longitudinal Data in Education Research The National Center for Analysis of Longitudinal Data in Education Research developed this report to provide researchers with a thorough analysis of the confounding factors that affect conclusions about the relationship between teacher turnover and quality instruction. The report focuses on analyses that aim to account for nonrandom sorting of students into classrooms, endogenous teacher exits, and grade-switching. A literature review describes the steps taken in previous research, lessons learned, and limitations that this paper addresses. 2016 Link to Resource Methods report 52 pages R False False False False True False False False False False False False False False True True False False True True False False True False True False False Labor Turnover, Teacher Persistence, Educational Quality, Urban Schools, Disadvantaged Youth, Low Achievement, Teacher Distribution, Teacher Placement, Teaching Experience, School Districts, Teacher Effectiveness, Productivity, Public Schools, Academic Achievement, Comparative Analysis, Instructional Program Divisions, Records (Forms), Student Records, Statistical Analysis 3 True False False False True False False False False
119 Educator Evaluation System Data: What Exists and How do we use it? Lemke, M.; Livinston, T.; & Rainey, K. Regional Education Laboratory Program, Institute of Education Sciences This Regional Educational Laboratory (REL) Midwest webinar presents research on different designs of teacher evaluation systems and provides examples of how educator evaluation data are being used to support decision-making and inform professional development. The presenters also share insights on the implementation of evaluation systems and use of data at the state and district levels. 2015 Link to Resource Webinar Slide presentation 68 slides, 78 minutes P False True False True True False False False False False False False False False False False False False False False False False False False False False False Educator Effectiveness implementation research data assessment effective leads cohort practices circle 3 False False False False False False True True False
120 Estimating causal effects using experimental and observational designs Schneider, B.; Carnoy, M.; Kilpatrick, J.; Schmidt, W.; & Shavelson, R. American Educational Research Association This American Educational Research Association (AERA) report is intended to help researchers, educators, and policymakers understand causal estimation by describing the logic of causal inference and reviewing designs and methods that allow researchers to draw causal inferences about the effectiveness of educational interventions. The purpose of the report is to explain the value of quasi-experimental techniques that can be used to approximate randomized experiments. The report does not address many of the nuances of experimental and quasi-experimental designs. 2007 Link to Resource Methods Report 158 pages R True False True False False True True True False True False False True True True False False False True True False False True False False False False American Children’s Cognitive Cognitive Growth Left Behind North Carolina Rosenbaum characteristics coverage curriculum kinder effects students research causal treatment school achievement education student schools 3 True False False False True False False False False
121 Evaluating Principals Work: Design Considerations and Examples for an Evolving Field Kimball, S.; Clifford, M.; Fetters, J.; Arrigoni, J.; Bobola, K. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program directors design principal evaluation systems. The first section addresses leadership effectiveness literature and the documented need for principal evaluation. The second section includes the key components of principal evaluation systems supplemented with examples from innovative school districts and states. This section addresses challenges reformers often confront in the design process and solutions for some of those challenges. The third section summarizes the critical areas of evaluator training and pilot testing. The fourth section highlights stakeholder engagement and communication. The fifth section summarizes training and pilot testing. This guide is also a valuable resource for researchers who want to assess the quality of principal evaluation systems. 2012 Link to Resource Methods Report 23 pages P False False False True True False False False False False False False False False False True True False False False False False True False True True False None 3 True False False False False False False False False
122 Evaluating Programs for Strengthening Teaching and Leadership Milanowski, A.; Finster, M. Teacher and Leadership Programs The Teacher Incentive Fund developed this guide to help grantees evaluate their programs. The guide focuses on key aspects of the evaluation process, including identifying the logic of how the program will lead to the desired outcomes, developing evaluation questions that examine this logic, exploring methods for measuring these evaluation questions, choosing an appropriate evaluation design for assessing impacts, disseminating evaluation findings, and choosing the right evaluator. This guide is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The guide is also useful for seasoned evaluators who want to review important evaluation issues. 2016 Link to Resource Guide 67 pages P False False False True True True True False True True False False False True False True False False True True True False True True True True True None 3 True False True False False False False False True
123 Evaluation of the Effectiveness of the Alabama Math, Science, and Technology Initiative (AMSTI) Newman, D.; Finney, P. B.; Bell, S.; Turner, H.; Jaciw, A.; Zacamy, J.; Feagans, G. L. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed this report to describe the findings from a study of a two-year intervention intended to improve student achievement in math, science, and technology. An overview of findings starts in the last paragraph of page xxiv and ends at the bottom of page xxvii. The relevant sections of the report are selection and random assignment of schools (appendix C on pages C-1 to C-3), a teacher survey of math and science classroom practices and professional development and availability of technology (appendix G on pages G-1 to G-15), data cleaning and data file construction (appendix H on page H-1), and sections on attrition (appendix I — steps through which the analytic sample was selected for each confirmatory outcome on pages I-1 to I-8, N — attrition associated with the SAT reading outcomes, teacher content knowledge in mathematics and in science, and student engagement in mathematics and science through study stages for samples used in Year 1 exploratory analysis on pages N-1 to N-8, and R — on attrition through study stages for samples contributing to estimation of two-year effects on pages R-1 to R-9). 2012 Link to Resource Methods Report 44 Pages R False False False False True True False False False True False False True True True False False True True True False False True False True False False Student Achievement, Mathematics, Science 3 True False False False True False False False False
124 Evaluation of the Teacher Incentive Fund: Final Report on Implementation and Impacts of Pay-for-Performance Across Four Years Chiang, H.; Speroni, C.; Herrmann, M.; Hallgren, K.; Burkander, P.; Wellington, A. Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance The Institute of Education Sciences developed this report to disseminate evaluation findings on the Teacher Incentive Fund's (TIF) implementation and impacts of pay-for-performance. The relevant sections of the report are the overview in Chapter II and the methodology appendices. Appendix A provides information on random assignment and matching of schools, attrition, selection of the teacher survey sample, sampling, nonresponse, and treatment of missing data. Appendix B provides the rationale for and technical details of the methods used in the report. It covers standardization of educator performance ratings and student test scores across districts; the technical approach for describing the distribution of performance ratings and TIF payouts in evaluation districts; the analytic methods in detail; interpretation of student achievement impacts; methods used to explore differences in impacts across teacher and student subgroups, districts, and schools; estimation of year-to-year changes in average teacher perceptions of TIF; imputation of educators' beliefs about bonus amounts; and minimum detectable impacts. These sections are a valuable resource for program directors and evaluators who are designing and implementing high-quality evaluations that involve random assignment and both qualitative and quantitative data collection. 2017 Link to Resource Methods report 63 pages R False False False False True True False False False True True False True True True True True False True True False False True False True True False Incentive, teachers 3 True False False False True False False False False
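The appendix B methods summarized above include standardizing educator performance ratings and student test scores across districts. A minimal sketch of one common approach, converting scores to z-scores within district and year, is shown below; the column names and grouping variables are illustrative assumptions, not the specification used in the TIF report.

```python
# Minimal sketch: put raw scores from different districts on a common scale
# by converting them to z-scores within each district and year.
# Column names ("district", "year", "score") are illustrative assumptions.
import pandas as pd

def standardize_within_group(df: pd.DataFrame,
                             score_col: str = "score",
                             group_cols: tuple = ("district", "year")) -> pd.DataFrame:
    """Return a copy of df with an added z-score column computed within each group."""
    grouped = df.groupby(list(group_cols))[score_col]
    out = df.copy()
    out[score_col + "_z"] = (df[score_col] - grouped.transform("mean")) / grouped.transform("std")
    return out

# Example usage with made-up data
data = pd.DataFrame({
    "district": ["A", "A", "A", "B", "B", "B"],
    "year": [2015] * 6,
    "score": [410, 450, 430, 1210, 1250, 1190],
})
print(standardize_within_group(data))
```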
125 Evaluation Plan Template Price, C.; Goodson, B.; Wolf, A.; Boulay, B. Abt Associates Abt Associates developed this guide to help researchers design program evaluations. It identifies the key components of an evaluation plan and provides guidance about the information typically included in each section of a plan for evaluating both the effectiveness and implementation of an intervention. It suggests which sections should be filled during initial evaluation planning and which can be filled later in the process. Guidance appears in italics in a box under each section heading. After the document is downloaded, saved, and opened in a PDF editor, text can be typed into each box and the guidance can be deleted. Throughout, there are references to additional resources or tools that can help develop an evaluation plan. This tool is a valuable resource for evaluators who want to align planned analyses with research questions and to outline key information about the plan's outcome measures, baseline measures, and analytic samples. The database includes example plans for student-level (https://ies.ed.gov/ncee/projects/pdf/ExEPStudentLevelRCT.pdf) and cluster (https://ies.ed.gov/ncee/projects/pdf/ExEPClusterRCT.pdf) randomized controlled trials and quasi-experimental designs (https://ies.ed.gov/ncee/projects/pdf/ExEPQuasiExperimentalDesign.pdf). The template can be used in combination with a Contrast Tool also included in the database (https://ies.ed.gov/ncee/projects/xls/ContrastTool.xlsm) to document each impact that the evaluation will estimate to test program effectiveness. 2016 Link to Resource Guide Tool 15 R True False False False False True True False False True False False True False False True True False False False False False False False False False False None 2 False False True True False False False False True
126 Example Contrast Tool for a Quasi-Experimental Design Wolf, A.; Goodson, B.; Gan, K.; Price, C.; Boulay, B. Abt Associates Abt Associates developed this tool to help researchers design program evaluations. This “contrast tool” is an Excel spreadsheet designed to support and supplement the development of an evaluation plan. It contains information from an example study that investigates the impact of a mathematics intervention that aims to improve students’ successful completion of developmental math courses in order to prepare them to meet core course requirements for college-level mathematics. Individual worksheets contain information on research questions, outcome measures, baseline measures, samples, and contrasts (i.e., impacts) to be tested. Two worksheets have reporting information on attrition and baseline equivalence. There is an instructions tab for each worksheet. This tool is a valuable resource for evaluators who want to align planned analyses with research questions and to outline key information about the plan’s outcome measures, baseline measures, and analytic samples. It can be used in combination with the matching example evaluation plan (https://ies.ed.gov/ncee/projects/pdf/ExEPQuasiExperimentalDesign.pdf) included in this database. A template version is included as well (https://ies.ed.gov/ncee/projects/xls/ContrastTool.xlsm). 2016 Link to Resource Tool 20 R True False False False True False True False False True False False True False False True False False False True True False True False False False False None 2 True False False True False False False False True
127 Example Contrast Tool for a Student-Level Randomized Controlled Trial Wolf, A.; Epstein, C.; Goodson, B.; Price, C.; Boulay, B. Abt Associates Abt Associates developed this tool to help researchers design program evaluations. This “contrast tool” is an Excel spreadsheet designed to support and supplement the development of an evaluation plan. It contains information from an example study that investigates the impact of a peer support community on college students at risk of failing to complete a Bachelor’s degree. Individual worksheets contain information on research questions, outcome measures, baseline measures, samples, and contrasts (i.e., impacts) to be tested. Three worksheets can be used for reporting results: impacts, attrition, and baseline equivalence. There is an instructions tab for each worksheet. This tool is a valuable resource for evaluators who want to align planned analyses with research questions and to outline key information about the plan’s outcome measures, baseline measures, and analytic samples. It can be used in combination with the matching evaluation plan example (https://ies.ed.gov/ncee/projects/pdf/ExEPStudentLevelRCT.pdf) included in this database. A template version is included as well (https://ies.ed.gov/ncee/projects/xls/ContrastTool.xlsm). 2016 Link to Resource Tool 20 R True False False False True True False False False True False False True False False True False False False True True False True False False False False None 2 True False False True False False False False True
128 Example Evaluation Plan for a Quasi-Experimental Design Wolf, A.; Goodson, B.; Gan, K.; Price, C.; Boulay, B. Abt Associates Abt Associates developed this guide to help researchers design studies that use a quasi-experimental design (QED). This evaluation plan template identifies the key components of an evaluation plan, provides guidance about the information typically included in each section of a plan for evaluating the effectiveness of an intervention, and illustrates each section with an example. It can be used in combination with a “contrast tool” available in this database (https://ies.ed.gov/ncee/projects/xls/ExContrastToolQED.xlsm), which is an Excel file that lists each impact that the example evaluation will estimate to test program effectiveness. The database also includes a template (https://ies.ed.gov/ncee/projects/pdf/EvaluationPlanTemplate.pdf). This guide is a valuable resource for evaluators who want to align planned analyses with research questions and to outline key information about the plan’s key measures and analytic samples. 2016 Link to Resource Guide Tool 21 R True False False False True False True False False True True False True True True True False False True True False False True False False False False None 3 True False True True False False False False True
129 Example Evaluation Plan for a Student-Level Randomized Control Trial Wolf, A.; Epstein, C.; Goodson, B.; Price, C.; Boulay, B. Abt Associates Abt Associates developed this guide to help researchers design student-level randomized controlled trials (RCTs). It identifies the key components of an evaluation plan, provides guidance about the information typically included in each section of a plan for evaluating both the effectiveness and implementation of an intervention, and illustrates each section with an example. The plan is for an RCT in which students are randomly assigned to an intervention or a control condition. It can be used in combination with a “contrast tool” available in this database (https://ies.ed.gov/ncee/projects/xls/ExContrastToolStudentRCT.xlsm), which is an Excel file that lists each impact that the example evaluation will estimate to test program effectiveness. The database also includes a template (https://ies.ed.gov/ncee/projects/pdf/EvaluationPlanTemplate.pdf). This guide is a valuable resource for evaluators who want to align planned analyses with research questions and to outline key information about the plan’s outcome measures, baseline measures, and analytic samples. 2016 Link to Resource Guide Tool 27 R True False False False True True False False False True True True True True True True True False True True False False True False False False False None 3 True False True True False False False False True
130 Face Validity and Reliability: Module 5, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video defines and provides examples of face validity and reliability. Face validity is the extent to which the chosen measure actually captures the outcome of interest. Reliability is the extent to which a measure yields similar scores when used repeatedly, with minimal risk of measurement error. 2016 Link to Resource video 6 min P True False False False True False False False False False False False False False False True False False False False False False False False False False False None 2 False False False False False True False False False
131 First in the World Project Directors Meeting: Implementing a Successful Quasi-Experimental Design Moss, M. Institute of Education Sciences Abt Associates developed this guide to help researchers implement quasi-experimental designs (QEDs). The guide begins with a review of randomized controlled trials (RCTs), which are considered the strongest program evaluation design in the field. It focuses on QEDs as the next best alternative when RCTs are not feasible and describes their limitations. The guide suggests steps to take and pitfalls to avoid when selecting a comparison group and provides a link to a list of software for implementing matching methods. The final section of the guide discusses baseline equivalence. 2016 Link to Resource Slide Presentation Guide 26 R True False False False True True True False False True False False False True False False False False False True False False False False False False False References WWC Standards, Any Education Topic, Initial Planning, RCT, QED, Identifying Comparison Groups, Reducing Bias in Comparison Groups, Addressing Analysis Challenges 3 False False True False False False False True True
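Because the final section of the guide discusses baseline equivalence, the sketch below illustrates how an evaluator might compute a standardized baseline difference between groups and compare it to the WWC equivalence thresholds (at or below 0.05 standard deviations, equivalence is satisfied; between 0.05 and 0.25, statistical adjustment is required; above 0.25, equivalence is not satisfied). The function uses a simple pooled standard deviation, omits the small-sample correction in the WWC's Hedges' g formula, and runs on invented data.

```python
# Minimal sketch: standardized baseline difference between treatment and comparison
# groups, checked against the WWC equivalence thresholds (0.05 and 0.25 SD).
# Illustration only; the WWC's own calculation uses Hedges' g with a small-sample correction.
import numpy as np

def baseline_difference(treat: np.ndarray, comp: np.ndarray) -> float:
    """Difference in group means divided by the pooled standard deviation."""
    n_t, n_c = len(treat), len(comp)
    pooled_sd = np.sqrt(((n_t - 1) * treat.var(ddof=1) + (n_c - 1) * comp.var(ddof=1))
                        / (n_t + n_c - 2))
    return (treat.mean() - comp.mean()) / pooled_sd

def equivalence_status(diff: float) -> str:
    d = abs(diff)
    if d <= 0.05:
        return "satisfies baseline equivalence"
    if d <= 0.25:
        return "requires statistical adjustment for the baseline measure"
    return "does not satisfy baseline equivalence"

# Example with made-up pretest scores
rng = np.random.default_rng(0)
treat = rng.normal(0.10, 1.0, size=200)
comp = rng.normal(0.00, 1.0, size=200)
diff = baseline_difference(treat, comp)
print(round(diff, 3), "->", equivalence_status(diff))
```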
132 First in the World Project Directors Meeting: Understanding Challenges in Random Assignment Wolf, A. Institute of Education Sciences Abt Associates developed this guide to help researchers plan for a randomized controlled trial, focusing on conducting and monitoring random assignment (RA). It provides suggestions for five major activities in the RA process: designing it (identifying participants, timing the RA, and documenting the process); creating buy-in (with examples of communications strategies); identifying eligible participants; conducting the RA (including dealing with late identification and correct placement as well as documenting those); and monitoring it. The guide provides a link to a diagram for tracking sample numbers throughout the study (click on “CONSORT 2010 Flow Diagram”). Click on “CONSORT 2010 Checklist” for a checklist of information to include when reporting on an RCT. 2016 Link to Resource Slide Presentation Guide 31 P False False False False True True False False False True False True True True False False False False False False False False False False False False False Any Education Topic, Initial Planning, RCT, Identifying Comparison Groups, Recruiting Study Participants, Addressing Changes in Your Sample, Reducing Bias in Comparison Groups 3 False False True False False False False True True
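As a companion to the random assignment activities this presentation covers, the sketch below shows one simple way to conduct and document blocked random assignment: shuffle the participants within each block (for example, a school or site) and assign half of each block to treatment. The block structure, the 50/50 allocation, and the fixed seed are illustrative assumptions rather than the presenters' procedure.

```python
# Minimal sketch: blocked (stratified) random assignment with a reproducible seed,
# so the assignment can be documented and re-created later.
# The blocking variable and the 50/50 allocation are illustrative assumptions.
import random

def assign_within_blocks(participants: dict[str, list[str]], seed: int = 20160101) -> dict[str, str]:
    """participants maps block id -> list of participant ids; returns participant id -> arm."""
    rng = random.Random(seed)
    assignment = {}
    for block, ids in sorted(participants.items()):
        shuffled = ids[:]
        rng.shuffle(shuffled)
        cut = len(shuffled) // 2
        for pid in shuffled[:cut]:
            assignment[pid] = "treatment"
        for pid in shuffled[cut:]:
            assignment[pid] = "control"
    return assignment

print(assign_within_blocks({"site_A": ["p1", "p2", "p3", "p4"],
                            "site_B": ["p5", "p6", "p7", "p8"]}))
```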
133 Forming a Team to Ensure High-Quality Measurement in Education Studies Kisker, E.; & Boller, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) program developed this brief to help identify the expertise needed to formulate an appropriate and effective measurement strategy and to collect high-quality data. The primary audience for this brief is research teams – such as those at RELs – who work in partnership with school districts or states. The brief provides tips for finding staff and consultants with the needed expertise, and outlines their main responsibilities. It also outlines typical measurement tasks and discusses how measurement team members can work together to complete these tasks successfully. 2014 Link to Resource Guide 18 pages P True False False False False False False False False False False False False False False False False False False False False False False False False False False data professional networks Analytic Technical Works Clearinghouse study U.S. Department statistical power strategies listed measurement measures collection expert team measure constructs 2 False False True False False False False False False
134 Forum Guide to Decision Support Systems: A Resource for Educators. NFES 2006-807 U.S. Department of Education U.S. Department of Education This document was developed by educators for educators to remedy the lack of reliable, objective information available to the education community about decision support systems. The authors hope it will help readers better understand what decision support systems are, how they are configured, how they operate, and how they might be implemented in an education institution. The Decision Support System Literacy Task Force hopes that readers will find this document useful, and that it helps improve data-driven decision making in schools, school districts, and state education agencies across the nation. This document addresses the following broad questions: (Part I) What is a Decision Support System, and how does a decision support system differ from a data warehouse and a data mart? (Part II) What components, features, and capabilities commonly comprise a decision support system? How does each broad category of these components and features contribute to system operation? (Part III) How does an education organization buy or develop a decision support system, and how are stakeholders trained to use it? 2006 Link to Resource Guide 34 pages P False False False True False False False False False False False False False False True False True True True False True False False False False False False Department of Education, Education Statistics website, secondary education, NATIONAL COOPERATIVE EDUCATION STATISTICS SYSTEM, National Forum, National Center, local education agencies, early childhood education, state, Forum World Wide, Cooperative System, data, NCES World Wide Web Home Page, NCES World Wide Web Electronic Catalog, uniform information, local levels, principles of good practice, meeting, activities, Publications, policymaking, purpose, products, views, endeavors, formal review, resources, opinions, September 3 False False True False False False False False False
135 Gathering Feedback for Teaching: Combining High-Quality Observations with Student Surveys and Achievement Gains (Study) Kane,T.J.; & Staiger, D.O. Bill & Melinda Gates Foundation The Bill & Melinda Gates Foundation developed this guide to help policymakers and practitioners effectively implement observations of teacher practice. The guide reviews minimum requirements for high-quality classroom observations and investigates two properties of five instruments: reliability (the extent to which results reflect consistent aspects of a teacher’s practice and not the idiosyncrasies of a particular observer, group of students, or lesson) and validity (the extent to which observation results are related to student outcomes). This guide is a valuable resource for policymakers and practitioners at every level who are intensely focused on improving teaching and learning through better evaluation, feedback, and professional development. It is one outcome of the Gates’ Measures of Effective Teaching project, which investigates a number of alternative approaches to identifying effective teaching: systematic classroom observations, surveys collecting confidential student feedback, a new assessment of teachers’ pedagogical content knowledge, and different measures of student achievement. Those wanting to explore all the technical aspects of the study and analysis are referred to a companion research report. 2012 Link to Resource Guide 36 pages P False False False True False False False False False False False False False False False False False False False False False False True False False False False Teacher Effectiveness, Achievement Gains, Evaluation Methods, Teaching Methods, Observation, Feedback (Response), Video Technology, Research Papers (Students), Student Surveys, Teacher Competencies, Student Attitudes, Academic Achievement, Faculty Development, Teacher Improvement, Standards, Scoring, Language Arts, Mathematics Instruction, Test Validity, Test Reliability, student teachers scores teaching project met teacher observation observations instruments met project value added combined measure effective teaching student achievement observation scores competencies classroom observations achievement gains 3 True False True False False False False False False
136 Gold-Standard Program Evaluations, on a Shoestring Budget Baron, J. Education Week An Education Week article advocating for the use of low-cost randomized controlled trials (RCT) in education research. The article describes two examples of recent RCTs that were conducted at low cost, yet produced findings of policy and practical importance. The author suggests that with improvements in information technology and the availability of high-quality administrative data, it is now more feasible to conduct gold-standard randomized evaluations on a shoestring budget. 2011 Link to Resource Brief or Summary 2 pages P False False True False False True False False False False False False False False True False False False False False False False False False False False False already collected student achievement low cost key outcomes program study cost education student data group achievement conducted 1 False True False False False False False False False
137 Graphic Design for Researchers Institute of Education Sciences (ED); Decision Information Resources, Inc.; Mathematica Policy Research, Inc. Regional Education Laboratory Program, Institute of Education Sciences The Institute of Education Sciences developed this guide that offers a basic overview on how researchers can effectively use design to create engaging and visually appealing Regional Educational Laboratory products. This guide also touches on how researchers can use data visualization to make complex concepts accessible. 2014 Link to Resource Guide 14 pages R False False False False False False False False False False False False False False False False False False False False False True False False False False False What Works Clearinghouse approaching benchmarks benchmarks category transition courses visually appealing design elements example http data visual clip art 2 False False True False False False False False False
138 Graphical Models for Quasi-Experimental Designs Kim, Y.; Steiner, P.M.; Hall, C.E.; Su, D. Society for Research on Educational Effectiveness This conference abstract was written to introduce researchers to graph-theoretical models as a way to formalize causal inference. The authors describe the mechanics and assumptions of causal graphs for experimental and quasi-experimental designs: randomized controlled trials and regression discontinuity, instrumental variable, and matching and propensity score designs. For each design, they show how the assumptions required for identifying a causal effect are encoded in the graphical representation. They argue that causal graphs make the assumptions more transparent and easier to understand for applied researchers. This abstract will help users better understand the designs’ assumptions and limitations and conduct better causal studies. It will be useful for researchers who want to represent the features of the study designs and data collection process that affect the identification of causal effects. 2016 Link to Resource Brief or Summary 11 pages R False False False False False True True True False False False False False False False False False False True False True True False False False False False causal estimands of, threats, University of Wisconsin-Madison, Quasi-Experimental Designs, Rubin Causal Model, causal inference, Abstract Title Page, Abstract Body Limit, Conference Abstract Template, Campbell, convenient way of defining causal estimands, conceptualization of internal validity, RCM, deriving identification assumptions, Mexico Public Education Department, SREE Spring, fields, central role, estimating cause-effect relationships, history, maturation, Hall, page count, Authors, Affiliations, Background, Context, stable-unit-treatment-value assumption, instrumentation effects, strong ignorability, potential outcomes frameworks, Shadish, behavioral sciences, pages single-spaced, practical point of view, selection, tradition of ruling, researchers, psychology, propensity score, SUTVA, Cook, Steiner, Holland, formal representation, Courtney, Yongnam Kim, Dan Su, Graphical Models, Peter 2 False True False False False False False False False
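To make the abstract's approach concrete, the sketch below encodes two of the designs it discusses as directed acyclic graphs: in a randomized trial the only parent of the treatment node is the random assignment, while in a matching or propensity score design the graph must include every confounder the analysis conditions on. The node names are illustrative, and the graphs are far simpler than those in the paper.

```python
# Minimal sketch: encode design assumptions as directed acyclic graphs.
# Node names are illustrative; real applications would include every relevant confounder.
import networkx as nx

# Randomized controlled trial: assignment R determines treatment T, and no arrow
# runs from the confounder U into T, so T is unconfounded by design.
rct = nx.DiGraph([("R", "T"), ("T", "Y"), ("U", "Y")])

# Matching / propensity score design: U affects both T and Y, so the analysis
# must observe and condition on U (or a sufficient set of covariates).
qed = nx.DiGraph([("U", "T"), ("U", "Y"), ("T", "Y")])

for name, graph in [("RCT", rct), ("Matching QED", qed)]:
    print(name, "- parents of treatment T:", sorted(graph.predecessors("T")))
```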
139 Group Design Knowledge Checks: Module 1, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences This video includes a knowledge check for viewers to assess whether they have a clear understanding of what types of designs are eligible for WWC review. Randomized controlled trials (RCTs) and quasi-experimental designs (QEDs) are addressed. 2016 Link to Resource Video 12 min P True False False False True True True False False False False False False False False False False False False False False False False False False False False None 2 False False False False False True False False False
140 How (and What) to Train Teachers and Administrators on Education Data Privacy Vance, A. National Association of State Boards of Education The National Association of State Boards of Education developed this presentation to train teachers and administrators on the protection of student data. This video reviews state trends, common reasons for data breaches, legislation, what administrators need to know when using, saving, and sharing data and working with vendors, and ways to prevent and handle student data disclosure. This video provides several resources, most of them available online; lessons learned; and a communications toolkit for best practices while communicating with various stakeholders. Key points and resources can be easily accessed by scrolling along the timing line on screen and pausing on the slides the presenter shows. This video will be helpful to those working with sensitive data. 2016 Link to Resource Video Guide 44 min P False False False False True False False False False False False False False False True False False False False False False False False False False False False None 3 False False True False False True False False False
141 How to Facilitate the Logic Models Workshop for Program Design and Evaluation Rodriguez, S. & Shakman, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed this webinar as a guide for educational leaders who want to introduce logic models for program design, implementation, and evaluation using a workshop toolkit available from REL Northeast & Islands (the toolkit is also in this database – see “Logic models for program design, implementation, and evaluation: Workshop toolkit”). During the webinar, the authors of the toolkit focus on the facilitation, organization, and resources necessary to conduct the workshop virtually or in person. 2015 Link to Resource Webinar Slide presentation 1 hour 13 minutes P False False False True True False False False False False False False False False False False False False False False False False False False False False False None 3 False False False False False False True True False
142 I Wish I Could Believe You: The Frustrating Unreliability of Some Assessment Research Hunt, T.; Jordan, S. Practitioner Research in Higher Education The authors developed this guide to help professionals understand the limitations of assessment research and some mitigation strategies. The challenge is determining the impact of assessment practices on learning in authentic educational settings. The guide uses examples to highlight some of the difficulties: assuming that if two effects are correlated, then one must have caused the other; confounding variables obscuring the true relationships; experimental approaches that are too far removed from reality; and the danger that self-reported behavior and opinion is sometimes different from students’ actual behavior. As practical solutions, the guide proposes using experimental or pseudo-experimental approaches, mixed methods, and meta-analyses. 2016 Link to Resource Guide 9 P False False False False True True True False False True False False False False False False False False False True False False False False False False False Research methodology 2 False False True False False False False False False
143 Identifying and Measuring Attrition: Module 2, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video defines attrition in RCTs (the WWC does not consider attrition in QEDs) and discusses the risk of bias in a study due to attrition. The video provides examples of different types of attrition, defines overall versus differential attrition, and discusses how the WWC calculates attrition rates. The WWC considers attrition to be high when the expected bias due to attrition is 0.05 standard deviations of the outcome or greater. 2016 Link to Resource Video 15 min P True False False False True True False False False False False False True False False False False False False False False False False False False False False None 2 False False False False False True False False False
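As a companion to the video's discussion of overall versus differential attrition, the sketch below shows the basic arithmetic: overall attrition is the share of the randomized sample missing from the analytic sample, and differential attrition is the absolute difference between the treatment and control group attrition rates. The counts are invented, and whether a given combination counts as high attrition depends on the WWC attrition boundary applied in the review.

```python
# Minimal sketch: overall and differential attrition for an RCT,
# computed from counts of randomized and analyzed sample members.
# The counts below are made up for illustration.
def attrition_rates(randomized_t: int, analyzed_t: int,
                    randomized_c: int, analyzed_c: int) -> tuple[float, float]:
    rate_t = 1 - analyzed_t / randomized_t
    rate_c = 1 - analyzed_c / randomized_c
    overall = 1 - (analyzed_t + analyzed_c) / (randomized_t + randomized_c)
    differential = abs(rate_t - rate_c)
    return overall, differential

overall, differential = attrition_rates(randomized_t=250, analyzed_t=200,
                                        randomized_c=250, analyzed_c=225)
print(f"overall attrition = {overall:.1%}, differential attrition = {differential:.1%}")
```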
144 Impact of a Checklist on Principal–Teacher Feedback Conferences Following Classroom Observations Mihaly, K.; Opper, I.; Rodriguez, L.; Schwartz, H.; Grimm, G.; Mariano, L. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southwest developed this report to describe the findings from a study of principal-teacher feedback conferences. This study was a statewide experiment in New Mexico in 2015-16. The study provided principals and teachers with a checklist to use in the feedback conferences that principals had with teachers following formal classroom observations and tested whether the checklist would improve the quality and impact of the conferences. The relevant section of the report is Appendix D. Appendix D presents the data, sample, and methodology. It includes a detailed description of how outcome measures were constructed, sample recruitment, the model, sensitivity and subgroup analyses, and treatment of missing data and crossovers. This section is a valuable resource for evaluators planning and implementing randomized controlled trials. 2018 Link to Resource Methods report 18 R False False False True True True False False False True False True True True True True True False True True False False True False True True False Principals (Administrator Role), Teacher 2 True False False False True False False False False
145 Impact of a Checklist on Principal–Teacher Feedback Conferences Following Classroom Observations Mihaly, K.; Opper, I.; Rodriguez, L.; Schwartz, H.; Grimm, G.; Mariano, L. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southwest developed this report to describe the findings from a study of principal-teacher feedback conferences. This study was a statewide experiment in New Mexico in 2015-16. The study provided principals and teachers with a checklist to use in the feedback conferences that principals had with teachers following formal classroom observations and tested whether the checklist would improve the quality and impact of the conferences. The relevant sections of the report are Appendices B and D. Appendix B provides a principal and a teacher checklist, including a list of documents for principals and teachers to bring and items on how to start the conversation, acknowledge successes, identify challenges, generate ideas for addressing them, and end on a positive note. These checklists could be used to suggest elements to include in the development of a tool (e.g., survey or interview) to collect data to assess implementation fidelity and effectiveness of feedback conferences as part of evaluation systems. Appendix D presents the data, sample, and methodology. It includes a detailed description of how outcome measures were constructed, sample recruitment, the model, sensitivity and subgroup analyses, and treatment of missing data and crossovers. These sections are a valuable resource for evaluators planning and implementing randomized controlled trials. 2018 Link to Resource Tool Methods report 72 R False False False True True True False False False True False True True True True True True False True True False False True False True True False Principals (Administrator Role), Teacher 3 True False False True True False False False False
146 Impacts of Comprehensive Teacher Induction: Final Results from a Randomized Controlled Study Glazerman, S.; Isenberg,E.; Dolfin, S.; Bleeker, M.; Johnson,A.; Grider, M.; Jacobus, M. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to provide researchers with a description of an evaluation of the impact of a comprehensive teacher induction initiative. The report focuses on the design and implementation of a randomized experiment using data collection from multiple stakeholders. It addresses issues of comparison group selection and bias. Appendix A provides detailed information on the model and how it addresses potential bias. Figures are provided throughout that are examples of various ways of displaying results. 2010 Link to Resource Methods report 272 pages R False False False True True True False False False True True True True True True False True False True True True True True False True True False None 3 True False False False True False False False False
147 Incomplete Reporting: Addressing the Prevalence of Outcome-Reporting Bias in Educational Research Trainor, B.; Polanin, J.; Williams, R.; Pigott, T. SREE Spring 2015 Conference Abstract Study authors developed this brief (SREE abstract) to raise researchers’ awareness of biases due to incomplete outcome reporting, the practice of omitting from primary study reports outcome variables that were actually collected. Primary studies that do not report on all collected outcomes can lead to an incomplete understanding of a phenomenon, a problem that is compounded when such studies are included in systematic reviews of research. The brief focuses on how the authors plan to conduct analyses of a set of dissertations to identify factors that might be related to outcome-reporting bias, such as whether the outcome was considered a primary or a secondary focus of an intervention, whether the outcome was a negative or potentially harmful effect of the intervention, and whether the measurement strategy for the outcome was reliable and valid. 2015 Link to Resource Methods report Brief 5 pages R False False False False True False False False False False False False False False False False False False False True True False False False False False False None 1 False True False False True False False False False
148 Indicators of Successful Teacher Recruitment and Retention in Oklahoma Rural School Districts Lazarev, V.; Toby, M.; Zacamy, J.; Lin, L.; Newman, D. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southwest developed this report to describe teacher, district, and community characteristics in rural Oklahoma that predict which teachers are most likely to be successfully recruited and retained longer term. Three sections of the report are relevant. The first one is study limitations on page 16, which discusses unavailability of data on key variables that could predict recruitment and retention; a ten-year data time span, which does not allow for long-term analyses; and the non-experimental design, which prevents making causal inferences. Appendix B describes how variables were identified. In Appendix C on data and methodology, the authors describe the sample, datasets, outcome variable creation, treatment of missing data, accounting for the censoring of duration data, and the approach they used for each research question, including descriptive statistics, probabilities, statistical tests, and regression analyses (logistic regression and discrete-time survival analysis). It also describes how results across models were compared to understand the predictive capacity of specific groups of variables, robustness checks, and the calculation of regression-adjusted marginal probabilities to account for the fact that variables are likely to be significant simply because of the size of the database. These sections are a valuable resource for program evaluators and researchers who want to identify patterns in large datasets, especially those that involve duration variables. 2017 Link to Resource Methods report 17 pages R False False False True True False False False False False True False False False True True False False True True False False False False True True True Retention (Of Employees), States, Teachers Attrition/Mobility 2 True False False False True False False False False
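The discrete-time survival analysis mentioned in appendix C can be approximated, at its simplest, by a logistic regression on a person-period data set in which each row is a teacher-year and the outcome indicates whether the teacher left in that year. The sketch below shows that setup with simulated data and invented variable names; it is not the model estimated in the report.

```python
# Minimal sketch: discrete-time survival analysis as a logistic regression on
# person-period (teacher-year) data. Variable names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for teacher in range(300):
    salary_ratio = rng.uniform(0.8, 1.2)          # district salary relative to a reference average
    for year in range(1, 6):                      # follow each teacher for up to 5 years
        hazard = 1 / (1 + np.exp(-(-2.0 + 0.3 * year - 1.5 * (salary_ratio - 1))))
        left = rng.random() < hazard
        rows.append({"year_in_district": year,
                     "salary_ratio": salary_ratio,
                     "left": int(left)})
        if left:                                  # rows stop after the teacher leaves
            break
person_period = pd.DataFrame(rows)

# Hazard of leaving modeled as a function of time in district and a district covariate.
model = smf.logit("left ~ year_in_district + salary_ratio", data=person_period).fit(disp=False)
print(model.params)
```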
149 Introducing the Education Logic Model (ELM) Application REL Pacific Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Pacific developed this tool to provide program implementers and evaluators with a logic model template to capture information about program resources, activities, outputs, and short-, mid-, and long-term outcomes. Logic models are used to visually present the relationships among program elements and help document a path to reach expected outcomes. They can be used for program design, implementation, evaluation, and communications. To access the resource, open the weblink, click on the arrow by “Download ELM”; save the file (it takes several minutes to download); right-click on elm.zip; select “extract all”; in the window that appears, select a folder for storing; click “Extract”; in that folder, click on the ELM folder; double-click on “index.” To test the process, click on “New Model,” enter “test” under program name and click next; follow the instructions to obtain a logic model that can be saved as a PDF file and printed by clicking on “View/Print.” Written and audio instructions are provided throughout. 2018 Link to Resource Tool 5 P False False False False True False False False False False False False False False False True False False False False False False False False False False False Any Education Topic, Initial Planning, Selecting Appropriate Outcome Measures 1 False False False True False False False False True
150 Introduction to Group Designs: Module 1, Chapter 1, WWC Training What Works Clearinghouse Institute of Education Sciences This video provides an overview of the research designs reviewed under WWC Group Design Standards: randomized controlled trials (RCTs) and quasi-experimental designs (QEDs). 2016 Link to Resource Video 3 min P True False False False True True True False False True False False False False False False False False False False False False False False False False False None 1 False False False False False True False False False
151 Introduction to WWC Training: Intro–Chapter 1, WWC Training What Works Clearinghouse Institute of Education Sciences An introduction to WWC Group Design Training Modules, this video provides a brief summary of the purpose and the products of the WWC. 2016 Link to Resource video 6 min P True False False False True False False False False False False False False False False False False False False False True False False False False False False None 2 False False False False False True False False False
152 WWC Group Design Standards: Intro–Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video provides a brief overview of the reason for WWC standards, and the three ratings that a study might receive after being reviewed by the WWC: (1) Meets standards without reservations, (2) Meets standards with reservations, and (3) Does not meet standards. 2016 Link to Resource video 2 min P True False False False True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False False False True False False False
153 Key Items to Get Right When Conducting Randomized Controlled Trials of Social Programs Evidence-Based Policy Team of the Laura and John Arnold Foundation Laura and John Arnold Foundation The Laura and John Arnold Foundation developed this checklist to help researchers and sponsors of research ensure that items critical to the success of a randomized controlled trial (RCT) in producing valid findings about a social program’s effectiveness are included. Items in this checklist are categorized according to the following phases of an RCT: planning the study, carrying out random assignment, measuring outcomes for the study sample, and analyzing the study results. This tool is a valuable resource for researchers who need a list of key items to get right when conducting an RCT to evaluate a social program or practice. 2016 Link to Resource Tool 12 pages P False False False False False True False False False True True True True True False True True False True True True False False False False False False Evaluating Public Experimental Methods With Experimental Larry L. Orr Programs With Public Programs get right into account take into evaluation educator teacher student data teachers learning district effectiveness system 2 False False False True False False False False False
154 Limitations of Experiments in Education Research Schanzenbach, D. Association for Education Finance and Policy The Association for Education Finance and Policy developed this brief as a caution regarding the growing traction of randomized experiments in education circles in recent years. The brief succinctly describes significant limitations of experiments, including feasibility, a program’s complexity, time, generalizability of results, lack of insight into what makes a program work, behavior changes due to participation in an experiment, cost, ethics, fidelity of implementation, the temptation to mine the data to obtain desired results, and the fact that experiments may not be superior to other evaluation approaches. This brief also suggests ways to mitigate these limitations when designing a study. 2012 Link to Resource Brief or Summary 14 pages P False False True False False True False False False False False False False False False True False False True True False False False False False False False experiments may impact program school policy group research experimental evaluation 2 False True False False False False False False False
155 Logic models for program design, implementation, and evaluation: Workshop toolkit Shakman, K., & Rodriguez, S. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed this toolkit to help practitioners use logic models as a tool for building a relevant evaluation design or working more effectively with evaluators whom they engage to conduct evaluations on their behalf. The toolkit describes two workshops, one (two hours long) to learn about logic models, and the other (one and a half hours long) to move from logic models to program evaluation. Appendix A (page A-1) provides a very simple graphical representation of a logic model. Appendix B (page B-1) is a template. Appendix C (page C-1) is a sample. Appendix D (page D-1) is an example of making the connection with a theory of action as part of using a logic model to evaluate a program. This toolkit is a useful resource for practitioners who want to understand how to use a logic model as an effective tool for program or policy design, implementation, and evaluation. It provides an opportunity to practice creating a logic model and using it to develop evaluation questions and indicators of success. Finally, it provides guidance on how to determine the appropriate evaluation for a specific program or policy. 2015 Link to Resource Tool 118 Pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False Education Evaluation, Department of Education, Education Development Center, Institute of Education Sciences, Regional Assistance, program evaluation, research-based technical assistance, unbiased large-scale evaluations of education programs, Logic models, National Center, participant workbook, Associate Commissioner, results of research, facilitator workbook, Regional Educational Laboratory Northeast, Workshop toolkit, Sue Betka, Acting Director, Chris Boccanfuso, Officer, Amy Johnson, Editor, Arne Duncan, Secretary, Karen Shakman, Rodriguez, Sheila, overall purpose, Joy Lesnick, Ruth Curran Neild, appropriate steps, policymakers, educators, practices, widespread dissemination, help practitioners, different elements, implementation, slide deck, ED-IES, Islands, synthesis, NCEE, funds, United States, Contract, design, report, Tools, Overview, REL 3 False False False True False False False False False
156 Logic Models: A Tool for Designing and Monitoring Program Evaluations Lawton, B., Brandon, P. R., Cicchinelli, L., & Kekahio, W. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Pacific developed this tool to help educators understand how to use logic models as they plan and monitor program evaluations. This tool provides an introduction to how program components are connected using a logic model, and provides an example to demonstrate how an education program’s resources, activities, outputs, and outcomes are illustrated through a logic model. 2014 Link to Resource Tool 5 pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False Evaluation, Outcome 1 False False False True False False False False False
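A logic model of the kind this tool illustrates can be captured in a very small data structure. The sketch below records resources, activities, outputs, and short-, mid-, and long-term outcomes for a hypothetical coaching program; the program and all of its elements are invented for illustration.

```python
# Minimal sketch: a logic model captured as a simple data structure, with the
# components this tool describes (resources -> activities -> outputs -> outcomes).
# The coaching program and its elements are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    resources: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    short_term_outcomes: list[str] = field(default_factory=list)
    mid_term_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)

coaching_program = LogicModel(
    resources=["2 instructional coaches", "district PD funds"],
    activities=["biweekly coaching cycles", "summer institute"],
    outputs=["number of coaching cycles completed", "teachers attending the institute"],
    short_term_outcomes=["improved lesson planning"],
    mid_term_outcomes=["improved observed instruction"],
    long_term_outcomes=["improved student achievement"],
)
print(coaching_program)
```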
157 Making the Most of Opportunities to Learn What Works: A School District’s Guide. Akers, L.; Resch, A.; & Berk, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory program developed this guide to help school district leaders assess the effectiveness of programs through a randomized controlled trial (RCT). The guide provides details on conducting an RCT, describing three key steps: 1) identifying opportunities to conduct an RCT while minimizing the time and resources required, 2) gauging the feasibility of conducting an RCT, and 3) following the key steps in conducting an RCT. 2014 Link to Resource Guide 6 pages P False False True False False True False False False True True True False True False False False False False False False False False False False False False U.S. Department moderate risk What Works apples-to-apples comparison staggered rollout what works program students district may schools rct group school reading districts 2 False False True False False False False False False
158 Measurement instruments for assessing the performance of professional learning communities Blitz, C. L., & Schulman, R. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Mid-Atlantic developed this tool to help researchers, practitioners, and education professionals plan, implement, and evaluate teacher professional learning communities (PLCs). The PLC-related measurement instruments identified in this project include 31 quantitative and 18 qualitative instruments that assess a range of teacher/principal-, team-, and student-level variables. Each is described in detail in appendix D (pages D-1 to D-52). Appendices B (page B-1) and C (pages C-1 to C-2) are most relevant for users interested in identifying and applying an appropriate measurement instrument that can answer their questions about teacher PLCs. The document suggests proceeding in three steps: step 1) consult appendix B (the PLC logic model) to determine what information you want and for what purpose (planning, implementation, or evaluation); step 2) depending on your goal (planning, implementation, or evaluation), use the table in appendix C to identify the key tasks you are interested in (column 1) and select the key indicators you are most interested in measuring (column 2) — the class or classes of measurement instruments most relevant to measuring the indicators of interest are in column 3; step 3) click on the hyperlink of a specific instrument in appendix D’s table of contents to retrieve the information about the instrument, determine whether it meets your needs, and find out how to obtain a copy. Notably the body of the document is relevant for readers interested in better understanding logic models. 2016 Link to Resource Tool 72 pages P False False False False True False False False False False False False False False False False True False False False False False False False True True False Evaluation, Professional Learning, Teachers 3 True False False True False False False False False
159 Measuring Program Quality, Part 2: Addressing Potential Cultural Bias in a Rater Reliability Exam Richer, A.; Charmaraman, L.; Ceder, I. Afterschool Matters Afterschool Matters developed this guide to provide program implementers and evaluators with an example of a process to validate an observation tool with an eye towards cultural bias. The authors base their discussion on the Assessment of Program Practices Tool (APT). The guide focuses on (1) why cultural bias is an issue; (2) background for rater reliability, rater bias, and the APT; (3) the goals of the study (to develop rater reliability exams and understand what factors affect performance on the exams); (4) the rating process, including selecting videos, language, and master scorers; (5) the pilot and field tests, including participant selection; and (6) data analysis. The paper concludes with findings that although no significant differences were observed among raters of different characteristics, evidence suggests that familiarity (gained by training) with the APT anchors (detailed descriptions of what each point on the rating scale looks like) increases rater accuracy. They recommend further study. 2018 Link to Resource Guide Methods Report 9 P False False False True True False False False False False False True False False False False True False False False True False False False False False False Cultural Differences, Social Bias, Interrater Reliability, Program Evaluation, Measurement Techniques, Evaluation Research, Follow-up Studies, Educational Assessment, Educational Quality, Cultural Relevance, Program Validation, Cohort Analysis, Familiarity, Prior Learning, Evaluation Methods, Test Reliability, Test Validity, Educator Development, Any Education Topic, Recruiting Study Participants, Collecting New Data, Reporting, Interpreting, and Using Findings 2 False False True False True False False False False
160 Methods for Minimizing Differences Between Groups in Quasi-Experimental Designs Parsad, A.; Price, C. Institute of Education Sciences This video discusses methods to help improve causal inference in quasi-experimental evaluations, including methods for forming similar treatment and comparison groups, deciding on the best matching approach for your study, selecting sample characteristics for matching, and meeting the What Works Clearinghouse standards. The introduction recalls why equivalent groups are important and what matching is, provides an outline for the session, and reviews terminology. The video then describes how to select characteristics on which to match (6:45). It examines matching methods – propensity score matching, Mahalanobis distance, many-to-many matching or stratification, and weighting – and discusses matching with and without replacement (13:50). It provides direction on determining which comparison unit is the best match for a treatment unit (28:35). It concludes with a reminder to account for any matching method in analysis and points the audience to additional resources (36:10). 2016 Link to Resource Video Guide 39:29 R True False False False True False True False False True False False False True False False False False False False False False True False False False False References WWC Standards, Any Education Topic, QED, Identifying Comparison Groups, Reducing Bias in Comparison Groups, Student Achievement Measure, 3 True False True False False True False False False
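The matching methods the video surveys can be illustrated with a minimal propensity score, nearest-neighbor sketch: estimate each unit's probability of treatment from baseline covariates, then match each treatment unit to the comparison unit with the closest score (here, with replacement). This is a simplified illustration rather than the presenters' recommended procedure, and the covariates and data are invented.

```python
# Minimal sketch: propensity score estimation and 1-to-1 nearest-neighbor matching
# with replacement. Covariates and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
pretest = rng.normal(0, 1, n)                       # baseline achievement
frl = rng.binomial(1, 0.4, n)                       # free/reduced-price lunch indicator
treated = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * pretest - 0.3 * frl))))

X = np.column_stack([pretest, frl])
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

treat_idx = np.where(treated == 1)[0]
comp_idx = np.where(treated == 0)[0]

# For each treatment unit, pick the comparison unit with the closest propensity score.
matches = {t: comp_idx[np.argmin(np.abs(pscore[comp_idx] - pscore[t]))] for t in treat_idx}
gaps = [abs(pscore[t] - pscore[c]) for t, c in matches.items()]
print(f"{len(matches)} treatment units matched; mean |score gap| = {np.mean(gaps):.4f}")
```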
161 Modern Regression Discontinuity Analysis Bloom, H. Journal of Research on Educational Effectiveness MDRC developed this paper to provide researchers with a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis. The paper briefly chronicles the history of RD analysis and summarizes its past applications; it explains how, in theory, an RD analysis can identify an average effect of treatment for a population and how different types of RD analyses — “sharp” versus “fuzzy” — can identify average treatment effects for different conceptual subpopulations; it introduces graphical methods, parametric statistical methods, and nonparametric statistical methods for estimating treatment effects from RD data plus validation tests and robustness tests for assessing these estimates; it considers generalizing RD findings and presents several views on and approaches to the issue; and it notes some important issues to pursue in future research about or applications of RD analysis. This paper is a valuable resource for researchers who want to use RD analysis for estimating the effects of interventions or treatments. 2009 Link to Resource Methods Report 62 pages R False False False False False False False True False False False False False False False False False False True True True True False False False False False treatment discontinuity regression point cut group effect control average outcomes 3 False False False False True False False False False
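The sharp regression discontinuity estimator at the heart of the paper can be sketched as a local linear regression on each side of the cutoff within a bandwidth, with the treatment effect read off as the jump in the fitted regression function at the cutoff. The simulated data, cutoff, and bandwidth below are illustrative assumptions and omit the bandwidth selection, validation, and robustness checks the paper discusses.

```python
# Minimal sketch: sharp regression discontinuity estimate via local linear
# regression on each side of the cutoff. Data, cutoff, and bandwidth are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
running = rng.uniform(-1, 1, n)                 # assignment (running) variable, cutoff at 0
treated = (running >= 0).astype(float)
outcome = 0.4 * running + 0.25 * treated + rng.normal(0, 0.3, n)   # true effect = 0.25

def intercept_within_bandwidth(x, y, bandwidth):
    """Fit y = a + b*x on observations with |x| <= bandwidth; return the intercept a."""
    mask = np.abs(x) <= bandwidth
    design = np.column_stack([np.ones(mask.sum()), x[mask]])
    coeffs, *_ = np.linalg.lstsq(design, y[mask], rcond=None)
    return coeffs[0]

bandwidth = 0.25
left = intercept_within_bandwidth(running[running < 0], outcome[running < 0], bandwidth)
right = intercept_within_bandwidth(running[running >= 0], outcome[running >= 0], bandwidth)
print(f"estimated effect at the cutoff: {right - left:.3f}")
```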
162 Module 8, Chapter 1: Introduction to Cluster-Level Assignment What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this series of seven recorded webinars to help researchers design and implement high quality studies that assign clusters, or groups, to conditions instead of individuals, and that are eligible for review using group design standards. The structure of these types of studies requires a special application of the WWC Group Design Standards. Each chapter focuses on a different aspect of the WWC Group Design Standards. Chapter 1 introduces Module 8 by describing when a study is eligible to be reviewed under the cluster design. The WWC recommends viewing the modules in numerical order. 2017 Link to Resource Video 06:00 R True False False False True True True False False True False False False False False False False False False False False False False False False False False None 2 False False False False False True False False False
163 Module 8, Chapter 2: Compositional Changes in Cluster RCTs What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this series of seven recorded webinars to help researchers design and implement high quality studies that assign clusters, or groups, to conditions instead of individuals, and that are eligible for review using group design standards. The structure of these types of studies requires a special application of the WWC Group Design Standards. Each chapter focuses on a different aspect of the WWC Group Design Standards. Chapter 2 presents compositional changes in cluster randomized controlled trials, specifically discussing differences between initial and analytic samples, both differences in clusters and individuals as they leave or enter the analytic sample, and how this affects the integrity of the random assignment and the rating of the study. The WWC recommends viewing the modules in numerical order. 2017 Link to Resource Video 10:00 R True False False False True True False False False False False False True True False False False False False True False False False False False False False None 2 False False False False False True False False False
164 Module 8, Chapter 3: Overview of Review Process for Cluster Studies What Works Clearinghouse Institute of Education Sciences This video lists the seven steps in the What Works Clearinghouse (WWC) process for reviewing cluster studies. In the first four steps, cluster studies are reviewed for evidence of effects on individuals. If the cluster study cannot satisfy WWC standards for evidence of effects on individuals, then the study is also reviewed for evidence of effects on clusters (steps 5 to 7). A study that does not satisfy WWC standards for evidence of either type of effect does not meet WWC group design standards. 2017 Link to Resource Video 01:00 R True False False False True True True False False True False False True False False False False False False False False False False False False False False None 1 False False False False False False False False False
165 Module 8, Chapter 4: Review for Effects on Individuals What Works Clearinghouse Institute of Education Sciences This video describes the first four steps of the What Works Clearinghouse (WWC) review of a cluster study. These steps examine the credibility of the study’s evidence of effects on individuals. The first three steps determine whether a cluster randomized controlled trial (RCT) is eligible to meet WWC group design standards without reservations. These steps assess three requirements. First, the cluster RCT has to have low cluster-level attrition. Second, it has to limit the risk of bias due to joiners – entrants to the sample after random assignment. Third, it has to have low individual non-response. The fourth step provides one way for other cluster RCTs, those that do not meet these three requirements, and cluster quasi-experimental designs to meet WWC group design standards with reservations. To receive this rating and satisfy WWC standards for evidence of effects on individuals, these studies must establish that the individuals in the analytic sample were equivalent at baseline. Studies that cannot establish equivalence of individuals may still meet WWC group design standards with reservations by satisfying WWC standards for evidence of effects on clusters. The video concludes with a knowledge check. 2017 Link to Resource Video 20:00 R True False False False True True True False False True False False True False False False False False False False False False False False False False False None 2 False False False False False True False False False
166 Module 8, Chapter 5: Review for Effects on Clusters What Works Clearinghouse Institute of Education Sciences This video describes the steps in the What Works Clearinghouse (WWC) review of studies for evidence of effects on clusters. A cluster study that has not satisfied WWC standards for evidence of effects on individuals can meet WWC group design standards with reservations for evidence of effects on clusters if it meets two requirements. First, the individuals in the analytic sample must be representative of the clusters. If the analytic sample includes only a small fraction of the individuals in the clusters, a study cannot satisfy WWC standards for evidence of effects on clusters. This representativeness requirement is assessed in step 5. Second, the study must either be a randomized controlled trial with low cluster-level attrition, which is assessed in step 6, or demonstrate equivalence of the analytic sample of clusters, as assessed in step 7. 2017 Link to Resource Video 14:00 R True False False False True True True False False True False False True False False False False False False False False False False False False False False None 2 False False False False False True False False False
167 Module 8, Chapter 6: Other Considerations in Cluster Studies What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this series of seven recorded webinars to help researchers design and implement high quality studies that assign clusters, or groups, to conditions instead of individuals, and that are eligible for review using group design standards. The structure of these types of studies requires a special application of the WWC Group Design Standards. Each chapter focuses on a different aspect of the WWC Group Design Standards. Chapter 6 discusses the implications for bias, study rating, and WWC reports of findings when a researcher excludes sample units from analysis. The WWC recommends viewing the modules in numerical order. 2017 Link to Resource Video 07:00 R True False False False True True False False False False False False True True False False False False False False False False False False False False False None 2 False False False False False True False False False
168 Module 8, Chapter 7: Wrap-up and Knowledge Checks What Works Clearinghouse Institute of Education Sciences This video provides a series of knowledge checks related to the What Works Clearinghouse (WWC) group design standards. Each question is based on an example study, and each answer comes with a justification. The video ends with a short overview of the WWC process to identify and review cluster studies. 2017 Link to Resource Video 13:00 R True False False False True True True False False True False False True False False False False False False False False False False False False False False None 2 False False False False False True False False False
169 Multiple Regression as a Practical Tool for Teacher Preparation Program Evaluation Williams, C. Journal of Case Studies in Accreditation and Assessment The Journal of Case Studies in Accreditation and Assessment developed this report to help program evaluators understand the usefulness of regression analysis as a practical, quantitative evaluation method. The study describes its theoretical program evaluation framework, the input-environment-output theory of education (pages 2 and 3), and provides a graphic representation on page 30. The model and methodological considerations are presented on pages 9-12 and include power, sample size, effect size, and statistical significance; replicability and reliability; normality, omitted, confounding, and missing variables; the choice to interpret bivariate correlations rather than structure coefficients; suppressor variables; outcome selection; participants, instrumentation, variables, and data collection; and threats to internal and external validity. 2012 Link to Resource Methods report 7 pages R False False False True True False False False False False False False False False False True True False True True True False False False True False False Program Evaluation, Accountability, Assessment, Teacher Preparation, Multiple Regression, Higher Education 2 True False False False True False False False True
170 National Board Certification as Professional Development: What Are Teachers Learning? Lustick, D.; Sykes, G. Education Policy Analysis Archives This Education Policy Analysis Archives article evaluates the contribution of the professional development aspect of the National Board Certification to teacher learning, providing an example evaluation approach in the process. The report focuses on handling the biases introduced by the fact that teachers self-select into this certification process. It leverages the Recurrent Institutional Cycle Design (RICD), a quasi-experimental design that accounts for the voluntary, self-selected nature of the subjects’ participation while maintaining the pre-post collection of data. RICD helps control for non-random threats to internal validity while providing a means of establishing some degree of causality between the treatment and observed results. The report also discusses how to use interview data to measure evidence of learning. Appendix A lists limitations of the research and ways they could have been mitigated with appropriate resources. This report is a valuable resource for program evaluators who are concerned with self-selection bias and those planning to collect data through interviews. 2006 Link to Resource Methods report 46 pages R False False False True True False True False False True False True False True False False True False True True True False False False True False False Intervention, National Standards, Young Adults, Program Effectiveness, Teacher Certification, Faculty Development, Teacher Competencies, Interviews, Science Instruction, Knowledge Base for Teaching, Teacher Effectiveness, Teaching Skills, Science Teachers, Outcomes of Education, Teacher Improvement, Interrater Reliability, Inquiry, Portfolio Assessment 3 True False False False True False False False False
171 New Empirical Evidence for the Design of Group Randomized Trials in Education Jacob, R.; Zhu, P.; & Bloom, H.S. MDRC This MDRC working paper is a practical guide for designing group randomized studies to measure the impacts of educational interventions. The paper provides information about the values of parameters that influence the precision of impact estimates (intra-class correlations and R-squared values), and includes outcomes other than standardized test scores and data with a three-level, rather than a two-level, structure. The paper also discusses the role of error in estimates of design parameters and the implications error has for design decisions. 2009 Link to Resource Methods Report 57 pages R False False False False True True False False False False True False False False True True False False True True True False True True False False False not estimable Pediatric Symptom Symptom Checklist gene reliability Sage Publications Students Per randomized square root level class intra school schools outcomes correlations sample number correlation 3 True False False False True False False False False
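For readers new to these design parameters, the short sketch below shows how the intraclass correlation inflates the variance of an impact estimate for equal-sized clusters (the "design effect"). The cluster size and ICC values are illustrative assumptions, not figures from the paper.

def design_effect(cluster_size, icc):
    """Variance inflation factor relative to simple random sampling of individuals."""
    return 1 + (cluster_size - 1) * icc

for icc in (0.05, 0.15, 0.25):
    print(f"ICC = {icc:.2f}: design effect for clusters of 25 students = {design_effect(25, icc):.2f}")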
172 Opportunities for Teacher Professional Development in Oklahoma Rural and Nonrural Schools Peltola, P.; Haynes, E.; Clymer, L.; McMillan, A.; Williams, H. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southwest developed this report to describe the findings from a principal survey on opportunities for teacher professional development in Oklahoma rural and non-rural schools. The relevant sections of the report are the appendices. Appendix A describes survey development and administration, sample selection, data file processing, and an analysis of nonresponse bias. Appendix B provides the survey. 2009 Link to Resource Tool 50 pages P False False False True True False False False False False False True False False True False True False False True False False False False True False False Rural Education, Rural Schools, Schools: Characteristics of, Teachers, Characteristics of, Professional Development 3 True False False True False False False False False
173 Other WWC Review Issues: Module 5, Chapter 6, WWC Training What Works Clearinghouse Institute of Education Sciences This video describes how the outcome measure standards apply to baseline measures. For example, pre-intervention measures must satisfy the same reliability criteria as outcome measures. 2016 Link to Resource Video 2 min R True False False False True False False False False False False False False False False True False False False False False False False False False False False None 1 False False False False False True False False False
174 Over alignment and Data Collection Differences: Module 5, Chapter 5, WWC Training What Works Clearinghouse Institute of Education Sciences The video discusses the problems associated with (1) over alignment of a measure with the intervention and (2) measures collected differently for the intervention and comparison groups. A measure is considered to be over aligned to the intervention when it has been specifically tailored to the intervention. In these cases, the measure can provide an unfair advantage to the intervention group. Measures collected differently between intervention and comparison groups, such as through different modes, timing, administering personnel, or construction of scores, will result in a ‘does not meet WWC standards’ rating. 2016 Link to Resource Video 4 min P True False False False True False False False False False False False False False False True False False False False False False False False False False False None 1 False False False False False True False False False
175 Partial Identification of Treatment Effects: Applications to Generalizability Chan, W. Society for Research on Educational Effectiveness This resource is designed to help researchers address the lack of generalizability of RCT results under some circumstances. Generalizability requires that samples and populations be similar. Methods that address these issues tend to require multiple assumptions, some of which cannot be tested. This paper proposes an approach that uses interval estimation or partial identification instead of point estimation to estimate treatment effects without the need for assumptions. The author derives partial identification bounds on the expected treatment effect for a population under three different frameworks: 1) Worst-case framework with no assumptions on the data; 2) Treatment optimization where subjects select the treatment that optimizes the expected outcome, and 3) Monotone treatment response where the response function is assumed to be weakly increasing in the treatment. 2016 Link to Resource Brief or summary Methods report 8 pages R False False False False True True False False False False False False False False False False False False True True False False True False False False False Generalization, Intervention, Validity, Identification, Guidelines, Outcomes of Treatment, Evidence Based Practice, Benchmarking, Language Arts, English, Mathematics Achievement, Achievement Tests, State Standards, Elementary School Students, Middle School Students 2 True True False False True False False False False
176 Partially Nested Randomized Controlled Trials in Education Research: A Guide to Design and Analysis Lohr, S.; Schochet, P.Z.; & Sanders, E. Institute of Education Sciences This Institute of Education Sciences report provides guidance to applied education researchers on how to recognize, design, and analyze data from partially nested randomized controlled trials to rigorously assess whether an intervention is effective. The paper addresses design issues, such as possibilities for random assignment, cluster formation, statistical power, and confounding factors that may mask the contribution of the intervention. The paper also discusses basic statistical models that adjust for the clustering of treatment students within intervention clusters, associated computer code for estimation, and a step-by-step guide, using examples, on how to estimate the models and interpret the output. 2014 Link to Resource Methods Report 179 pages R False False False False False True True True False True True False True True False True False False True True False False False False False False False 21st Century Cov Parm Deviation Component softball team treatment group students control school design rct squared variance sigma squared control group 3 False False False False True False False False False
177 Peer Evaluation of Teachers in Maricopa County’s Teacher Incentive Fund Program Milanowski, A.; Heneman, H.; Finster, M. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help program designers and evaluators implement a peer observation system. The guide describes a peer observation system in Arizona, including advantages, challenges, and mitigation strategies; peer evaluator recruitment, selection, training, evaluation, and compensation; initial perceived effects; and issues of costs and sustainability. This guide is a valuable resource for program directors who want to include peers among their evaluators to increase the total number of evaluators or reduce the time burden on current evaluators. The guide is also useful for evaluators who want to leverage teachers to collect data on educator evaluations. 2015 Link to Resource Guide 16 pages P False False False True True False False False True False False True False False False False True False False False True False False False True False False None 2 True False True False False False False False False
178 Performance Management: Aligning Resources to Reforms Reform Support Network Reform Support Network The Reform Support Network (RSN) developed this document as part of a series designed to help education agencies pursue performance management of their key education reforms. Performance management is a systematic approach to ensure quality and progress toward organizational goals by aligning structures, processes, and routines through a set of reinforcing activities that enable an agency to methodically and routinely monitor the connection between the work underway and the outcomes sought. This brief addresses “alignment of resources,” i.e., how to direct or redirect resources (time, money, technology and people) to priority efforts that produce results and establish clear roles and responsibilities. Alignment of resources is the second of the four elements of performance management described in a Sustainability Rubric that RSN created to help education agencies improve their performance management practices (the rubric’s three categories are system capacity, performance management, and context for sustaining reform; the performance management category includes four elements: clarity of outcomes and theory of action, alignment of resources, collection and use of data, and accountability for results). The rubric offers a template through which agencies can identify key elements of sustainability and assess strengths and weaknesses to address in their sustainability planning. While developed for state education agencies (SEAs), the rubric is a valuable resource for policymakers and practitioners who want to gauge how well they are answering key questions related to aligning resources for student improvement. The brief also describes early, promising work from Hawaii and Tennessee that embodies the basic elements of performance management. 2014 Link to Resource Guide 4 pages P False False False False False False False False False False False False False False False False False False False False False False False False False False False effective teachers This brief must have produce results complex area staff members education agencies Harris Top States complex areas program study control group treatment sample members assignment random outcomes 1 False False True False False False False False False
179 Performance Management: Setting Outcomes and Strategies to Improve Student Achievement Reform Support Network Reform Support Network The Reform Support Network (RSN) developed this document as part of a series designed to help education agencies pursue performance management of their key education reforms. Performance management is a systematic approach to ensure quality and progress toward organizational goals by aligning structures, processes, and routines through a set of reinforcing activities that enable an agency to methodically and routinely monitor the connection between the work underway and the outcomes sought. This brief addresses “clarity of outcomes and theory of action,” i.e., how to establish and widely communicate priorities and set ambitious, clear, and measurable goals and outcomes with aligned strategies and activities. Clarity of outcomes and theory of action are the first of the four elements of performance management described in a Sustainability Rubric that RSN created to help education agencies improve their performance management practices (the rubric’s three categories are system capacity, performance management, and context for sustaining reform; the performance management category includes four elements: clarity of outcomes and theory of action, alignment of resources, collection and use of data, and accountability for results). The rubric offers a template through which agencies can identify key elements of sustainability and assess strengths and weaknesses to address in their sustainability planning. While developed for state education agencies (SEAs), the rubric is a valuable resource for policymakers and practitioners who want to gauge how well they are answering key questions related to clarifying expected outcomes for student improvement. The brief also describes early, promising work from Massachusetts and Tennessee that embodies the basic elements of performance management. 2014 Link to Resource Guide 6 pages P False False False False False False False False False False False False False False False False False False False False False False False False False False False Delivery Unit Strategic Plan after high This brief grade reading grade look at Kang aligned strategies must have outcomes plan strategic student performance four management goals Tennessee education 2 False False True False False False False False False
180 Planning for High-Quality Evaluation of Professional Learning Bocala, C., Bledsoe, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed a series of three webinars to help practitioners design and implement high-quality evaluations of educator professional development programs. Session 1 defines program evaluation, its purpose, its types (formative and summative), and steps in the process. It provides guidance on how to collaborate with stakeholders. It introduces logic models and includes a template for them. Finally, it explores how to develop evaluation questions. 2017 Link to Resource Webinar Guide 01:21:00 P False False False True True False False False False False False False False False False False False False False False False False False False False False False None 3 False False True False False False True False True
181 Planning for High-Quality Evaluation of Professional Learning, Session 2 Bocala, C., Bledsoe, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed a series of three webinars to help practitioners design and implement high-quality evaluations of educator professional development programs. Session 2 explains how to craft evaluation questions, leveraging logic models, and describes types of study designs available to answer different types of questions. It presents various data collection methods, quantitative and qualitative, and their appropriate uses. Finally, it provides guiding questions to assess the utility, feasibility, propriety, and accuracy of collected data. 2017 Link to Resource Webinar Guide 01:23:00 P False False False True True False False False False False False False False False False False True False False False False False False False False False False None 3 False False True False False False True False True
182 PowerUp!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-experimental Design Studies Maynard, R.A., & Dong, N. Journal of Research on Educational Effectiveness This paper complements existing power analysis tools by offering tools to compute minimum detectable effect sizes (MDES) and estimate minimum required sample sizes (MRSS) for studies under design. The tools that accompany the paper support estimates of MDES or MRSS for 21 different study designs. 2013 Link to Resource Tool Guide 44 R False False False False False True True False False False True False False False False False False False False False False False False False False False False power analysis, minimum detectable effect, MDE, sample size 3 False False True True False False False False True
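The sketch below is not the PowerUp! tool itself; it is a minimal illustration of a minimum detectable effect size calculation for a simple two-level cluster-randomized design, using the standard formula based on an intraclass correlation (rho) and cluster- and individual-level R-squared values. All parameter values and the function name are assumptions for demonstration.

from scipy.stats import t

def mdes_cluster_rct(J, n, rho, r2_cluster=0.0, r2_indiv=0.0, P=0.5, alpha=0.05, power=0.80):
    """Approximate two-tailed MDES for a two-level cluster RCT with J clusters of size n."""
    df = J - 2  # approximate degrees of freedom, ignoring covariate adjustments
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    variance = (rho * (1 - r2_cluster) / (P * (1 - P) * J)
                + (1 - rho) * (1 - r2_indiv) / (P * (1 - P) * J * n))
    return multiplier * variance ** 0.5

# Example: 40 clusters of 25, ICC of 0.15, a cluster-level covariate explaining half the between-cluster variance
print(round(mdes_cluster_rct(J=40, n=25, rho=0.15, r2_cluster=0.5), 3))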
183 Practical Resource Guide for Evaluating STEM Master Teacher Programs Frechtling, J. Teacher Incentive Fund The Teacher Incentive Fund developed this guide to help grantees develop evaluations. It is written in the context of programs to improve science, technology, engineering, and mathematics education but is relevant to a broad range of topics. The guide focuses on key aspects of the evaluation process, including identifying evaluation questions, selecting data collection approaches, developing the evaluation design and analysis plan, and disseminating findings. This guide is a valuable resource for program managers, project implementation teams, and their evaluators who want to understand how to design a high quality evaluation. The guide is also useful for seasoned evaluators who want to introduce practitioners to evaluation. 2014 Link to Resource Guide 22 pages P False False False True True True True False False False False False False False False False False False True False True False False False False False False None 3 False False True False False False False False True
184 Practice Guide Level of Evidence What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this video to help researchers understand the levels of evidence associated with educational interventions included in the WWC practice guides. Practice guides provide educators with evidence-based practices, recommendations, and concrete steps they can implement in classrooms and schools to tackle current challenges. Each recommendation is assigned a level of evidence that characterizes the research supporting it and the extent to which improved outcomes for students can be attributed to the recommended practice. The video describes the three levels of evidence (00:27), how they are determined (1:03), what earns a recommendation a higher rating (01:45), and what a lower rating means (02:36). Lower ratings are common and mean that the intervention is supported by research but that further research is needed to examine its effects. The video concludes with a description of the information in practice guides’ appendices on ratings and underlying studies (03:19). 2017 Link to Resource Video 4:07 P True False False False True False False False False False False False False False False False False False False False True False False False False False False References WWC Standards, Any Education Topic, Initial Planning, Reporting, Interpreting, and Using Findings 1 False False False False False True False False True
185 Precision Gains from Publically Available School Proficiency Measures Compared to Study-Collected Test Scores in Education Cluster-Randomized Trials Deke, J.; Dragoset, L.; & Moore, R. Institute of Education Sciences The Institute of Education Sciences developed this report to help researchers determine whether pre-test scores or publicly available, school-level proficiency data yield the same precision gains in randomized controlled trials. The paper defines the key measures used and describes the data sources. It compares the precision gains associated with study-collected data on students’ test scores to the gains associated with school-level proficiency data. It explores approaches to reducing the costs of collecting baseline test scores. Finally, it examines the potential implications for attrition bias of not collecting pre-test data. This report is a valuable resource for those who fund and commission research, as publicly available, school-level proficiency data may be available at lower cost than pre-test scores. 2010 Link to Resource Methods Report 44 pages R False False True False True True False False False False False False True False True True False False True True False False True False False False False Publically Available Random Assignment School-Level proficiency Bandeira de Effect Size New York Percent Subsample School-Level de Mello level school test proficiency study data baseline students model 3 True False False False True False False False False
186 Professional Experiences of Online Teachers in Wisconsin: Results from a Survey about Training and Challenges Zweig, J.; Stafford, E.; Clements, M.; & Pazzaglia, A. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Midwest developed this report to increase policymakers’ and educators’ knowledge of the range of professional learning experiences available to online teachers and the challenges they face teaching online. The survey instrument (Appendix A) is a valuable resource for state online learning programs, state departments of education, and employers of online teachers who want to examine the training and professional development provided to online teachers in their jurisdictions. The data and methodology section (Appendix D) is a useful example of how to develop a survey, administer it, and collect and analyze the data. The body of the report focuses on the results of the survey, which was administered to a virtual school in Wisconsin, and information about how online teacher training can vary in timing, duration, format, and content. In the current climate, in which few states have specific requirements about training for online teachers, these findings may interest policymakers; district and school administrators; and parents in other states, districts, and schools. 2015 Link to Resource Tool Guide 40 pages P False False False True False False False False False False False False False False False True True False False True True False False False True False False Online Courses, Teaching Experience, Teacher Surveys, Training, Teacher Competencies, Barriers, Teacher Education, Faculty Development, Student Behavior, Educational Methods, Elementary School Teachers, Secondary School Teachers, Technology Uses in Education, Academic Persistence, Student Responsibility, Act 222 Advanced Placement development little guidance grade levels professional working conditions online checkbox professional development percent teachers school training virtual Wisconsin 3 True False True True False False False False False
187 Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions. Report to Congressional Requesters. GAO-10-30 Kingsbury, N. U.S. Government Accountability Office The U.S. Government Accountability Office (GAO) initially developed this report to help congressional requesters choose which federal social programs to fund. After the private, nonprofit Coalition for Evidence-Based Policy undertook the Top Tier Evidence initiative to help federal programs identify interventions for which randomized experiments (REs) show sizable, sustained benefits to participants or society, GAO was asked to examine (1) the validity and transparency of the Coalition’s process, (2) how its process compared to that of six federally supported efforts to identify effective interventions, (3) the types of interventions best suited for assessment with randomized experiments, and (4) alternative rigorous methods used to assess effectiveness. The report assesses the Top Tier initiative process and standards, confirms that REs can provide the most credible evidence of effectiveness under certain conditions, and describes available rigorous alternatives (quasi-experimental comparison groups including regression discontinuity designs, statistical analyses of observational data including interrupted time-series, and in-depth case studies). This report is a valuable resource for researchers who aim to demonstrate program effectiveness using a variety of rigorous methods, especially as they seek to understand when an RE is necessary and when other methods are suitable for demonstrating program effectiveness. 2009 Link to Resource Methods Report 49 pages R True False False False False True True True True True False False False True False False False False True True False False False False False False False Nancy Kingsbury Task Force Academies Press Conference proceedings Steps Seven broadcast media remain separate Major journals Nominations solicited evidence interventions intervention program evaluation studies top tier effectiveness outcomes 3 False False False False True False False False False
188 Properties of the Multiple Measures in Arizona’s Teacher Evaluation Model. REL 2015-050 Lazarev, V.; Newman, D.; Sharp, A. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) West developed this report to explore the relationship among assessments made through teacher observations, student academic progress, and stakeholder surveys to investigate how well the model differentiated between high- and low-performing schools. The study found only a few correlations between student academic progress and teacher observation scores, suggesting that a single aggregated observation score might not adequately measure independent aspects of teacher performance. This report can be useful to evaluators and education stakeholders who are considering using teacher observation scores as an evaluation measure or wish to gauge the quality of their agency’s educator evaluation system. In addition, appendix E in the report provides instructional information on how to detect nonlinear relationships between observation item scores and student academic progress metrics. 2014 Link to Resource Methods Report 42 pages R False False False True True False False False False False False False False False False True False True False True False False False False True False False Teacher Evaluation, Observation, State Departments of Education, Correlation, Teacher Competencies, Academic Achievement, Teacher Influence, Scores, Surveys, Mathematics Tests, Reading Tests, Mathematics Achievement, Reading Achievement, Teacher Effectiveness, Teacher Characteristics, Evaluation Methods, Public Schools, School Districts, Charter Schools, Planning, Classroom Environment, Teaching Methods, Teacher Responsibility, Standardized Tests, Elementary Schools, Secondary Schools, Grade 3, Grade 4, Grade 5, Grade 6, Grade 7, Grade 8, Grade 9, Grade 10, Grade 11, Grade 12, Student Attitudes, Teacher Attitudes, Parent Attitudes 3 True False False False True False False False False
189 Protocol for a Systematic Review: Teach For America (TFA) for Improving Math, Language Arts, and Science Achievement of Primary and Secondary Students in the United States: A Systematic Review Turner, H.; Boruch, R.; Ncube, M.; & Turner, A. The Campbell Collaboration This Campbell Collaboration protocol provides researchers with an example of a proposal for a systematic review of a program. The purpose of a systematic review is to sum up the best available research on a question, which is done by synthesizing the results of several studies. A systematic review uses transparent procedures to find, evaluate, and synthesize the results of relevant research. Procedures are explicitly defined in advance to ensure that the exercise is transparent and can be replicated. This practice is also designed to minimize bias. Studies included in a review are screened for quality, so that the findings of a large number of studies can be combined. Peer review is a key part of the process; qualified independent researchers check the author’s methods and results. This protocol is a valuable resource for researchers who want guidance on structured guidelines and standards for summarizing research evidence. This example focuses on the Teach For America teacher preparation program and its effect on student academic outcomes. As such, the protocol is also useful for professionals interested in the strategies and goals of interventions that aim to recruit teachers and improve their effectiveness. 2016 Link to Resource Guide 33 pages R False False False True False False False False False False False False False False False True False False True True True False True False True False False Language Arts Postal Code USA Phone accept responsibility minimum dosage United States absolute value confidence intervals take place Mackson Ncube tfa studies review effect study analysis outcome research teacher outcomes 3 True False True False False False False False False
190 Quasi-experimental Designs: Module 1, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video provides the WWC definition of quasi-experimental design (QED) and describes designs that are potentially acceptable to the WWC. All eligible QEDs must compare the intervention group with distinct (non-overlapping) groups that are determined by a non-random process. Comparison groups can be determined through a convenience sample, national rates, or statistical methods. The highest WWC rating that a QED can receive is ‘Meets Standards With Reservations.’ 2016 Link to Resource Video 3 min P True False False False True False True False False True False False False False False False False False False False False False False False False False False None 1 False False False False False True False False False
191 Randomized Control Trials: Module 1, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video goes into detail on what the WWC considers ‘well-executed randomization’ and provides examples of randomization that is well executed and randomization that is not. For example, choosing students by name, birth date, or class schedule is not considered randomization. The video also discusses how randomization can be compromised. For example, if a student originally assigned to the control group is later moved into the intervention group, the study must still treat that student as a ‘control’ student. 2016 Link to Resource Video 16 min P True False False False True True False False False True False False False False False False False False False False False False False False False False False None 2 False False False False False True False False False
192 Rapid Cycle Evaluation: Race to the Top-District Tools & Strategies Ceja, B.; & Resch, A. Reform Support Network The Reform Support Network (RSN) developed this webinar to help practitioners evaluate personalized learning tools using Rapid Cycle Evaluation and Opportunistic Experiments. The webinar first explains how these quick-turnaround methods can be used to build knowledge in a context of scarce resources. It defines these methods, the conditions under which they are appropriate, and the identification of comparison groups. The webinar then walks through an example using Cognitive Tutor, software intended to improve high school students’ math achievement. It focuses on an opportunity for Race to the Top grantees to apply to have the RSN conduct an evaluation, but the webinar is also a valuable resource for practitioners who want to understand how the results of quick-turnaround experiments might inform the continued implementation of learning tools and/or future purchasing decisions. 2015 Link to Resource Webinar 27 slides; 55 minutes P True True False False False False False False False True False True False True True True False False False False False False True False False False False Leadership: Evaluation of Program Outcome (Revisions); Program Evaluation, random assignment Cycle Evaluation Improve achievement Rapid Cycle Cognitive Tutor comparison group can be subset of or schools rsn cognitive comparison students 3 True False False False False False True False False
193 Recognizing and Conducting Opportunistic Experiments in Education: A Guide for Policymakers and Researchers. REL 2014-037 Resch, A.; Berk, J.; & Akers, L. Institute of Education Sciences National Center for Education Evaluation and Regional Assistance The Institute of Education Sciences developed this guide to help researchers recognize and conduct opportunistic experiments. The authors begin by defining opportunistic experiments and providing examples, then discuss key steps and issues to consider when choosing potential experiments. The authors also outline the critical steps to successfully completing an opportunistic experiment and the potential to conduct such studies at a relatively low cost. 2014 Link to Resource Guide 28 pages P False False True False False True False False False True False True False True False True False False False False True False True False False False False Pell Grants Power Program Mathematica Policy Works Clearinghouse random causal chain Regional Assistance Completion Project Data use schools data research random assignment district opportunistic study researchers intervention 3 True False True False False False False False False
194 Recognizing Opportunities for Rigorous Evaluation Tuttle, C. Regional Education Laboratory Program, Institute of Education Sciences and Mathematica Policy Research The Regional Educational Laboratory (REL) program developed a video series to help schools, districts, states, and their research partners use a cost-effective approach, known as “opportunistic experiments,” to test the effectiveness of programs. This video describes key characteristics of opportunistic experiments and introduces situations that provide good opportunities for embedding experiments in existing programs with minimal disruption. The video will be useful to professionals considering rigorous evaluation that will generate evidence for informing their education decisions in a cost-effective manner. 2016 Link to Resource Video 8 min P False False True False False True False False False False False False False False False False False False False False False False False False False False False None 2 False False False False False True False False True
195 Redesigning Teacher Evaluation: Lessons from a Pilot Implementation Riordan, J.; Lacireno-Paquet, N.; Shakman, K.; Bocala, C.; Chang, Q. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed useful surveys as part of this report, which describes a study of the implementation of new teacher evaluation systems in New Hampshire’s School Improvement Grant (SIG) schools. They find that, while the basic system features are similar across district plans, the specifics of these features vary considerably by district. Key findings are listed on the second half of page i. The relevant section is in appendix D (pages D-1 to D-11), which provides survey protocols. The goal of the surveys is to understand what is working well and what is challenging during the pilot implementation of a new evaluation system. There is a teacher professional climate and implementation survey and an evaluator survey. Both ask for background information and about implementation and stakeholder support. The teacher survey also includes questions on the professional school climate (i.e., inclusive leadership, teacher influence, teacher–principal trust, coordination and quality of professional development, reflective dialogue, and focus on student learning). 2015 Link to Resource Tool 11 Pages P False False False False True False False False False False False False False False False False True False False False False False False False True False False Evaluation, Teachers 2 True False False True False False False False False
196 Reporting on Impacts in Anticipation of a WWC Review Wolf, A.; Goodson, B. Institute of Education Sciences This video provides an overview of what to include in a report on the findings from an evaluation or impact study of an education intervention to support a systematic evidence review by the What Works Clearinghouse. It will help researchers minimize misinterpretation during the review process, get the highest rating the study has the potential to receive, and ensure that consumers of the study report can clearly understand and interpret the results. The video can be used as a checklist to review each section of a report. The presenters recognize that researchers may write different reports for other audiences and note that the WWC is an important audience because its review tells readers of the study report how confident they can be in the reported evidence. The first five minutes describe the purpose and organization of the video. Next, it describes what information to include on: the intervention program and activities, participants, location, time period, and intervention and comparison groups (5:00); the study design, including method and unit of assignment (7:30); outcomes and baseline measures (12:00); evidence that the methodology for data collection was consistent across the intervention and comparison conditions (14:00); the analytic approach, including models and handling of missing data (14:35); the baseline equivalence model (20:45); and findings including impacts, attrition, baseline equivalence, and representativeness (22:35). The video concludes at 45:54 with a list of additional resources about reporting. Examples are included. The underlying guidance document can be found at https://fitw.grads360.org/#communities/pdc/documents/15060. 2018 Link to Resource Video Guide 50:57 R True False False False True True True False False True False False True True False True False False False True True False False False False False False References WWC Standards, Any Education Topic, Initial Planning, RCT, QED, Identifying Comparison Groups, Addressing Changes in your Sample, Reducing Bias in Comparison Groups, Selecting Appropriate Outcome Measures, Addressing Analysis Challenges, Reporting, Interpreting, and Using Findings 3 False False True False False True False False True
197 Reporting What Readers Need to Know about Education Research Measures: a Guide Boller, K.; & Kisker, E.E. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) program developed this guide to help researchers make sure that their research reports include enough information about study measures to allow readers to assess the quality of the study’s methods and results. The guide provides five checklists to assist authors in reporting on a study’s quantitative measures, a sample write-up to illustrate the application of the checklists, and a list of resources to further inform reporting of quantitative measures and publication of studies. The checklists address the following measurement topics: (1) measure domains and descriptions; (2) data collection training and quality; (3) reference population, study sample, and measurement timing; (4) reliability and construct validity evidence; and (5) missing data and descriptive statistics. 2014 Link to Resource Guide Tool 26 pages R True False False False False False False False False False False False False False False False False False False False True False False False False False False Child Trends Early Childhood overly aligned New York Certification criteria Standards Handbook measures reliability study evidence training 3 False False True True False False False False False
198 Request for Proposals Evaluation Guide Reform Support Network Reform Support Network The Reform Support Network developed this guide to help state and local education agencies define the evaluation process for a Request for Proposals (RFP). The guide addresses the major components of an RFP, including involving stakeholders in its evaluation, scoring proposals, considering costs, planning vendors’ demonstrations, checking references, and establishing timelines. Tools valuable to education agencies include sample scoring matrices for individual components and for overall scores, a checklist for the agency to evaluate vendors’ written proposals, a checklist for vendors to identify the extent to which they meet key functional requirements, a committee signoff sheet, and a sample evaluation timeline. 2012 Link to Resource Guide 12 pages P False False False False False False False False False False False False False False False False False False False False False False False False False False False Scoring Matrix rather than Technical local education Rank Order Final Rank Functional Requirements vendor evaluation rfp cost agency team score phase 2 False False True False False False False False False
199 Retaining High Performers: Insights from DC Public Schools’ Teacher Exit Survey Pennington, K.; Brand, A. Bellwether Education Partners Bellwether Education Partners developed this tool to help practitioners and researchers better understand how to retain high-performing teachers. The relevant sections of this slide presentation are pages 8-10 and 36, which list survey questions to ask teachers about why they leave their school district, what might have retained them, and what their characteristics are. This tool is useful for practitioners and researchers who want to collect data on teachers’ experiences. 2018 Link to Resource Tool Slide Presentation 4 P False False False False True False False False False False False False False False False False True False False False False False False False True False False Collecting New Data, Teacher Measure 1 True False False True False False False True False
200 Retaining K-12 Online Teachers: A Predictive Model for K-12 Online Teacher Turnover Larkin, I.; Lokey-Vega, A.; Brantley-Dias, L. Journal of Online Learning Research The authors developed this tool to help practitioners and researchers better understand the factors that influence teacher job satisfaction and teacher commitment to their job. The relevant section of this paper is the survey tool in Appendix A. It includes questions about teacher demographic characteristics, education, past and current experience, workload, job satisfaction, organizational commitment, and turnover intention. 2018 Link to Resource Tool 9 P False False False False True False False False False False False False False False False False True False False False False False False False True False False Elementary School Teachers, Secondary School Teachers, Online Courses, Teacher Persistence, Labor Turnover, Job Satisfaction, Teacher Surveys, Teacher Attitudes, Intention, Scheduling, Mentors, Teacher Student Ratio, Teaching Conditions, Teaching Experience, Predictor Variables, Regression (Statistics), Statistical Analysis, Collecting New Data, Teacher Measure 2 True False False True False False False False False
201 Review Issues Related to Attrition: Module 2, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences This video discusses how the ways in which researchers exclude sample members may or may not cause attrition bias. It also discusses under what conditions ‘nonparticipation’ is or is not considered attrition. If researchers continue to track individuals who did not participate and keep them in their originally assigned condition, then it is not considered attrition. 2016 Link to Resource Video 10 min P True False False False True True False False False False False False True False False False False False False False False False False False False False False None 2 False False False False False True False False False
202 Review Protocol For Teacher Training, Evaluation, And Compensation Version 3.2 (July 2016) What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this protocol to help program directors and evaluators select WWC-appropriate outcome measures. It defines key terms including categories of relevant research (preparation, induction, evaluation, compensation, and professional development). It describes the procedures for conducting a literature search, including search terms and searched databases. It lists eligibility criteria for populations, interventions, research, and student and teacher outcomes. It provides the eligible standards against which studies are reviewed: sample attrition, baseline equivalence, and outcomes. 2016 Link to Resource Guide 15 pages R False False False True True True True True False True False False True False False True False False False False False False True True True False False None 2 True False True False False False False False False
203 Rigorous Program Evaluations on a Budget: How Low-Cost Randomized Controlled Trials Are Possible in Many Areas of Social Policy Coalition for Evidence-Based Policy Coalition for Evidence-Based Policy The Coalition for Evidence-Based Policy developed this guide to illustrate the feasibility and value of low-cost randomized controlled trials (RCT) for policy officials and researchers, by providing concrete examples from diverse program areas. The guide provides a brief overview of what RCTs are and how to reduce their cost. It then presents five examples of quality, low-cost RCTs in four areas: criminal justice, child welfare, a community-wide parenting intervention, and a teacher-incentive program. This guide is a valuable resource for policymakers, researchers, and others who automatically discard RCTs as an approach because of the perception that such studies are always too costly and too administratively burdensome on schools to be practical. 2012 Link to Resource Guide Brief or Summary 10 pages P False False True False True True False False False False False False False False True False False False False False False False True True False False False Task Force Waiver Demonstration Community Supervision New York Recovery Coach Services Task carried out program study cost data group low control administrative outcomes randomized 2 True True True False False False False False False
204 RTT-D Guidance: Implementing Performance Metrics for Continuous Improvement that Support the Foundational Conditions for Personalized Learning Johnson, J.; Kendziora, K.; & Osher, D. American Institutes for Research The American Institutes for Research developed this guide to help districts implement a continuous improvement process to personalize education for all students in their schools, focusing on classrooms and the relationship between educators and students. The guide begins by recalling the three foundational conditions for personalized learning – building the capacity of teachers to create highly effective classroom learning environments, developing a rigorous capacity for student support, and establishing the organizational efficacy of leadership and management to implement personalized learning environments. It then describes an approach to continuous improvement that builds on the conditions for learning, personalization, and achievement for all students. The guide proposes a key set of performance metrics and summarizes them in a table on pages 21-23. The appendix presents a sample on-track measure. This guide is a valuable resource for district and school staff who want to gauge how well they provide teachers with the information, tools, and supports that enable them to meet the needs of each student and substantially accelerate and deepen each student’s learning. Through this guide, district and school staff can gauge whether they have the policies, systems, infrastructure, capacity, and culture to enable teachers, teacher teams, and school leaders to continuously focus on improving individual student achievement and closing achievement gaps; whether they set ambitious yet achievable performance measures that provide rigorous, timely, and formative information; and the extent to which implementation is progressing. 2012 Link to Resource Guide 40 pages P False True False False True False False False False False False False False False False True True False False False False False True True True True False Civil Rights Disease Control Healthy Kids Longer Term Term Sustainability juvenile justice Action Brief BMI school students student track learning received classes above high 3 True False True False False False False False False
205 Seeing It Clearly: Improving Observer Training for Better Feedback and Better Teaching Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide focuses on training observers of educators being evaluated. The guide describes how to recruit and prepare observers; what skills to develop; and how to use data to monitor implementation. Each section provides guiding questions on how to lay the foundation and how to improve. The guide also includes a checklist, tips, tools from districts and states, and a planning worksheet. This guide is a valuable resource for program directors who want to design and implement a high quality observation system. 2015 Link to Resource Guide Tool 128 pages P False False False True True False False False False False False True False False False False True False False False False False False False False False False None 3 False False True True False False False False False
206 Self-study Guide for Implementing Early Literacy Interventions Dombek, J.; Foorman, B.; Garcia, M.; Smith, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed this guide to help district and school-based practitioners conduct self-studies for planning and implementing early literacy interventions for kindergarten, grade 1, and grade 2 students. Although the guide is not designed for rigorous evaluations, it can be used for ongoing, continuous improvement. The guide is designed to promote reflection about current strengths and challenges in planning for implementation of early literacy interventions, spark conversations among staff, and identify areas for improvement. Schools and districts interested in implementing an early literacy program can use the guide to help assess needs in areas such as student selection, content and instruction, professional development, communication, and other areas of importance. 2016 Link to Resource Guide Tool 27 Pages P False True True False True False False False False False False False False False False False False False False False False False False False False False False Early Interventions, Education Interventions, Literacy 3 False False True True False False False False False
207 Sensitivity Analysis for Multivalued Treatment Effects: An Example of a Cross-Country Study of Teacher Participation and Job Satisfaction Chang, C. Society for Research on Educational Effectiveness This resource illustrates the importance of conducting sensitivity analyses in the context of examining average treatment effects of multiple treatments (multivalued treatments). The study focuses on the sensitivity of a multivalued treatment effect; the degree to which test statistics and sample size affect the Impact Threshold of a Confounding Variable (ITCV) index (the size of the impact of a potential confounding variable that can change an inference); and whether different propensity score weighting methods are sensitive enough to capture different levels of the treatment and allow for robust inferences on each treatment value. 2015 Link to Resource Brief or summary Methods report 17 pages R False False False False True False True False False False False False False False False False False False False True False False False False True False False Job Satisfaction, Intervention, Sample Size, Weighted Scores, Robustness (Statistics), Simulation, Inferences, Surveys, Teacher Attitudes, Cross Cultural Studies, Statistical Analysis, Multivariate Analysis, Validity, Teacher Participation, Likert Scales 2 True True False False True False False False False
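For readers new to this technique, the sketch below is an editorial illustration (not drawn from the catalogued resource) of the basic ITCV idea in its simplest, single-treatment form, following Frank's (2000) formulation; the function name itcv and the example numbers are hypothetical, and the resource itself extends the idea to multivalued treatments and propensity score weighting.

    from scipy import stats

    def itcv(r_xy, n, n_covariates=0, alpha=0.05):
        # Impact Threshold of a Confounding Variable for a simple correlation:
        # how large the product of the confounder's correlations with treatment
        # and outcome must be before the observed inference would be overturned.
        df = n - n_covariates - 2
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        r_crit = t_crit / (t_crit ** 2 + df) ** 0.5  # correlation just significant at alpha
        return (r_xy - r_crit) / (1 - r_crit)

    # Hypothetical example: observed r = 0.25, n = 400, 3 covariates in the model
    print(round(itcv(0.25, 400, n_covariates=3), 3))

A larger ITCV means the inference is more robust to an unmeasured confounder.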
208 Smarter, Better, Faster: The Potential for Predictive Analytics and Rapid-Cycle Evaluation to Improve Program Development and Outcomes Cody, S.; & Asher, A. Mathematica Policy Research This Mathematica Policy Research report proposes using predictive modeling and rapid-cycle evaluation to improve program services while efficiently allocating limited resources. The authors believe that these methods – both individually and together – hold significant promise to improve programs in an increasingly fast-paced political environment. They propose two actions: first, agency departments with planning and oversight responsibilities should encourage the staff of individual programs to conduct a thorough needs assessment, and second, federal agencies should take broad steps to promote predictive analytics and rapid-cycle evaluation. 2014 Link to Resource Brief or Summary 12 pages R False False False False False False False False False False False False False False False False False False False False False False False False False False False Predictive Analytics, Eligibility Assessment Resources Administration Andrew Asher frontline workers Address Poverty Hamilton Project Brookings U.S. Department Scott Cody decision making program predictive analytics programs data rapid cycle evaluation 2 False True False False False False False False False
209 Standardized Measures: Module 5, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences Standardized tests are considered to have face validity and reliability if administered in the way intended. This video describes how reviewers can identify standardized tests and provides examples of how researchers may deviate from the intended administration of the standardized tests. In these cases, the measure would not be considered reliable. 2016 Link to Resource Video 5 min P True False False False True False False False False False False False False False False True False False False False False False False False False False False None 1 False False False False False True False False False
210 Statistical Adjustments: Module 3, Chapter 4, WWC Training What Works Clearinghouse Institute of Education Sciences This video discusses acceptable methods to adjust for baseline pretest differences when those differences are between 0.05 and 0.20 standard deviations. Acceptable methods include regression covariate adjustments (including covariates in HLM) and analysis of covariance (ANCOVA). 2016 Link to Resource Video 3 min R True False False False True True True False False True False False False True False False False False False True False False False False False False False None 1 False False False False False True False False False
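As a rough illustration of the adjustment methods the video names, the sketch below is an editorial addition (not taken from the video) that fits an ANCOVA-style regression of a posttest on a treatment indicator plus the baseline pretest; the variable names and simulated data are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    pretest = rng.normal(size=n)
    treat = rng.integers(0, 2, size=n)
    posttest = 0.5 * pretest + 0.2 * treat + rng.normal(size=n)
    df = pd.DataFrame({"posttest": posttest, "pretest": pretest, "treat": treat})

    # Regression covariate adjustment (ANCOVA): the treatment coefficient is the
    # impact estimate adjusted for baseline pretest differences.
    model = smf.ols("posttest ~ treat + pretest", data=df).fit()
    print(model.params["treat"], model.bse["treat"])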
211 Statistical Power Analysis in Education Research Hedges, L.V.; & Rhoads, C. Institute of Education Sciences This Institute of Education Sciences paper provides an introduction to the computation of statistical power for education field studies and discusses research-design parameters that directly affect statistical power. The paper shows how to use the concepts of operational effect sizes and sample sizes to compute statistical power. It also shows how clustering structure, described by intra-class correlation coefficients, influences operational effect size and, therefore, statistical power. In addition, this paper details how the use of covariates can increase power by increasing the operational effect size. 2010 Link to Resource Methods Report 88 pages R False False False False False True False False False True True False False False False False False False False True False False False False False False False design power effect sample designs level size statistical covariates Hierarchical Potential Conflicts Psychological Methods RR assigned costs while effect randomized-block while maintaining 3 False False False False True False False False False
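To make the clustering point concrete, here is a small worked sketch, an editorial addition rather than an excerpt from the paper, of an approximate minimum detectable effect size for a two-level cluster-randomized design with no covariates; the function mdes_cluster_rct and the example numbers are hypothetical, and the formula follows the standard form used in this literature.

    import math
    from scipy import stats

    def mdes_cluster_rct(n_clusters, n_per_cluster, icc, p_treat=0.5,
                         alpha=0.05, power=0.80):
        # Approximate MDES for a cluster-randomized trial without covariates:
        # a t-based multiplier times the standard error of the impact estimate
        # expressed in effect-size units.
        df = n_clusters - 2
        multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
        var_term = (icc / (p_treat * (1 - p_treat) * n_clusters)
                    + (1 - icc) / (p_treat * (1 - p_treat) * n_clusters * n_per_cluster))
        return multiplier * math.sqrt(var_term)

    # Hypothetical example: 40 schools, 25 students per school, ICC = 0.15
    print(round(mdes_cluster_rct(40, 25, 0.15), 2))

Adding covariates that explain outcome variance shrinks the variance term and therefore the MDES, which is the mechanism the paper describes.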
212 Studying the Sensitivity of Inferences to Possible Unmeasured Confounding Variables in Multisite Evaluations. CSE Technical Report 701 Seltzer, M.; Kim, J.; Frank, K. CRESST/UCLA The University of California, Los Angeles developed this report to help researchers identify and deal with factors that may be confounded with differences in implementation. The report focuses on assessing the impact of omitted confounding variables on coefficients of interest in regression settings in the context of hierarchical models in multisite settings in which interest centers on testing whether certain aspects of implementation are critical to a program’s success. It provides a detailed illustrative example using the data from a study focusing on the effects of reform-minded instructional practices in mathematics. It also addresses how widely applicable the authors’ approach to sensitivity analysis is, specifically: in what situations or settings will cluster-level regression analyses yield point estimates and standard errors for fixed effects of interest that are highly similar to those produced via a restricted maximum likelihood/empirical Bayes estimation strategy? This report is a valuable resource for researchers who conduct research in multiple sites and those who implement hierarchical models. 2006 Link to Resource Methods report 42 pages R False False False False True True False False False False False False False False False False False False True True False False True False False False False Program Effectiveness, Teaching Methods, Inferences, Educational Environment, Educational Practices, Educational Experience 3 True False False False True False False False False
213 Supporting a Culture of Evidence Use in Education Institute of Education Sciences Institute of Education Sciences This video describes the different research resources that individuals can access through the Institute of Education Sciences (IES). The video describes the products offered through the What Works Clearinghouse (WWC) and the online search tool for thousands of research studies through the Education Resource Information Center (ERIC). It also provides information about the 10 Regional Education Laboratories (RELS) and Statewide Longitudinal Data Systems (SLDSs). This video will be useful to participants who want to make better use of evidence for education improvement and to leverage IES resources in that effort. 2016 Link to Resource Video 5 min P True False False False True False False False False False False False False False False False False False False False True False False False False False False None 1 False False False False False True False False False
214 Survey methods for educators: Analysis and reporting of survey data (part 3 of 3) Pazzaglia, A. M., Stafford, E. T., & Rodriguez, S. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed this guide to help educators analyze survey data and report the results in a way that is accessible and useful to interested stakeholders. This guide will help readers use survey data to inform decisions about policy or practice in schools and local or state education agencies. The document describes a five-step survey analysis and reporting process: step 1 – review the analysis plan, step 2 – prepare and check data files, step 3 – calculate response rates, step 4 – calculate summary statistics, and step 5 – present the results in tables or figures. This guide is the third in a three-part series of survey method guides for educators. The first guide in the series covers survey development, and the second guide in the series covers sample selection and survey administration. 2016 Link to Resource Guide Tool 37 pages P False False False False True False False False False False False False False False False False True True True True True True True False False False False Achievement, Achievement (student), Data Interpretation, States 3 True False True True False False False False False
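To illustrate steps 3 and 4 of that process, the sketch below is an editorial addition (the guide itself works with spreadsheet software) that computes a response rate and simple summary statistics; the data and column names are hypothetical.

    import pandas as pd

    # Hypothetical survey file: one row per sampled teacher; NaN means no response
    df = pd.DataFrame({
        "school": ["A", "A", "B", "B", "B"],
        "q1_satisfaction": [4, None, 5, 3, None],  # 1-5 Likert item
    })

    # Step 3: response rate = completed responses / eligible sample
    response_rate = df["q1_satisfaction"].notna().mean()

    # Step 4: summary statistics, overall and by school
    overall = df["q1_satisfaction"].agg(["mean", "std", "count"])
    by_school = df.groupby("school")["q1_satisfaction"].mean()

    print(f"Response rate: {response_rate:.0%}")
    print(overall)
    print(by_school)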
215 Survey methods for educators: Collaborative survey development (part 1 of 3) Irwin, C. W., & Stafford, E. T. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed a series of three guides to help educators administer surveys to collect unavailable data they need. This guide, the first in the series, focuses on collaborative survey development. The document describes a five-step process: step 1 – identify topics of interest, step 2 – identify relevant, existing survey items, step 3 – draft new survey items and adapt existing survey items, step 4 – review draft survey items with stakeholders and content experts, and step 5 – refine the draft survey using cognitive interviewing. Appendix A (page A-1) provides additional resources for sampling and survey administration, including references to textbooks and professional organizations and university departments with expertise in these topics. Appendix B (pages B-1 and B-2) provides a survey blueprint, or table of specifications, i.e., a document that outlines the topics and subtopics to be included on a survey and serves as an outline for developing survey items. Appendix C (page C-1) describes how to develop an analysis plan, to be used prior to calculating summary statistics to ensure that the resulting information will help address the survey development team’s topics of interest, and provides a sample. Appendix D (pages D-1 and D-2) provides a sample feedback form, which allowed a survey development team to obtain input on item wording and importance across all stakeholders despite limited time to meet with the stakeholder group. Appendix E (pages E-1 to E-4) provides a sample cognitive interview protocol, which includes the step-by-step instructions for gathering feedback during cognitive interviews. Appendix F (pages F-1 to F-3) provides a sample codebook, i.e., a record of the codes used to categorize feedback gathered during cognitive interviewing. 2016 Link to Resource Guide Tool 33 Pages P False False False False True False False False False False False False False False False False True False False False False False False False False False False Achievement, States 3 False False True True False False False False False
216 Survey methods for educators: Selecting samples and administering surveys (part 2 of 3) Pazzaglia, A. M., Stafford, E. T., & Rodriguez, S. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed a series of three guides to help educators administer surveys to collect unavailable data they need. This guide, the second in the series, focuses on selecting a sample and administering a survey. The document describes a five-step sample selection and survey administration process: step 1 – define the population, step 2 – specify the sampling procedure, step 3 – determine the sample size, step 4 – select the sample, and step 5 – administer the survey. Appendices provide steps on how to use Microsoft Excel to obtain a random sample (appendix B on pages B-1 to B-5), and sample survey invitations and reminders (appendix D on pages D-1 to D-11). This guide is a valuable resource for educators who want to understand a larger population by surveying a subgroup of its members. The first guide in the series covers survey development, and the third guide in the series covers data analysis and reporting. 2016 Link to Resource Guide Tool 29 pages P False False False False True False False False False False True True False False False False True False False False False False False False False False False Achievement, Academic Achievement,Achievement (student), Data Interpretation, States 3 False False True True False False False False False
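The guide's appendix walks through drawing a random sample in Microsoft Excel; as a parallel illustration only (an editorial addition with hypothetical column names and counts), the sketch below draws a simple random sample and a proportionate stratified sample in Python.

    import pandas as pd

    # Hypothetical sampling frame: one row per teacher in the population of interest
    frame = pd.DataFrame({"teacher_id": range(1, 501),
                          "school": [f"school_{i % 20}" for i in range(500)]})

    # Simple random sample of 120 teachers (fixed seed for reproducibility)
    simple_sample = frame.sample(n=120, random_state=42)

    # Proportionate stratified sample: the same sampling fraction within each school
    stratified = (frame.groupby("school", group_keys=False)
                       .apply(lambda g: g.sample(frac=0.24, random_state=42)))

    print(len(simple_sample), len(stratified))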
217 Teacher Effectiveness Data Use Regional Education Laboratory Program Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) West developed this video to help district and school leaders build capacity to better understand, analyze, and apply teacher effectiveness data to make strategic decisions regarding teacher assignment, leadership, and professional development, and to facilitate local planning around the use of these data. The video defines different types of effectiveness data, and discusses ways to use data and interpret research. It introduces a five-step process to analyze teacher effectiveness data, which includes: identifying the question, collecting data, organizing data, making data-based decisions, and communicating the data findings and decisions. The video can be used as a working session by using handouts from the Great Teachers and Leaders Supporting Principals Using Teacher Effectiveness Data Module (note that handouts are numbered differently in the video). This video will be helpful to stakeholders interested in making data-based decisions to help increase teacher effectiveness. 2016 Link to Resource Video Guide 25 min P False False False True False False False False False False False False False False False False False False False False True False False False True False False None 3 True False True False False True False False False
218 Teacher evaluation and professional learning: Lessons from early implementation in a large urban district Shakman, K.; Zweig, J.; Bailey, J.; Bocala, C.; Lacireno-Pacquet, N. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast and Islands developed this report to describe the findings from a study of the alignment of educator evaluations and professional development. The relevant sections of the report are the survey and interview tools (Appendices D, F, and G). Appendix D describes the coding dictionary – a list of professional development activities with descriptions – that allowed the study team to summarize and quantify the professional activities in the narratives and to match the activities prescribed to the activities reported in the survey to study the alignment between what evaluators prescribed and what teachers reported. Appendix F lists the survey items, which are about the types of professional development that teachers engaged in during the study period; the standards and indicators that the professional development addressed; the type and duration of the professional development activities; and school climate. Appendix G provides the interview protocol. Teachers were asked about the prescriptions they received, how they were selected, what they did, and their experience with the resulting professional development. Educators were asked similar questions about the prescriptions they wrote, their role in the process, what teachers did and how successfully, and their evaluation of the resulting professional development. These sections are a valuable resource for program directors who want to better gauge the quality of the feedback administrators give teachers regarding how to improve. They also help track whether teachers take the recommended steps and how these may affect subsequent evaluation results. 2016 Link to Resource Tool 8 pages P False False False True True False False False False False False True False False False True True False True False False False False False True False False Teachers, Education/Preparation, Performance Evaluations, Professional Development, Urban Schools 2 True False False True False False False False False
219 Teaching and Learning International Survey (TALIS) 2013: U.S. Technical Report Strizek, G. A.; Tourkin, S.; Erberber, E.; Gonzales, P. National Center for Education Statistics, U.S. Department of Education The National Center for Education Statistics developed this report to provide researchers with an overview of the design and implementation of a large-scale teacher survey that covers professional development, evaluation, school leadership and climate, instructional beliefs, and pedagogical and professional practices. The report focuses on a comprehensive description of survey development and administration, and data processing. It addresses common, important survey considerations such as sampling and non-response. Appendices provide the survey tool and recruitment documents. This series is a valuable resource for program directors and evaluators who want an introduction to or a review of all the steps involved in collecting survey data. The series is also useful for evaluators who want to use the data from this particular survey (the Teaching and Learning International Survey) to answer some of their research questions. 2014 Link to Resource Guide Tool 257 pages R False False False True True False False False False False True True True False True False True False False True False False False False True True False International Comparisons, Labor Force Experiences, Principals, Professional Development, Public Schools, Schools, Secondary Education, Elementary and Secondary Schools Staff, Teachers:Characteristics of, College Major or Minor, Education/preparation, Job Satisfaction, Perceptions and Attitudes 3 True False True True False False False False False
220 Technical Assistance Materials for Conducting Rigorous Impact Evaluations Institute of Education Sciences National Center for Education Evaluation and Regional Assistance Institute of Education Sciences National Center for Education Evaluation and Regional Assistance The National Center for Education Evaluation and Regional Assistance developed this website to help program evaluators design, implement, conduct analyses, and report findings from impact studies. The site links to multiple resources: an evaluation plan template with guidance, an Excel spreadsheet designed to support and supplement the development of an evaluation plan (the contrast tool), and examples for randomized controlled trials (RCTs) and quasi-experiments (QEs); a toolkit to understand and design logic models and use them to plan a program evaluation; resources that support the design and implementation of RCTs and QEs; briefs on baseline equivalence and attrition; and guidance on reporting findings. 2010–2018 Link to Resource Guide Tool None P True False False False True True True False False True True True True True True True False True True True True False True False False False False References WWC Standards, Any Education Topic, Initial Planning, RCT, QED, Determining Sample Sizes, Recruiting Study Participants, Addressing Changes in your Sample, Reducing Bias in Comparison Groups, Acquiring and Using Administrative Data, Selecting Appropriate Outcome Measures, Combining Data Systems, Understanding Data Analytic Models, Addressing Analysis Challenges, Reporting, Interpreting, and Using Findings, Student Achievement Measure 0 True False True True False False False False True
221 The CLASS Protocol for Classroom Observations Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide presents the elements that constitute the CLASS framework, a research-based protocol to evaluate math and English language arts lessons in all grades. This guide is a valuable resource for program directors who want to select a teacher observation tool. 2010 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
222 The core analytics of randomized experiments for social research Bloom, H. MDRC This MDRC paper on research methodology examines the core analytic elements of randomized experiments for social research. Its goal is to provide a compact discussion for researchers of the design and analysis of randomized experiments. Design issues considered include choosing the size of a study sample and its allocation to experimental groups, using covariates or blocking to improve the precision of impact estimates, and randomizing intact groups instead of individuals. 2006 Link to Resource Methods Report 41 pages R False False False False False True False False False False True False True True False False False False True True False False False False False False False Evidence Head Start Local Average Web site covariates identically distributed Evolving Analytic Thousand Oaks student achievement Improve Precision treatment group effect randomized groups sample research randomization impact experimental 3 False False False False True False False False False
223 The Effectiveness of a Program to Accelerate Vocabulary Development in Kindergarten Goodson, B.; Wolf, A.; Bell, S.; Turner, H.; Finney, P. B. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed this report to describe the findings from the study of the 24-week K-PAVE program and its impact on students’ vocabulary development and academic knowledge and on the vocabulary and comprehension support that teachers provided during book read-aloud and other instructional time. The relevant sections of the report are the random assignment and the survey appendices. Appendix C describes the matching of schools within blocks for random assignment and Appendix G includes the teacher survey. These resources are valuable for grantees who wish to conduct similar research. Other appendices focus on quantitative research methods that grantees might find useful. 2010 Link to Resource Methods Report 9 Pages R False False False False True True False False False True False False False False False False True False False False False False True False True False False Education Interventions, Kindergarten, Reading 2 True False False False True False False False False
224 The Effects of Connected Mathematics 2 on Math Achievement in Grade 6 in the Mid-Atlantic Region Martin, T.; Brasiel, S.; Turner, H.; Wise, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Mid-Atlantic developed this report to describe the results of a study of the effect of a mathematics intervention on the mathematics achievement and engagement of grade six students. It found that students who received the intervention did not have greater mathematics achievement or engagement than comparison students who experienced other curricula. The relevant sections of the report are appendices on the statistical power analysis conducted to determine the number of schools, class sections, and students needed to detect a minimum detectable effect size for an outcome (appendix B on pages B-1 and B-2), an illustration of how random assignment procedures were implemented using Microsoft Excel (appendix C on pages C-1 to C-3), the student math interest inventory used to understand what students think about the work they do for mathematics class and the associated confirmatory factor analysis (appendix D on pages D-1 to D-4), and teacher surveys including an initial survey for background information and then monthly surveys and a final survey about how intervention elements were used and perceived (appendix E on pages E-1 to E-4). 2012 Link to Resource Methods Report Tool 13 Pages R False False False False True True False False False True True False False False False False True True True True False False True False False False False Student Achievement, Mathematics, Professional Development 2 True False False True True False False False False
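As a rough parallel to the random assignment appendix (which uses Microsoft Excel), the sketch below is an editorial illustration of assigning schools to conditions within matched blocks; the school list and block labels are hypothetical.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2024)

    # Hypothetical list of schools already matched into blocks of two
    schools = pd.DataFrame({"school_id": range(1, 13),
                            "block": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]})

    def assign_within_block(group):
        # Randomly assign half of each block to treatment and half to control
        n_treat = len(group) // 2
        labels = np.array(["treatment"] * n_treat + ["control"] * (len(group) - n_treat))
        group = group.copy()
        group["condition"] = rng.permutation(labels)
        return group

    assigned = schools.groupby("block", group_keys=False).apply(assign_within_block)
    print(assigned)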
225 The English Language Learner Program Survey for Principals. REL 2014-027. Grady, M. W.; O’Dwyer, L. M. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed this survey tool – The English Language Learner Program Survey for Principals – to help state education departments collect consistent data on the education of English language learner students. Designed for school principals, the survey gathers information on school-level policies and practices for educating English language learner students, the types of professional development related to educating these students that principals have received and would like to receive, principals’ familiarity with state guidelines and standards for educating these students, and principals’ beliefs about educating these students. 2014 Link to Resource Tool 28 Pages P False False False False True False False False False False False False False False False False True False False False False False False False False True False Education Evaluation, Department of Education, Education Development Center, Institute of Education Sciences, National Center, Regional Assistance, unbiased large-scale evaluations of education programs, research-based technical assistance, results of research, commercial products, Grady, O’Dwyer, English Language Learner Alliance, policies of IES, REL report, Boston College, mention of trade names, ED-IES, Learner Program, Survey, Regional Educational Laboratory Northeast, organizations, policymakers, educators, public domain, widespread dissemination, practices, content, endorsement, Laura, Matthew, publication, Islands, synthesis, NCEE, DC, funds, United States, Principals, Contract, Washington, views, permission, June, collaboration, Government 3 True False False True False False False False False
226 The Impact of Providing Performance Feedback to Teachers and Principals: Final Report Garet, M.; Wayne, A.; Brown, S.; Rickles, J.; Song, M.; Manzeske, D. Institute of Education Sciences The Institute of Education Sciences developed this report to provide researchers and program evaluators with a description of the design and findings of an evaluation of a program providing performance feedback to teachers and principals. The relevant sections of the report are the tools used to measure classroom practice, principal leadership, and student achievement, and the description of the outcomes of the study. The report includes an overview of the measures (pages 2-5), more detail on pages 22-24, 33-34, and 41-42, and sample reports in Appendix K. Outcome measures are presented on pages B-4 to B-6. Appendix E provides technical detail on the value-added model used to estimate student achievement, and Appendix H provides technical details about analyses assessing treatment-control differences in educators’ experiences and impacts on outcomes. 2017 Link to Resource Tool Methods report 61 pages R False False False True True True False False False True False True True True True True True False True True True True True False True True False Principals, Teachers’ Performance Evaluations 3 True False False True True False False False False
227 The Investing in Innovation Fund: Summary of 67 Evaluations Boulay, B.; Goodson, B.; Olsen, R; McCormick, R.; Darrow, C.; Frye, M.; Gan, K.; Harvill, E.; Sarna, M.; Institute of Education Sciences National Center for Education Evaluation and Regional Assistance The National Center for Education Evaluation and Regional Assistance developed this report to summarize the findings of a set of evaluations of programs designed to improve student outcomes. The report includes sections that are useful for professionals designing program evaluations. Section 3.1. describes how the authors adopted criteria to assess the strength of the i3 evaluations using What Works Clearinghouse (WWC) evidence standards and a review protocol that ensured the consistent application of those standards (Appendix B). The protocol lists a number of elements that program evaluators might want to consider as they design evaluations, including example outcomes (these are also summarized in section 2.3.3.). Section 3.2.2. lists four standardized tools and templates the authors designed: a study design template to fully describe impact and implementation evaluation design plans, with embedded supports and links to relevant WWC Standards; a Contrast Tool to document each of the planned impact analyses, including research questions, samples, outcomes and baseline measures, and timing of the analysis; a Fidelity Measure Tool to describe the methodology for measuring the fidelity of implementation of the key components of the intervention’s logic model; and reporting templates that include all the information necessary to review the strength of, and summarize the findings from, the evaluations to guide evaluators in drafting reports. These tools are available at http://ies.ed.gov/ncee/projects/evaluationta.asp. Section 3.2.3. describes potential risks to the strength of evaluations such as threats to baseline, power, and implementation fidelity. Section 3.3.1. describes how the team prioritized among multiple evaluation findings. Finally, Chapter 4 and Appendix I can be skimmed to explore different ways of summarizing findings. 2018 Link to Resource Guide 166 R True False False False True True True False False True False False True True False True False False False True True False True True False False False Education Interventions, Educational Research, References WWC Standards, Any Education Topic, Initial Planning, RCT, QED, Identifying Comparison Groups, Addressing Changes in Your Sample, Reducing Bias in Comparison Groups, Selecting Appropriate Outcome Measures, Addressing Analysis Challenges, Reporting, Interpreting, and Using Findings, Student Achievement Measure, Student Behavior Measure 3 True False True False False False False False True
228 The MQI Protocol for Classroom Observations Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide presents the elements that constitute the MQI framework, a research-based protocol to evaluate math lessons. This guide is a valuable resource for program directors who want to select a teacher observation tool. 2010 Link to Resource Guide 4 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
229 The Next Big Leap for Research-Practice Partnerships: Building and Testing Theories to Improve Research Use Tseng, V. W. T. Grant Foundation The W. T. Grant Foundation developed this guide to introduce researchers and practitioners to theories of action (TOAs) as a tool to promote evidence use. Research-practice partnerships (RPPs) have been flourishing as opportunities for researchers to do more impactful research, and for agency staff to get more useful research that can inform their work to improve student outcomes. However, RPPs have focused on producing evidence more than they have on using it. This guide describes how to articulate TOAs for research use, empirically test them, and then iteratively improve on RPP work and refine theories. The guide describes the purpose of a TOA and how to focus initial conversations on different types of research use (instrumental, conceptual, political, process, and imposed uses). Next it highlights considerations around change, decision-making processes, processes by which research would come to be used, and the conditions that would support its use. The last section moves beyond the use of TOAs to support research use and highlights emerging research on TOAs and RPPs including: the relationships between RPP structures, strategies, and capacity and practitioner research use; how and under what conditions RPPs influence the use of research; and the measurement of RPP effectiveness employing five key dimensions: 1) building trusting relationships, 2) conducting rigorous research to inform action, 3) supporting practice change, 4) informing education efforts outside the partnership, and 5) building capacity among partners. 2017 Link to Resource Guide 12 P False True False False True False False False False False False False False False False False False False False False False False False False False False False None 2 False False True False False False False False True
230 The PLATO Protocol for Classroom Observations Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide presents the elements that constitute the PLATO framework, a research-based protocol to evaluate English language arts lessons in grades four through nine. This guide is a valuable resource for program directors who want to select a teacher observation tool. 2010 Link to Resource Guide 5 pages P False False False True True False False False False False False False False False False False True False False False False False False False True False False None 1 True False True False False False False False False
231 The Reliability of Classroom Observations by School Personnel Ho, A.; Kane, T. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this report to help evaluators of educator observation programs assess the accuracy and reliability of school personnel in performing classroom observations. A particularly useful section of the report is the description of the study design, which includes random assignment (pp. 5-8). The authors examine different combinations of observers and lessons observed, including variation in the length of observations, and the option for teachers to choose the videos that observers see. The bulk of the report (pp. 9-31) presents study results including clear tables and graphs. This guide is a valuable resource for program evaluators who want to understand the breadth of statistical analyses that underlie the non-technical “Ensuring Fair and Reliable Measures of Effective Teaching” (Resource # 205). 2013 Link to Resource Methods report 34 pages R False False False True True True False False False True False True False False False True True False True True True True False False True False False None 3 True False False False True False False False False
232 The Role of Between-Case Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research. NCER 2015-002 Shadish, W.R.; Hedges, L.V.; Horner, R.H.; & Odom, S.L. National Center for Education Research The National Center for Education Research developed this report to increase professionals’ knowledge of, and interest in, single-case design (SCD) as a strong experimental-design option. The report addresses when to use an SCD, the different forms it can take, its ability to yield causal inference, analysis issues, and how to calculate, use, and report on effect sizes. This paper adds a tool to the repertoire of SCD researchers. Those who review literatures to identify what works (SCD researchers, other academic scholars, research firms, or governmental bodies interested in effective interventions) will benefit from statistical recommendations about effect-size calculations and their use in meta-analysis, and from pertinent examples that encourage the inclusion of SCD studies in evidence-based practice reviews. Suggestions for future directions are useful for those in policy positions with resources and an interest in improving the use of SCDs both as a method in their own right and as one of many methods that can contribute to knowledge of what works. Finally, the report can help scholars with a statistical and methodological bent to tackle the theoretical issues that the analysis and meta-analysis of SCD research raise. 2015 Link to Resource Methods Report 109 pages R False True False False False False False False True False False False False False False False False False True True True False False False False False False Effect Size, Case Studies, Research Design, Observation, Inferences, Computation, Computer Software, Meta Analysis, Ruling Out Thousand Oaks shed light Mean Difference Sir Ronald Van den Noortgate Heterogeneity Testing Pivotal Response effect case size sizes scd studies treatment analysis scds 3 False False False False True False False False False
233 The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research William R. Shadish, Larry V. Hedges, Robert H. Horner, Samuel L. Odom IES The National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures—indices that put results on a scale with the same meaning across studies—for statistical analyses. Included in this paper are the authors’ recommendations for how SCD researchers can calculate and report on standardized between-case effect sizes, the way in which these effect sizes can be used by various audiences (including policymakers) to interpret findings, and how they can be used across studies to summarize the evidence base for education practices. 2016 Link to Resource Methods Report 109 pages R False True True False False False False False False False False False False False True False True True False False True True False False False False False Education Research Thomas, Special Education Research Joan McLaughlin, University of California, Institute of Education Sciences Ruth Neild, Northwestern University, University of Oregon, National Center, University of North Carolina, report, Department of Education Arne Duncan, Summarizing Single-Case Research, single-case design research, Case Effect, Project Officer, Meredith Larson, Phill Gagné, Kimberley Sprague, public domain, Deputy Director, Shadish, use of effect sizes, Odom, Commissioner, Chapel Hill, Policy, Interpreting, Robert, Horner, Larry, Hedges, paper, William, Merced, Samuel, Contract ED-IES, Conducting, Delegated Duties, Authors, views, Brock, Secretary, positions, NCSER, NCER, Westat, Role, opinions, December, Disclaimer, Authorization 3 False False False False True False False False False
234 The Role of Review Protocols and Other Review Issues: Module 3, Chapter 3, WWC Training What Works Clearinghouse Institute of Education Sciences This video discusses in detail how to determine which variables must show baseline equivalence on the analytic sample. Most studies must demonstrate baseline equivalence on outcome variables. However, when the outcome variable is impossible to collect at baseline, WWC provides content area ‘review protocols’ that provide acceptable proxies. The video also discusses why propensity score matching cannot be used to establish baseline equivalence. It also addresses when imputed baseline data may or may not be acceptable. 2016 Link to Resource Video 13 min R True False False False True True True False False True False False False True False True False False False False False False False False False False False None 2 False False False False False True False False False
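For orientation, the sketch below is an editorial addition (not from the video) that computes a pooled-standard-deviation standardized difference on a baseline measure for the analytic sample; this is the kind of statistic to which baseline-equivalence thresholds are applied, though the WWC's exact formulas (small-sample corrections, the dichotomous-outcome index) are not reproduced here.

    import numpy as np

    def standardized_baseline_difference(treat_scores, comp_scores):
        # Difference in baseline means divided by the pooled standard deviation
        treat_scores = np.asarray(treat_scores, dtype=float)
        comp_scores = np.asarray(comp_scores, dtype=float)
        n_t, n_c = len(treat_scores), len(comp_scores)
        pooled_sd = np.sqrt(((n_t - 1) * treat_scores.var(ddof=1)
                             + (n_c - 1) * comp_scores.var(ddof=1)) / (n_t + n_c - 2))
        return (treat_scores.mean() - comp_scores.mean()) / pooled_sd

    # Hypothetical analytic-sample pretest scores for treatment and comparison groups
    rng = np.random.default_rng(1)
    diff = standardized_baseline_difference(rng.normal(0.1, 1, 150), rng.normal(0.0, 1, 150))
    print(round(diff, 3))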
235 The Texas Teacher Evaluation and Support System Rubric: Properties and Association With School Characteristics Lazarev, V.; Newman, D.; Nguyen, T.; Lin, L.; Zacamy, J. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southwest developed this report to describe the properties of the Texas Teacher Evaluation and Support System rubric and its association with school characteristics. Two sections of the report are relevant. The first one is study limitations on pages 11 and 12, which discusses collecting data during early stages of implementation, small sample size, single teacher observations, and school- vs. classroom-level analyses. The other is data and methodology in Appendix C where authors describe how they combined datasets and the approach they used for each research question, including descriptive statistics, correlations, factor analysis, and statistical tests. These sections are a valuable resource for program directors and evaluators who want to create or evaluate teacher evaluation rubrics and especially determine whether they truly differentiate among teachers. Appendix C will be useful to evaluators working with multiple datasets. 2015 Link to Resource Methods report Tool 6 pages R False False False True True False False False False False False False False False True False False True True False False False False False True True False Schools: Characteristics of, Teachers: Characteristics of, Performance Evaluations, Quality of Instruction 2 True False False True True False False False False
236 The Utility of Teacher and Student Surveys in Principal Evaluations: An Empirical Investigation Liu, K.; Springer, J.; Stuit, D.; Lindsay, J.; Wan, Y. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Midwest developed this report to describe the findings from a study of a district’s principal evaluation model and the relationships between component measures and student growth. The relevant sections of the report are: the analyses appendix (C) and the Tripod survey tool (appendix E). Appendix C reports on supplemental analyses: the reliability and validity of the principal evaluation instrument, additional correlations, and power analysis. Appendix E lists the publicly-released Tripod Student Perception Survey items. These sections are a valuable resource for program directors who want to better gauge the quality of their agency’s educator evaluation system. In particular, they provide a way to assess how well evaluation components predict student growth. 2014 Link to Resource Methods Report Tool 50 pages R False False False True True False False False False False True False False False False True True False True True False False True True True True False Evaluation, Principals, Students, Student Evaluation, Teachers 3 True False False True True False False False False
237 The What Works Clearinghouse: Connecting research and practice to improve education What Works Clearinghouse What Works Clearinghouse This guide goes into further depth on what the WWC does. It features links to WWC publications and resources, such as intervention reports, practice guides, and reviews. The guide also provides answers to frequently asked questions and information about the WWC review process. 2017 Link to Resource Guide Brief or Summary 2 P True False False False True False False False False False False False False False False False False False False False True False False False False False False WWC 1 False True True False False False False False True
238 The What Works Clearinghouse: New Strategies to Support Non-Researchers in Using Rigorous Research in Education Decision-Making Seftor, N.; Monahan, S.; McCutcheon, A. SREE Spring 2016 Conference Abstract Study authors developed this brief (SREE abstract) to raise researchers’ awareness of What Works Clearinghouse (WWC) activities to communicate findings of rigorous research to a wide audience in clear and engaging ways. The brief focuses on three tools the WWC publishes: briefs that explain the rules the WWC uses to assess the quality of studies, a database of intervention reports, and practice guides and accompanying materials that describe low-cost, actionable recommendations to address common challenges, such as writing or algebra instruction. This brief is a valuable resource for researchers and evaluators who want to leverage rigorous research to guide and support their work and want to access these resources easily. 2016 Link to Resource Guide 7 pages P False False False False True False False False False False False False False False False False False False False False False False False False False False False Clearinghouses, Federal Programs, Educational Research, Decision Making, Evidence Based Practice, Information Dissemination, Research and Development, Theory Practice Relationship 2 False False True False False False False False True
239 The WWC Outcome Standard: Module 5, Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video begins by providing WWC’s definition of an outcome, outcome measure, and outcome domain, and then provides examples of how these three terms are used differently. The video then briefly reviews four criteria the WWC uses to rate an outcome measure: (1) demonstrate face validity, (2) demonstrate reliability, (3) avoid overalignment, and (4) use consistent data collection procedures. 2016 Link to Resource Video 3 min P True False False False True False False False False False False False False False False True False False False False False False False False False False False None 1 False False False False False True False False False
240 Theoretical and Empirical Underpinnings of the What Works Clearinghouse Attrition Standard for Randomized Controlled Trials Deke, J.; Chiang, H. SREE Spring 2014 Conference Abstract Study authors developed this brief (SREE abstract) to explain the WWC attrition model, the process for selecting key parameter values for that model, how the model informed the development of the WWC attrition standard, and how the model can be used to develop attrition standards tailored to other substantive areas. The brief focuses on the model and on how it can be used to estimate bias and to determine whether differential attrition rates exceed a maximum tolerable level. 2014 Link to Resource Brief Methods report 9 pages R True False False False False True False False False False False False True True False False False False False True False False False False False False False Attrition (Research Studies), Student Attrition, Randomized Controlled Trials, Standards, Evidence, Models, Statistical Bias, Error of Measurement, Intervention, Educational Research 2 False True False False True False False False False
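The brief's model takes overall and differential attrition as inputs; the sketch below is an editorial addition (with hypothetical counts) showing how those two rates are computed from assigned and analyzed sample sizes, without reproducing the WWC's tolerable-bias boundary itself.

    def attrition_rates(n_treat_assigned, n_treat_analyzed,
                        n_comp_assigned, n_comp_analyzed):
        # Overall attrition: share of all randomized units missing from the analysis.
        # Differential attrition: gap between treatment and comparison attrition rates.
        treat_attrition = 1 - n_treat_analyzed / n_treat_assigned
        comp_attrition = 1 - n_comp_analyzed / n_comp_assigned
        overall = 1 - (n_treat_analyzed + n_comp_analyzed) / (n_treat_assigned + n_comp_assigned)
        differential = abs(treat_attrition - comp_attrition)
        return overall, differential

    # Hypothetical example: 500 assigned per arm; 430 and 465 remain in the analysis
    overall, differential = attrition_rates(500, 430, 500, 465)
    print(f"overall = {overall:.1%}, differential = {differential:.1%}")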
241 Tips and Tricks for Successful Research Recruitment: A Toolkit for a Community-Based Approach Kubicek, K.; Robles, M. Southern California Clinical and Translational Science Institute This toolkit provides an array of advice on how to recruit participants. Even though the main focus is on medical trials, many tips can be applied to educational research as well. The tool discusses the role of partners in recruitment, offers tips on how to successfully communicate your study, and presents strategies to help you address common recruitment challenges. 2016 Link to Resource Tool Guide 30 P False False False False False False False False False True False True False False False False False False False False False False False False False False False Recruiting participants; Recruitment; Digital platforms; compensation 3 False False True True False False False False True
242 Toolkit for a Workshop on Building a Culture of Data Use Gerzon, N.; Guckenburg, S. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Northeast & Islands developed this guide to help practitioners foster a culture of data use in districts and schools. It is grounded in a conceptual framework that draws on five research-based elements known to support an effective culture of data use: participating in the flow of information for data use; communicating professional expectations for data use; providing resources and assistance to make meaning from data; providing professional development on data-use knowledge and skills; and providing leadership to nurture a culture of data use. The guide provides a set of structured activities for facilitators to undertake during a workshop, guiding ideas to scaffold participant learning, and suggestions for participant activities. The facilitator’s guide includes instructions for use and preparation, an agenda for a one-day professional development session (or a series of shorter sessions), and slides with guidance for each of the 11 components of the workshop(s). The handouts offer research reviews, vignettes, tools, and resources that highlight effective practices in each of the five framework elements. This guide is a valuable resource for program directors who may want to implement or improve on data use in their setting. The guide is also useful for evaluators who want to add important elements related to data use to logic models including activities and indicators. 2015 Link to Resource Guide Tool 129 P False False False False True False False False False False False False False False True False False False False False False False False False False False False None 3 False False True True False False False False False
243 Tools For Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching Archer, J.; Cantrell, S.; Holtzman, S.; Joe, J.; Tocci, C.; Wood, J. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. The guide focuses on creating and using pre-scored videos of teaching practice to build a common understanding of what teaching looks like at particular levels of effectiveness. The guide describes how to identify and acquire appropriate videos; how to recruit and prepare coders; and how to monitor for continuous improvement. Each section provides guiding questions on how to lay the foundation and how to improve. The guide also includes a planning worksheet. This guide is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Guide Tool 41 pages P False False False True True False False False False False False True False False False False False False False False False False False False False False False None 3 False False True True False False False False False
244 Tools For Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching – DCPS Advice for Facilitating Reconciliation Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this tool to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Videos of teaching practice are a tool used to train observers. Master observers pre-score videos with a rubric. This tool is a script for reaching agreement on correct ratings when pre-scoring videos of teaching practice. This tool is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Tool 3 pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False True False False False False False
245 Tools for Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching – DCPS Guidelines for Score Rationale Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Videos of teaching practice are a tool used to train observers. Master observers pre-score videos with a rubric. Clear rationales for ratings help observers in training understand what a rubric’s indicators of performance look like in practice, and it lets them compare their own attempts to identify relevant evidence with the correct evidence for rating a segment. This tool is a set of guidelines and specific examples to draft quality rationales when pre-scoring videos of teaching with a rubric. This tool is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Guide 3 pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False None 1 False False True False False False False False False
246 Tools for Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching – DCPS Video Quality Checklist Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this tool to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Videos of teaching practice are a tool used to train observers. Master observers pre-score videos with a rubric and observers in training compare their own attempts to the master observers’. This tool is a checklist that helps determine whether a video includes sufficient evidence to rate practice and should therefore be pre-scored and used in the training of observers. This tool is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Tool 2 pages P False False False True True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False True False False False False False
247 Tools For Making it Real: Pre-Scoring Video to Clarify Expectations for Effective Teaching – RIFT Master Coding Worksheet Measures of Effective Teaching (MET) project Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this tool to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Videos of teaching practice are a tool used to train observers. Master observers pre-score videos with a rubric. This tool is a form for coders to organize relevant evidence for rating while pre-scoring videos of teaching. This tool is a valuable resource for program directors who want to design and implement a high quality observation system. 2014 Link to Resource Tool 2 pages P False False False True True False False False False False False False False False False False True False False False False False False False False False False None 1 False False False True False False False False False
248 Toward School Districts Conducting Their Own Rigorous Program Evaluations: Final Report on the “Low Cost Experiments to Support Local School District Decisions” Project Newman, D. Empirical Education Inc. Empirical Education Inc. developed this report to help school systems increase the efficiency and lower the cost of randomized experiments. The report focuses on how to design and conduct a randomized experiment, on school system decision-making processes, and on the challenges to using research findings. This report is targeted to three audiences: “1) educators with little training or interest in research methodologies but who are in decision-making positions, such as senior central office administrators, 2) district research managers with some, but often rusty or outdated, understanding of research methodology and statistical analysis, and 3) technical experts who may be called upon to review the evidence.” (p. 2) 2008 Link to Resource Methods Report 47 pages R False False True False False True False False False True True True False False True False True False True False True False False False False False False Bay Area Child Left Behind Paired Randomization Professional Development research district program decision evidence school education experiment report 3 False False False False True False False False False
249 Understanding Program Monitoring: The Relationships among Outcomes, Indicators, Measures, and Targets. REL 2014-011 Malone, N.; Mark, L.; Narayan, K. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Pacific developed this guide to help educators, program managers, administrators, and researchers monitor program outcomes more effectively. The guide provides concise definitions of program monitoring components, explains how these components relate to one another, and illustrates a logic model framework for assessing the progress of a program. The guide demonstrates through examples the relationships between outcomes, indicators, measures, benchmarks, baselines, and targets. This guide is a useful resource for educators who want to identify the achievement of program goals. This guide is one of a four-part series on program planning and monitoring. 2014 Link to Resource Guide 5 Pages P False False False False True False False False False False False False False False False True False False False False False False False False False False False program components, program monitoring framework, program managers, monitoring program outcomes, Understanding program monitoring, program evaluation, definitions of program monitoring components, assessing program progress, Mid-continent Research, measures, educators, indicators, better data-informed decisions, Education, effectiveness, Krishna Narayan, Island Innovation, LLC, better monitor, researchers, resource, baselines, building capacity, targets, including benchmarks, Learning, administrators, crucial step, guide, relationships, Nolan Malone, Lauren Mark, means, practitioners, ongoing plan, programs, Examples, following, Policymakers 1 False False True False False False False False False
251 Understanding Variation in Treatment Effects in Education Impact Evaluations: An Overview of Quantitative Methods Schochet, P. Z.; Puma, M.; & Deke, J. Institute of Education Sciences The Institute of Education Sciences developed this report to help education researchers evaluate programs using randomized controlled trials or quasi-experimental designs with treatment and comparison groups. This report summarizes the research literature on quantitative methods for assessing the impacts of educational interventions on instructional practices and student learning and provides technical guidance about the use and interpretation of these methods. This report is a valuable resource for education researchers with an intermediate to advanced knowledge of quantitative research methods who could benefit from summary information and references to recent papers on quantitative methods. The report examines variations in treatment effects across students, educators, and sites in education evaluations. 2014 Link to Resource Methods Report 50 pages R False False False False False True True True False False False False False False False False False False True True False False False False False False False National Center Working Paper intervention Mathematica Policy take-up exclusion restriction Regional Assistance U.S. Department job training treatment subscript effects intervention methods example subgroup control group student 3 False False False False True False False False False
252 Using Administrative Data for Research: A Companion Guide to “A Descriptive Analysis of the Principal Workforce in Florida Schools.” REL 2015-049 Folsom, J.S.; Osborne-Lampkin, L.; Herrington, C. D. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed this guide to help professionals extract and describe information from a State Department of Education database. The document describes the process of data cleaning, merging, and analysis, including: identifying and requesting data, creating functional datasets, merging datasets, and analyzing data. Although focused on the demographic composition, certifications, and career paths of Florida school leaders, the processes described in this guide can be modified to explore other education datasets. The document also provides an example of a data request and data dictionary. This guide will be useful to anyone using administrative datasets to create summary statistics and charts or to conduct other analyses from raw administrative data. 2014 Link to Resource Guide 38 pages P False False False True True False False False False False False False False False True True False True True True False True False False False True False Education Evaluation, Department of Education, Regional Assistance, Institute of Education Sciences, Reading Research, results of research, research-based technical assistance, National Center, Regional Educational Laboratory Southeast, publication, unbiased large-scale evaluations of education programs, Florida schools, commercial products, Osborne-Lampkin, REL report, Florida State University, policies of IES, ED-IES, principal workforce, Using administrative data, mention of trade names, descriptive analysis, organizations, policymakers, practices, educators, widespread dissemination, content, permission, companion guide, endorsement, Herrington, Folsom, synthesis, public domain, DC, NCEE, funds, United States, Contract, Washington, views, December, Government, edlabs 3 True False True False False False False False False
253 Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates Fortson, K.; Verbitsky-Savitz, N.; Kopa, E.; Gleason, P. Institute of Education Sciences This National Center for Education Evaluation and Regional Assistance (NCEE) report estimates the impacts of charter schools using the four common comparison group methods and assesses whether these nonexperimental impact estimates are the same as the experimental impact estimates. The authors find that the use of pre-intervention baseline data that are strongly predictive of the key outcome measures considerably reduces but might not completely eliminate bias. Regression-based nonexperimental impact estimates are significantly different from experimental impact estimates, though the magnitude of difference is modest. 2012 Link to Resource Methods Report 68 pages R False True True False False True True False False True False False True True False False False False True True False False True False False False False DEPARTMENT OF EDUCATION Emma Kopa Natalya Verbitsky-Savitz ad justed experimental Supplemental Tables Working Paper group comparison treatment students group test experimental nonexperimental impact baseline estimates 3 True False False False True False False False False
254 Using Classroom Observations to Measure Teacher Effectiveness Regional Education Laboratory Program Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Mid-Atlantic developed this webinar to help participants use classroom observations as one measure of teacher effectiveness in a comprehensive educator support system. The featured presentation (about 0:04:56 to 1:15:00) begins with an overview of the context and general principles for educator evaluations (until about 21:00) and then goes into the effective practice of observations and professional conversations. This video will be helpful to stakeholders interested in putting in place, or improving upon, classroom observations for evaluation purposes and increased professional learning. 2016 Link to Resource Video 1 hour 42 min P False False False True False False False False False False False False False False False False True False False False True False False False False False False None 3 False False False False False True False False False
255 Using Cost Analysis to Inform Decisions about Professional Learning REL Northeast and Islands Regional Education Laboratory Program, Institute of Education Sciences The Institute of Education Sciences developed this webinar to introduce professionals to cost analyses in education research with a focus on the costs of professional development offerings. The goals for the webinar are to demonstrate ways to estimate resource requirements and costs of implementing educational strategies, the questions different cost analyses can answer and with what data, and how a district or school can use these analyses to inform resource allocation decisions. An introduction to cost analysis and its purposes begins at 4:03. Two examples are presented starting at 7:29—class size reduction and Read 180. The importance of cost analyses for professional learning starts at 10:16, and addresses which costs to include (district, school, teachers, taxpayers, state education department). At 15:28, the webinar describes CostOut, a free online toolkit to estimate the cost and the cost-effectiveness of educational programs. A description of four types of cost analyses begins at 16:41: cost feasibility, cost effectiveness, cost benefit, and cost utility. Costs to consider in the context of professional learning are discussed at 30:52. At 38:44, a case study in Maine includes a description of cost analyses considering various professional development offerings and lessons learned (65:42). 2018 Link to Resource Webinar Guide 86:37 P False False False True True False False False False False False False False False True True True True True False True False True False True True False Educator Development, Any Education Topic, Initial Planning, Acquiring and Using Administrative Data, Selecting Appropriate Outcome Measures, Collecting New Data, Combining Data Systems, Understanding Data Analytic Models, Reporting, Interpreting, and Using Findings, Student Achievement measure, Teacher Measure, Principal/School Measure 3 True False True False False False True False True
256 Using Matching Methods: A Practical Guide to Minimizing Differences between Groups in Quasi-Experimental Designs Litwok, D.; Wolf, A.; Price, C.; Unlu, F. Abt Associates This set of resources offers practical guidance to evaluators interested in using matching methods to improve estimates of intervention impacts. The zip file includes a 39-page guide that explains why a matched comparison group should be used and what makes a comparison group well matched. It describes the four steps for implementing a matching process to create equivalent treatment and comparison groups in a quasi-experimental evaluation: selecting the matching variables, choosing the distance metric and matching method, assessing baseline equivalence, and adjusting the match. It goes through five applications of propensity-score matching and two examples of Mahalanobis matching, and compares methods. The zip file includes a dataset and SAS and Stata programs that can be used to implement the matching techniques described in the guide. (Note: the Length field is the number of pages in the guide) 2017 Link to Resource Guide Tool 39 R True False False False True False True False False True True False False True False False False False False False False False True False False False False QED Tools and Resources, Guidance 3 True False True True False False False False False
257 Using Student Surveys to Monitor Teacher Effectiveness Regional Education Laboratory Program Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Mid-Atlantic developed this webinar to give participants access to a student survey to monitor teacher effectiveness. The featured presentation (from about 0:08:00 to 1:35:00) focuses on the Tripod survey, a student survey about teaching practices, student engagement, and school climate. At its core are the 7 Cs (caring, captivating, conferring, clarifying, consolidating, challenging, and controlling), a set of teaching practices that peer-reviewed research has linked to student engagement (effort and behavior) and achievement (gains on standardized tests). The video explores each measure using survey examples, results from the field for various student groups, and graphical representations. It discusses an example of the survey’s use in the Measures of Effective Teaching project, which examined how evaluation methods could best be used to identify and develop great teaching. This video will be helpful to schools and districts interested in using student surveys to improve teaching and learning. 2016 Link to Resource Webinar 1 hour 55 min P False False False True True False False False False False False False False False False False True False False False True True False True True False False None 3 True False False False False False True False False
258 Using the WWC to Support Data Collection for Meta-Analyses Polanin, J.; Scher, L. What Works Clearinghouse The What Works Clearinghouse (WWC) developed this webinar to help researchers conduct meta-analyses – the webinar assumes basic familiarity with meta-analysis and the WWC website. The webinar begins with a brief overview of meta-analysis and systematic reviews, differentiating them from other research; it then describes the key steps involved in conducting a meta-analysis: formulating a research question, developing a protocol, searching the literature, gathering study-specific information, preparing a meta-analytic dataset, and analyzing and reporting results. The webinar describes how researchers can use WWC resources to support their meta-analytic review efforts; how to export study-specific details from the WWC’s individual studies database, including data on effect sizes that may not be available in published study reports; how to extract information from the database files and conduct a meta-analysis using R, a free statistical software package. It also points to researchers and software that could be resources for those interested in single case designs in the meta-analysis context (this starts at 71:18). 2017 Link to Resource Webinar Methods report 01:19:00 R False False False False True False False False True False True False False False True True False True False False True False False False False False False None 3 False False False False True False True False False
259 Validity and Reliability of a Questionnaire on Primary and Secondary School Teachers’ Perception of Teaching a Competence-Based Curriculum Model García, L.; Gutiérrez, D.; Pastor, J.; Romo, V. Journal of New Approaches in Educational Research The authors developed this paper to help researchers develop a valid and reliable questionnaire to measure teacher perception, in this case of a competence-based curriculum model. The article describes the sample and how it was recruited; how the questionnaire was drafted based on a review of the literature and an expert panel; how it was administered; and how it was finalized using exploratory and confirmatory factor analysis, resulting in a rapidly and easily administered scale with good criterion validity. This paper is a valuable resource for program evaluators who want to develop quality questionnaires or those who are looking for a tool to measure teacher perceptions of a competence-based curriculum. 2018 Link to Resource Methods report Tool 6 R False False False False True False False False False False False False False False False True True False False False False False False False True False False Validity, Reliability, Questionnaire, Teacher, Competence 2 True False False True True False False False False
260 Ways to Evaluate the Success of Your Teacher Incentive Fund Project in Meeting TIF Goals Milanowski, A.; Finster, M. U.S. Department of Education, Teacher Incentive Fund The U.S. Department of Education developed this guide to help Teacher Incentive Fund (TIF) grantees collect evidence about how well they are accomplishing TIF goals (improving student achievement by increasing teacher and principal effectiveness; reforming teacher and principal compensation systems so that teachers and principals are rewarded for increases in student achievement; increasing the number of effective teachers teaching poor, minority, and disadvantaged students in hard-to-staff subjects, and creating sustainable performance-based compensation systems). The document provides examples of charts that users can create to visualize trends and comparisons. It also suggests survey questions that can be added to existing surveys to get at stakeholder perceptions of the program. It provides links to additional resources on program evaluation available on the TIF Community of Practice website. This guide will be useful to program staff who want to track progress toward program goals. However, the methods described in the guide will not provide definitive evidence to measure the program’s impact on outcomes. 2016 Link to Resource Tool Guide 12 pages P False False True True True False False False False False False False False False False True True False False False True True True False True False False Program Evaluation, Incentives, Grants, Compensation (Remuneration), Program Effectiveness, Academic Achievement, Sustainability, Teacher Effectiveness, Equal Education, Management Systems, Human Capital, Teachers, Principals, Evaluation Methods 2 True False True True False False False False False
261 We are Better Together: Researchers & Educators Partner to Improve Students’ Math Skills REL Midwest Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Midwest developed this podcast to help practitioners use quick turnaround research to get feedback on their improvement efforts in a timely manner. This podcast describes a continuous improvement process implemented through a networked improvement community – a group of individuals who are committed to working on the same problem of practice. It describes an example from a network of Michigan schools. This podcast will be helpful to practitioners and researchers interested in implementing and collaborating on a continuous improvement process and wanting an overview and a real-life example. 2016 Link to Resource Video Guide 18 min P False True False False True False False False False False False False False False False False False False False False False False False False False False False None 2 False False True False False True False False True
262 Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 4 of 5: Going Deeper into Analyzing Results REL Southeast Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This fourth webinar mainly focuses on the considerations for quantitative and qualitative analyses. For quantitative analysis, the webinar focuses on calculating attrition, calculating baseline equivalence, and statistical adjustments. For qualitative analysis, the webinar focuses on triangulating data by using multiple methods of data collection on the same question, using multiple interviewers to avoid the biases of individual data collectors working alone, and using multiple perspectives to interpret a set of data. 2016 Link to Resource Webinar 31 slides, 38 minutes P True False False True False True True False False False False False True True False True False False True True False False False False False False False observed data/number without observed strong studies estimated effects evidence standards must have have low statistical significance studies that baseline characteristics data attrition wwc ncee professional development 3 False False False False False False True False False
263 Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 1 of 5: Critical Elements of Professional Development Planning & Evaluation REL Southeast Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This first webinar introduces practitioners to What Works Clearinghouse standards for randomized controlled trials and quasi-experimental designs. It also provides recommendations so practitioners have the resources necessary to design an evaluation aligned to best practices for making causal conclusions. 2016 Link to Resource Webinar Slide presentation 35 slides, 62 minutes P True False False False False True True False False True False False True True False False False False True True True False False False False False False Development Planning Guide for Study Authors Logic Model Professional Development Reporting Guide for a Logic Barbara Foorman study # WWC Reporting professional development wwc http gov study ies ncee sid groups design 3 False False False False False False True True False
264 Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 2 of 5: Going Deeper into Planning the Design REL Southeast Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This second webinar builds on the first in the series and provides an in-depth discussion of the critical features of randomized controlled trials and quasi-experimental designs, including the random assignment process, attrition, baseline equivalence, confounding factors, outcome eligibility, and power analysis. 2016 Link to Resource Webinar Slide presentation 22 slides, 49 minutes P True False False False False True True False False True False False True True False True False False True True True False False False False False False Cluster-level RCT professional development QED 3 False False False False False False True True False
265 Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 3 of 5: Going Deeper into Identifying & Measuring Target Outcomes REL Southeast Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This third webinar mainly focuses on identifying evaluation outcomes and describes an example from Mississippi which created two measures – a classroom observation tool, and a teacher knowledge measure – to evaluate professional development around a literacy initiative. The third webinar also discusses participant recruitment. 2016 Link to Resource Webinar Slide presentation 20 slides, 49 minutes P False False False True False False True False False True False True False False False True True False False False False False False False True False False items test reliability may measure teachers tool schools professional study professional development teacher knowledge initiative target schools point biserial the assessment coaches observation tool target state 3 True False False False False False True True False
266 Webinar Series on Developing a Successful Professional Development Program Evaluation: Webinar 5 of 5: Going Deeper into Interpreting Results & Presenting Findings REL Southeast Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Southeast developed a series of five recorded webinars with materials designed to help professionals rigorously evaluate professional-development programs. The series focuses on key aspects of the evaluation process, including planning and design, identifying and measuring target outcomes, analyzing results, interpreting results, and presenting findings. This series is a valuable resource for program directors who want to understand how to design and implement a high-quality evaluation. The series is also useful for seasoned evaluators who want to review important evaluation issues. This fifth webinar mainly focuses on interpreting results, especially using effect sizes and statistical significance. It also briefly addresses presenting findings to different groups of practitioners and mentions a REL Northeast and Islands resource for those interested in more detail on this topic. 2016 Link to Resource Webinar Slide Presentation 15 slides, 24 minutes P True False False True False True True True False False True False False False False True False False False False True False False False False False False Districts Working External Researchers Islands Toolkit REL Northeast Results Externally Disseminating Research Externally http Module 5 Research Results Working with professional development effects external student positive wwc results working effect research researchers 3 False False False False False False True True False
267 Webinar: Examining Evaluator Feedback in Teacher Evaluation Systems Cherasaro, T.; Martin, T. Regional Education Laboratory Program, Institute of Education Sciences The Regional Education Laboratory (REL) Central developed this webinar to give professionals access to a tool to assess the quality of evaluator feedback in a teacher evaluation context. Presenters describe the survey, guiding research questions, the underlying theoretical model, and definitions. They provide an overview of the iterative steps in the survey development process, including selecting an advisory panel, testing the survey with teacher cognitive interviews, examining the reliability and validity of the survey, piloting it, and arriving at a final version. They go over sample research questions the survey can help address and how to analyze data. They also provide links to pdf, Word, and Google versions of the survey. Finally, they provide a sample data display and explain how to interpret it. This video will be useful more generally to those interested in creating their own evaluator feedback survey or even surveys in other areas. 2016 Link to Resource Webinar 28 min P False False False True False False False False False False False False False False False False True False False False True True False False True False False None 3 True False False False False False True False True
268 What is the WWC? What Works Clearinghouse What Works Clearinghouse This one-page brief provides a quick overview of the what, why, who, how, and where of the What Works Clearinghouse, the U.S. Department of Education’s trusted source for rigorous research in education. 2017 Link to Resource Brief or summary guide 1 P True False False False True False False False False False False False False False False False False False False False True False False False False False False WWC 1 False True True False False False False False True
269 What It Looks Like: Master Coding Videos for Observer Training and Assessment — Summary McClellan, C. Bill and Melinda Gates Foundation The Bill and Melinda Gates Foundation developed this guide to help professionals design and implement high-quality observation systems as part of educator evaluation initiatives. Observers must know what it looks like when a particular aspect of teaching is demonstrated at a particular level. Master-coded videos—videos of teachers engaged in classroom instruction that have been assigned correct scores by professionals with expertise in both the rubric and teaching practice—help ensure this competency. The guide describes the goals of master coding and the recruitment and training of coders. This guide is a valuable resource for program directors who want to design and implement a high quality observation system. 2013 Link to Resource Guide 4 pages P False False False True True False False False False False False True False False False False True False False False False False False False False False False None 1 False False True False False False False False False
270 What Works Clearinghouse Procedures and Standards Handbook Version 3.0 What Works Clearinghouse Institute of Education Sciences The Institute of Education Sciences developed this guide to help researchers design studies that meet the standards to claim causal inference. The guide focuses on randomized controlled trials and quasi experimental designs (it also briefly introduces standards for regression discontinuity and single case designs). The guide defines the basic steps used to develop a review protocol, identify the relevant literature, assess research quality, and summarize evidence of effectiveness. It also briefly describes the set of products that summarize the results. This guide is a valuable resource for researchers and practitioners who want to understand why studies meet or do not meet the standards and why education programs, policies, and practices are deemed effective or not. 2014 Link to Resource Guide 91 pages R True False False False False True True False False True True False True True False True False False False True True False False False False False False peer-reviewed Learning Disabilities Pilot Single Single-Case New York Electronic Databases Works Clearinghouse full text intervention wwc study effect review studies data group level design 3 False False True False False False False False False
271 What Works Clearinghouse. Single-Case Design Technical Documentation Kratochwill, T. R.; Hitchcock, J.; Horner, R. H.; Levin, J. R.; Odom, S. L.; Rindskopf, D. M.; & Shadish, W. R. Institute of Education Sciences: What Works Clearinghouse at the U.S. Department of Education The What Works Clearinghouse (WWC) developed this guide to help researchers implement effective single-case designs (SCDs). The guide focuses on the features of SCDs, threats to internal validity, standards for SCDs, the visual analysis of the data on which SCD researchers traditionally have relied to answer research questions, and the estimation of effect sizes. This guide is a valuable resource for researchers who want to understand how to design, implement, and evaluate an SCD that meets WWC standards. 2010 Link to Resource Guide Methods Report 34 pages R True False False False False False False False True False False False True False False False False False True True False True False False False False False Projected Comparison straight line best fitting distribution theory give reasons randomized controlled Examine Observed error structures intervention data phase baseline effect design standards case designs phases 3 False False True False True False False False False
272 When Is It Possible to Conduct a Randomized Controlled Trial in Education at Reduced Cost, Using Existing Data Sources? A Brief Overview Coalition for Evidence-Based Policy Council for Excellence in Government The Council for Excellence in Government developed this guide to advise researchers, policymakers and others on when it’s possible to conduct a low-cost, high-quality randomized controlled trial (RCT) in education. The guide includes two main sections: 1) describing the conditions that enable researchers to conduct an RCT at reduced cost (e.g. high-quality administrative data and an active partnership with senior school or district officials), and 2) providing examples of well-designed RCTs conducted at a reduced cost. This guide is a valuable resource for policymakers, researchers, and others who automatically discard RCTs as an approach because they believe such studies are always too costly and too burdensome on schools to be practical. 2007 Link to Resource Guide Brief or Summary 13 pages P False False True False False True False False False False False False False False True False False False False False False False True False False False False Costs, Scores, Data, Research Design, Evaluators, Evaluation Methods, Guides, Outcomes of Education, Program Evaluation, Research Methodology, Evidence, Educational Policy, Federal Programs, Controlled Trial Randomized Controlled grade retentions struggling readers Project Officials professional development special education Mathematica Policy Solicit Rigorous Evidence-Based Policy data school trial students cost intervention randomized study measure 2 True True True False False False False False False
273 Which comparison-group (“quasi-experimental”) study designs are most likely to produce valid estimates of a program’s impact: A brief overview and sample review form Coalition for Evidence-Based Policy Coalition for Evidence-Based Policy The Coalition for Evidence-Based Policy developed this guide to help researchers select comparison-group (quasi-experimental) studies that are most likely to provide the same level of rigor as a randomized controlled trial. The guide describes examples of these studies and how to identify comparison groups in a way that increases the likelihood of producing valid results. It identifies regression-discontinuity designs as an example of approaches that can meet the required conditions for validity of results. The guide also provides a form that can be used to help determine whether a comparison-group study produces scientifically valid estimates of a program’s impact. 2014 Link to Resource Guide 7 pages P False False False False False False True True False True False False False True False False False False True True False False False False False False False Brief Overview composite rating been carried carried out controlled trials one composite give one Please give program comparison group groups study studies valid estimates methods design 2 False False True False False False False False False
274 Which Study Designs Are Capable of Producing Valid Evidence About A Program’s Effectiveness? A Brief Overview Coalition for Evidence-Based Policy Coalition for Evidence-Based Policy The Coalition for Evidence-Based Policy developed this guide to help policy officials, program providers, and researchers identify effective social programs and ways to assess a program’s effectiveness. The guide provides a brief overview of which studies (randomized controlled trials and quasi-experiments, such as matched comparison groups or regression discontinuity designs) can produce valid evidence about a program’s effectiveness. The final section identifies resources for readers seeking more detailed information or assistance. This guide is a valuable resource for policy officials, program providers, and researchers who want to understand the conditions under which different evaluation approaches are most likely to provide an accurate assessment of a program’s effectiveness. 2014 Link to Resource Guide Brief or Summary 8 pages P False False False False False True True True False True False False False True False True False False False False False False False False False False False Regional Assistance Services Task Study Designs Task Force Designs Are carried out linked here well-conducted pre-program program groups comparison studies study education evidence group rcts 2 False True True False False False False False False
275 Why Use Experiments to Evaluate Programs? Resch, A. Regional Education Laboratory Program, Institute of Education Sciences and Mathematica Policy Research The Regional Educational Laboratory (REL) Program developed a video series to help schools, districts, states, and their research partners use a cost-effective approach, known as “opportunistic experiments,” to test the effectiveness of programs. This video describes why schools, districts, states, and their research partners might want to use experiments to evaluate programs and policies, and what they can learn from them. The video will be useful to professionals considering rigorous evaluation that will generate evidence for informing their education decisions in a cost-effective manner. 2016 Link to Resource Video 6 min P False False True False False True False False False False False False False False False False False False False False False False False False False False False None 2 False False False False False True False False True
276 Workshop on Survey Methods in Education Research: Facilitator’s guide and resources Walston, J.; Redford, J.; Bhatt, M. P. Regional Education Laboratory Program, Institute of Education Sciences The Regional Educational Laboratory (REL) Midwest developed this report for practitioners who want to conduct training on developing and administering surveys. The report provides materials related to various phases of the survey development process, including planning a survey, borrowing from existing surveys, writing survey items, pretesting surveys, sampling, survey administration, maximizing response rates, and measuring nonresponse bias. It also contains a section on focus groups (as part of the survey development process or as a supplementary or alternative data collection method). The materials include a sample workshop agenda, presentation handouts, activities, additional resources, and suggestions for adapting these materials to different contexts. This report is a valuable resource for program directors and evaluators who want to design surveys and focus groups to collect new data. 2017 Link to Resource Guide Tool 146 pages P False False False False True False False False False False True True False False False False True False False False False False False False False False False Educational Research 3 False False True True False False False False False
277 WWC Online Training, Module 6: Systematic Reviews, Part 2 What Works Clearinghouse Institute of Education Sciences This video describes how the What Works Clearinghouse (WWC) conducts systematic reviews of education research, which it conducts within a variety of topic areas. It mentions the WWC-developed standards, procedures, and review protocol, which are described in handbooks available on the WWC website. The protocol outlines additional review parameters and procedures specific to a number of topic areas of interest, including eligible outcomes, and serves as a guide for every step of the systematic review process. The video then describes the five steps of the WWC systematic review process: define the scope of the review, search the literature, assess the research, combine the findings, and summarize the review. It then presents the components of the protocols: what studies are eligible for review, how the WWC will search for them, and how it will review them. The video illustrates each step of the systematic review process through an example: the systematic review of the intervention Read 180, which is in the adolescent literacy topic area. 2018 Link to Resource Video Guide 14:47 R True False False False True False False False False False False False False False False True False False False False False False True False False False False None 2 True False True False False True False False True
278 WWC Online Training, Module 6: Systematic Reviews, Part 3 What Works Clearinghouse Institute of Education Sciences This video presents the What Works Clearinghouse’s (WWC) definition of a study, an important concept to understand since studies are the building blocks of systematic reviews. It recalls that a single manuscript can include one study or more and that multiple manuscripts might all report on the same study. It explains when analyses of the same intervention share enough (at least three) characteristics for the WWC to consider them parts of the same study. These study characteristics include the study sample, the assignment or selection process used to create the intervention and comparison groups, the data collection and analysis procedures, and study findings. It provides a couple of examples and a knowledge check. It concludes by explaining how ratings are finalized when reviewers initially disagree on them. 2018 Link to Resource Video 6:36 R True False False False True False False False False False False False False False False False False False False False False False False False False False False None 2 False False False False False True False False False
279 WWC Online Training, Module 7: Reporting, Part 2 What Works Clearinghouse Institute of Education Sciences This video describes the key elements for reporting intervention effectiveness findings stemming from the What Works Clearinghouse (WWC) systematic review process. For findings that meet WWC group design standards with or without reservations, these elements are three measures of the magnitude of findings: a mean difference, an effect size, and an improvement index. The video refers the viewer to the WWC study review guide and the WWC procedures handbook for more information and calculations. The WWC also reports the statistical significance of the findings and the video provides an example from a reading recovery intervention report. 2018 Link to Resource Video Guide 7:25 R True False False False True True True False False False False False False False False False False False False False True False True False False False False None 2 True False True False False True False False False
280 WWC Online Training, Module 7: Reporting, Part 3 What Works Clearinghouse Institute of Education Sciences This video explains how to report on findings from within studies that meet the eligibility requirements specified in the review protocol and meet What Works Clearinghouse (WWC) design standards with or without reservations. It describes how to report findings for studies that include multiple findings, for the same outcome measure on different samples, at different points in time, or from different measurement approaches, or for multiple outcomes within a single outcome domain. It explains how decisions are made regarding which findings are main and which are supplemental (e.g., based on timing or on full-sample vs. subgroup findings) and how such decisions are made in expedited reviews. It describes individual finding ratings vs. full-study ratings. Finally, it presents the distinction between statistically significant and substantively important positive effects. The video concludes with an example from a reading recovery intervention report. 2018 Link to Resource Video Guide 13:25 R True False False False True True True False False False False False False False False False False False False False True False True False False False False None 2 True False True False False True False False False
281 WWC Online Training, Module 7: Reporting, Part 4 What Works Clearinghouse Institute of Education Sciences This video describes the statistical adjustments made during a What Works Clearinghouse (WWC) study review. There are three situations in which the WWC will adjust study findings rather than reporting measures of magnitude and statistical significance from a study or calculated based on statistics reported in a study. Two of these post hoc statistical adjustments affect the statistical significance of the reported results. First, the WWC applies an adjustment to studies that assign clusters (that is, sets of individuals assigned as a group to a condition) if the study authors did not perform their own adjustment. Second, the WWC applies an adjustment when a group design study makes multiple comparisons. The last adjustment, a difference-in-differences adjustment, affects the magnitude of the findings reported by the study. This adjustment accounts for differences that existed between the intervention and comparison groups prior to implementing the intervention and is applied when the authors did not perform their own adjustment for baseline differences. The video discusses these adjustments, including when and how the WWC makes them, and uses an example to walk through each step of the process. 2018 Link to Resource Video Guide 14:21 R True False False False True True True False False False False False False False False False False False False True True False True False False False False None 2 True False True False False True False False False
282 WWC Online Training, Module 7: Reporting, Part 5 What Works Clearinghouse Institute of Education Sciences This video explains how the What Works Clearinghouse (WWC) characterizes the evidence of effectiveness for an intervention across studies, including individual characterizations for each outcome domain. There are six possible characterizations an intervention can receive: positive effects, potentially positive effects, no discernible effects, mixed effects, potentially negative effects, and negative effects. The video refers the viewer to the procedures handbook for detail on the criteria. The video also describes an extent of evidence rating for each domain to show how much evidence the WWC used to determine the intervention effectiveness rating, an indication of how likely the findings are to generalize to other settings. This is also called the external validity of the findings and is based on the number of studies, settings, students, and classrooms. The video displays a summary table from a reading recovery intervention report and describes each of its components. 2018 Link to Resource Video Guide 7:05 R True False False False True True True False False False False False False False False False False False False False True False True False False False False None 2 True False True False False True False False False
283 WWC Online Training, Module 7: Reporting, Part 6 What Works Clearinghouse Institute of Education Sciences This video is a quiz that checks the viewer’s knowledge of how the What Works Clearinghouse (WWC) reports findings stemming from the systematic review process of studies of intervention effectiveness that use a group design. Questions address statistical adjustments the WWC makes to reported findings under certain circumstances, statistical significance before and after adjustments, main vs. supplemental findings, and statistically significant, substantively important, and indeterminate effects. The last minute of the video summarizes a WWC module on reporting in the context of group design standards and points the viewer to additional resources on the WWC website, including detailed responses to the knowledge check. 2018 Link to Resource Video Guide 9:02 R True False False False True True True False False False False False False False False False False False False True True False True False False False False None 2 True False True False False True False False False
284 WWC Online Training, Module 9: Study Review Guide, Part 2 What Works Clearinghouse Institute of Education Sciences The relevant section of this video (up to 1:15) describes the What Works Clearinghouse (WWC) study review guide (SRG), a user-friendly tool for reviewing studies, which flags important information about studies and helps reviewers understand ratings. The video describes how the tool helps ensure the reporting of pertinent information about a study and the uniform, high-quality application of the WWC design standards and procedures to assign a study rating and report findings. It mentions the existence of two versions of the SRG, a public version and another available to contracted reviewers. They are identical in that they will lead to identical ratings, but they have different functionalities (the rest of the video provides tips on conducting a WWC study review and then gets into the details of completing the SRG). 2018 Link to Resource Video 1:15 R True False False False True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False False False True False False True
285 WWC STANDARDS Brief: Baseline Equivalence What Works Clearinghouse Institute of Education Sciences The What Works Clearinghouse (WWC) developed this brief to help program evaluators and researchers understand and determine baseline equivalence in education research. Baseline equivalence indicates how similar groups that received and did not receive an intervention were before the intervention began. The brief explains why baseline equivalence matters, provides an example with a graphical representation, and describes how the WWC determines baseline equivalence by calculating an effect size. The brief includes a glossary and suggestions for further reading. 2017 Link to Resource Guide 2 P True False False False True True False False False True False False False True False False False False False False False True False False False False False Control Groups, Experimental Groups, Intervention, Comparative Analysis, Individual Characteristics, Research Methodology, Computation, Standards, References WWC Standards, Any Education Topic, RCT, Identifying Comparison Groups, Reducing Bias in Comparison Groups, Visualizing Data 1 False False True False False False False False False
286 WWC Standards Brief: Attrition Standards What Works Clearinghouse Institute of Education Sciences Two-page WWC brief that defines attrition, identifies common causes, discusses why it matters, and describes how the WWC handles attrition when considering whether an evaluation has met WWC standards. 2017 Link to Resource brief or summary 2 P True False False False True True True False False False False False True True False False False False False False False False False False False False False WWC; Comparison Groups; attrition; bias; differential attrition; overall attrition 1 False True False False False False False False True
287 WWC Standards Brief: Confounding Factors What Works Clearinghouse Institute of Education Sciences Three-page WWC brief that defines confounding factors (present when it is not possible to tell whether the difference in outcomes is due to the intervention or to another cause), discusses why they matter, identifies the types of confounding factors that commonly occur in WWC-reviewed studies, provides examples, and details how a confounding factor can affect a study’s WWC rating. 2017 Link to Resource brief or summary 2 P True False False True True True True False False True False False False True False False False False False False False False False False False False False WWC; Comparison Groups; bias; confounding factors 1 False True False False False False False False True
288 WWC Training Logistics: Intro–Chapter 2, WWC Training What Works Clearinghouse Institute of Education Sciences This video, designed to provide logistics related to the WWC Group Design Training Modules, introduces different helpful resources that are available on the WWC website. 2016 Link to Resource video 3 min P True False False False True False False False False False False False False False False False False False False False False False False False False False False None 1 False False False False False True False False False
289 WWC Version 4.0 Procedures Handbook What Works Clearinghouse Institute of Education Sciences The Institute of Education Sciences developed this guide to help researchers understand the What Works Clearinghouse (WWC) study review process. The guide describes the steps used to develop a review protocol for a newly-prioritized topic. It explains how relevant literature is identified, how eligible studies are selected within that literature, and how they are assessed against WWC design standards. It also details the process of reporting on findings, including the magnitude and statistical significance of study-reported estimates of the effectiveness of interventions. Appendices provide additional details and examples. This guide is a companion to the WWC Version 4.0 Standards Handbook available in this database. 2017 Link to Resource Guide 76 R True False False False True True True True True False False False False False False False False False False False True False False False False False False References WWC Standards, Initial Planning, RCT, QED, RDD, SCD, Reporting, Interpreting, and Using Findings 3 False False True False False False False False False
290 WWC Version 4.0 Standards Handbook What Works Clearinghouse Institute of Education Sciences The Institute of Education Sciences developed this guide to help researchers design studies that meet the standards to claim causal inference. The first section of the guide deals with randomized controlled trials (RCTs) and quasi-experimental designs. It covers individual- and cluster-level assignment, propensity score analyses, analyses in which subjects are observed in multiple time periods, analyses with potentially endogenous covariates, analyses with missing data, and complier average causal effects (CACE), which arise when subjects do not comply with their assigned conditions in an RCT. The second section focuses on regression discontinuity designs (RDDs), including eligibility, ratings, standards for different RDDs, and reporting requirements. Single case designs are addressed in an appendix. Additional appendices dive deeper into imputed outcome and baseline data and CACE estimates. This guide is a valuable resource for researchers and practitioners who want to understand why studies meet or do not meet the standards and why education programs, policies, and practices are deemed effective or not. It is a companion to the WWC Version 4.0 Procedures Handbook available in this database. 2017 Link to Resource Guide 130 R True False False False True True True True True True False False True True False True False False True True True False False False False False False References WWC Standards, Initial Planning, RCT, QED, RDD, SCD, Identifying Comparison Groups, Addressing Changes in your Sample, Reducing Bias in Comparison Groups, Selecting Appropriate Outcome Measures, Understanding Data Analytic Models, Addressing Analysis Challenges, Reporting, Interpreting, and Using Findings 3 False False True False False False False False True

How to use these filters. As you check individual field checkboxes, only documents that satisfy all of the checked boxes will appear. The more boxes you check, the fewer documents will be shown.

Resource Characteristics

Evaluation Lifecycle

Additional Focus