Module 2B: Indicator Interaction in the State’s System of Annual Meaningful Differentiation (AMD)
Evaluating State Accountability Systems Under ESEA
This webpage is part of the Evaluating State Accountability Systems Under ESEA tool, which is designed to help state educational agency (SEA) staff reflect on how the state’s accountability system achieves its intended purposes and build confidence in the state’s accountability system design decisions and implementation activities. Please visit the tool landing page to learn more about this tool and how to navigate these modules.
Under the Elementary and Secondary Education Act of 1965 (ESEA), as amended by the Every Student Succeeds Act (ESSA), a state’s system of AMD (i.e., a state’s accountability system) must include a minimum number and certain types of indicators, which are the data and information used to measure school performance and reflect priorities within each state. The following indicators are required under ESEA:
- Academic achievement indicator, as measured by proficiency on the annual statewide reading or language arts and mathematics assessments and, at the state's discretion, a measure of student growth on those assessments for each public high school in the state.
- Other academic indicator, for elementary schools and secondary schools that are not high schools, which may be a measure of student growth or another valid and reliable statewide academic indicator that allows for meaningful differentiation in school performance.
- Graduation rate indicator, for high schools, as measured by the four-year adjusted-cohort graduation rate (ACGR) and, at a state’s discretion, one or more extended-year ACGRs.
- Progress in achieving English language proficiency (ELP) indicator, as defined by the state and measured by the annual statewide ELP assessment.
- At least one indicator of school quality or student success (SQSS) that meaningfully differentiates between schools and is valid, reliable, statewide, and comparable.
A state’s system of AMD must afford substantial weight to each indicator and must afford much greater weight, in the aggregate, to the academic indicators than to the SQSS indicator(s). In addition, states must describe how these indicators interact within the state’s accountability system. In some states, the state’s system of AMD is based on an index consisting of the ESEA-required indicators. In other states, the state’s system of AMD uses a series of decision rules to identify schools that merit reward or require support to improve outcomes for all students based on these indicators—these states are sometimes described as having a “dashboard” approach. Regardless of whether a state uses a summative (e.g., index) or non-summative (e.g., dashboard) approach, the selection and interaction of indicators should reflect the state’s theory of action, the policy objectives of the state’s accountability system, and the intended outcomes of the state’s system of AMD.
This module focuses on the selection and interaction of indicators for the state’s accountability system; however, this module also includes a series of optional sub-modules that focus on the individual indicators required under ESEA. To use this module, first complete this main module on the selection and interaction of indicators, which will help you further explore how indicators interact and function within your state’s system of AMD. In addition, this main module will help you identify whether any specific indicators require additional exploration or examination. After completing this main module, select the sub-modules for the indicators you would like to explore in depth. The links to these sub-modules are provided at the end of this module.
Please note that it may be helpful to use the notes generated during Module 1: Theory of Action and Module 2A: State’s System of Annual Meaningful Differentiation (AMD) alongside this module to inform reflection.
This main module includes three sets of self-reflection prompts that are intended to help articulate why decisions were made and how indicators interact within the state’s system of AMD. These three sets of prompts are not intended to be discrete; instead, they are intended to work together to help you answer questions in the next sections of this module. The three sections are described in the following table.
Table 1. Overview of Module 2B: Indicator Interaction in the State’s System of AMD
| Section | What is it? | Why is it important? | How should it be used? |
| --- | --- | --- | --- |
| Articulate the Rationale for How Indicators Are Combined | A description of why indicator interactions are designed the way they are | Documenting the reasoning behind how indicators are combined, how they should interact operationally, and the “what” and “why” behind the weighting of the indicators preserves the basis for the system’s design decisions. | The rationale for the indicators asks you to describe the expected policy objective, behavioral intent, and expected results associated with how the indicators are combined in the state’s system of AMD. This rationale can be used as a point of comparison for examining the data within the set of indicators. It will also help you, in the next section, assess the strength of the rationale. |
| Stakeholder Perceptions of the Rationale for Combining Indicators | A reflection on whether stakeholders understand the rationale behind how indicators interact, which helps identify areas that stakeholders may misinterpret or misunderstand | Determining which assumptions or design decisions might require more explanation can help minimize public misunderstanding and help prioritize resources to support communication efforts. | The stakeholder perceptions section asks you to consider your rationale as an outsider would. To what degree will stakeholders understand this rationale? How public is the rationale or its supporting documentation? How might stakeholders interpret, use, or misinterpret the design and results of the system? This reflection may help you identify areas that need additional explanation or determine whether additional communication is necessary. |
| Confidence in Operations and Results of Combining Indicators | An examination, based on your rationale and potential risk, of your level of confidence that design decisions are sound and that evidence supports your assumptions | Determining your overall confidence in the results and presentation of the state’s system of AMD can help you decide where to collect evidence, make system revisions, or develop outreach materials. | The confidence in operations and results section will help you identify evidence that can confirm your rationale regarding how indicators are combined and how each indicator is designed. The rationale can also be used as a point of comparison for design decisions, and the strength of the rationale can be used to focus attention on key confidence claims. |
To get started, click on the link below.