New York – Assessment of Students with Limited English Proficiency Policy Letter

November 24, 2003

Honorable James A. Kadamus
Deputy Commissioner
Office for Elementary, Middle, Secondary and Continuing Education
The New York State Education Department
Room 875 EBA
Albany, New York 12234

Dear Deputy Commissioner Kadamus:

I am writing in response to your letter of May 30, 2003, in which you sought clarification about the annual assessment requirements for English language proficiency. Specifically, you asked for clarification regarding the provisions of Title I and Title III of the Elementary and Secondary Education Act (ESEA), as amended by the No Child Left Behind Act of 2001 (NCLB), that require an annual assessment of English proficiency of students with limited English proficiency, as applied to students with the most significant cognitive disabilities. I apologize for the late response to your letter.

For students whose Individualized Education Program (IEP) team determines that their cognitive disabilities are so significant that they cannot participate in the NYSESLAT (the State’s test of English language proficiency for Title III) or in the State’s tests of language arts and mathematics for Title I, New York may excuse those students from the NYSESLAT. In those cases, New York must use the New York State Alternate Assessment (NYSAA) or a similar local assessment to determine student proficiency relative to New York’s academic standards, and may also use the NYSAA or a similar local assessment to monitor English language proficiency, as long as the following conditions are met. First, New York would need to define a standard for English language proficiency that can be applied to the alternate assessment. Second, New York must ensure that the alternate assessment is valid for both purposes. One approach for determining validity is to involve experts knowledgeable about language acquisition in the development, administration, and scoring process for the alternate assessment.

Using the NYSAA or a similar local assessment under these conditions would be an acceptable course of action if: (1) New York’s language arts content standards are compatible with the assessment of both academic content and English language proficiency in the areas of reading, writing, speaking and listening; (2) the alternate assessment includes an assessment of student achievement on the critical Title III elements (i.e., listening, speaking, reading, and writing) and the language arts standards for Title I; and (3) the alternate assessment scoring rubric permits documentation of a full range of performance on this indicator.

Please remember that, while Title I only requires students to be assessed in reading/language arts and mathematics in grades 3-8 and high school (by 2005-06), Title III of NCLB requires that limited English proficient students be assessed for English proficiency in kindergarten through grade twelve. As you mention in your letter, the number of students who fall into this category must be limited, and would be dictated by the percentage ultimately determined by the Department following its proposed rule of March 20, 2003. We intend to finalize this regulation in the near future, in time for you to provide timely guidance to districts and schools in New York.

As you work through this process, please note that this letter does not constitute final approval of the NYSAA for these purposes. If New York were to pursue this option, it would need to submit evidence to the Department for peer review through the standards and assessment process to receive that approval. Also, please be aware that this letter does not indicate that the approach will comply with Federal civil rights requirements, including Title VI of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, Section 504 of the Rehabilitation Act of 1973, Title II of the Americans with Disabilities Act, and requirements under the Individuals with Disabilities Education Act.

If you have additional questions about the nature of this alternate assessment and how it may be designed to measure both content achievement and English language proficiency, please contact Sue Rigney in the Office of Elementary and Secondary Education at 202-260-0931 or Kathleen Leos in the Office of English Language Acquisition at 202-205-4037, who can provide additional guidance.

Sincerely,
Ronald J. Tomalis
Acting Assistant Secretary
Office of Elementary and Secondary Education

FY 2009

Arizona Department of Education

An Examination of the Relative Contributions of the Four Language Modalities to
English Language Proficiency: Implications for Assessment and Instruction Across
Grade Spans and Proficiency Levels

This consortium of six states — Arizona, Colorado, Louisiana, Montana, New Mexico, and Utah — led by Arizona, in partnership with WestEd and Pacific Metrics, and supported by national experts from institutions of higher education, proposes an 18-month study that systematically examines the four language modalities (i.e., listening, speaking, reading, writing) required to be assessed under Title III of the No Child Left Behind Act of 2001 in terms of (1) their relative contribution toward determining English language proficiency (ELP), (2) their interrelationships vis-à-vis ELP, and (3) whether and how their relative contributions toward determining ELP and their interrelationships change across (a) grade levels, (b) language proficiency levels, and (c) English learner (EL) student subgroups. This proposal originates from states’ need to better ensure accurate and valid measures of the ELP domain and the achievement of their EL students, and this information has significant implications for standards and instruction that support EL student achievement.
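
The abstract does not name an analytic method, but the idea of a modality’s “relative contribution” to overall English language proficiency can be made concrete with a small sketch. The example below assumes a hypothetical student-level score file with one column per domain plus a composite and a grade band; it is illustrative only and is not drawn from the consortium’s proposal.

    # Illustrative sketch only (not the consortium's method): one way to quantify
    # each modality's unique contribution to a composite ELP score, and the
    # modalities' intercorrelations, repeated by grade band. Assumes a hypothetical
    # student-level file with columns: listening, speaking, reading, writing,
    # composite_elp, grade_band.
    import pandas as pd
    import statsmodels.api as sm

    DOMAINS = ["listening", "speaking", "reading", "writing"]

    def relative_contributions(df: pd.DataFrame) -> pd.Series:
        """Standardized regression weights of the composite on the four domains."""
        cols = DOMAINS + ["composite_elp"]
        z = (df[cols] - df[cols].mean()) / df[cols].std()
        model = sm.OLS(z["composite_elp"], sm.add_constant(z[DOMAINS])).fit()
        return model.params[DOMAINS]  # larger weight = larger unique contribution

    def domain_intercorrelations(df: pd.DataFrame) -> pd.DataFrame:
        """Pearson correlations among the four modalities."""
        return df[DOMAINS].corr()

    # Example: rerun within each grade band to see whether the pattern of
    # contributions shifts across grades, as the proposed study asks.
    # scores = pd.read_csv("elp_scores.csv")   # hypothetical file
    # for band, group in scores.groupby("grade_band"):
    #     print(band, relative_contributions(group).round(2).to_dict())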

The proposed study will analyze data from consortium member states related to EL students’ development and attainment of ELP. The study will be conducted within a framework of validity and utility that structures and organizes the study’s activities and outcomes. All states will benefit from the project in terms of (1) knowledge related to improving measurement of student development and attainment of ELP, (2) guidance related to creating systems of support for EL students, and (3) professional development that builds educator capacity related to supporting the development of EL students’ ELP.

Arizona Department of Education

Longitudinal Examination of Alternate Assessment Progressions (LEAAP)

Arizona Department of Education (ADE) submits this proposal for Project LEAAP, featuring an analysis of curricular progressions and student performance across grades on states’ alternate assessments based on alternate academic achievement standards (AA-AAAS) for students with significant cognitive disabilities. LEAAP will allow states to examine student progress over time (absolute priority 3) – in both performance and skills assessed. LEAAP is proposed by a consortium of states (competitive priority 2), including Arizona, Maryland, South Dakota, and Wyoming. LEAAP is a collaborative effort (absolute priority 1) with Western Carolina University, which will manage all project activities with oversight by the ADE, and the University of North Carolina at Charlotte. LEAAP will inform states’ future improvements in AA-AAAS systems, including accessibility and validity (competitive priority 1).

Goal 1: Conduct a retrospective study of content and performance expectations in states’ AA-AAAS. The principal investigators will conduct an analysis of content, performance expectations, and cognitive demands of items within and across grades to determine the nature of AA-AAAS progressions across grades. Content will be evaluated relative to state standards and the Common Core State Standards.

Goal 2: Investigate and define dimensions of growth in achievement for this population, using three years of student achievement data.

Goal 3: Examine teacher and student variables in relation to AA-AAAS content selection, administration, and progressions. Student and teacher variables will be examined as predictors of assessment administration choices (all assessment formats) and content progressions (portfolio only).

Goal 4: Provide technical assistance to states on interpreting and using their findings in order to improve assessment systems. An expert panel will review curricular progressions identified in Goal 1. States and the Advisory Committee will discuss the use of findings for continuous improvement.

Goal 5: Disseminate project products and findings. LEAAP includes a multi-faceted approach to dissemination (competitive priority 3), including data collection and reporting tools for states and project findings disseminated through a wide range of channels.

Illinois State Board of Education

Spanish Academic Language Standards and Assessment (SALSA): Creating Spanish Language Development Standards PreK-12 and K-2 Spanish Language Proficiency Assessment

The Illinois State Board of Education (ISBE), on behalf of the 23-state World-Class
Instructional Design and Assessment (WIDA) Consortium, proposes to develop and implement
Spanish language development (SLD) standards for students in Pre-K through twelfth grade, and
to develop a practicable, reliable, and valid Spanish language proficiency assessment system for kindergarten and grades 1-2, based on those standards, for English language learners whose first language is Spanish and for students receiving content area instruction in Spanish regardless of their L1. The project has four goals: 1) create academic SLD Standards for grades preK-12; 2)
develop a technology-mediated assessment, the Prueba Óptima del Desarrollo del Español
Realizado (PODER), to ensure that Spanish language development, as defined by the SLD
Standards, is assessed validly and reliably in grades K-2; 3) disseminate information on the
project; and 4) collaborate with other institutions in the development, research, and
administration of the assessments.

The proposed Spanish Academic Language Standards and Assessment (SALSA) project will
result in preK-12 SLD Standards, along with a defensible, psychometrically sound, technologically
mediated test based on those standards. PODER will give LEAs the capacity to initially screen Spanish-speaking ELLs in grades K-2 to determine their baseline Spanish language proficiency, and to monitor students’ progress in Spanish language development in classrooms where Spanish is the medium of instruction. The SLD Standards and PODER, intended for use throughout the United States and in Puerto Rico, will be developed with input from academic experts and educational professionals from a wide spectrum of programs and states. Initial piloting and field testing of PODER will occur in multiple sites, including Illinois, Colorado, New Mexico, and Puerto Rico.

Kansas State Department of Education

Develop Instrumentation to Analyze Fidelity of Instruction for Students with Disabilities
in Relation to Standards and Assessments
and Report on Opportunity to Learn and Student Achievement

With Kansas serving as the lead state, and Georgia and Ohio serving as anchor
consortium states, two Council of Chief State School Officers’ (CCSSO) State Collaboratives on
Assessment and Student Standards (SCASSs) — in partnership with the Wisconsin Center for Education Research (WCER) at the University of Wisconsin–Madison and WestEd, Inc. — propose a unique
project integrating research, development, collaboration, and technical assistance that will
provide a rigorous method for improving the quality and validity of state assessments
designed to assess the knowledge and skills of students with disabilities. The two state
collaboratives, Assessment for Special Education Students (ASES) and Surveys of Enacted
Curriculum (SEC), have each been working with member states for over a decade, and now are
joining forces for this project. Kansas (lead state), Georgia, and Ohio are active in the work of both collaboratives and see strong prospects for the project. The central goal of the
proposed project is to develop and test a method for measuring and reporting the degree to which
instruction for students with disabilities (SWDs) is aligned to grade-level state standards. This
study will result in: 1) a report for state leaders, educators, and researchers with research-based
evidence on the fidelity of instruction to state standards and assessments for students with
Individualized Education Programs (IEPs) and evidence on the effects of differences in opportunity to
learn for subsequent student achievement, and 2) focused dissemination of materials and guides
for instructional improvement via conferences and other media that will highlight (a) research
results, (b) implications for their schools/districts, and (c) strategies to improve practice for
students with disabilities.

Minnesota Department of Education

Improving the Validity of Assessment Results for ELLs with Disabilities (IVARED)

States have developed participation criteria and accommodation policies for English language learners (ELLs) and for students with disabilities, but for the most part have done little to address these for students who fit into both groups: ELLs with disabilities. IVARED creates a consortium of states (Arizona, Minnesota, Maine, Michigan, and Washington) to address the validity of assessment results of ELLs with disabilities in statewide accountability assessments by examining the characteristics of the students and their performance, improving the process for making decisions about participation and accommodations via expert panel input and studies of decision making, and developing principles to guide the assessment of ELLs with disabilities. Through these activities, states will be able to develop validity arguments for their assessments and assessment practices for ELLs with disabilities, and by doing so will enhance the quality of their assessment systems for measuring the achievement of ELLs with disabilities.

States will collaborate with each other and with the National Center on Educational Outcomes
(NCEO) to examine state data, policies, and practices for ELLs with disabilities to achieve the following
objectives: (1) Identify and describe each state’s population and relate it to assessment performance; (2)
Describe inclusion in state assessment participation and performance policies; (3) Identify promising
practices for participation, accommodations, and test score interpretation decisions; (4) Strengthen the
knowledge base of assessment decision makers to improve decisions; and (5) Disseminate project results within states and nationally.

The materials developed through these objectives will become the basis for an online training module customized to each state’s specific requirements. All partners in the project agree on the importance of addressing validity issues for ELLs with disabilities, and are committed to IVARED activities to help them move toward their own validity arguments.

New Hampshire Department of Education

Student Accessibility Assessment System

The Student Accessibility Assessment System (SAAS) Project is a collaborative effort to develop and
validate a comprehensive system that meets four critical needs that must be addressed in order to improve
the assignment of test accessibility options and to improve the validity of test-based inferences about
student achievement, particularly for students with disabilities, students with other special needs, and English language learners. These needs include: a) improving educator understanding of accessibility options that
are available in computer-based test delivery environments; b) providing tools that allow educators to
work individually with students to explore test accessibility features that may help improve access for
each individual student during testing; c) creating better opportunities for students to develop familiarity
and comfort using computer-based test accessibility tools prior to testing; and d) providing educators with
empirical evidence that a selected tool or set of tools benefits the student while performing test items. To
meet these needs, the SAAS project will develop a comprehensive, integrated assessment tool that
provides: a) users with rich information about accessibility options available for computer-based tests; b)
tools to help users make informed decisions about the use of specific accessibility options and
combinations of these options; and c) measures to assess the effectiveness of selected options for
improving student access to test content. Once the SAAS is developed, the project will conduct a validity study
that focuses on teachers’ use of the SAAS to inform the specification of student access profiles for an
operational state test. Through this study, the project will collect evidence to: 1) examine the effect that
the SAAS has on informing decisions about student accessibility profiles; 2) better understand the factors
that teachers and students consider when defining a profile; and 3) estimate the effect that informed
assignment of accessibility features has on the validity of inferences about student achievement based on
a test score. To accomplish these goals, nine states (NH, VT, ME, RI, CT, MT, UT, SC, and MD) and
experts from the University of Oregon, Boston College, the National Center for Educational Outcomes,
CAST, Measured Progress, Nimble Assessment Systems, ETS, American Printing House for the Blind,
and Gallaudet University will work collaboratively to develop, validate and disseminate the SAAS.

North Carolina Department of Public Instruction

The Utility of Online Mathematics Constructed-Response Items: Maintaining Important
Mathematics in State Assessments and Providing Appropriate Access to Students

With North Carolina serving as the lead state, four state education agencies (North Carolina, Kentucky, New Mexico, and South Carolina) will collaborate to develop an assessment that is designed to better measure students’ knowledge and skills in mathematics. The assessment will consist solely of constructed-response items, will be delivered via computer with various accommodations available, and will be scored automatically. Two sets of items will be developed: one for grade 7 and one for Algebra I, aligned to the Common Core State Standards. The use of constructed-response items in mathematics will allow states to measure students’ knowledge and skills relative to their mathematics standards in ways that more closely tap into the intentions of the standards than selected-response items can.

The online delivery system will allow for both automated scoring of student responses and a variety of administration and response conditions to meet the needs of students with disabilities, English learners, and other students who may have specific access needs but are not classified in one of those two groups. The collaborating states will work with a panel of experts in mathematics education and in educating and assessing students with a variety of learning and assessment requirements. The items will be administered to students in each state, and the results will be analyzed to address the following questions:

  1. Can computer-delivered constructed-response mathematics items measure essential knowledge, skills, and processes in mathematics content standards more closely than traditional multiple-choice items?
  2. What computer-based accommodations are appropriate and feasible to meet a broad range of student needs, and do they increase student access to test content?
  3. Does automated scoring provide reliable, valid, and acceptable (to users) item scores, and do these qualities apply equally to responses from different student groups, particularly English learners and students with disabilities?
  4. Does the application of measurement models associated with constructed-response items reduce error in test scores through (a) the elimination of the need to estimate the guessing parameter and (b) the ability to score for partial knowledge?
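
The last question turns on the measurement models typically applied to the two item formats. Purely as a reference point (the abstract does not name the specific models the states would use), the three-parameter logistic model commonly fitted to multiple-choice items includes a lower-asymptote guessing parameter, whereas a polytomous model such as the generalized partial credit model, often used for constructed-response items, drops that parameter and awards credit for partially correct responses:

    % Reference sketch only; not drawn from the proposal.
    % Three-parameter logistic (3PL) model for multiple-choice item j, with
    % discrimination a_j, difficulty b_j, and guessing parameter c_j:
    \[
      P(X_{ij} = 1 \mid \theta_i) = c_j + (1 - c_j)\,
        \frac{\exp\bigl(a_j(\theta_i - b_j)\bigr)}{1 + \exp\bigl(a_j(\theta_i - b_j)\bigr)}
    \]
    % Generalized partial credit model for constructed-response item j with
    % score categories k = 0, 1, ..., m_j; no guessing parameter, and partial
    % knowledge earns intermediate categories (the empty sum for k = 0 is 0):
    \[
      P(X_{ij} = k \mid \theta_i) =
        \frac{\exp\Bigl(\sum_{v=1}^{k} a_j(\theta_i - b_{jv})\Bigr)}
             {\sum_{h=0}^{m_j} \exp\Bigl(\sum_{v=1}^{h} a_j(\theta_i - b_{jv})\Bigr)}
    \]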

FY 2006

Connecticut

Establishing the Validity of Test Accommodations and Score Interpretations for Special Education Students: A Collaboration of State-based Research

     NCLB requires that states offer accommodations on the grade level assessment so that the test is accessible to as many special education students as possible. However, little research has been conducted on the validity of accommodated score interpretations or the effectiveness of test accommodations. With Connecticut serving as the lead state and with the support of the CCSSO, a large consortium of states from two SCASS groups will participate in this special research project. The purpose of the research is to establish the validity of inferences from accommodated tests based on specific student accessibility needs. Studies will be conducted in up to ten states (AK, AZ, CT, KY, MI, NV, RI, UT) on a variety of commonly-used accommodations so that the results can be used to build a shared body of evidence for the validity of the interpretation of accommodated scores.

     This project will be coordinated across states, using a rigorous empirical research design. Every state will use the same 2 x 2 counterbalanced research design that has students (regular education or special education) crossed with accommodations (with or without), on one form of a pilot test to be conducted in their state. Approximately 200 – 500 students will be tested in each state’s sample, depending on the type of accommodation studied. Analyses to be conducted include statistical analysis of items and test scores, content analyses for validity evidence, and factor analysis to examine any structural changes in constructs due to accommodations. This project will result in a guidebook and associated database that provide designs, procedures, statistical data, and other information for evaluating the validity of test results from accommodated assessments. The findings and products will be disseminated to all states for their use in providing evidence on the appropriateness of their accommodations.
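
     As a point of reference only, the group-by-accommodation interaction is the key term in a factorial analysis of this design: a differential benefit for special education students is one piece of evidence that an accommodation improves access without altering the construct for other students. The sketch below assumes a hypothetical flat-file layout and is not drawn from the project’s analysis plan, which also includes item statistics, content review, and factor analysis.

        # Minimal sketch (not the project's full analysis plan) of the 2 x 2
        # counterbalanced design described above. Assumes a hypothetical flat file
        # with columns:
        #   score          pilot test score
        #   group          "regular" or "special_ed"
        #   accommodated   "yes" or "no"
        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        def interaction_test(df: pd.DataFrame) -> pd.DataFrame:
            """Two-way ANOVA; the C(group):C(accommodated) row tests whether the
            accommodation boost differs between student groups."""
            model = smf.ols("score ~ C(group) * C(accommodated)", data=df).fit()
            return anova_lm(model, typ=2)

        # Example usage with the hypothetical file layout:
        # data = pd.read_csv("pilot_scores.csv")
        # print(interaction_test(data))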


Illinois

Obtaining Necessary Parity through Academic Rigor (ONPAR): Research, Development, and Dissemination of a Parallel Mathematics Assessment for ELLs

     The Illinois State Board of Education (ISBE), on behalf of the 15-state World-class Instructional Design and Assessment (WIDA) Consortium, proposes to develop and implement a feasible, accessible, and valid assessment in mathematics for English language learners (ELLs) in the beginning stages of English language acquisition that can be used for state accountability purposes to meet the requirements of federal law. To achieve this aim, this project encompasses the following three broad goals: a) to ensure that the mathematics achievement of ELLs is assessed validly and reliably; b) to increase knowledge about English language acquisition and mathematics achievement; and c) to disseminate information about this standards-based, academic assessment and the results of related research.

     The WIDA Consortium, originally established with funding from a U.S. Department of Education Enhanced Assessment Grant, currently includes Illinois and 14 additional states. Combined, the 15 WIDA partner states enroll approximately 420,000 K-12 ELLs. Since 2003, WIDA has created and adopted comprehensive English language proficiency standards (2004, 2007) that represent the second language acquisition process and the language of the content areas of language arts, mathematics, science, and social studies. Based on these standards, WIDA developed a K-12 ELP test battery, ACCESS for ELLs®, that approximately 420,000 students took in spring 2007. WIDA also provides professional development activities and maintains a Web site (www.wida.us). In 2006, Rhode Island, on behalf of WIDA, received an Enhanced Assessment Grant to develop ONPAR-Science, a parallel science assessment for grades 4, 8, and 11. The work begun with ONPAR-Science, along with other WIDA activities, will support this project to develop a Web-based, parallel assessment of mathematics, ONPAR-Math.

     This project will result in a defensible, psychometrically sound, Web-based assessment that forms a common core test in mathematics for grades 3, 7, and 11. Initial piloting, field testing, and comparability studies will occur in Illinois and the New England Compact states of New Hampshire, Rhode Island, and Vermont, with the intent that the core test will be applicable to all WIDA states. The knowledge and skills developed with the core ONPAR-Math will be used to expand the assessment to include grades 3 through 8, 10, and 11. After the grant period, each state may choose to augment the core test with state-specific items to capture its full range of mathematics standards for specific grade levels.

     ISBE will leverage the already strong working relationship between the Wisconsin Center for Education Research, the administrative and research home of WIDA, and the Center for Applied Linguistics, as well as other nationally recognized leaders in mathematics, assessment, measurement, and the education of ELLs. ONPAR-Math will be built on a strong foundation of research and collaboration.


Iowa

Improving Methods of Analyzing Alignment of
Instruction to Assessments and Standards for English Language Learners and
Analyzing the Relationship of Alignment to Student Achievement

Overview: With Iowa serving as the lead state, the project will consist of a consortium of 10 states – Florida, Idaho, Iowa, Maine, Minnesota, Ohio, Utah, Virginia, Wisconsin, and one state to be named – partnering with the Council of Chief State School Officers (CCSSO) as lead contractor and three other research and technical assistance organizations: the Wisconsin Center for Education Research at the University of Wisconsin–Madison, EdCount, LLC, and WestEd, Inc. The consortium proposes a unique project integrating research, development, collaboration, and technical assistance to improve the quality and validity of state assessments designed to assess the knowledge and skills of English language learners (ELLs). Each participating state will analyze and improve its assessments by researching the alignment and validity of its academic assessments in mathematics and language arts, especially in relation to English language learners, and by improving the alignment of its tests of English language proficiency to state standards.

Goals and Scope: The proposal stemmed from states’ recognition of the need for additional resources and information for improving assessments and standards that apply to English language learners as identified in the CCSSO state collaborative project on ELL issues. The project proposal is also based on the states’ recognition of the potential applications of the Surveys of Enacted Curriculum (SEC) model and tools – which have been applied in more than 20 states for alignment studies and analyses of instructional alignment in several academic subjects – for analyzing alignment issues for ELL instruction, assessments, and standards. The project has five specific goals:

  • Measure and report the degree of alignment between instruction, state standards,
    and student assessments for English language learners in a sample of classrooms
    and schools in the Consortium states.

  • Advance and improve the alignment procedures and definitions for the Surveys of
    Enacted Curriculum alignment method to increase capacity for applying the
    method to English language development practices and materials.

  • Provide expert technical assistance for consortium states to apply and use
    alignment analysis findings with state leaders to improve the quality and validity
    of assessment instruments for ELL students.

  • Work with selected states to analyze growth in student achievement in relation to
    the degree of alignment of classroom instruction to standards, i.e., differences in
    “opportunity to learn,” and report the findings of the analysis to states.

  • Prepare and widely disseminate products from the study for use by educators,
    researchers, and policymakers, including a guide for use of the alignment methods demonstrated in the study for improving assessment.


Montana

Adapting Reading Test Items to Increase Validity of Alternate Assessments Based on Modified Achievement Standards

     The proposed project, Adapting Reading Test Items to Increase Validity of Alternate Assessments Based on
Modified Academic Achievement Standards (ARTIIV) reflects the desire of five states to respond thoughtfully and
significantly to the reading assessment needs of students eligible for the 2% option. ARTIIV will advance the states’
previous work, which resulted in a better understanding of the eligible students and promising approaches to item
construction. The consortium seeks to build on those outcomes to increase the validity and accessibility of their
current statewide assessments.

     This project will investigate strategies that states can use to adapt their assessments based on grade-level
academic achievement standards, focusing on the critical area of secondary level reading comprehension. ARTIIV
will explore the systematic reengineering of assessment items based on cognitive modeling of the comprehension
skill set. Cognitive modeling in the service of test development is increasingly recommended for improving the
validity of interpretations of assessment results (Gorin, 2006; National Research Council, 2001).

     This project will utilize researched cognitive models, rework reading comprehension items to fit the models,
experimentally manipulate the items to present reduced cognitive loads, and test to see whether the approach is more
sensitive to the target population’s competencies without compromising psychometric properties or the intended
grade level constructs. ARTIIV further proposes to study experimentally the impact of read-aloud accommodations
on the performance of the reengineered items based on the work of the National Accessible Reading Assessments
Projects (Cahalan Laitusis, Cook, Cline, King, & Sabatini, 2007) and to explore a methodology for
creating modified academic achievement level descriptors.


Pennsylvania

State Academic Learning Links with Self-Evaluation for Alternate Assessment

     The Pennsylvania Department of Education is the applicant for the proposed project, State Academic Learning Links with Self-Evaluation for Alternate Assessment (SALLSA). Four states (Georgia, Pennsylvania, Washington, and Wyoming) will collaborate (a) to replicate and extend the Links to Academic Learning (LAL) protocol (Flowers, Wakeman, Browder, & Karvonen, 2007) and (b) to understand and improve the linkage between their state academic standards and their own alternate assessments (AAs) measured against alternate achievement standards. The LAL model was designed specifically for AAs; it goes beyond content match between standards and assessment to examine the match of instruction, curriculum, and accessibility of content standards for students with significant cognitive disabilities (SWSD).

     SALLSA has four goals: (1) Replicate and extend the research on the LAL alignment protocol using four states; (2) Develop and implement a process for helping states use alignment study results for planning improvements in their AA systems; (3) Conduct a multiple-case study to investigate implementation processes and the impact of Goals 1 and 2; and (4) Disseminate project activities and findings. Perhaps the most significant need addressed is helping states interpret alignment findings and plan for improvement. Until now, the emphasis has been on conducting a study and submitting evidence for peer review. States need to take action on alignment study outcomes, and SALLSA focuses on the analysis of study results for ongoing AA system improvement. As a form of validation, SALLSA replicates the LAL model in four states with unique assessment formats and investigates how states interpret alignment findings. A multiple-case study will be conducted to investigate (a) how the LAL protocol is implemented within each state’s assessment system, and (b) what impact SALLSA implementation has on changes in AA systems, with an emphasis on lessons learned that might guide other states in their planning for and use of alignment studies. The project will make processes and findings available through broad dissemination, and the case study will maximize the quality of information made available to other states. Pennsylvania will engage Measured Progress to coordinate grant activities and conduct the alignment ratings. The authors of the LAL at the University of North Carolina at Charlotte have endorsed the replication plan, offered ongoing consultation, and agreed to be available to discuss refinements and interpretation capabilities. Western Carolina University will hold the subcontract for the project evaluation and multiple-case study. Finally, the Southeast Regional Resource Center will assist with dissemination of outcomes through the Federal Regional Resource network.


South Carolina

Operationalizing Alternate Assessment for Science Inquiry Skills

     The South Carolina Department of Education requests $1,168,706 over an 18-month period to implement the Operationalizing Alternate Assessment for Science Inquiry Skills (OAASIS) project. Developing assessment strategies based on specific student characteristics and instructional needs is crucial if accurate inferences are to be made about what students know and can do. The OAASIS project will change and improve state assessment systems because it will inform and provide guidance to a current assessment priority—defining the target student population and developing an alternate assessment based on modified achievement standards (AA-MAS) that provides accurate results for students—in order to use the 2% flexibility option as outlined in the No Child Left Behind non-regulatory guidance document (April 2007). South Carolina’s proposal meets the four absolute priorities and the three competitive preference priorities established for the Enhanced Assessment Grant program.

     The goal of OAASIS is to investigate the process and development of an alternate assessment based on modified achievement standards (AA-MAS) by defining the target population, administering assessment strategies using multiple formats, and evaluating their accuracy in measuring student achievement. Objectives for OAASIS are 1) Establish learner characteristics and instructional needs as a basis for the design of multiple assessment strategies that increase access for students requiring AA-MAS; 2) Establish a common core of high school science inquiry standards among all participating states; 3) Design and implement three assessment strategy formats based on the common essential constructs of high school science inquiry content standards; and 4) Disseminate results through diverse methods to reach the widest audience possible.

     Partners involved in OAASIS include two universities (Vanderbilt University and the University of South Carolina); two state partners (South Dakota and Wyoming); 31 state members of the State Collaborative on Assessment and Student Standards: Assessing Special Education Students (ASES-SCASS); and the Discovery/ThinkLink Learning Corporation. Through this collaborative effort, OAASIS promises to yield significant information for all states on the processes and procedures not only to create an AA-MAS, but to create an effective, valid AA-MAS.

FY 2005

Delaware

Implementing and Improving Comprehensive and Balanced Learning and Assessment Systems for Success in High School and Beyond: A Collaborative Project of Ten State Departments of Education, the Council of Chief State School Officers, the University of Pennsylvania Consortium for Policy Research in Education, Edvantia, and the Educational Testing Service

     This ten-state collaborative, led by the State of Delaware, in concert with CCSSO, proposes a three-phase capacity building, research, and evaluation project to provide intensive, high-quality professional development for state, district, and high school teams in the effective implementation and improvement of a comprehensive and balanced learning and assessment system, including formative assessment. It addresses the needs of all students in high poverty, low-performing high schools, including students with disabilities and English language learners, by combining two innovative strategies for enhancing learning and assessment systems for continuous improvement in teaching and learning and meeting NCLB goals: 1) formative assessment, shown by research to lead to significant gains in student achievement, confidence, and motivation, especially for low-performing students; and 2) Individual Learning Plans (ILPs), where students and teachers use data effectively to plan, monitor, and accelerate progress.

     This project meets the urgent need to provide guidance and a cost-effective, flexible model to build capacity for states and districts in delivering high-quality professional and leadership development in how to implement and improve a comprehensive and balanced learning and assessment system aligned with state standards. It includes a formative evaluation to document the issues, successes, and challenges of building a balanced system and will provide states and districts a guide that operationalizes this system for practitioners, with concrete examples across the variety of state and local sites. The project has four goals:

  1. Develop and implement a practitioner and research-based vision of a comprehensive and balanced assessment system, including formative assessment.
  2. Build state leadership capacity and support state technical assistance to districts and high schools in providing high-quality professional and leadership development in balanced assessment systems and specific practices of assessment for learning.
  3. Begin using district and high school teams to implement the vision and assessment for learning in schools and classrooms, integrating with individual learning plans.
  4. Generate and disseminate tools, techniques, and new knowledge to support future approaches to teaching and learning.

     Training is an essential component and will include self-examination of each state’s assessment program. It will instill a capacity in the state team to work effectively with local districts and a sample of high-poverty, low-performing high schools in the effective design, implementation, and use of assessment systems that balance continuous classroom, interim formative, and accountability assessments into an integrated system.

Georgia

Assessing One and All: A Partnership for Success (Georgia Consortium: Georgia, Hawaii, and Kentucky)

     In the context of high expectations for all students and fully inclusive assessment and accountability systems, our consortium of States, university partners, researchers, and advocates will explore and document effects of multiple methods of assessment that meet identified student needs, to ensure all children are able to show what they know in the grade-level standards-based curriculum, based on appropriate and high achievement standards. The states will partner in three separate but related investigations of assessment options to include every student appropriately in state assessment and accountability systems. Each of the states will learn from the others the potential utility of a range of formative and summative methods of determining what students know and are able to do, in response to identified student needs, and then we will share our understanding nationally. Our investigations include:

  • a study designed to better understand the lowest performing students, to carefully examine the qualities of their performance, and to evaluate instructional and assessment techniques designed to make learning and assessment more accessible to all students;
  • an interdisciplinary pilot to develop high-quality, validated, within-grade-level performance indicators and performance tasks to measure progress and attainment of “hard-to-assess” students; and
  • an evaluation of the performance impact and comparability of an online, technology-based assessment, including study of the users’ learning characteristics.

     Our project is sponsored by a consortium of States directly involved in the investigations, and has additional support from other states through consulting relationships (Massachusetts) and as members of the external review panel (staff from the Arkansas, Delaware, Ohio, and Puerto Rico Departments of Education), along with external experts from advocacy organizations; university experts in curriculum, special education, and measurement; and experts in inclusive assessment systems. Our project’s state partners have investigated participation and performance status in their states, and have identified options they believe are needed for these students to be assessed well, to count in accountability systems, and to achieve the high expectations the states have set for them. This proposed project will allow systematic investigation of these options. Our external partners are committed to helping the three states capture lessons learned across these investigations. All partners in this project believe that it is through multiple investigations, collaborative thinking, and creative problem identification and solution finding that we will achieve truly inclusive assessment and accountability systems that benefit ALL students.

Hawaii

Pacific Assessment Consortium

     This application is being submitted by a Consortium of four State educational agencies (SEAs) and one private, not-for-profit educational research organization. The Consortium members include the Hawaii State Department of Education (HIDOE), American Samoa Department of Education (ASDOE), Commonwealth of the Northern Mariana Islands Public School System (CNMI PSS), Guam Public School System (Guam PSS), and Pacific Resources for Education and Learning (PREL). The purpose of the project proposed is to design, develop, and disseminate new K-3 English Language Proficiency (ELP) assessments that are appropriate to the four SEAs’ large populations of Pacific Islander students.

     Pacific Islanders represent one of the smallest racial minorities identified in the 2000 U.S. Census, but make up a significant percentage of the general and student populations in American Samoa, the CNMI, Guam, and Hawaii. Although English is the medium of instruction in the public school systems of these four jurisdictions, many of the Pacific Islander students in these systems do not learn English as their first language. Instead, their first language is a local Pacific language, and sustained exposure to English begins only when the child enters elementary school.

ELP is critical to the academic success of Pacific Islander and all other students in the U.S. To date, however, little is known about the ELP of Pacific Islander students because there are no existing, off-the-shelf assessments that provide reliable data for this population.

The Pacific Consortium believes that the significance of its proposed project is based on three factors:

  1. It fulfills the vision of NCLB by ensuring that a relatively small and overlooked subpopulation of students receives the services it needs to attain challenging academic standards;
  2. It addresses the qualitative differences between Pacific Islanders and other ELLs in terms of the former’s experiences with their native languages and the school systems in which they participate; and
  3. It provides a valuable educational tool for schools and districts on the U.S. mainland where Pacific Islanders are enrolling in increasing numbers.

Idaho

Consortium for Alternate Assessment Validity and Experimental Studies

     The purposes of the consortium are to advance the validity of alternate assessment score interpretations and to conduct feasibility studies to inform future assessment practices. Specifically, it is proposed that a consortium of assessment leaders from seven states (AZ, HI, ID, IN, MS, NV, and WI) and measurement experts affiliated with Vanderbilt University collaborate to enhance the scientific rigor of alternate assessment validity research through replication, increased samples, and tests of generalization. Each of these states requires additional validity evidence to support its alternate assessment score and usage claims. These states also want to better assess hundreds of students with less severe disabilities and will investigate item modification strategies to increase accessibility on multiple-choice tests. To address these needs, each state will (a) conduct a study to enhance evidence for the validity of its existing alternate assessment and (b) participate in experimental field trials to examine the effects of item modifications on accessibility and score comparability for students with disabilities. Idaho’s alternate assessment (IAA) was one of the first approved by the USDOE. Thus, Idaho will serve as the lead consortium partner. The validity study each state has agreed to complete replicates a multi-source, multi-method investigation conducted with the IAA to examine its concurrent and construct evidence for a robust sample of students with disabilities. The Vanderbilt measurement group will provide consortium members with research design and analysis resources for their validity studies and training sessions on universal design, alignment practices, and item modification principles for current and future alternate assessments. As a result, each state will conduct a portion of a multi-state, experimental study examining the effect of item modifications on the testing preferences and test performances of students with and without disabilities. Over the 18-month project, the consortium will enhance the validity evidence for current alternate assessments, advance understanding of the effects of item modifications on score comparability for future alternate assessments, build capacity in each state to conduct future validity studies, and actively disseminate results to other states.

North Carolina

Strengthening the Comparability and Technical Quality of Test Variations

     With North Carolina serving as the lead state, a consortium of 29 states – Alabama, Alaska, Arizona, California, Connecticut, Delaware, Georgia, Hawaii, Kansas, Kentucky, Louisiana, Michigan, Minnesota, Missouri, Nevada, New Mexico, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Carolina, South Dakota, Texas, Virginia, West Virginia, Wisconsin, and Wyoming – and the Department of Defense Education Activity proposes to develop a research- and best-practice based handbook of procedures for evaluating and documenting the comparability of scores from different versions of state assessments that are intended to measure the same content standards, with the same rigor, as the general assessment.

     The proposed project involves (a) collecting and synthesizing published and unpublished research about conducting comparability studies and evaluating the results and (b) conducting research to develop procedures for evaluating the comparability of four categories of test variations: (1) translations; (2) clarified language versions; (3) alternative formats such as portfolios or non-parallel native language forms; and (4) computer-delivered versions of the paper-and-pencil test. The research builds upon existing research that is relevant, but usually not directly applicable, to state assessment programs. Each type of test variation will be studied using two state assessments across grade levels and content areas so that the results will be generalizable to multiple situations.

     Results of the literature review and research will be organized in the handbook to provide procedures, designs, and statistical techniques for evaluating and documenting the comparability of scores from each of the four categories of variations of general tests. The handbook will be disseminated to all states and extra-state jurisdictions through a web cast, in-person dissemination meetings, presentations at national conferences, and publications.

Oregon

Increasing Assessment Validity and Reliability through Systematic Task Development, Training, and Supported Decision Making

     In this application for funding, we have formed a collaborative group of states along with the University of Oregon to investigate the technical adequacy of decision-making for the participation of students with disabilities in large-scale testing. The research begins with the use of a computer-based Internet application, the Assessment and Decision Support System (ADSS), which guides teachers to a participation decision for each student by selecting from among as many as five potential assessment options. The five potential options we consider are: (a) general assessment, (b) general assessment with accommodations, (c) an alternate assessment judged against grade-level achievement standards, (d) an alternate assessment judged against modified achievement standards, and (e) an alternate assessment judged against alternate achievement standards.

     Building on five years of research with this application, we propose expanding the tool to include a teacher perception component and an assessment of student skill level to generate a report that can be used by Individualized Education Program (IEP) teams when making participation and accommodations decisions for their students. Four other states will also participate in this aspect of the research. A research study on the ADSS will use two basic designs: crossed (in which each IEP team uses both conditions, applying this decision-making tool for one student and a typical checklist, as a control condition, for another student) or nested (in which IEP teams are randomly assigned to use either the ADSS or a traditional checklist).

     In this grant application, we also plan to develop item templates for an alternate assessment based on both modified and alternate achievement standards. We base these templates on performance tasks that have been the mainstay of two states’ alternate assessments but expand them to include more reach into grade-level content standards. We plan to align them with the grade level content standards in these two states (Oregon and Alaska) and vertically scale performance into grade groups (elementary, middle, and high school). In this process, we will create technical manuals that other states will be able to use in developing their own alternate assessments to ensure content-related evidence is explicitly addressed. Finally, we plan to develop systematic training for teachers that provides other states a model for ensuring reliability of administration and scoring of alternate assessments.

     Throughout the process, we focus on collecting reliability-related evidence in support of the alternate assessment, again providing other states a model for both research and development. We base our efforts on the need for more research that directly addresses measurement and technical adequacy, which, though common among state testing programs, is often lacking in decision-making that includes students with disabilities in all three critical areas: development of a participation-accommodation station for making initial decisions about the use of accommodations or participation in an alternate assessment, development of item templates for performance tasks aligned with grade-level content standards, and training of assessors to ensure they are qualified. We propose not only conducting this research by developing exemplary protocols in all three areas but also collecting procedural evidence and analyzing the outcomes.

     We plan to disseminate our work in three ways: (a) posting our materials on the web with the University of Oregon and the Oregon Department of Education, (b) presenting at national conferences (Large-Scale and the National Council on Measurement in Education), and (c) publishing in refereed journals.

Rhode Island

Obtaining Necessary Parity through Academic Rigor (ON PAR): Research, Development, and Dissemination of Alternate (Parallel) Assessments for English Language Learners

     The Rhode Island Department of Elementary and Secondary Education (RIDE), on behalf of the 13-state World-class Instructional Design and Assessment (WIDA) Consortium, proposes to develop and implement an accessible and valid assessment for beginning English language learners (ELLs) that can be used for state accountability purposes to meet the requirements of federal law. To achieve this aim, this project encompasses the following six goals: (a) develop a prototype of a standards-based science assessment instrument designed to measure the academic achievement of beginning ELLs; (b) conduct scientifically sound research to ensure the validity, reliability, fairness, and comparability of the assessment; (c) through research, explore links between English language proficiency (ELP) and academic achievement and chart student progress over time; (d) explore the feasibility of using this assessment for assessing the academic achievement of ELL and non-ELL students with particular disabilities; (e) disseminate information about this No Child Left Behind-compliant, standards-based academic assessment, the WIDA Consortium assessment system, and the results of related research; and (f) collaborate with states (SEAs), institutions of higher education (IHEs), and other experts to ensure that all goals are met.

     Through this project, WIDA will develop an academic assessment instrument for ELLs at the beginning levels (1–2 out of 6 total levels) of ELP and literacy development. These are the students for whom the regular state assessments, even in an accommodated form, do not usually yield valid results due to the assessments’ linguistic complexity, dependency on print, and limited visual and graphic support. The project proposed here focuses on developing an assessment in the area of science that will be a model for developing similar assessments in additional grades and content areas. These new assessments—tentatively titled Obtaining Necessary Parity through Academic Rigor (ON PAR) in the area of science—will achieve cognitive and psychometric rigor comparable to that of regular state assessments but will provide greater contextualization and linguistic scaffolding, thereby making content more accessible. ON PAR will be innovative; to date, no measures of achievement across the content areas and anchored in state academic content standards have been exclusively designed for and piloted, field-tested, and validated on ELLs. For the purposes of this project, validation studies will be focused in the New England Compact states of Rhode Island, New Hampshire, and Vermont and in New Jersey, but alignment studies and additional research will ensure that it is valid, reliable, and appropriate for all WIDA states.

     The WIDA Consortium currently represents approximately 380,000 English language learners in over 10,000 schools in Alabama, Delaware, the District of Columbia, Georgia, Illinois, Kentucky, Maine, New Hampshire, New Jersey, Oklahoma, Rhode Island, Vermont, and Wisconsin. In addition, Kansas, North Dakota, and Pennsylvania have expressed interest in this project with formal letters of support. WIDA estimates that within these states at least 180,000 ELL students would be eligible to take ON PAR.

     The WIDA Consortium has a track record of success. Since its inception only three years ago, WIDA has developed ELP Standards and successfully launched the ACCESS for ELLs™ ELP test. Collaboration among partner states, contractors, and consultants is a hallmark of the consortium and is fundamental to its success. This collaboration will continue with this new project. In addition to RIDE and other WIDA states, essential partners for this project include the Center for Applied Linguistics (CAL); the National Center on Educational Outcomes (NCEO); Performance Assessment Network, Inc. (pan); the Wisconsin Center for Education Research (WCER) at the University of Wisconsin–Madison; and individual consultants, particularly Dr. Margo Gottlieb, Dr. Rebecca Kopriva, and Dr. Martha Thurlow.

South Carolina

Adding Value to Accommodations Decision-making

     The South Carolina Department of Education (SDE) requests $1,371,324 for the Adding Value to Accommodations Decision-making for English Language Learners and Students with Disabilities (AVAD) project. Our goal is to ensure the valid, effective use of accommodations designed to increase access to large-scale assessments for English language learners (ELLs) and students with disabilities.

     Objectives of the project are to 1) validate and enhance the Selection Taxonomy for English Language Learner Accommodations (STELLA) and publish it on the SDE's Web site; 2) create and validate a decision-making taxonomy for the Accommodation Station (AS) based on the ASES Accommodations Manual and publish it on the SDE's Web site; 3) develop 20 prototypes of CAE science items for grade four; and 4) disseminate results through reports, the SDE's Web site, User and Technical manuals, the ASES Professional Development Guide video, and Web-Ex teleconferences.

     The AVAD project addresses all four absolute priorities and all three competitive preferences. AVAD involves the collaboration of 38 states in two groups—the Assessing Special Education Students (ASES) SCASS and the Assessing Limited English Proficient Students (LEP) SCASS. Consultants from Michigan State University and the University of Oregon will also collaborate on the AVAD project. Through AVAD, we will validate and enhance the accommodations decision-making tools developed by the Taxonomy for Testing English Language Learners (TTELL) and the Achieving Accurate Results for Diverse Learners (AARDL) projects. The SDE has submitted its State Consolidated Plan to the U.S. Department of Education. In our plan, we include Section 6112, Enhanced Assessment Instruments, as a program in our consolidated strategy.

     The STELLA and the AS facilitate rigorous, systematic, and empirically based accommodations decision-making so that ELLs and students with disabilities receive the accommodations that best allow them to demonstrate what they know and can do. By enhancing and publishing these systems, the AVAD project will contribute significantly to the growing body of research about inclusive assessment strategies that yield accurate results for all students.
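
     To make the idea of a decision-making taxonomy concrete, the minimal sketch below shows a rule-based matcher from a student profile to candidate accommodations. The need categories, accommodation names, and matching rules are illustrative assumptions only; they are not the STELLA or Accommodation Station logic.

    # Hypothetical sketch of a taxonomy-style matcher: map a student's documented
    # needs to candidate test accommodations. The categories, accommodation names,
    # and rules are illustrative assumptions, not the STELLA or AS logic.
    from dataclasses import dataclass, field

    @dataclass
    class StudentProfile:
        english_reading_level: str               # "beginning", "intermediate", "advanced"
        home_language_literacy: bool             # literate in the home language?
        needs: set = field(default_factory=set)  # e.g., {"low_vision", "extended_time"}

    def recommend_accommodations(profile: StudentProfile) -> list[str]:
        recs = []
        if profile.english_reading_level == "beginning":
            recs.append("oral reading of directions in home language")
            if profile.home_language_literacy:
                recs.append("bilingual word-to-word glossary")
        if "low_vision" in profile.needs:
            recs.append("large-print or screen-magnified form")
        if "extended_time" in profile.needs:
            recs.append("extended testing time")
        return recs

    student = StudentProfile("beginning", True, {"extended_time"})
    print(recommend_accommodations(student))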

FY 2003

New Hampshire

Knowing What Students with Significant Cognitive Disabilities Know:
Defining and Disseminating Technical Criteria
for Alternate Assessments through a Research and Practice Partnership

     States and testing companies have struggled to identify technically adequate but educationally sound methods of assessing the small group of students with significant cognitive disabilities for federal accountability purposes. Typically, experts in educational programming for these students, along with key stakeholders, have advised state assessment offices in defining what the best possible outcomes of standards-based instruction should be for the students. From those definitions, states and test company partners have developed assessments to measure the outcomes for school, district, and state accountability purposes.

     In the next year, state assessment systems will undergo Title I peer review to determine whether the systems meet NCLB requirements. Technical manuals will be important pieces of documentation. This project will address the short-term practical necessity of technical adequacy documentation, and the longer-term research commitment to building measurement models that “work” to measure achievement for this small group of students. We will work at the theoretical and the practice levels, addressing our research questions through collaborative cross-disciplinary study, reflection, discussion, and explication; through prototype development and testing; and through extensive real-world application and review by technical and policy experts from multiple states with varied approaches to alternate assessment.

     The project has three primary goals:
1. The project will address the immediate practical challenge of documenting the technical adequacy of alternate assessments for students with significant cognitive disabilities, along with developing technical assistance processes and products for use with states during the project and after it ends.
2. The project will enhance fundamental knowledge of what the results of good teaching and learning look like for students with significant disabilities, allowing educational researchers, measurement experts, and practitioners to identify the kinds of evidence of standards-based learning that can yield valid and reliable inferences for accountability and school improvement purposes.
3. We will capture lessons learned that will help define areas for improvement of entire assessment systems. We will target areas where technical assistance is needed to document the technical adequacy of alternate assessments, identify gaps in our knowledge, and define needs for further research.

Oklahoma

Validity of Accommodations for LEP Students
and Students with Disabilities in Math and English

     The Oklahoma State Department of Education (OSDE) will serve as the lead state organization in collaboration with thirteen jurisdictions: Alabama, Georgia, Hawaii, Iowa, Kentucky, Louisiana, Nebraska, Nevada, New Jersey, Ohio, Oregon, Virginia, and West Virginia. Together they will investigate the validity of accommodations in math and English proficiency assessments for limited English proficient students with disabilities (LEP/SD). The proposed project addresses the need for states to identify valid accommodations for LEP students with disabilities in an effort to develop and implement reliable and valid English language proficiency tests as required by Title III, and to fairly assess all students in the math content area as required by Title I of the No Child Left Behind (NCLB) Act. As a result, the project will provide information on the validity of accommodations for future national and state assessments for LEP students with disabilities, a group that needs more attention because of the dual challenges of limited English proficiency and individual disabilities.

     This study examines the validity of accommodations in two ways: (1) comparing the performance of accommodated and non-accommodated non-LEP/non-SD students for whom accommodations are not intended, and (2) comparing the criterion-related validity of accommodated and non-accommodated assessment within a structural equation modeling approach. Researchers at the Advance Research & Data Analyses Center (ARDAC) and the California State University, Long Beach (CSULB) will assist the OSDE in the implementation of the project and be responsible for overseeing the research plans, data collection and analyses, and in part, the dissemination of the study’s outcomes. The Center for the Study of Assessment, Validity and Evaluation (C-SAVE) at the University of Maryland will serve as the independent evaluator of this project.
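
     As a concrete illustration of the second comparison, the sketch below checks whether the correlation between an accommodated test score and an external criterion (such as course grades) differs from the corresponding correlation for non-accommodated students, using a Fisher z test. This is a simplified stand-in for the structural equation modeling approach described above, not the project's actual analysis; the variable names and simulated data are assumptions.

    # Hypothetical sketch: compare criterion-related validity (test score vs. an
    # external criterion) for accommodated vs. non-accommodated students. A
    # simplified proxy for the SEM approach described above; names and data are
    # illustrative assumptions.
    import numpy as np
    import pandas as pd
    from scipy import stats

    def fisher_z_diff(r1, n1, r2, n2):
        """Two-sided test for the difference between two independent correlations."""
        z1, z2 = np.arctanh(r1), np.arctanh(r2)
        se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
        z = (z1 - z2) / se
        p = 2 * (1 - stats.norm.cdf(abs(z)))
        return z, p

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "math_score": rng.normal(500, 50, 400),      # illustrative scale scores
        "course_grade": rng.normal(80, 10, 400),     # illustrative external criterion
        "accommodated": rng.random(400) < 0.5,       # accommodation flag
    })

    acc = df[df["accommodated"]]
    std = df[~df["accommodated"]]
    r_acc = acc["math_score"].corr(acc["course_grade"])
    r_std = std["math_score"].corr(std["course_grade"])
    z, p = fisher_z_diff(r_acc, len(acc), r_std, len(std))
    print(f"r(accommodated)={r_acc:.2f}, r(standard)={r_std:.2f}, z={z:.2f}, p={p:.3f}")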

     The results of the study will help identify valid accommodations for LEP students with disabilities and may be applied to the general population of students with disabilities (with or without limited English proficiency). The large number of states participating in this project makes it possible for the results to be generalized to the nation with a greater level of confidence.

Rhode Island

Reaching the “Students in the Gap” through Web-based Module Assessments

     Given the current emphasis on accountability, all states need to provide equitable access to assessment for every student, and typically do so through two large-scale assessment systems, the regular (grade-level) and the alternate. However, 3-4% of the overall student population fall between the two assessment systems. Students with moderate cognitive deficits or severe learning disabilities do not qualify for the alternate assessment, but perform poorly on the current regular assessments. These students are capable of, and could demonstrate, greater proficiency on grade-level assessments if provided with the appropriate scaffolding and contextualization. To meet the needs of these “students in the gap,” the New England Compact Enhanced Assessment Instrument Project proposes to develop a web-based Task Module Assessment (TMAS) prototype.

     The TMAS will be web-based, allowing for local administration; it will include task- and performance-based assessments rather than on-demand assessments, computer-based accommodations where appropriate, and online and offline assessment options. It will be designed to measure student learning against grade-level standards in mathematics and/or English Language Arts at one grade level and will be aligned to the large-scale assessment. The final products will consist of common criteria across the four NE Compact states defining the process for identifying students who qualify for the Task Module Assessment; a TMAS prototype for one grade; online assessment architecture with computer accommodations and options for delivery and performance; validity and reliability results for the TMAS; “how to” guides to build the capacity of other agencies to deal with assessment issues for students in the gap; and a series of monographs to disseminate findings to help “demystify” validity and reliability issues.

     Dissemination will use strategically selected venues to ensure that all states and interested researchers have timely access to the products, including conference presentations, articles in newsletters, publications, and links from key websites to the New England Compact's website.

South Carolina

Achieving Accurate Results for Diverse Learners: Accommodations and Access Enhanced Item Formats for English Language Learners and Students with Disabilities (AARDL)

     The overall goal of the Achieving Accurate Results for Diverse Learners: Accommodations and Access Enhanced Item Formats for English Language Learners and Students with Disabilities (AARDL) project is to obtain more accurate results about the academic achievement of English language learners (ELLs) and students with disabilities (SWDs). AARDL proposes to empirically investigate strategies designed to increase access to test content for these students and others who may encounter barriers to demonstrating their knowledge of content areas and cognitive ability under regular testing conditions. The proposed study addresses all four of the absolute priorities laid out in the Request for Proposals as well as all three competitive preferences.

     The AARDL project involves a consortium of states and jurisdictions including South Carolina, North Carolina, Maryland, Minnesota, Oregon, Pennsylvania, and the District of Columbia, which will collaborate with the University of Maryland, the University of Oregon, the University of South Carolina, Data Recognition Corporation (DRC), and SERVE. This diverse collaborative of state departments of education, research institutions, testing contractors, and external evaluators will ensure that the project has a firm grounding in both theory and practice.

     The main objectives of the AARDL project are to (1) develop Access Enhanced (AE) Items in four subject areas and six grade levels with accompanying scoring guides focusing on responses from diverse learners; (2) investigate the reliability and utility of the Accommodation Station; (3) determine the comparability and scalability of the AE Items; and (4) disseminate results through reports and a handbook for developing AE items.

     The results of the AARDL project will provide an important contribution to the growing body of research into providing appropriate testing accommodations for those students for whom regular testing conditions pose a barrier to accessing content. New strategies such as the ones AARDL investigates are critical to the valid, reliable, and accurate assessment of English language learners and students with disabilities. Having accurate results for these and all students is essential to ensure the accountability of the educational system, determine how best to meet students' educational needs, and track student progress over time.

West Virginia

Project DAATA: Developing Alternate Assessment Technical Adequacy

     All states currently have alternate assessment systems for students with significant disabilities. The problem is that 50 different approaches and strategies have been taken in developing the tasks used in alternate assessments, aligning alternate assessments with state standards, developing the state standards on which alternate assessments are built, training teachers in administration, scoring, and reporting, validating the assessments, and setting policy guidelines. Little technical adequacy evidence exists for any of the 50 systems. This proposal reflects the next logical step in integrating practice with technically adequate measurement through the use of rigorous research methodologies.

     In this application, three major institutions (the WV Department of Education, the Council of Chief State School Officers, and Behavioral Research and Teaching at the University of Oregon) will work with a consortium of states to develop a handbook on the technical adequacy of alternate assessments. We also propose incorporating validated instruments and reporting systems within this handbook so that states have easy access to practical examples for use in developing technical adequacy in their own systems. We address five related components in our focus on technical adequacy, each of which is intended to affect classroom practice and improve student performance and progress: (a) content validity, which should support development of appropriate Individualized Educational Programs (IEPs); (b) generalizability, which should help teachers target classroom instructional practice; (c) reliability, which should provide a stable analysis of current and expected performance for students with significant disabilities in the most accurate manner; (d) criterion and predictive validity, which should help situate performance and allow teachers to trust the outcomes; and (e) consequential validity, which should help states report outcomes and improve training and practice.

     The handbook is designed for state-level use in training and policy development. The West Virginia Department of Education is the lead state and applicant for this grant. The Council of Chief State School Officers (CCSSO), via the Assessing Special Education Students (ASES) State Collaborative on Assessment and Student Standards (SCASS), brings several states together to conduct the research. Behavioral Research and Teaching in the College of Education at the University of Oregon is contracted to conduct much of the research operations. Nationally recognized researchers in alternate and large-scale assessment will assist in dissemination: the National Association of State Directors of Special Education (NASDSE), the National Center on Educational Outcomes (NCEO), and the Regional Resource Centers (RRC).
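
     As an illustration of the kind of evidence such a handbook might document, the minimal sketch below computes two simple indicators for a set of alternate assessment task scores: an internal-consistency reliability coefficient (Cronbach's alpha) and a criterion-related validity correlation. The data and names are simulated assumptions, not DAATA procedures or results.

    # Hypothetical sketch of two pieces of technical-adequacy evidence for an
    # alternate assessment: internal-consistency reliability across scored tasks
    # and a criterion-related validity correlation. Data are illustrative only.
    import numpy as np

    def cronbach_alpha(scores):
        """scores: 2-D array, rows = students, columns = scored tasks."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    ability = rng.normal(0, 1, size=(60, 1))                                   # latent proficiency
    task_scores = np.clip(np.round(2 + ability + rng.normal(0, 0.7, (60, 8))), 0, 4)  # 8 tasks, 0-4 rubric
    teacher_ratings = task_scores.sum(axis=1) + rng.normal(0, 3, 60)           # external criterion

    alpha = cronbach_alpha(task_scores)
    r_criterion = np.corrcoef(task_scores.sum(axis=1), teacher_ratings)[0, 1]
    print(f"Cronbach's alpha = {alpha:.2f}; criterion correlation = {r_criterion:.2f}")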

FY 2002

Colorado

Enhancing the Assessments of Students with Disabilities:
A Proposal for a Multi-State Special Education Collaborative

Abstract

The Colorado Department of Education is submitting this proposal as the lead agency for a Special Education Assessment Collaborative. The Collaborative comprises states and not-for-profit organizations that seek to improve how students with complex disabilities are assessed through alternate assessments in the content areas of English language arts, mathematics, and science. Colorado and the Collaborative states wish to enhance assessment efforts already in place to better meet the new accountability challenges of the NCLB Act and state accountability requirements. The Collaborative will strive to improve a variety of alternate assessment strategies by increasing their technical soundness, accessibility, efficiency, and feasibility for measuring adequate yearly progress. It will then examine the nature of the information derived from the multiple measures. The assessment methods will be developed, pilot tested, and analyzed in the course of this project.

Seven tasks will be undertaken. Three states will provide leadership on development projects, while five other participating states will select the components in which they wish to participate. Four organizations will assist with various aspects, as described below.

    1. A consensus standards framework will be developed by Measured Progress that will serve as the foundation for the improved assessment methods.
    2. Colorado, with Measured Progress, will lead in developing enhanced performance assessments.
    3. Iowa, with the Inclusive Large Scale Standards and Assessment group will develop instructional and assessment modules for use in portfolios.
    4. Iowa will partner with Measured Progress to improve observation, interview and data collection strategies.
    5. The Center for Applied Special Technologies will help make the standards frameworks and assessment methods more accessible through the application of universal design principles.
    6. Research will be conducted by Oregon to investigate the technical attributes of the individual measures and to determine the effects of multiple measures.
    7. Colorado State University will conduct an evaluation of the processes and products of the project.

Minnesota

Improving the Achievement of English Language Learners through Authentic Proficiency Assessments

Abstract

The Minnesota Department of Children, Families & Learning (CFL), in collaboration with the Nevada Department of Education, the South Carolina Department of Education, and the Wyoming Department of Education, proposes to develop new and innovative assessment tools to measure the progress of students with limited English proficiency, based on the elements of universal assessment design and proficiency-oriented assessments for second language learners. The University of Minnesota's National Center on Educational Outcomes (NCEO) and the University of Minnesota's Center for Advanced Research on Language Acquisition (CARLA) will also collaborate on this project to enhance the development of assessments for English language learners and to increase the validity of state assessments for all groups of students.

Information technology (IT) tools will be used to pilot methods of language assessment; to develop new methods to organize, collect, and score student assessment data; and to combine data from multiple measures to facilitate the evaluation of student progress over time. Staff development will be designed to expand teacher use of assessment results to improve instruction. These methods will be designed with the intent of expanding their use to other state assessment programs in the near future.

The objectives of this project are to (1) use principles of universal assessment design to review existing assessments and to develop a process for incorporating those principles into the development of new assessments required by NCLB, including standards-based listening and speaking proficiency assessments for LEP students and standards-based assessments in reading and mathematics for students in grades three through eight and high school; (2) use IT tools to develop authentic, contextually based listening and speaking assessments and to pilot innovative data collection and scoring methods; and (3) develop a web-based framework for ongoing staff development to help teachers interpret scores from state assessments and use that information to make appropriate instructional adjustments for school improvement, within the context of language development standards and state content standards.

Proficiency on state assessments is the primary indicator in ESEA of student academic achievement. States must develop educational systems that provide opportunities for maximum access to content and high expectations of learning for all students including students with limited English proficiency. It is critical that state assessment programs use measures of progress that are accurate and fair for all groups.

Nevada

Design and Development of an English Language Development Assessment

Abstract

The Nevada State Department of Education will serve as the lead state in a collaborative with 16 other states including Indiana, Iowa, Kentucky, Louisiana, Massachusetts, Michigan, Nebraska, Nevada, New Jersey, New York, Ohio, Oklahoma, Oregon, South Carolina, Texas, and West Virginia to design and develop an English language development (ELD) assessment over the period from December 2002 through September 2004. The proposed project addresses the need for states to implement an assessment that measures the annual growth of English language development in the domains of speaking, listening, reading, and writing among limited English proficient students. As a result, the project will produce intact core test forms and an item bank from which states can draw to create test forms that reflect local needs and characteristics. This new assessment will (1) measure the progress of LEP students in their development of English language proficiency (for a maximum of three years), (2) determine status with reference to English proficiency standards required at each grade level in pre-K-12, and (3) predict an LEP student's readiness for English language assessment. In addition to the design and development of an ELD assessment, the collaborative will field test the assessment and participate in standards-setting workshops and validity studies. The collaborative will also be trained on how to implement, score, and use the results of the ELD assessment.

The American Institutes for Research (AIR) will assist the Nevada State Department of Education in the implementation of the project and will be responsible for overseeing the development of the assessment framework and specifications blueprint, and for the development of all core assessment items and non-core items.

Oklahoma


Improving Alignment Tools for Enhanced, More Accessible Assessments:
Development of an Electronic, Automated Alignment Analysis for States

Abstract

The Oklahoma State Department of Education will serve as the lead state in a collaborative with 15 other states including Alabama, California, Delaware, Kansas, Louisiana, Massachusetts, Minnesota, New Jersey, North Carolina, Pennsylvania, South Carolina, Texas, Wyoming, West Virginia, and Wisconsin. The group proposes to expand and automate an alignment process for judging the match of assessments with content over the period from December 2002 through September 2004.

     The proposed project addresses very specific needs for improving assessments so that they 1) more closely match content standards, 2) are applicable for students with disabilities, and 3) increase capacity for linking across grades. The collaborative, organized through the Council of Chief State School Officers, will incorporate into the automated process the modifications necessary to 1) make the process valid for assessing special populations of students and 2) expand the procedures used so they will be applicable to every-grade assessments.

The proposed project will make the alignment process system available on a CD-ROM that can be readily distributed to states, thus increasing the use of the alignment tool in assessment development and verification. Along with automating the process, the collaborative will generate the necessary decision-making rules for aligning assessments to content standards.
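
For illustration, the sketch below shows one automated alignment check of the general kind described: counting how many items reviewers coded to each content standard and flagging standards with too few aligned items (a Webb-style categorical concurrence rule). The codes, threshold, and data are illustrative assumptions, not the collaborative's actual tool or decision rules.

    # Hypothetical sketch of one automated alignment check: tally assessment items
    # coded to each content standard and flag under-represented standards. The
    # item-to-standard codings and threshold below are illustrative assumptions.
    from collections import Counter

    MIN_ITEMS_PER_STANDARD = 6  # a common categorical-concurrence threshold

    # Reviewer codings: item id -> list of standards the item was judged to measure.
    item_codings = {
        "item01": ["M.3.1"], "item02": ["M.3.1", "M.3.2"], "item03": ["M.3.2"],
        "item04": ["M.3.3"], "item05": ["M.3.1"], "item06": ["M.3.2"],
    }
    standards = ["M.3.1", "M.3.2", "M.3.3", "M.3.4"]

    counts = Counter(code for codes in item_codings.values() for code in codes)
    for std in standards:
        n = counts.get(std, 0)
        status = "OK" if n >= MIN_ITEMS_PER_STANDARD else "UNDER-REPRESENTED"
        print(f"{std}: {n} items -> {status}")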

As a culminating activity of the project, the collaborative will host a workshop to train state assessment, curriculum, and special education staff in the use of the alignment process. An important outcome of the project will be the enhanced capacity of state department of education staff members to improve their assessment programs.

Assisting Oklahoma in the project implementation will be the Council of Chief State School Officers and staff from higher education, including researchers from the University of Wisconsin–Madison, the University of Oregon, the Human Resources Research Organization, and WestEd.

Pennsylvania

Proposal of the English Proficiency for All Students (EPAS) Consortium

Abstract

Giving all children the chance to succeed, regardless of their background, is the central purpose of our nation’s schools. As America’s diverse population continues to grow, more and more students are entering our education system with limited English proficiency. These students are faced with major, often debilitating obstacles as they begin the transition from their native language to English. The attached proposal offers a plan that will help ease this difficult journey.

The states represented in this Consortium believe that English language learner (ELL) students should participate in meaningful assessments of what they are learning, providing states with data to help address these students' educational needs. The states agree that the attached proposal not only complements current federal mandates but also goes beyond the requirements of ESEA, Title I, Part A.

This proposal envisions a true collaboration between a Consortium of states, AccountabilityWorks, and Educational Testing Service to develop scientifically based solutions to the assessment challenges facing ELL students. This project is designed to assist participating states in assessing and providing results for all ELL students. The states have selected AccountabilityWorks and ETS to serve as subcontractors on the project because of their excellent and relevant qualifications to perform the work involved.

Pennsylvania will serve as the Consortium’s lead state and fiduciary agent. Grant funds will flow from the lead state to AccountabilityWorks, who will collaborate with state educators, top researchers and practitioners to analyze relevant state standards and establish content benchmarks. AccountabilityWorks will pass these benchmarks to the ELL experts from ETS, who will then collaborate with teachers in each state to develop standards-based assessments drawn from scientific research on English language acquisition for K-12.

Rhode Island

The New England Compact, a Four-State Consortium
to Enhance the Quality of Their State Assessment Systems

Abstract

This project grows out of an existing collaboration formed by the Commissioners of Education of four states: Maine, New Hampshire, Rhode Island, and Vermont. These states have been working together since fall 2001 to discuss and address the changes to their state accountability systems under the No Child Left Behind Act (NCLB). During the year, the informal collaboration was formalized to become the New England Compact. The Compact proposes to leverage its shared commitment to maintaining challenging standards, the important role that local practitioners play in helping to design a state assessment system, and the human resources of the four states in order to improve the achievement of all students through the development of comprehensive academic assessment instruments, particularly technology-based assessments designed to meet the needs of students with disabilities, limited English proficient students, and other students who are at risk. This proposal goes beyond the requirements for the assessments described in Section 1111(b)(3) of Title I, Part A of the NCLB Act in the following ways.

First, the states will create common, priority standards from which they will create a test blueprint. The states will then be able to compare progress across the states and combine resources to develop the highest quality assessments. Second, the states will conduct a series of design experiments that focus on the impact of computer-based testing and accommodations on the validity of test scores for students with and without special needs, resulting in exemplars for further development. Teachers will participate in the design process. Third, a group of teachers will be trained to provide professional development to teachers in their states on how to create and use assessments that are aligned to the state's standards and that use the same accommodations, design, and alternatives for students with disabilities, limited English proficient students, and students who are at risk of academic failure.

There are six broad goals.

Goal 1. The Compact will establish a set of common, priority standards termed ‘common expectations’ in English/language arts and mathematics in grades 3-8 and high school.

Goal 2. The Compact will create a test blueprint, based on the common expectations, that will be used to cooperatively develop grade-level assessments in reading and mathematics that reflect the developmental issues of young children, the learning differences among all children, and the special needs of students with disabilities and limited English proficient students.

Goal 3. The Compact will develop and validate up to four assessments that are based on the common expectations and are designed to accurately measure academic content and skills achievement by limited English proficient students and students with disabilities.

Goal 4. The Compact will build the capacity of teachers within the Compact states to engage in effective classroom assessment and uses of assessment data from both state and local sources.

Goal 5. The Compact will enhance the states’ AYP systems to make them capable of demonstrating progress within the range of each category of performance and each disaggregated student achievement category at the individual school level.

Goal 6. The Compact will disseminate its products throughout the country to other state departments of education.

The resources for this project are substantial. The project is guided by a Research Technical Advisory Committee, representing national, regional and state experts in assessment, accommodations, universal design, mathematics, reading, and standards. These experts also bring the resources of their organizations, including two leading institutions of higher education, the Center for the Study of Testing, Evaluation and Educational Policy at Boston College and the Alliance at Brown University; resource-rich non-profit organizations, including the Annenberg Institute for School Reform, CAST, Educational Development Center, Inc., the Council of Chief State School Officers, the Center for Research on Evaluation, Standards, and Student Testing, the National Center on Educational Outcomes, TERC, and WestEd; and all the regional service providers funded by the U.S. Department of Education. Both the service providers and the state departments of education are providing resources beyond the funding from the Enhanced Assessment grant.

South Carolina

Taxonomy for Testing English Language Learners (TTELL)

Abstract

With increasing attention being paid to the issue of obtaining valid information about English language learners' (ELLs) academic knowledge and skills, different forms of accommodation have been suggested in the assessment of this population. However, no research to date has addressed how the large-scale testing needs of English language learners in schools should be systematically identified and then matched to specific accommodation methods. Thus, the project proposes to develop and evaluate a rigorous taxonomy model that would appropriately and systematically define and identify different types of English language learners based on their testing needs, and then provide the mechanisms that match these students to the proper accommodation methods. Furthermore, the project will also investigate the validity of the proposed decision match.

This research and development project is a collaborative undertaking among the South Carolina Department of Education, the University of Maryland, the Maryland State Department of Education, the North Carolina Department of Public Instruction, the District of Columbia Public Schools, the Austin Independent School District, the American Association for the Advancement of Science, and a law firm that specializes in federal and local education law. The collaboration of multiple states and districts will allow the project to research the appropriateness of the work at both the state and local levels. This consortium will ensure that the products are technically, legally, and academically defensible, and that they are useful for educators at different levels of schooling and across a variety of assessment and accountability systems.

This project involves three phases: 1) identifying and analyzing the differential assessment needs of ELLs through discovery interviews and survey information; 2) developing a taxonomy and process model to match access needs with appropriate large-scale testing accommodations and developing dissemination materials; and 3) investigating the technical rigor of these tools and the overall decision-making process.

Utah

Improving the Assessment of Hispanic, Native American,
and Other English Language Learners

Abstract

The following is an application that the Utah State Office of Education is submitting on behalf of the Mountain West Assessment Consortium. Utah and the Mountain West Assessment Consortium propose to develop improved assessments of English language proficiency with external funding through the Enhanced Assessment Instruments Competitive Grant Program (Title VI, Section 6112) as well as non-grant funds that the consortium partners will provide. The Mountain West Assessment Consortium is a group of states in the mountain west and northern plains regions. Measured Progress, Inc., a not-for-profit educational assessment organization, is also a partner in the consortium.

The goal of this collaborative effort is to develop a series of academically oriented assessments of English language proficiency at four levels (K-3; 4-6; 7-9; 10-12). This series of assessments will be designed to enable teachers to more effectively diagnose the level of English language proficiency of English language learners in their classrooms. Levels of English language proficiency (pre-emergent, emergent, intermediate, fluent, advanced) will be assessed for different language skills (listening, speaking, reading, writing, comprehension) using classroom learning activities and technical vocabulary for each major content area (English language arts, mathematics, science, social studies). Teachers will therefore be able to provide more appropriate academic instruction to English language learners. This, in turn, would lead to better progress of English language learners toward proficiency in the major content areas.

The scope of the project is divided into Phase I activities and Phase II activities. During Phase I, which will occur in the fall of 2002, the consortium will undertake on its own to define a consensus set of learning activities and technical vocabulary for the content areas; develop a consensus set of English language development standards; and develop a draft test blueprint (test framework). During Phase II, which will occur during the grant period from December 2002 through September 2004, the series of assessments will be developed. The major activities include: validating the test blueprint; developing the test items; conducting pilot tests and field tests in each state; conducting statistical analysis of item performance and test reliability; conducting validity studies with other quantitative and qualitative data; setting standards for English language proficiency cut points; and developing operational test forms, along with ancillary test materials such as administration manuals, scoring guides, and demonstration materials.
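
As a small illustration of the cut-point step, the sketch below classifies a scale score into one of the five proficiency levels named above using hypothetical cut scores. The numeric cuts are assumptions; the actual standards would be set in the planned standard-setting work.

    # Hypothetical sketch: classify a scale score into a proficiency level using cut
    # points of the kind a standard-setting workshop would produce. The level names
    # follow the abstract; the numeric cut scores are illustrative assumptions.
    from bisect import bisect_right

    LEVELS = ["pre-emergent", "emergent", "intermediate", "fluent", "advanced"]
    CUTS = [400, 450, 500, 550]   # illustrative scale-score cut points between levels

    def proficiency_level(scale_score: float) -> str:
        return LEVELS[bisect_right(CUTS, scale_score)]

    for score in (380, 460, 575):
        print(score, "->", proficiency_level(score))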

Special features of this assessment series include test content that is academically oriented; a test design that is diagnostic, providing a clear link to appropriate instructional interventions; test content that is based on the target language (English) rather than on the dominant language of the English language learner, so that the assessment series can be used with students of any language or cultural background; technical information about the assessment series resulting from the extensive set of reliability and validity analyses that will be conducted; and delivery of a complete assessment package that the consortium states can use and make available to other states at the end of the grant period.

Wisconsin

Enhanced Assessment Instruments for Limited English Proficient Students

Abstract

Goal 1: To develop State standards-based assessment instruments for measuring English proficiency and literacy skills for LEP students.

Goal 2: To develop and enhance State standards-based alternate assessment with alternate performance indicators (APIs) for measuring academic performance of LEP students in math, science, and social studies.

Goal 3: To improve and promote English language acquisition and academic achievement through technology. The project will develop, adopt, and enhance computer and technology skills with performance indicators related to English language acquisition, literacy skills development, academic content learning, and career path exploration for English language learners. Technology-based assessment is an integral part of this goal.

Goal 4: To provide training-of-trainers on assessment instruments designed for measuring English language proficiency, literacy skills, and standards-based alternate assessment (API/MECCA) for prospective trainers at the local educational agency (LEA) and institution of higher education (IHE) levels.

Goal 5: To collaborate with institutions of higher education, local educational agencies, and other research institutions in the development, research, and administration of assessment instruments.

Goal 6: To disseminate information on the development and administration of assessment instruments at the state and national levels.

Goal 7: To conduct research on the validity and reliability of the assessment instruments.

FY 2016

Maryland

Innovations in Science Map, Assessment, and Report Technologies (I-SMART) Project Objectives and Activities:

Abstract

Next Generation Science Standards (NGSS) reflect high expectations for students and are based on a multidimensional model of learning science. As states adopt the NGSS, high-quality assessments are needed to measure student learning of more rigorous standards and provide timely and useful feedback about student performance. I-SMART’s ultimate goal is to maximize science achievement and progress across grades for students with significant cognitive disabilities (SCD) who take alternate assessments and for students with or without disabilities who are not yet meeting grade-level standards.

Goal 1: Develop and evaluate a learning map model for science. The project will build on existing local neighborhood maps around science grade-level targets for students with SCD by integrating science map neighborhoods with a multidisciplinary learning map that includes knowledge and skills in English language arts and mathematics. Activities include developing and evaluating the learning map model.
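
To illustrate the underlying structure, the sketch below represents a learning map as a directed graph in which nodes are knowledge and skill statements and edges run from precursor skills to the skills they support. The node names are invented for illustration and are not drawn from the I-SMART map.

    # Hypothetical sketch of a learning map as a directed graph: nodes are knowledge
    # and skill statements; edges point from a precursor node to the node it supports.
    # Node names are illustrative, not taken from the I-SMART learning map.
    import networkx as nx

    learning_map = nx.DiGraph()
    learning_map.add_edges_from([
        ("observe object properties", "classify materials"),
        ("classify materials", "model states of matter"),
        ("measure with nonstandard units", "model states of matter"),
        ("model states of matter", "explain phase change"),   # grade-level target
    ])

    target = "explain phase change"
    precursors = nx.ancestors(learning_map, target)
    print(f"Precursor skills for '{target}':")
    for node in nx.topological_sort(learning_map.subgraph(precursors | {target})):
        print(" -", node)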

Goal 2: Design, develop, and evaluate assessments that incorporate science disciplinary content and science and engineering practices in highly engaging, universally designed, technology-delivered formats. Using an evidence-centered design approach, we will develop testlets (short assessments) that measure students’ knowledge and skills in science content aligned to the learning map. Universal Design for Learning principles will be incorporated to maximize student engagement and minimize barriers. After prototyping innovative items and testlets and receiving stakeholder input, refined testlets will be externally reviewed, pilot tested, and evaluated for their potential to support reliable, valid, and fair assessment.

Goal 3: Design, develop, and evaluate a dashboard that provides diagnostic feedback based on student performance on science assessments. Using iterative prototype designs and with input from stakeholders, we will develop a reporting dashboard that provides feedback on individual student performance on the new testlets. Using information from the learning maps and connections with other content areas, results will support teaching, learning, and communication with parents. The dashboard will include recommendations for instruction and embed just-in-time assessment literacy supports to facilitate appropriate interpretations and uses of results.

Goal 4: Broadly disseminate project materials and findings to a variety of audiences. The project’s dissemination plan includes dissemination of materials and products developed in goals 1-3, lessons learned during the design process, and research outcomes to stakeholder organizations, educators in the field, professional organizations, researchers, and policy makers.

Priorities: I-SMART addresses all four absolute priorities and competitive preference priorities 1(a&c) and 2(a&c). The project will be in collaboration with five states (Maryland-lead, Missouri, New York, New Jersey, Oklahoma), the University of Kansas Center for Educational Testing and Evaluation, the Center for Applied Special Technology (CAST), and BYC Consulting to produce assessments and materials to support comprehensive alternate assessments that include multiple measures of student progress over time. The project delivers innovative science assessments and score reports that improve the utility of information about student performance. I-SMART includes a comprehensive dissemination plan for materials, processes, and results.

Outcomes: The science learning map model includes multiple pathways for students to learn science and reach challenging grade-level expectations. Assessments aligned to the learning map model will measure student learning. The reporting dashboard would be appropriate for within-year uses and may also be useful for fine-grained reporting of summative results.

Participants and sites: Approximately 4,500 students and their teachers across the partner states.

Nebraska

SCILLSS Project Objectives and Activities:

Abstract

In the Strengthening Claims-based Interpretations and Uses of Local and Large-scale Science Assessments (SCILLSS) project, we propose to establish a foundation from which a broad range of enhanced science assessments that yield valid score interpretations can be built, evaluated, and shared across states, local education agencies, schools, and classrooms using a principled-design approach.

To address this objective, SCILLSS is organized into six phases. Phase 1 includes project management activities to ensure that the project is managed appropriately. Phase 2 includes needs assessments to gather important information about the status and characteristics of state and local assessment systems. Phase 3 involves the creation of a validity evaluation framework that can be tailored to specific assessment types and contexts. Phase 4 focuses on the intra-assessment examination of performance level descriptors, task models, items, and blueprints and the creation of large-scale assessment design and development tools that target standards-based concepts and skills. Phase 5 involves the creation of classroom-based evidence and tools to support effective interpretations and uses of large-scale assessment results. Phase 6 involves project evaluation and reporting to evaluate states’ progress, guide next steps, and provide useful reports.

Applicable Priorities: Through the SCILLSS project, we propose to address each of the Secretary’s four absolute priorities (APs) and three competitive preference priorities (CPPs). We address AP1 (collaboration) by bringing together three states, three independent organizations, and an external evaluator to improve the quality of statewide assessment systems in science. To address AP2 (multiple measures) we will establish a means for states to strengthen the meaning of statewide assessment results and to connect those results with local assessments in a complementary system. We will collect aggregated statewide assessment data and individual exemplars in a body of evidence that supports analysis of cross-sectional and within-student progress, as emphasized in AP3 (charting student progress over time). For AP4 (comprehensive assessment instruments) we will build principled-design tools to guide educators through a replicable process aimed at strengthening their assessment systems in science.

SCILLSS will address CPP1 (developing innovative assessment item types and design approaches) by using principled-design methodologies to evaluate current science assessment items and to develop task models for new innovative science items. We will address CPP2 (improving assessment scoring and score reporting) by engaging state and local educators to clarify the intended interpretations and uses of assessment scores, and to create a repertoire of tools aimed at improving the utility of student performance results for all stakeholders. We will address CPP3 (inventory of state and local assessment systems) by administering a needs assessment for each state to review their statewide and local assessments for quality, standards and instructional alignment, purpose, utility, and equity.

Proposed Project Outcomes: A primary goal of SCILLSS is to leverage existing tools and expertise to generate more broadly applicable resources and to strengthen the knowledge base among stakeholders for using principled-design approaches to create and evaluate quality science assessments that generate meaningful and useful scores. The SCILLSS tools and resources will be designed to have applicability and use beyond the participating project states.

Number of Participants to be Served: The SCILLSS project will involve key state and local education agency staff, approximately 120 educators, and a broad representation of students from the three participating states (NE, MT, and WY), and will generate widely applicable tools and resources for use and dissemination beyond the participating states.

Number and Location of Proposed Sites: Project activities will be conducted virtually as well as on-site at local school districts and state education agencies within the partner states.

FY 2015

Arizona

Alternate English Language Learning Assessment (ALTELLA)

Abstract

This project will develop an English language proficiency (ELP) assessment for English learners (ELs) with the most significant cognitive disabilities, improving information about how these students are progressing toward English mastery and supporting their success in school and on the path to college, career, and community readiness. The project will apply the lessons learned from the past decade of research on assessing ELs and students with significant cognitive disabilities, as separate groups, to develop an ELP assessment based on alternate performance standards for ELs with significant cognitive disabilities: the Alternate English Language Learning Assessment (ALTELLA). ALTELLA will be based on current ELP standards and will allow ELs with significant cognitive disabilities to demonstrate both receptive and expressive English language development. The project will establish a collaboration of states, including Arizona, Michigan, Minnesota, South Carolina, and West Virginia, to complete the foundational work needed for an evidence-centered design approach to the development of ALTELLA. By the end of the project, participating states will have an evidence-centered design validity argument for ALTELLA as well as item templates to use in the next phases of the alternate assessment development.

California

Development of Enhanced Career and College Readiness Indices for Smarter Balanced High School Assessments

Abstract

The California Department of Education (CDE), in partnership with the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at UCLA, will fill a critical void in K-12 assessments by providing innovative indices that can support improved career readiness inferences based on the results of the Smarter Balanced Assessment Consortium (Smarter Balanced) high school assessments. The proposed work will add substantial value to the current CDE high school assessments by using item factor models to create scores that include career readiness interpretations. This will create a new framework for understanding the Smarter Balanced assessment results that will inform reporting and interpretation statewide and across the Smarter Balanced member states. It will also create a new set of digital support resources. California will collect data from 1,325 high schools, two California community colleges, and the Surface Warfare Officers School in Newport, RI.
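
As a rough illustration of deriving a supplementary index from item responses, the sketch below fits a single-factor model to simulated dichotomous responses and applies an illustrative cut point. A linear factor analysis is used here only as a stand-in for the categorical item factor models the project describes; the data, cut point, and interpretation are assumptions.

    # Hypothetical sketch: derive a single factor score from simulated item responses
    # and flag scores above an illustrative cut. Linear factor analysis stands in for
    # the categorical item factor models described above; all values are assumptions.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    ability = rng.normal(size=500)                       # latent trait, 500 students
    difficulty = rng.normal(size=30)                     # 30 items
    p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
    responses = (rng.random((500, 30)) < p_correct).astype(float)

    fa = FactorAnalysis(n_components=1, random_state=0)
    factor_scores = fa.fit_transform(responses).ravel()

    cut = np.quantile(factor_scores, 0.6)                # illustrative cut point
    readiness_flag = factor_scores >= cut
    print(f"{readiness_flag.mean():.0%} of simulated students at or above the illustrative cut")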

Kansas

Use of Learning Maps as an Organizing Structure for Formative Assessment

Abstract

The Use of Learning Maps as an Organizing Structure for Formative Assessment project will investigate the use of organized learning models as the binding structure linking curriculum, instruction, and formative assessment. This project will develop learning maps with descriptions explaining the nodes and connections to help teachers plan instruction that is sensitive to cognitive development. For each learning map, the project will generate an instructional activity and teacher’s guide. The project will also produce performance tasks, rubrics, and objective item sets for teachers to administer as formative assessments to generate the data they need to address individual learning needs. These materials will give teachers the knowledge and tools they need to provide effective formative assessment and advance student learning. All materials will be made available in an intuitive web-based platform where teachers will explore learning maps and select materials for use with their students. During the final year of the project, up to 400 teachers will participate, providing evidence of scalability. Development activities will take place at the Center for Educational Testing and Evaluation (CETE) at the University of Kansas. Implementation activities will take place in the classrooms of teachers in five partner states: Alaska, Iowa, Kansas, Missouri, and Wisconsin.

Michigan

Dynamic Interactive Formative Assessment Tasks and End-of-Unit Tests for Measuring Challenging Concepts and Skills of Diverse Middle School Students

Abstract

Michigan will partner with Wisconsin, Maryland, New Jersey, and Nevada, and the Wisconsin Center for Education Research at the University of Wisconsin–Madison, to develop an operational set of performance-based, technology-interactive, formative assessment tasks, end-of-unit assessment modules, and related teacher tools aligned to the Next Generation Science Standards (NGSS). The goal of the project is to improve the assessment of challenging science learning for all middle-school students. The project will produce 12 technologically interactive, end-of-unit performance diagnostic assessments using 36 extended tasks and 35-40 additional classroom-embedded extended performance assessment tasks designed for on-demand use by the teacher. It will produce individualized diagnostic student- and classroom-level reports generated immediately after students complete the tasks and tests. The project will produce and evaluate materials and related professional development for teachers to support and inform task use, interpretation, and differentiated learning based on individualized results. The project will investigate the relationships between traditional and innovative item types that measure similar content and depth, particularly to identify ways to measure challenging science knowledge and abilities of widely diverse students, including English learners, students with learning disabilities, and mainstream students.

Minnesota

Data Informed Accessibility – Making Optimal Needs-based Decisions (DIAMOND)

Abstract

The DIAMOND project is a collaboration among Minnesota, Alabama, Maryland, Michigan, Ohio, West Virginia, Wisconsin, and the Virgin Islands, together with the National Center on Educational Outcomes. The project’s goal is to improve the validity of assessment results and interpretations for students with documented needs by developing guidelines for making informed decisions about accessibility features and accommodations. It will promote a decision-making process that moves beyond a checklist approach (which often results in identifying tools and accommodations that do not provide access for the student) to an approach that relies on classroom progress data and other measures charted over time to evaluate individual student needs. All students who require accessibility and accommodations supports – general education students with documented accessibility needs, students with disabilities, English language learners (ELs), and ELs with disabilities – will be served by this project.
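
As one small illustration of using classroom progress data charted over time, the sketch below estimates each student's weekly growth trend and flags students whose trend falls below an expected rate, suggesting a review of their accessibility supports. The thresholds and data are illustrative assumptions, not the DIAMOND guidelines themselves.

    # Hypothetical sketch: chart a student's progress-monitoring scores over time and
    # flag students whose growth trend suggests their accessibility supports should be
    # reviewed. Thresholds and data are illustrative assumptions only.
    import numpy as np

    def weekly_trend(scores):
        """Least-squares slope of scores across consecutive weeks (points per week)."""
        weeks = np.arange(len(scores))
        slope, _intercept = np.polyfit(weeks, scores, 1)
        return slope

    students = {
        "student_a": [12, 14, 15, 17, 19, 21],   # steady growth
        "student_b": [11, 11, 12, 11, 12, 11],   # flat trend
    }

    EXPECTED_GROWTH = 0.8   # illustrative expected points per week
    for name, scores in students.items():
        slope = weekly_trend(scores)
        flag = "review supports" if slope < EXPECTED_GROWTH else "on track"
        print(f"{name}: slope={slope:.2f} points/week -> {flag}")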