FY 2005


Implementing and Improving Comprehensive and Balanced Learning and Assessment Systems for Success in High School and Beyond: A Collaborative Project of Ten State Departments of Education, the Council of Chief State School Officers, the University of Pennsylvania Consortium for Policy Research in Education, Edvantia, and the Educational Testing Service

     This ten-state collaborative, led by the State of Delaware in concert with CCSSO, proposes a three-phase capacity-building, research, and evaluation project to provide intensive, high-quality professional development for state, district, and high school teams in the effective implementation and improvement of a comprehensive and balanced learning and assessment system, including formative assessment. It addresses the needs of all students in high-poverty, low-performing high schools, including students with disabilities and English language learners, by combining two innovative strategies for enhancing learning and assessment systems, supporting continuous improvement in teaching and learning and progress toward NCLB goals: 1) formative assessment, shown by research to lead to significant gains in student achievement, confidence, and motivation, especially for low-performing students; and 2) Individual Learning Plans (ILPs), through which students and teachers use data effectively to plan, monitor, and accelerate progress.

     This project meets the urgent need to provide guidance and a cost-effective, flexible model that builds state and district capacity to deliver high-quality professional and leadership development on implementing and improving a comprehensive and balanced learning and assessment system aligned with state standards. It includes a formative evaluation to document the issues, successes, and challenges of building a balanced system and will give states and districts a guide that operationalizes such a system for practitioners, with concrete examples drawn from the variety of state and local sites. The project has four goals:

  1. Develop and implement a practitioner and research-based vision of a comprehensive and balanced assessment system, including formative assessment.
  2. Build state leadership capacity and support state technical assistance to districts and high schools in providing high-quality professional and leadership development in balanced assessment systems and specific practices of assessment for learning.
  3. Begin using district and high school teams to implement the vision and assessment for learning in schools and classrooms, integrating with individual learning plans.
  4. Generate and disseminate tools, techniques, and new knowledge to support future approaches to teaching and learning.

     Training is an essential component and will include a self-examination of each state’s assessment program. It will instill in each state team the capacity to work effectively with local districts and a sample of high-poverty, low-performing high schools on the effective design, implementation, and use of assessment systems that balance continuous classroom, interim formative, and accountability assessments in an integrated system.


Georgia Consortium (Georgia, Hawaii, and Kentucky)

Assessing One and All: A Partnership for Success

     In the context of high expectations for all students and fully inclusive assessment and accountability systems, our consortium of States, university partners, researchers, and advocates will explore and document the effects of multiple methods of assessment that meet identified student needs, to ensure all children are able to show what they know of the grade-level, standards-based curriculum against appropriate and high achievement standards. The states will partner in three separate but related investigations of assessment options for including every student appropriately in state assessment and accountability systems. Each of the states will learn from the others the potential utility of a range of formative and summative methods of determining what students know and are able to do in response to identified student needs, and we will then share our understanding nationally. Our investigations include:

  • a study designed to better understand the lowest performing students, to carefully examine the qualities of their performance, and to evaluate instructional and assessment techniques designed to make learning and assessment more accessible to all students;
  • an interdisciplinary pilot to develop high-quality, validated, within-grade-level performance indicators and performance tasks to measure progress and attainment of “hard-to-assess” students; and
  • an evaluation of the performance impact and comparability of an online, technology-based assessment, including a study of the users’ learning characteristics.

     Our project is sponsored by a consortium of States directly involved in the investigations, with additional support from other states through consulting relationships (Massachusetts) and membership on the external review panel (staff from the Arkansas, Delaware, Ohio, and Puerto Rico Departments of Education), along with external experts from advocacy organizations, university experts in curriculum, special education, and measurement, and experts in inclusive assessment systems. Our project’s state partners have investigated participation and performance status in their states and have identified the options they believe are needed for these students to be assessed well, to count in accountability systems, and to achieve the high expectations the states have set for them. This proposed project will allow systematic investigation of these options. Our external partners are committed to helping the three states capture lessons learned across these investigations. All partners in this project believe that it is through multiple investigations, collaborative thinking, and creative problem identification and solution finding that we will achieve truly inclusive assessment and accountability systems that benefit ALL students.


Pacific Assessment Consortium

     This application is being submitted by a Consortium of four State educational agencies (SEAs) and one private, not-for-profit educational research organization. The Consortium members are the Hawaii State Department of Education (HIDOE), the American Samoa Department of Education (ASDOE), the Commonwealth of the Northern Mariana Islands Public School System (CNMI PSS), the Guam Public School System (Guam PSS), and Pacific Resources for Education and Learning (PREL). The purpose of the proposed project is to design, develop, and disseminate new K-3 English Language Proficiency (ELP) assessments that are appropriate for the four SEAs’ large populations of Pacific Islander students.

     Pacific Islanders represent one of the smallest racial minorities identified in the 2000 U.S. Census, but they make up a significant percentage of the general and student populations in American Samoa, the CNMI, Guam, and Hawaii. Although English is the medium of instruction in the public school systems of these four jurisdictions, many of the Pacific Islander students in these systems do not learn English as their first language. Instead, their first language is a local Pacific language, and sustained exposure to English begins only when the child enters elementary school.

     ELP is critical to the academic success of Pacific Islander and all other students in the U.S. To date, however, little is known about the ELP of Pacific Islander students because there are no existing, off-the-shelf assessments that provide reliable data for this population.

     The Pacific Consortium believes that the significance of its proposed project is based on three factors:

  1. It fulfills the vision of NCLB by ensuring that a relatively small and overlooked subpopulation of students receives the services it needs to attain challenging academic standards;
  2. It addresses the qualitative differences between Pacific Islanders and other ELLs in terms of the former’s experiences with their native languages and the school systems in which they participate; and
  3. It provides a valuable educational tool for schools and districts on the U.S. mainland where Pacific Islanders are enrolling in increasing numbers.


Consortium for Alternate Assessment Validity and Experimental Studies

     The purposes of the consortium are to advance the validity of alternate assessment score interpretations and to conduct feasibility studies to inform future assessment practices. Specifically, it is proposed that a consortium of assessment leaders from seven states (AZ, HI, ID, IN, MS, NV, and WI) and measurement experts affiliated with Vanderbilt University collaborate to enhance the scientific rigor of alternate assessment validity research through replication, increased samples, and tests of generalization. Each of these states requires additional validity evidence to support its alternate assessment score and usage claims. These states also want to better assess hundreds of students with less severe disabilities and will investigate item modification strategies to increase accessibility on multiple-choice tests. To address these needs, each state will (a) conduct a study to enhance evidence for the validity of its existing alternate assessment and (b) participate in experimental field trials to examine the effects of item modifications on accessibility and score comparability for students with disabilities. Idaho’s alternate assessment (IAA) was one of the first approved by the USDOE; thus, Idaho will serve as the lead consortium partner. The validity study each state has agreed to complete replicates a multi-source, multi-method investigation conducted with the IAA to examine its concurrent and construct validity evidence for a robust sample of students with disabilities. The Vanderbilt measurement group will provide consortium members with research design and analysis resources for their validity studies, as well as training sessions on universal design, alignment practices, and item modification principles for current and future alternate assessments. In turn, each state will conduct a portion of a multi-state, experimental study examining the effect of item modifications on the testing preferences and test performances of students with and without disabilities.
     Over the 18-month project, the consortium will enhance the validity evidence for current alternate assessments, advance understanding of the effects of item modifications on score comparability for future alternate assessments, build capacity in each state to conduct future validity studies, and actively disseminate results to other states.

North Carolina

Strengthening the Comparability and Technical Quality of Test Variations

     With North Carolina serving as the lead state, a consortium of 29 states – Alabama, Alaska, Arizona, California, Connecticut, Delaware, Georgia, Hawaii, Kansas, Kentucky, Louisiana, Michigan, Minnesota, Missouri, Nevada, New Mexico, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Carolina, South Dakota, Texas, Virginia, West Virginia, Wisconsin, and Wyoming – and the Department of Defense Education Activity proposes to develop a research- and best-practice-based handbook of procedures for evaluating and documenting the comparability of scores from different versions of state assessments that are intended to measure the same content standards, with the same rigor, as the general assessment.

     The proposed project involves (a) collecting and synthesizing published and unpublished research about conducting comparability studies and evaluating the results and (b) conducting research to develop procedures for evaluating the comparability of four categories of test variations: (1) translations; (2) clarified language versions; (3) alternative formats such as portfolios or non-parallel native language forms; and (4) computer-delivered versions of the paper-and-pencil test. The research builds upon existing research that is relevant, but usually not directly applicable, to state assessment programs. Each type of test variation will be studied using two state assessments across grade levels and content areas so that the results will be generalizable to multiple situations.

     Results of the literature review and research will be organized in the handbook to provide procedures, designs, and statistical techniques for evaluating and documenting the comparability of scores from each of the four categories of variations of general tests. The handbook will be disseminated to all states and extra-state jurisdictions through a webcast, in-person dissemination meetings, presentations at national conferences, and publications.


Increasing Assessment Validity and Reliability through Systematic Task Development, Training, and Supported Decision Making

     In this application for funding, we have formed a collaborative group of states, along with the University of Oregon, to investigate the technical adequacy of decision-making for the participation of students with disabilities in large-scale testing. The research begins with the use of a computer-based Internet application, the Assessment and Decision Support System (ADSS), to guide teachers to a participation decision for each student, selecting from among as many as five potential assessment options. The five potential options we consider are: (a) general assessment, (b) general assessment with accommodations, (c) an alternate assessment judged against grade-level achievement standards, (d) an alternate assessment judged against modified achievement standards, and (e) an alternate assessment judged against alternate achievement standards.

     Building on five years of research with this application, we propose expanding the tool to include a teacher perception component and an assessment of student skill level to generate a report that can be used by Individualized Education Program (IEP) teams when making participation and accommodations decisions for their students. Four other states will also participate in this aspect of the research. A research study on the ADSS will use two basic designs: crossed (in which each IEP team uses this decision-making tool for one student and a typical checklist, as a control condition, for another student) or nested (in which IEP teams are randomly assigned to use the ADSS or a traditional checklist).

     In this grant application, we also plan to develop item templates for an alternate assessment based on both modified and alternate achievement standards. We base these templates on performance tasks that have been the mainstay of two states’ alternate assessments but expand them to reach further into grade-level content standards. We plan to align them with the grade-level content standards in these two states (Oregon and Alaska) and vertically scale performance into grade groups (elementary, middle, and high school). In this process, we will create technical manuals that other states will be able to use in developing their own alternate assessments, ensuring that content-related evidence is explicitly addressed. Finally, we plan to develop systematic training for teachers that provides other states a model for ensuring reliability of administration and scoring of alternate assessments.

     Throughout the process, we focus on collecting reliability-related evidence in support of the alternate assessment, again providing other states a model for both research and development. We base our efforts on the need for more research that directly addresses measurement and technical adequacy, which, though common among state testing programs, is often lacking in decision-making that includes students with disabilities in all three critical areas: development of an initial participation-accommodation station for making initial decisions about the use of accommodations or participation in an alternate assessment, development of item templates for performance tasks aligned with grade-level content standards, and training to qualify assessors. We propose not only conducting this research by developing exemplary protocols in all three areas but also collecting procedural evidence and analyzing the outcomes.

     We plan to disseminate our work in three ways: (a) posting our materials on the web with the University of Oregon and the Oregon Department of Education, (b) presenting at national conferences (the Large-Scale Assessment Conference and the National Council on Measurement in Education), and (c) publishing in refereed journals.

Rhode Island

Obtaining Necessary Parity through Academic Rigor (ON PAR): Research, Development, and Dissemination of Alternate (Parallel) Assessments for English Language Learners

     The Rhode Island Department of Elementary and Secondary Education (RIDE), on behalf of the 13-state World-class Instructional Design and Assessment (WIDA) Consortium, proposes to develop and implement an accessible and valid assessment for beginning English language learners (ELLs) that can be used for state accountability purposes to meet the requirements of federal law. To achieve this aim, the project encompasses the following six goals: (a) develop a prototype of a standards-based science assessment instrument designed to measure the academic achievement of beginning ELLs; (b) conduct scientifically sound research to ensure the validity, reliability, fairness, and comparability of the assessment; (c) through research, explore links between English language proficiency (ELP) and academic achievement and chart student progress over time; (d) explore the feasibility of using this assessment for assessing the academic achievement of ELL and non-ELL students with particular disabilities; (e) disseminate information about this No Child Left Behind-compliant, standards-based academic assessment, the WIDA Consortium assessment system, and the results of related research; and (f) collaborate with states (SEAs), institutions of higher education (IHEs), and other experts to ensure that all goals are met.

     Through this project, WIDA will develop an academic assessment instrument for ELLs at the beginning levels (1–2 out of 6 total levels) of ELP and literacy development. These are the students for whom the regular state assessments, even in an accommodated form, do not usually yield valid results due to the assessments’ linguistic complexity, dependency on print, and limited visual and graphic support. The project proposed here focuses on developing an assessment in the area of science that will be a model for developing similar assessments in additional grades and content areas. These new assessments—tentatively titled Obtaining Necessary Parity through Academic Rigor (ON PAR) in the area of science—will achieve cognitive and psychometric rigor comparable to that of regular state assessments but will provide greater contextualization and linguistic scaffolding, thereby making content more accessible. ON PAR will be innovative; to date, no measures of achievement across the content areas and anchored in state academic content standards have been exclusively designed for and piloted, field-tested, and validated on ELLs. For the purposes of this project, validation studies will be focused in the New England Compact states of Rhode Island, New Hampshire, and Vermont and in New Jersey, but alignment studies and additional research will ensure that the assessment is valid, reliable, and appropriate for all WIDA states.

     The WIDA Consortium currently represents approximately 380,000 English language learners in over 10,000 schools in the states of Alabama, Delaware, the District of Columbia, Georgia, Illinois, Kentucky, Maine, New Hampshire, New Jersey, Oklahoma, Rhode Island, Vermont, and Wisconsin. In addition, Kansas, North Dakota, and Pennsylvania have expressed interest in this project with formal letters of support. WIDA estimates that within these states at least 180,000 ELL students would be eligible to take ON PAR. The WIDA Consortium has a track record of success. Since its inception only three years ago, WIDA has developed ELP Standards and successfully launched the ACCESS for ELLs™ ELP test. Collaboration among partner states, contractors, and consultants is a hallmark of the consortium and is fundamental to its success. This collaboration will continue with this new project. In addition to RIDE and other WIDA states, essential partners for this project include the Center for Applied Linguistics (CAL); the National Center on Educational Outcomes (NCEO); Performance Assessment Network, Inc. (pan); the Wisconsin Center for Education Research (WCER) at the University of Wisconsin–Madison; and individual consultants, particularly Dr. Margo Gottlieb, Dr. Rebecca Kopriva, and Martha Thurlow.

South Carolina

Adding Value to Accommodations Decision-making

     The South Carolina Department of Education (SDE) requests $1,371,324 for the Adding Value to Accommodations Decision-making for English Language Learners and Students with Disabilities (AVAD) project. Our goal is to ensure the valid, effective use of accommodations designed to increase access to large-scale assessments for English language learners (ELLs) and students with disabilities.

     Objectives of the project are to 1) validate and enhance the Selection Taxonomy for English Language Learner Accommodations (STELLA) and publish it on the SDE’s Web site; 2) create and validate a decision-making taxonomy for the Accommodation Station (AS) based on the ASES Accommodations Manual and publish it on the SDE’s Web site; 3) develop 20 prototypes of CAE science items for grade four; and 4) disseminate results through reports, the SDE’s Web site, user and technical manuals, the ASES Professional Development Guide video, and Web-Ex teleconferences.

     The AVAD project addresses all four absolute priorities and all three competitive preferences. AVAD involves the collaboration of 38 states in two groups—the Assessing Special Education Students (ASES) SCASS and the Assessing Limited English Proficient Students (LEP) SCASS. Consultants from Michigan State University and the University of Oregon will also collaborate on the AVAD project. Through AVAD, we will validate and enhance the accommodations decision-making tools developed by the Taxonomy for Testing English Language Learners (TTELL) and the Achieving Accurate Results for Diverse Learners (AARDL) projects. The SDE has submitted its State Consolidated Plan to the U.S. Department of Education. In our plan, we include Section 6112, Enhanced Assessment Instruments, as a program in our consolidated strategy.

     The STELLA and the AS facilitate rigorous, systematic, and empirically based accommodations decision-making so that ELLs and students with disabilities receive the accommodations that best allow them to demonstrate what they know and can do. By enhancing and publishing these systems, the AVAD project will contribute significantly to the growing body of research about inclusive assessment strategies that yield accurate results for all students.