Scientifically Based Research — U.S. Department of Education

Submitted Paper—The Logic of Scientific Research—Valerie Reyna

Rationale

  • Why scientific research?
    • Research is the only defensible foundation for educational practice.
  • If not scientific evidence, then what?
    • Tradition
    • Superstition
    • Anecdote

Analogy to Medicine

  • Traditions: Bleeding people.
    • Good intentions are not enough.
  • Clinical trials are recent.
  • Why isn’t personal experience sufficient?
  • Clinical trials: Only way to really be sure of what works. (Logic)
  • Same rules apply to education: Brain surgery (NAS).

Strength of Evidence

  • In the meantime, rely on a hierarchy of evidence.
    • Not all-or-none
    • Possibly true to probably true versus nothing (NCES example)
  • Theory: Evidence-based
    • Knowing why and how something works
    • Key to generalization
    • Pitfalls of theory

Educational Sciences

  • No conflict between science and values.
    • Some decisions made on values.
    • Evidence is necessary but not sufficient.
  • Science with a human face.
  • How do we support translation of research into practice?
    • Suggestions welcome.

What is EBE (evidence-based education)?

  • Best available empirical evidence in making decisions about how to deliver instruction
  • Where scientific evidence has gaps: human judgment (both bias and wisdom)

What is empirical evidence?

  • Scientifically based research from fields such as psychology, sociology, economics, and neuroscience, and especially from research in educational settings
  • Objective measures of performance used to compare, evaluate, and monitor progress

Scientifically Based Research

  • Quality
    • Measures and Methods
    • Scientific merit (double helix)
  • Relevance and Significance
    • Trivial
    • Number affected and severity
  • Two criteria of NSF

Quality: Levels of evidence

  • All evidence is NOT created equal
    • Randomized trial
    • Quasi-experiment, including before-and-after designs
    • Correlational study with statistical controls
    • Correlational study without statistical controls (e.g., class size, high expectations); the sketch after this list contrasts these two correlational designs
    • Case studies
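
To make the difference between the two correlational rungs concrete, here is a minimal sketch in Python. The variables (class_size, resources, scores), every number, and the use of statsmodels are illustrative assumptions, not material from the talk:

```python
# Hypothetical illustration: a confounder (school resources) drives both
# class size and test scores, so the raw class-size association is spurious.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
resources = rng.normal(0, 1, n)                        # confounder
class_size = 25 - 3 * resources + rng.normal(0, 2, n)  # smaller classes where resources are high
scores = 70 + 4 * resources + rng.normal(0, 5, n)      # class size has no true effect here

# Without statistical controls: class size looks strongly related to scores.
naive = sm.OLS(scores, sm.add_constant(class_size)).fit()

# With statistical controls: adjusting for the confounder moves the
# class-size coefficient toward its true value of zero.
X = sm.add_constant(np.column_stack([class_size, resources]))
controlled = sm.OLS(scores, X).fit()

print(naive.params[1], controlled.params[1])  # about -0.9 vs. about 0
```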

Randomized Trials: The gold standard

  • Claim about the effects of an educational intervention on outcomes
  • Two or more conditions that differ in levels of exposure to the educational intervention
  • Random assignment to conditions
  • Tests for differences in outcomes (simulated in the sketch below)
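
A minimal simulation of these four ingredients, assuming Python with NumPy and SciPy; the sample of 200 students and the 3-point effect are invented for illustration:

```python
# Simulated two-condition trial: random assignment, then a test for
# differences in outcomes. The 3-point effect and all sizes are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

ids = rng.permutation(200)                 # random assignment to conditions
treated, control = ids[:100], ids[100:]

scores = rng.normal(70, 10, 200)           # hypothetical outcome measure
scores[treated] += 3                       # intervention effect (simulated)

t_stat, p_value = stats.ttest_ind(scores[treated], scores[control])
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```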

Why is randomization critical?

  • Ensures that, on average, the participants being compared have the same characteristics across the conditions
  • Rules of chance mean that the smart, motivated, experienced, etc. have the same probability of being in condition 1 as in condition 2
  • Without randomization, differences between two conditions may result from pre-existing differences in the participants, e.g., more high-achieving students in condition 1; the sketch below shows the balance that randomization produces
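
The balancing argument can be checked directly with a short simulation; the "motivation" trait and all numbers below are hypothetical:

```python
# With coin-flip assignment, a trait that exists before the study
# (hypothetical "motivation") ends up balanced across conditions.
import numpy as np

rng = np.random.default_rng(7)
motivation = rng.normal(50, 10, 10_000)   # pre-existing characteristic
condition = rng.integers(0, 2, 10_000)    # each participant: P = 1/2 per condition

print(motivation[condition == 0].mean())  # ~50
print(motivation[condition == 1].mean())  # ~50, so neither condition starts out ahead
```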

Why is randomization critical? (continued)

Without randomization, simple associations, such as the one between Internet use and science grades, have many different interpretations; the sketch below simulates two of them.

[Bar graph of Internet use and science grades for grades 4, 8, and 12]
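
In the sketch below, a causal story and a purely confounded story produce comparable positive correlations, so the observed association alone cannot distinguish them. All coefficients are invented:

```python
# Two invented data-generating stories with similar observed correlations.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

# Story A: Internet use genuinely raises science grades.
use_a = rng.normal(0, 1, n)
grades_a = 0.5 * use_a + rng.normal(0, 1, n)

# Story B: family resources raise BOTH Internet use and grades;
# Internet use itself does nothing.
resources = rng.normal(0, 1, n)
use_b = 0.7 * resources + rng.normal(0, 1, n)
grades_b = 0.7 * resources + rng.normal(0, 1, n)

print(np.corrcoef(use_a, grades_a)[0, 1])  # positive
print(np.corrcoef(use_b, grades_b)[0, 1])  # also positive, with no causal link
```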

Relevance

  • Does the study involve a similar intervention and outcome to those of interest?
  • Were the participants and settings representative of those of interest?
  • Were enough participants involved to justify generalization? (statistical inference; one common check is the power calculation sketched after this list)
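
One conventional way to ask whether enough participants were involved is a power calculation. The sketch below assumes statsmodels is available and uses illustrative inputs; none of the numbers come from the talk:

```python
# Power calculation for a two-condition comparison (statsmodels assumed).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.3,  # assumed standardized effect (Cohen's d)
    alpha=0.05,       # significance level
    power=0.8,        # desired chance of detecting the effect
)
print(round(n_per_group))  # roughly 175 participants per condition
```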

EBE: How to use existing science

  • Search literature (Campbell Collaboration, PsycINFO, etc.)
  • Screen literature
    • Relevance
    • Quality
  • Search for pre-digested evidence
    • Narrative reviews (ERIC digests)
    • Systematic reviews (meta-analysis; the pooling arithmetic is sketched below)
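
For systematic reviews, the core arithmetic of a simple fixed-effect meta-analysis is inverse-variance weighting, sketched here on three hypothetical studies:

```python
# Fixed-effect meta-analysis by inverse-variance weighting; the three
# study results below are hypothetical.
import numpy as np

effects = np.array([0.30, 0.10, 0.25])     # per-study effect estimates
variances = np.array([0.02, 0.01, 0.04])   # per-study sampling variances

weights = 1.0 / variances                  # more precise studies count more
pooled = np.sum(weights * effects) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} ± {1.96 * se:.3f}")
```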

Screening research—Cautions

  • Unconditional conclusions
  • Conclusions involving hypotheticals
  • Conclusions that diverge from evidence
  • Strong calls to action
  • Mixtures of opinions with evidence
  • Low prestige publication outlet
  • Publication outlet with ideological agenda

EBE: How to use objective measures

Evidence-Based Education—Where are we?

[Pie chart showing the disproportion of external evidence to professional wisdom]

Where the Research Dollars Flow

  • Of 84 program evaluations and studies planned by the Department of Education for fiscal year 2000, just one involved a randomized field trial
  • 51 were surveys of need
  • 49 had program implementation/monitoring as their purpose
  • 15 were non-randomized impact evaluations
  • Note: studies could have more than one purpose.
  • Source: Robert Boruch, Dorothy de Moya, and Brooke Snyder, in Robert Boruch and Frederick Mosteller, eds., Evidence Matters (Brookings, 2001).

What ED will do

  • The What Works Clearinghouse
    • Interventions linked to evidentiary support
    • Systematic reviews
    • Standards for & providers of evaluations
  • Preschool Curriculum Evaluation Research
  • Explanatory Research: Why and How
  • Funding for evaluations of promising innovations in the field
  • Build capacity internally and externally

Goals

  • ED will provide the tools, information, research, and training to support the development of evidence-based education
  • Education across the nation will be continuously improved
  • Wide variation in performance across schools and classrooms will be eliminated

  • The practice of evidence-based education will become routine