Unlocking Human Potential Part I: The Cost of Conflating Potential and Performance

Assistant Deputy Secretary for Innovation and Improvement Jim Shelton speaks at TEDx MidAtlantic on Saturday, October 27, 2012.

I recently gave a TEDx MidAtlantic talk entitled “Unlocking Human Potential: Why We Need a New Infrastructure for Learning about Learning.” My premise was that we have the opportunity to tap into vast amounts of latent human potential, but that to do so quickly, we need to build a new national research agenda and apparatus focused on breakthrough learning outcomes.

The theme of this TEDx event was “Be Fearless: Take Risks. Be Bold. Fail Forward,” which is, IMHO, a perfect theme for all of education today. I have come to believe that “being fearless” requires one to ask oneself two foundational questions: (1) What do you believe (is possible)? (2) What are you willing to do? Therefore, I began my talk by addressing a common misconception that limits our ability to believe that unprecedented learning outcomes can be produced at scale. Consciously and subconsciously, we often allow the conflation of potential (capacity) with performance to limit the learning outcomes we believe all learners can achieve. However, without entering the long and embattled debate about the existence and shape of the bell curve describing individual intellectual potential, we can turn this misconception on its head.

Rather than argue about the range of human intellectual potential, we should focus on the more important issue: the current use of the bell curve to describe the expected attainment of specific learning outcomes (i.e., educational performance) and the presumed correlation between the “potential” curve and the “performance” curve.

There is general acceptance of a broad distribution of performance in certain academic pursuits (less so when it comes to something like the alphabet, but very much so as subjects become more complex, like trigonometry and physics). We have become accustomed to the wide bell curve with grades F through A arrayed from left to right. Many are not only accustomed to this distribution but comfortable with it, because it validates the underlying belief that some individuals have the intellectual capacity and will to master certain topics and others do not. In math and science, as a country and culture, we are, unfortunately, exceptionally confident that there are “math and science people” and then there are others, and we often include ourselves among the “others” (aka the “neverbeens”: I’ve never been a “fill-in-the-blank person”). All of us should be troubled by this, as it may well represent the single most costly misunderstanding in the history of our country, and potentially the world.

In most cases, the current performance bell curve could most reasonably be described as “the distribution of performance demonstrated by students on certain types of assessments after specific periods of time being taught in particular ways.” When described this way, it would seem absurd to assume that a student’s location on the curve is indicative of her true potential; but we and, more importantly, the students almost always say and believe that to be true. This has to change.

More than three decades ago, Benjamin Bloom, acclaimed researcher and creator of Bloom’s Taxonomy, demonstrated that one-to-one tutoring produced a two-standard-deviation improvement over classroom instruction. If the U.S. school population improved by just one standard deviation, we would be the top-performing nation in the world; our bottom decile of students would perform at the level of our current top-quartile students. (Would that then qualify them as “math people”?) Nobel Laureate Carl Wieman was famously able to redesign his physics course to produce more than two standard deviations of average improvement across instructors, effectively doubling the percentage of students passing the course and seemingly supporting their belief that they are “science people.” Others have produced similar results in mathematics courses, often while significantly reducing the required in-class time (i.e., reducing cost).
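To make the size of these effects concrete, here is a quick back-of-the-envelope calculation of my own. It assumes, purely for illustration, that achievement scores are normally distributed; under that idealization, a two-standard-deviation gain moves today’s 10th-percentile student to roughly the 76th percentile of today’s curve, squarely inside the top quartile.

```python
# Back-of-the-envelope illustration (not from Bloom's or Wieman's studies):
# translate a two-sigma gain into percentile terms, assuming achievement
# scores follow a normal distribution.
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sigma 1

# Today's bottom-decile student sits at the 10th percentile,
# about 1.28 standard deviations below the mean.
z_bottom_decile = std_normal.inv_cdf(0.10)  # ~ -1.28

# Apply a two-sigma gain and locate the student on today's curve.
new_percentile = std_normal.cdf(z_bottom_decile + 2.0)

print(f"10th-percentile student after a 2-sigma gain: "
      f"roughly the {new_percentile * 100:.0f}th percentile of today's curve")
# Prints ~76th percentile, i.e. inside today's top quartile.
```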

Student achievement improves dramatically for students who receive one-on-one instruction over traditional classroom instruction — Bloom’s Two Sigma Problem.

If we know these kinds of results are possible across student populations and various rigorous topics, then on what basis are we allowing our young people to be labeled as less than capable and, worse, to believe themselves incapable of performing at the highest levels?

Two standard deviations would move average U.S. performance twice as far ahead of the world’s top-performing “nation,” Shanghai, as Shanghai is ahead of the U.S. today, and it would put our bottom-decile students at the Shanghai average. The benefits in terms of global economic, political, and social leadership would be unequivocal. To date, however, our inability to affordably produce these kinds of outcomes (Bloom’s “Two Sigma Problem”) has perpetuated an educational model that wastes tremendous human potential and has allowed the U.S. to lose its position as an educational leader.
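As an aside, the arithmetic behind those two claims is internally consistent, which a short sketch (again mine, and again assuming a normal distribution; the gap g below is implied by the text rather than stated in it) makes visible:

```python
# Consistency check (my illustration, not the author's data): let g be
# today's Shanghai-over-U.S. lead in standard deviations. No PISA scores
# appear in the text, so g is solved for from the claims themselves.
from statistics import NormalDist

# Claim 1: after a 2-sigma U.S. gain, the new U.S. lead (2 - g) equals
# "two times as far" as today's Shanghai lead, i.e. 2 - g == 2 * g.
g = 2.0 / 3.0
print(f"Implied Shanghai lead today: {g:.2f} sigma")  # 0.67 sigma

# Claim 2: today's U.S. bottom decile lands at the Shanghai average
# after a 2-sigma gain.
z_after = NormalDist().inv_cdf(0.10) + 2.0
print(f"U.S. bottom decile after +2 sigma: {z_after:+.2f} sigma")  # +0.72

# 0.67 vs. +0.72 sigma: the two claims describe nearly the same gap.
```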

This is not an insurmountable barrier. It is the kind of challenge that America has used research, development, and systemic innovation to conquer time and again. If we believe the learning outcomes described above are possible, we must simply be willing to build the same kind of research and innovation infrastructure for education and training. If we are successful, there is no doubt that we and our young people will “win the future.” So we must ask ourselves: Are we willing to be fearless, to take risks, be bold, and fail forward?

Jim Shelton is the Assistant Deputy Secretary for Innovation and Improvement at the U.S. Department of Education.