Every few years in American education, a new slogan is coined as the Next Big Thing. Total quality management, shared decisionmaking, and outcomes-based education all marched across the educational landscape once, grabbing headlines, filling copy—and leaving little improvement in learning in their wake.
Right now, data-driven instruction, results-oriented improvement, and evidence-based education are the watchwords. They show up everywhere—from state education department Web sites to principals’ and superintendents’ job descriptions—insisting that instructional practices should be driven by the analysis of student-achievement data as measured by prescribed standardized tests. Of course, data-driven instruction sounds tough and businesslike. No need to actually think about what you’re doing; just let the data drive you.
Teachers are no longer the drivers of reform, but the driven. Many teachers and schools, in fact, are being driven to distraction. Under the pressures of the federal No Child Left Behind Act and its mandate for “adequate yearly progress,” teachers in struggling schools are being told that only results matter—and even these rarely extend beyond tested achievement in literacy and math. In hurried meetings after school, educators go through endless reams of performance data, targeting the problematic cells where results are defective—a subject department here, a grade level there, a group of male minority students elsewhere.
With AYP deadlines looming and time running out, teachers have little chance to consider how best to respond to the figures in front of them. They find themselves instead scrambling to apply instant solutions to all the students in the problematic cells—extra test-prep, a new prescribed program, or after-school and Saturday school sessions. There are few considered, professional judgments here, just simplistic solutions driven by the scores and the political pressures behind them.
Data-driven instruction obliterates the crucial fact that to be effective, educators have to use many different kinds of information to think about what they are doing in classrooms. While statistics can be immensely useful, they do not automatically point to which instructional approaches will work best with the diverse learners that make up a school’s classes, or a nation’s schools. One child may struggle with underperformance because she has difficulties with reading, a second because he has a turbulent home life, and a third because she is a recent immigrant learning English as a second language. Faced with such diversity, teachers and educational leaders have to be intelligently informed by evidence, not blindly driven by it to teach a certain way.
When such evidence points to apparent performance problems, we can find ourselves in a position familiar to players of the popular board game Clue: The data indicate that an achievement crime has occurred, but they don’t tell us who did it, with what weapon, or in which room. Once performance problems have been exposed, instead of rushing to judgment about what must be done, we need more evidence, deeper reflection, and further inquiry before we act. Our instructional choices should be based on all kinds of evidence and experience, processed together in professional learning communities that help us identify common problems, swap ideas and strategies, and develop and deploy our own school-based assessment instruments. Mindful teaching needs to be evidence-informed, not data-driven.
Better alternatives already exist. This has become clear to us through a series of visits to schools in England. There, we have led a research team evaluating the Raising Achievement Transforming Learning, or RATL, project of the Specialist Schools and Academies Trust, an organization that coordinates many of the state secondary schools across the country. RATL comprises more than 330 English secondary schools. To be eligible to join the project, the schools first had to be identified as underachieving, as defined by a composite of diverse pupil-test-score data. In its first two years, student achievement at project schools has risen steadily, and, in almost three-quarters of them, strikingly so. How has this happened?
First, RATL is the antithesis of top-down improvement. Its principals receive the equivalent of about $16,000 each year to spend on whatever they deem best to raise pupil achievement. School leaders provide project members with a menu of short-, medium-, and long-term strategies that have proved successful when applied elsewhere. Schools are then networked with each other, attached to coaching principals from high-achieving mentor schools, and offered regional conferences to help principals understand what their pupil-achievement data mean and how best to capitalize on the information with their teachers. By following a teacher-friendly principle of schools helping schools, and providing principals and teachers with considerable latitude in defining and addressing problems, the network has achieved rapid and impressive success.
Second, the project has a key cultural component that is based on the insight that test results rarely present self-evident instructional strategies to address the needs of struggling pupils. Rather, the data are, in and of themselves, often ambiguous, reflecting the nonlinear, and sometimes ingenious, ways that diverse learners acquire knowledge. As a consequence, project leaders do not rush from diagnosis to action, but emphasize the intermediary step of professional reflection and analysis. This step requires deep cultural change in many schools, as teachers work to shift their school culture from one of isolated instructors responsible only for their own pupils, to one of lifelong learners with the mission of improving the education of all learners in a school. As part of this cultural transformation, RATL leaders try to ensure that pupil-achievement data are embedded in school-based cultures that appreciate the value of tests, but are not limited to them.
In a country that has just abolished national achievement targets (with Wales going even further and abolishing standardized tests), RATL schools now have the freedom to set their own ambitious goals and targets, instead of frantically trying to comply with targets imposed by others. They emphasize that much school improvement involves cultural change, and that it takes time and professional sophistication to understand what test scores can and cannot tell us. In this model, careful scrutiny and discernment, not “drivenness,” are valued, and indeed are viewed as the heart and soul of successful educational change.
Third, RATL’s consistent focus on pupil achievement has not distorted or diminished the curriculum in ways that are becoming increasingly evident in many American schools. Here, standardized tests often have become the curriculum. In England, many principals have used the RATL funds to support art projects, physical education, or foreign-language courses. Principals of RATL schools in poor and working-class communities try to both broaden and deepen the curriculum to give all children multiple opportunities to flourish academically. In the United States, on the other hand, the achievement gap in tested performance coexists with a widening learning gap between functional basics for the poor and working class and an enriched and enlarged set of learning experiences for the privileged in the suburbs—where schools are free of many testing constraints and can (and do) fly far beyond the standards.
Evidence and experience, teachers working with teachers, schools helping schools, and continual reflective inquiry by educators—this path to improvement offers students so much in terms of teacher professionalism and creative responses to the on-the-ground realities of classrooms and schools. Do we American educators dare to learn from our British counterparts? Can we apply the inventiveness of professionals collaborating at their best, rather than adhere blindly to the fad of the moment marketed as “data-driven instruction”?