Policymakers often complain that teacher education programs don’t have to answer for the quality of their graduates. But over the past five years, as a result of new accreditation rules, hundreds of those institutions have been quietly revamping how they collect and use data about their students.
The standards, which were phased in by the Washington-based National Council for Accreditation of Teacher Education starting in 2001, require education schools to provide evidence that their graduates can successfully teach. Institutions seeking accreditation must assess their students’ performance regularly and use the results to refine and improve their programs.
All of the approximately 600 NCATE-accredited institutions were expected to have put the standards fully in place by this academic year.
“I think it’s made a huge difference,” said Tes Mehring, the dean of the education college at Emporia State University in Kansas.
“In the past, we pretty much thought about the quality of our programs by how many students we graduated,” she said. “Now, we’re having to rethink that whole process in terms of how we can collect data that show our candidates are making a difference with pre-K-12 students. That’s a new phenomenon, I think, for higher education.”
A survey of more than 1,000 education school deans and NCATE coordinators this year found that 93 percent of respondents agreed or strongly agreed that as a result of working with the NCATE standards, their own institutions showed “better alignment between standards, curriculum, instruction, and assessment.”
More than eight in 10 said the standards have caused faculty members to focus more on student learning, to improve their assessment techniques, and to better track the knowledge and skills of teacher-candidates. The survey had a response rate of 66 percent.
Common Assessments
“What we find is that this has caused a significant reorientation, if you will, of emphasis within our institutions,” NCATE President Arthur E. Wise said of the new process, “focusing energy, really for the first time, on what students know and can do.”
In the past, he said, the reports submitted by institutions as part of the accreditation process consisted almost entirely of qualitative descriptions of the curriculum and experiences they offered students. “Now, if you pick up the reports and flip through them,” Mr. Wise said, “you will see they consist largely of data tables and explanations of what those tables mean.”
A number of education schools are requiring common assessments across specified courses taken by prospective teachers. At Idaho State University in Pocatello, for example, future teachers take a core set of courses, whether they are preparing to be elementary educators or high school history teachers.
“That’s largely where we’ve centered our heavy thrust around assessment,” said Larry B. Harris, the dean of the college of education. “The faculty have agreed that we will do the same assessments in all sections of the same courses. And they have agreed that the standards in the course, and thus the assessments, belong to the faculty as a whole and not to individual faculty members.”
Sam Evans, the dean of the college of education and behavioral sciences at Western Kentucky University in Bowling Green, which also plans to begin using such common assessments this fall, declared: “It’s really changing the mind-set within the higher education community.”
Historically, he said, professors have been given the academic freedom and flexibility “to teach pretty much whatever they wanted.” But in a standards-based system, he argued, the imperative that prospective teachers meet the standards outweighs the academic freedom of the faculty.
Teacher Work Samples
Many education schools also are adding teacher work samples, pioneered by H. Del Schalock, a professor of education at the Teaching Research Institute of Western Oregon University in Monmouth. The work samples ask student-teachers to design an instructional unit or series of lessons, choose pre- and post-tests to provide evidence of what pupils have learned, and then reflect on what they might have done differently to produce greater student learning. The work samples are then scored against an agreed-upon rubric.
Two NCATE standards focus on the performance of future educators.
Standard 1: Candidate Knowledge, Skills, and Dispositions
“Candidates preparing to work in schools as teachers or other professional school personnel know and demonstrate the content, pedagogical, and professional knowledge, skills, and dispositions necessary to help all students learn. Assessments indicate that candidates meet professional, state, and institutional standards.”
What It Means:
Eighty percent or more of program graduates pass the subject-matter tests the state requires for licensure. During their clinical practice, prospective teachers demonstrate they can increase the learning of their K-12 students. One of the primary sources of evidence for this standard is candidate-performance data the institution prepares before the visit by the NCATE examiners’ team. That evidence includes data the institution collects internally as well as external data, such as results on state licensing and other tests. During the site visit, the team seeks evidence that candidates have developed the expected proficiencies.
Standard 2: Assessment System and Unit Evaluation
“The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the unit and its programs.”
What It Means:
The institution embeds assessments into the preparation programs, conducts them on a continuing basis, and provides candidates with ongoing feedback. The institution gives multiple assessments in a variety of forms and aligns them with its standards for graduation. These may come from end-of-course evaluations, written essays, or topical papers, as well as from tasks used for instruction (such as projects, journals, observations by faculty members, comments by cooperating teachers, or videotapes) and from activities associated with teaching (such as lesson planning). The institution also uses information available from outside sources, such as state licensing exams, evaluations during an induction or mentoring year, employer reports, follow-up studies, and state program reviews.
The institution establishes criteria for gauging levels of candidate accomplishment and for completing the institution’s programs. The institution uses results from candidate assessments to evaluate and make improvements in the institution, its programs, courses, teaching, and field and clinical experiences.
SOURCE: National Council for Accreditation of Teacher Education
The Renaissance Partnership for Improving Teacher Quality, a consortium of 11 universities and their schools of education, received a five-year, $5.8 million grant from the U.S. Department of Education in 1999 to devise a common set of teacher work samples, including common tasks and scoring guides. The institutions—California State University-Fresno; Eastern Michigan University in Ypsilanti; Emporia State University; Idaho State University; Kentucky State University in Frankfort; Longwood University in Farmville, Va.; Middle Tennessee State University in Murfreesboro; Millersville University of Pennsylvania; Southeast Missouri State University in Cape Girardeau; the University of Northern Iowa in Cedar Falls; and Western Kentucky University—also have identified exemplars of what high-scoring teacher work samples look like.
“One hundred percent of our students now, as they progress through a teacher education program, have to teach units collecting pre-test and post-test data on the students that they’re working with, and then analyze who actually learned from this lesson I taught, who didn’t learn, and how they could have taught that lesson more effectively,” said Dean Mehring of Emporia State.
But work samples have been controversial, in part because of the time demands on both prospective teachers and their cooperating teachers in the field.
“The cooperating teachers hate it because they feel like it misdirects attention during the student-teaching semester toward the documentation of the work sample and away from fully engaging in being responsible for lesson planning and teaching,” said Lorrie A. Shepard, the dean of the school of education at the University of Colorado at Boulder.
On the other hand, she said, faculty members have spent more time teaching about assessment after seeing some of the narrow measures that student-teachers were using in their work samples.
E-Portfolios
To both aggregate and break down all their assessment data—including students’ scores on state licensing exams, course-based tests, work samples, the observations of cooperating teachers, follow-up studies of program graduates, and other measures—at least some colleges are moving to Web-based systems or e-portfolios.
At Oral Roberts University in Tulsa, Okla., education school Dean David B. Hand said would-be teachers used to carry around big three-ring binders that contained all the evidence needed to qualify for graduation. Now, all that information is available online and can quickly be gathered, broken down, and converted into charts and graphs for the faculty to talk about.
“We have more data now than we know what to do with,” said Mr. Hand. “Now, we have to assess this data and make improvements.”
Based on what it was learning about its aspiring teachers, he said, the private Christian university redesigned its student-teaching experience. It added two weeks of intensive modules before candidates go out into the field, as well as more structured opportunities for candidates to talk among themselves about what they are learning.
“They felt they didn’t have enough dialogue with one another to share their challenges, and professors didn’t have enough input,” Mr. Hand said.
Idaho State University added more instruction on how to work with student differences, including children with disabilities, before the student-teaching experience, based on feedback from its assessment system.
Cleveland State University has framed its Web-based portfolio system around 12 outcomes that candidates are expected to demonstrate to graduate. Prospective teachers have to perform at a certain level on each outcome before they begin student-teaching. And employers now rate graduates of the Ohio program in the same 12 areas during follow-up surveys.
Illinois State University in Normal invested $100,000 to create an electronic portfolio system that includes everything from students’ passing rates on state licensing tests to their reflective essays about student-teaching.
“We graduate between 1,100 and 1,480 students every year,” said Dianne E. Ashby, the dean of the college of education. “We’ve worked really hard to try to figure out how to do this on a big-school model.
“We had to be very conscious and deliberate in decisions we made across all the teacher education programs about the kinds of data that are important, and make sure they get collected,” she said. “If it hadn’t been for our being so supportive of national accreditation, we would never have invested so quickly or so seriously in this extensive a data system.”
That system generates about 150 pages of data each year that Illinois State’s council for teacher education can use to make better decisions about education programs, Ms. Ashby added.
Stepping on Toes
But education deans also pointed out that making decisions based on data has not been without its challenges. Several deans said they are probably collecting too much data now and need to streamline their efforts to focus on what’s most important. Finding the resources for data collection and analysis is also an issue.
Only about six in 10 of those surveyed by NCATE agreed or strongly agreed that the costs, time, and energy associated with building and maintaining their assessment system were “worthwhile.” About three in 10 disagreed; the rest were unable to evaluate the question.
“I’ll tell you where the push-back within the college of education comes from, and that is the lack of resources and personnel to do this systematically,” said James A. McLoughlin, the dean of the school of education and human services at Cleveland State. “I’ve had to put people in my outer office who pull the data together, analyze it, and report it back, for example, to the chair of the English department.”
The new reliance on data has also led to some interesting, but not always easy, discussions with faculty members in colleges of arts and sciences, where many prospective teachers earn their majors.
“When we aggregate and disaggregate the subject-area data, then I start stepping on toes in the arts and sciences, and they have not been overly excited about some of this information,” said Dean Hand of Oral Roberts University.
Still, he and others suggest that the accountability pressures now being felt in teacher education are a precursor of what’s coming for all of higher education.
“It’s foreshadowed the kinds of assessment systems that the North Central Association [a regional accrediting body] and the federal government are going to start considering for all disciplines, regardless of whether it’s teacher education,” said Dean Ashby of Illinois State.
At Oral Roberts, that shift has already begun. The college of education recently trained the rest of the faculty in how to create e-portfolios for the university’s incoming freshmen.
Using K-12 Assessments
Despite the focus on results, none of the institutions is judging the performance of student-teachers or recent graduates based on whether they raise the standardized-test scores of the students they teach, although some pilot efforts are underway.
“It’s expected that we demonstrate ‘positive effects on student learning’; those are the exact words,” said Dean Evans of Western Kentucky University. “I am not seeing a lot of indication that institutions are reporting that, or are able to report that, at this time.”
Part of the challenge, he said, is gaining access to K-12 data.
“Certainly, if any institution wants to try this approach, it is welcome to do so,” said Mr. Wise of NCATE. “However, the analytical and data challenges inherent in this are huge. Whether it will work is frankly, to me, an open question.”
Moreover, while NCATE now requires that 80 percent of an institution’s graduates pass state teacher-licensing exams as a condition of accreditation, the passing scores set by states on those tests vary widely. In June 2003, NCATE and the Princeton, N.J.-based Educational Testing Service announced that they would devise a national benchmark on the most widely used of those exams, the Praxis II, to help NCATE interpret test scores across state lines. That has yet to happen.
NCATE’s standards are “certainly much better than they were 10 years ago—there’s no question,” said Kate Walsh, the president of the National Council on Teacher Quality, a policy group based in Washington. “They’re definitely pointing institutions to follow the quality of their graduates and look for multiple ways to do that.”
The hitch, she argued, is that NCATE does not set uniform benchmarks for acceptable performance, in part, she said, because too many institutions would fail to meet them. “While the rhetoric is largely on target, in terms of what’s important and is not important, it leaves it up to the schools to decide, largely, how they’re being successful,” Ms. Walsh said.
Mr. Wise responded: “At this stage of the game, there are no external, agreed-upon measures, other than teacher-licensing tests, that can be used by us. Over time, the field may settle upon some.”
Yet the defining difference between the 51-year-old NCATE and the rival Teacher Education Accreditation Council is fading away. TEAC built its approach around evidence of what graduates actually know and can do, measured against individual institutions’ own criteria, and NCATE now asks for much the same.
“There still would be differences between, probably, the level of prescriptiveness that remains,” said Frank B. Murray, the president of the Washington-based TEAC, which was formed in 1997. “But the general idea is the same: You ought to have some evidence that your students can do what you said or what the standards say. The great challenge is there aren’t very many good measures of student accomplishment, because all the measures we have are flawed.”
The other unanswered question is whether NCATE’s performance-based accountability system will satisfy policymakers’ demand for results.
“I don’t know,” said Ms. Shepard of the University of Colorado. “And I think that’s the one big tension.”