Every day, Kenneth Grover, the principal of the 175-student Innovations High School in Salt Lake City, wades through printed ads and emails pushing everything from computers to lighted pens.
“If you read the brochures with beautiful and happy kids on them, you’re thinking, ‘Wow, this is what I’ve been looking for,’ ” Mr. Grover said.
In his experience, though, vendors cite research in their promotions only about 20 percent of the time—and, on closer inspection, only about half of that research was conducted by an independent party or self-administered under strict guidelines.
School districts are bombarded with marketing materials from companies claiming their products can help change the face of education and raise student achievement. Yet a common complaint is the lack of significant data to back up the slick slogans.
As educators try to balance a desire for evidence with the need for innovation, at a time when standards are rising and technologies are advancing at breakneck speed, many find themselves running rapid-fire pilot projects to determine which products and services best suit their districts.
“If you’re waiting for all the evidence to be fully baked, you’re going to be waiting a long time,” said Kenneth Zeff, the chief strategy and innovation officer for the 95,000-student Fulton County, Ga., school system in the Atlanta metropolitan area.
Independent research is often seen as the gold standard for authenticating effectiveness claims, and the U.S. Department of Education’s What Works Clearinghouse is considered the leading source of scientific evidence of what works in education.
But many small companies simply don’t have the budgets to pay for such research, and the largest, most respected studies can cost millions of dollars and take years to complete. (Mathematica Policy Research, for instance, recently completed a seven-year, $4 million study for the Knowledge Is Power Program, or KIPP, charter school network, which runs 125 schools in 20 states, serving 41,000 students.)
But even small companies with modest resources can demonstrate efficacy, according to the Education Industry Association, a trade group based in Vienna, Va.
The association has begun encouraging members to seek out independent validation to set themselves apart from their competitors during the relatively recent “explosion of entrepreneurship into the K-12 marketplace,” said Steve Pines, the association’s executive director.
“What’s missing,” he said, “is some third-party documentation that can separate the wheat from the chaff.”
A Discriminating Consumer
The absence of independent research on certain products can be particularly hard on smaller districts, which have fewer resources to fill the gap themselves.
The 7,000-student Henry County school system in Collinsville, Va., deals with that situation, in part, by empowering teachers to experiment with free apps, about 10 to 15 per month, which helps the district do its own research before purchasing a product or service. Administrators there also look for positive results from districts of comparable size and demographics before deciding to implement a program, according to Janet Copenhaver, the district’s director of technology and innovation.
Even large education providers struggle with technological advances often outpacing the speed of rigorous research.
“That becomes a real challenge for school leaders, because being able to move quickly and accurately to make decisions in real time is critical,” said Joseph Olchefske, the president of Mosaica Online at New York City-based Mosaica Education Inc. The private company manages 75 schools in the United States and overseas, serving 19,000 students.
Instead, Mr. Olchefske directs his company to look to research for broad direction, then commit to its own continuous review, analysis, and evaluation of student performance. He pointed to that commitment when addressing recent criticism of Mosaica over lower-than-average student achievement and higher-than-average disciplinary problems, among other issues.
Mosaica examines its results quarterly and makes midcourse corrections, Mr. Olchefske said, which allows for improvements to be made even without waiting for the golden seal of approval from top-quality research.
“You can be a discriminating consumer, but what you can’t really do is get to a place where there’s a definitive conclusion,” he said. “At some point, you have to take a risk. Research never does away with the need for judgment.”
The 39,000-student Cherokee County school district in Canton, Ga., puts more weight on its own standards than on independent studies.
“It really is about our own research,” said Bobby Blount, the district’s assistant superintendent for accountability, technology, and strategic planning.
That philosophy was more out of necessity than choice nearly 10 years ago, when the district wanted to begin using new interactive whiteboards—a market that had not yet been scrutinized by research.
“All we had to go on at that point were sales people telling us how great and wonderful their product was,” Mr. Blount recalled.
The Cherokee County system decided to perform its own test, installing whiteboards in about a dozen classrooms in various grades. The whiteboards proved to be effective, improving both teaching and learning.
When it was time to make the larger investment in a districtwide rollout, “we had two vendors come in and pretty much do a dog-and-pony show,” Mr. Blount said, which led district officials at the time to choose one company’s product for elementary classrooms and the other company’s product for middle and high school classrooms.
School districts want to know the products and services they buy will be worth the investment—and solid research can help them make that judgment. But how should administrators evaluate the studies companies cite when trying to land a sale?
Here’s some advice on what to ask, courtesy of Ellen Bialo, the president of New York City-based Interactive Educational Systems Design, which specializes in market and product research and analysis; Rob Foshay, a senior partner with the Foshay Group, a Dallas-based training and education company; and Kenneth Zeff, the chief strategy and innovation officer for the Fulton County, Ga., schools:
» Was the study conducted in a district that will allow school officials to observe the intervention in action? The opportunity to meet with those doing the implementation, as well as firsthand observations, can clarify nuances and success factors that would be lost in a written report.
» Do the players in the study—both students and teachers—represent what your district looks like? If they don’t have the same socioeconomic, cultural, and educational backgrounds, the findings may not be transferable.
» Can the company easily explain the product or service, and the confirming research, to a variety of stakeholders? If the methodology is too obscure, or the program seems counterintuitive, it will be harder to rally the support that is an important predictor of success.
» How meaningful are the measures used for each benefit claimed? For example, before-and-after gains are relevant only if both measurements are done with the same test, or tests designed to be compared. Also, a state-test passing rate or score may not be sensitive enough to measure what the product or service is designed to teach or facilitate.
» The study claims gains in achievement, but compared to what? If there’s no comparison group, you can’t tell if the product or service improved on what a district was already doing. And the comparison is meaningful only if both groups were similar at the start of the study, or if statistical adjustments were made to compensate for differences.
» Was the study conducted, written, and released or published according to professional standards for design integrity and research ethics? Ask the company how well the study conforms to guidelines from the American Educational Research Association, the American Evaluation Association, the Software and Information Industry Association, and the What Works Clearinghouse.
» What type of effectiveness research has been done by a third party? For supplemental products, has a white paper been written tying the product to other research? A case study is useful as anecdotal evidence, but is it also backed by ample data?
SOURCE: Education Week
Cherokee is now piloting five types of math software this school year, while examining findings about the software from other districts.
“We rely on each other quite a bit,” Mr. Blount said. “That lends more credence than anything else.”
‘Practical Considerations’
When the 28,000-student Colorado Springs School District 11, in Colorado, was considering a couple of years ago whether to buy ST Math software from the MIND Research Institute, a nonprofit education research company based in Irvine, Calif., the institute’s own data-collection process gave administrators a good first impression.
Randomized studies conducted in collaboration with the University of California, Irvine, sweetened the pot. Then visits to see the program in action at schools in Anaheim, Calif., and Chicago sealed the deal.
David Sawtelle, the math facilitator for the Colorado Springs district, has learned over time to press vendors who claim little more than that their products are “research-based.”
“What that turns out to mean is that a product is designed in accordance with research around best practices, and then there’s a citation of a study that was done in which that practice was potentially effective,” he said. “We’ve become more discriminating. We ask, ‘If you’re research-based, how is your research validated?’ ”
The right answer to that and other critical questions—such as how the program was implemented, what kind of professional development is needed, and what the right environment is for it to succeed—depends on each district’s needs.
“Context is very important,” said Steven M. Ross, the evaluation director for the Center for Research and Reform in Education at Johns Hopkins University’s school of education in Baltimore. “It’s not like picking a prescription out of a box. You have to be much more nuanced in your selection.”
Smaller companies without the budgets to perform independent studies can take heart from the fact that research only goes so far, experts say; for educators, what happens during and after implementation seems to count the most.
Isaak Aronson, the president and chief executive officer of SmartStart Education, an education and training provider based in New Haven, Conn., said his company’s reliance on case studies and other self-generated research hasn’t stopped it from impressing clients with in-house statistics.
“Perhaps we sacrifice a bit of scientific rigor,” Mr. Aronson said, “but I’m always cognizant of practical considerations versus valid and reliable research.”
Other companies are trying to establish mutually beneficial research partnerships with schools.
Zane Education, a New Zealand company that provides subtitled videos to schools and has a U.S. office in Thousand Oaks, Calif., has started approaching schools about collaborating.
“We’re going to them and saying, ‘Hey, would you like to work with us on this research? We’ll provide you with those results at no cost,’ ” said the company’s director, Nicholas Tee.
In the end, even companies that can afford top-level research sometimes don’t measure up as well as expected.
Mr. Grover, with Innovations High School in Utah, felt a bit frustrated in 2012 by what he describes as “a big company that must do a billion dollars of business a year.”
Though the company assured him it would provide a seamless transition to a digital curriculum, he said, there were obvious problems a month after implementation. The learning management system needed to run the curriculum wasn’t finished as promised, and Mr. Grover accused the company’s representative of failing to deliver.
“They made a strong effort to fix it with patches, and it’s working now,” Mr. Grover said. “But the point is, had I not asked the questions that I did, how much more would they have hooked me?”