School leaders debating whether to buy educational technology often find themselves weighing the promised benefits against their worst fears—soaring costs, disruptive breakdowns, and befuddled teachers and students—not knowing whether they’re about to make a purchase their districts will come to regret.
Now, nascent efforts in schools and districts across the country are underway to help administrators become smarter consumers of technology, so they understand the risks and rewards upfront.
In some cases, those initiatives are being led by school officials determined to approach technology purchases in a more methodical way. In others, the process is being guided by nonprofits that screen ed-tech products, or that help school leaders and teachers evaluate them and test them under the right conditions in classrooms.
Some school officials and organizations taking on that work see it as connected to the broader goal of bridging what many see as a prevailing divide between technology developers, who complain about not being able to access schools or break into K-12 markets, and administrators and teachers, who complain that too many of the products and services thrust at them are useless or impractical.
Despite their best efforts, many districts “fundamentally don’t have the expertise to choose technology products,” said Muhammed Chaudhry, the president and CEO of the Silicon Valley Education Foundation, a San Jose, Calif.-based nonprofit that seeks to improve education in that region and around the country. Districts often make decisions about ed-tech purchases based on reputation, word-of-mouth, and pitches they hear from sales staff, Mr. Chaudhry argued, leaving them susceptible to making decisions based on “relationships, rather than using a rigorous process for choosing products.”
His organization is helping lead a new program called the Learning Innovation Hub, which aims to foster a freer exchange of information and feedback between ed-tech developers and classroom teachers.
Schools Evaluate Products
Efforts to evaluate educational technology in more sophisticated ways are also playing out in individual schools.
Barton A. Dassinger, the principal of the Cesar E. Chávez Multicultural Academic Center, a pre-K-8 traditional public school on Chicago’s South Side, had grown used to hearing companies make rosy promises about how their products would improve student learning and school efficiency.
So he set up a process that allows his school and its teachers to test individual ed-tech products, monitor their performance, and decide whether to keep using them.
Mr. Dassinger, in consultation with his teachers, chooses supplemental classroom materials for pilot-testing and other types of trial runs throughout the year, as well as during after-school and summer programs.
He keeps a detailed Excel file on his computer that attempts to track the impact of individual products on student achievement. Down the left side of the screen, students’ names, grades, and homeroom assignments are listed. There’s information indicating whether they belong to English-language-learner or special-needs populations. Across the top row, there’s a list of companies whose products are being tried—recently, the names included classroom tools by Compass Learning and ALEKS (Assessment and Learning in Knowledge Spaces), a McGraw-Hill Education product.
Students’ scores on various tests—district, state, and subject-specific—appear in green, red, and yellow, linked to each product, showing how academic progress has changed over time. Companies often ask to be evaluated by metrics of their choosing, but Mr. Dassinger wants to be able to judge them by the same tests and metrics against which his school—which serves an overwhelmingly Hispanic and socioeconomically disadvantaged population—and its students and teachers are judged.
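Mr. Dassinger keeps his records in a spreadsheet, but the underlying bookkeeping—students in rows, piloted products in columns, color-coded score changes in the cells—is simple to model. The sketch below is a hypothetical illustration of that structure; the product names, score figures, and green/yellow/red thresholds are assumptions for the example, not the school's actual data or scheme.

```python
# Hypothetical model of a product-tracking table like the one described:
# each student maps to the change in a test score observed while using
# each piloted product. Thresholds are illustrative, not the school's.

def color_code(score_change, green_at=5, red_at=-5):
    """Map a score change to a traffic-light flag."""
    if score_change >= green_at:
        return "green"
    if score_change <= red_at:
        return "red"
    return "yellow"

# Student -> {product -> change in a district test score}
tracking = {
    "Student A": {"Product X": 8, "Product Y": -6},
    "Student B": {"Product X": 2, "Product Y": 4},
}

def summarize(tracking):
    """Count green/yellow/red flags per product across all students."""
    summary = {}
    for scores in tracking.values():
        for product, change in scores.items():
            flags = summary.setdefault(
                product, {"green": 0, "yellow": 0, "red": 0}
            )
            flags[color_code(change)] += 1
    return summary

print(summarize(tracking))
```

A per-product tally like this is roughly what lets a reviewer compare tools against the same metrics, rather than against whatever numbers each vendor prefers to highlight.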
The process is not perfect, he acknowledges. Chávez school officials can’t know for sure, for instance, if students’ academic gains or declines can be attributed to any ed-tech product or to other factors.
But when combined with the feedback he gets from teachers about how they view products, and how students respond to them, Mr. Dassinger is confident the process is helping him evaluate technology in a systematic way.
“It allows us to filter out some of the field, to get to some of the best companies out there,” he said. In reviewing the products, “if students are not engaged, or they don’t understand a technology, that’s kind of a red flag.”
Competing Instincts
Part of the dilemma school leaders face in weighing the value of digital tools is that those leaders are often torn by different, competing instincts, said Steven Hodas, the executive director of Innovate NYC Schools, a program within the 1.1 million-student New York City district whose goal is partly to give vendors insights on the needs of educators and other district officials.
Superintendents and curriculum and technology officials want to get a good price and avoid costly and embarrassing mistakes, Mr. Hodas said. But they also want to be bold. And buying technology brings its own set of temptations and complications for district leaders, he said, one of which is that so many devices and programs seem impressive and cutting edge at first glance.
“In the presence of bright and shiny, people tend to make bad decisions,” Mr. Hodas observed.
To avoid problems, he said, it’s critical for school leaders to clearly identify the problem they hope technology will help solve and to involve principals and teachers early and often, since those educators’ acceptance of ed-tech products will go a long way to determining their success.
At the Chávez school, some products go through the review process and come out winners. One such product was ST Math, game-based instructional software developed by the MIND Research Institute, an Irvine, Calif.-based nonprofit. The tool, whose name stands for spatial-temporal math, seeks to build skills through visual learning; it has won good reviews from the school’s classroom teachers and appeared to show positive academic results since it was introduced a few years ago.
The performance of various reading products, however, has been mixed at best.
Mr. Dassinger recalled one company’s initial showing with a reading product as a “disaster,” replete with a lot of black computer screens.
“The teachers just gave up because there were so many hands going up [from students needing technical help],” Mr. Dassinger said.
But even that error produced lessons. One teacher continued to experiment with the product, and the vendor that developed it asked for another chance. The company has since boosted its support for the school’s use of the program, and it has become popular among students, Mr. Dassinger said.
Chávez has also helped pilot a new, broader project being organized across Chicago, called LEAP Innovations (for Learning Exponentially. Advancing Potential.), which is meant to give schools access to ed-tech products that have been screened for quality—and to give companies the opportunity to test their products in classrooms.
After an initial period of beta-testing, the program will officially launch a fuller pilot this fall, with a focus on delivering pre-K-8 literacy instruction to schools. Schools apply to participate, identifying the specific educational needs they hope to address through technology as part of their applications. They will receive free software for the length of the program and training on how to implement the product in classrooms. More than 50 schools applied for the fall program—traditional public schools, charter schools, and private schools in Chicago are eligible—and 15 to 20 are expected to be selected to participate. Companies apply for the program by submitting a detailed application and, if they make it through an initial screening, appear before a “curation panel” consisting of educators, literacy specialists, and a learning scientist.
For companies, one of the chief benefits is the opportunity to have their technology tools evaluated in real classrooms, using “flexible but rigorous” research methodology, including internal and nationally norm-referenced tests. If they achieve positive results, that’s information they can use to market themselves to schools across the city—a major market, with 400,000 public school students—and beyond, said Phyllis Lockett, the CEO of LEAP Innovations.
About 30 percent of the vendors applying for the program are “mature” companies, while the rest are early or midstage, said Ms. Lockett. Initial funding for the nonprofit came from the Bill & Melinda Gates Foundation, and funding or support has also come from numerous other sources, including the Michael & Susan Dell Foundation and the Parthenon Group, a national consulting company, among others.
“We want the best solutions,” Ms. Lockett said, and the process is “agnostic as to which companies get chosen.”
Educator Feedback
A similar concept is at work with the Learning Innovation Hub, a program launched this year by the Silicon Valley Education Foundation, in cooperation with the NewSchools Venture Fund, an Oakland, Calif.-based philanthropic investment group. The iHub, as it is known, will attempt to connect ed-tech entrepreneurs with educators, who will give companies feedback on their products and what could be changed to improve them.
After an application process, eight teachers were named fellows through the program. They received stipends of $1,750 and agreed to devote 40 hours over the course of the spring 2014 semester to professional development on using the education products, providing vendors with feedback, and collecting data.
In the first year of iHub, four companies were chosen to participate in the program, which this year focuses on middle school math instruction.
One of the foundation’s hopes, said Mr. Chaudhry, whose organization is co-leading the project, is that both the process of reviewing products and the products themselves, if educators find them to be of high quality, will “permeate up.”
“Ideally, it scales from the classroom to the entire district,” he said.
Starting small, whether through pilot projects or other small trials of educational technology, is a good idea for schools and districts, argued Mr. Hodas of Innovate NYC Schools. Doing so gives school administrators the freedom to make mistakes and make adjustments before the stakes are too high.
“You’re going to be mostly wrong a lot of the time,” he said. “You learn more from being wrong than being right. You have to build that into your process.”