The U.S. Department of Education has released new resources for schools on artificial intelligence that include recommendations on a range of potentially thorny issues, including the use of AI detection tools that may falsely accuse students of plagiarism and how to build educators’ AI literacy skills.
The two reports come at a time when educators are still puzzling through how to approach this powerful and fast-advancing technology. Many teachers are hesitant to use AI in the classroom, surveys show. Meanwhile, students are increasingly using AI tools. A recent survey from Common Sense Media found that half of teens have used an AI text generator, 34 percent have used an image generator, and 22 percent have used a video generator.
Taken together, the department resources detail both the potential pitfalls that could stem from AI and the opportunities it presents for K-12 education.
Even though many states are crafting AI guidance for schools, federal guidance on AI is still needed, said Pat Yongpradit, the chief academic officer at Code.org and a leader of TeachAI, an initiative supporting schools in using and teaching about the technology.
“We really need to move beyond AI is bad [or] AI is good, and get super nuanced about the proper and improper uses of AI in education,” he said.
These resources, he said, give schools a good starting point to have those conversations.
A report from the Education Department’s office for civil rights, released last week, focuses on how AI could infringe on the rights of protected groups of students and details several scenarios in which schools’ use of, or response to, the technology could trigger an OCR investigation.
Among the examples:
- A teacher uses an AI detection tool to determine if students used a generative AI program like ChatGPT to write an assignment. Unbeknownst to the teacher, the tool has a much higher false-positive rate with students who are learning English, meaning English learners are falsely flagged and accused of cheating while their native English-speaking peers are not. (Some research has found that this happens; the sketch after this list shows how a school could check for that kind of disparity.)
- School administrators don’t respond aggressively enough after being tipped off that a student is creating “deepfake” nude images of their female classmates.
- A school uses an AI tool to create the schedule for sports practices and games, and female teams are assigned worse times and days to play. The school does not respond to the student-athletes’ complaints.
- A school district purchases facial-recognition technology that misidentifies Black students and incorrectly flags them as known criminals from a database.
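The OCR report doesn’t prescribe a method for catching the detection-tool problem in the first scenario, but the underlying check is straightforward: compare the detector’s false-positive rate across student groups. Below is a minimal Python sketch of that kind of audit; the record format, subgroup labels, and sample data are hypothetical, not drawn from the report or from any specific detection tool.

```python
# Minimal sketch of a subgroup false-positive audit for an AI
# detection tool. The record layout and labels are hypothetical.
from collections import defaultdict

# Each record: (subgroup, flagged_by_detector, actually_used_ai)
records = [
    ("english_learner", True, False),
    ("english_learner", False, False),
    ("native_speaker", True, True),
    ("native_speaker", False, False),
    # ... a real audit would use far more submissions
]

def false_positive_rates(records):
    """Share of honest (non-AI) work wrongly flagged, per subgroup."""
    honest = defaultdict(int)   # honest submissions per subgroup
    flagged = defaultdict(int)  # honest submissions wrongly flagged
    for group, was_flagged, used_ai in records:
        if not used_ai:
            honest[group] += 1
            if was_flagged:
                flagged[group] += 1
    return {g: flagged[g] / honest[g] for g in honest}

print(false_positive_rates(records))
# A large gap between groups -- say, English learners flagged far more
# often than native speakers on honest work -- is the disparity the
# OCR scenario describes.
```

The same logic extends to the facial-recognition example: the metric changes, but the question a district should ask is the same, namely whether the tool’s error rates differ across groups of students.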
Those examples are a sampling of the potential issues OCR has identified that could arise when schools overrely on AI and fail to keep real people in the decisionmaking loop. But they’re not purely hypothetical: Schools are already dealing with some of these issues, such as students making sexually explicit deepfake images of their classmates.
“The examples that they have in the document are quite real,” said Yongpradit. “These are not two-sentence descriptions of a potential action. These sound like they are already happening. And it should be a wake-up call when it comes to the risks of AI in schools. There’s actual discrimination that could be exacerbated or created because of improper use of AI in schools. And it really alludes to the need for comprehensive AI literacy.”
However, he said, the takeaway shouldn’t be that AI is bad, and education leaders shouldn’t react by trying to ignore or disengage from it.
That’s where the second resource from the Education Department’s office of educational technology, released in October, comes in. While a portion of the tool kit is devoted to the risks of AI, it also offers practical tips on approaching topics like evaluating AI interventions and updating school technology policies for AI.
The tool kit was developed with support from Digital Promise, a nonprofit group that focuses on equity and technology issues in schools. A group of 16 teachers, principals, superintendents, and other educators also contributed their insights.
The tool kit comprises eight modules that address three broad themes: AI risk mitigation, strategies for integrating AI into instruction, and effective use and evaluation of AI.
For example, that last theme includes a module on building AI literacy. It gives an overview of what AI literacy looks like for educators, explains why it matters, and lists topics that AI literacy professional-development initiatives should cover, including the technology’s history and origins, as well as data and machine learning.
So, how should school leaders approach these reports? Yongpradit recommends using the resources to open up discussions in faculty meetings.
“The tool kit is more directive—the modules are set up as book club readings or practical activities that teachers can do,” he said. “The office for civil rights guidance is really focused on discussion and picking apart the scenarios and reflection on whether the school is proactively addressing the potential for discrimination, or if the school is doing some of these things, or if teachers are putting themselves at risk and their learners at risk.”