How educators make ethical decisions about the use of AI for teaching and learning is shaped by a range of factors.
One big one is gender, according to a new report from the University of Southern California’s Center for Generative AI and Society.
The study comes a little more than a year after ChatGPT and other generative AI tools entered the K-12 scene. Still, many teachers remain unfamiliar with the technology and uncomfortable using it. An EdWeek Research Center survey conducted in the fall found that two-thirds of teachers are not using AI-driven tools in their classrooms.
The USC study explores how teachers make ethical judgments about using AI in the classroom. It surveyed 248 K-12 educators from public, charter, and private schools across the United States and asked them to rate how much they agreed with different ethical ideas and whether they were willing to use generative AI tools in their classrooms.
It found that female teachers were more likely to be proponents of rule-based ethical perspectives (such as AI must protect user privacy and confidentiality and AI should be fair and not biased), whereas male teachers were more likely to be proponents of outcomes-based perspectives (such as AI can improve efficiency and people might become too reliant on AI).
As AI tools become staples in the classroom, “it’s really important for teachers to feel empowered to be engaged in these conversations, not from a place of fear or pushback but to shape the technology so it suits their needs,” said Stephen Aguilar, the associate director of the USC Center for Generative AI and Society and an associate professor of education at the university.
In a Zoom interview with Education Week, Aguilar, the author of the report, explained the importance of examining teachers’ ethical judgments and what the study’s results mean for K-12 schools.
This conversation has been edited for brevity and clarity.
Why is it important to study teachers’ ethical judgments about using generative AI in their classrooms?
Whenever a new tool comes up on the market, teachers always make judgments about [whether they are] going to use this new technology or not. Teachers are the arbiters of their classrooms, and they decide what gets used and what doesn’t get used, often regardless of what administrators say. Those judgments, even though we might not think about them, often come from an ethical framework that we happen to hold.
One of the things I noticed about the discourse, how people were talking about artificial intelligence in education, was it really focused on consequences. If we use AI, things will be more efficient. We’ll be able to maximize some sort of outcome that we care about. That’s just one ethical framework. What I thought about was, “Well, how else are teachers thinking about this?” Are they thinking about the rules that you just shouldn’t break? It’s important to have that conversation because that drives decisions.
What was the most surprising finding for you?
One of the things that was most surprising was an apparent difference in how women and men were making ethical decisions. This isn’t like one group only had outcomes-based perspectives. It’s about which ones they weighed more. Seeing that men in our sample weighted outcomes-based perspectives more than women did and women weighted rules-based perspectives more than men did—it’s something that I want to keep digging into. If there are differences in how groups of folks are making these decisions, then we need to pay attention to organizations and how they’re structured and who’s in those organizations because there might be subgroups that are thinking about things differently.
My go-to example here is: Teaching, especially in K-12, is predominantly [done by] women. Whereas if you look at tech startups, it’s [predominated by] men. If there’s a difference in values there, then what’s being created versus what’s being deployed, there’s going to be this tension that we need to address.
Frankly, the only way to actually address that is through a process of co-design or the process of getting different groups together to actually think through their core values and what it is they want a piece of technology to accomplish. Without doing that, then what we get is this tension. It’s never a good thing to just have one dominant ethical perspective rule how things are created, because it just ignores an entirely different way of thinking.
What would you want K-12 teachers to take away from this study?
Be willing to have conversations with administrators and to engage with tech companies about your concerns and about what’s happening but not from a perspective of “I’m afraid of being replaced by AI,” because that’s not going to happen. Generative AI is simply a tool that will get integrated into teaching practice, just like everything else in the classroom that was invented at some point.
If teachers come at it from [that perspective], then [they can ask]: “How do I want to use it? What are the policies that I think should be in place? What do I notice when my kids are using it in the classroom?”
The only people who have any real insight into that are teachers. Everyone else isn’t in the room. It’s really important for teachers to feel empowered to be engaged in these conversations, not from a place of fear or pushback but to shape the technology so that it suits their needs. Otherwise, it’ll be the Khan Academys of the world that decide what’s going to be important.