Even though ChatGPT brings with it a host of potential problems, most of us in schools are beginning to accept that artificial intelligence tools are coming, and we are weighing the best ways to respond. “What will stop kids from plagiarizing?” “How do we deal with misinformation?” “How will we know what students actually know?”
I’ve taught middle school science for 11 years, and lately I’ve heard a lot of teachers express fears about this new technology. It’s hard for most of us to understand what’s under the hood of AI. Maybe that’s part of why some education leaders have responded defensively, with schools and districts blocking access to OpenAI’s chatbot on their networks and devices, though New York City schools dropped their ban last month.
I can understand the rationale for blocking: by shielding our classrooms from generative AI, we can keep teaching as we always have. But I prefer a different approach. When my students encounter something new, I have them play, experiment, and generally learn what the new thing is and how to use it. Educators can do a lot with generative AI, even if we aren’t experts in it. Our classrooms can help set students up for success in an ever-evolving technological world.
This starts with how students are tested. Formal assessments often pose questions that are scored automatically on a right/wrong binary. Until now, that’s all our technology allowed us to assess efficiently. But the data educators receive from these tests aren’t great. They often don’t pinpoint missteps or advances in student thinking, and the results are usually returned to schools too late to be actionable.
Our assessments are measuring the wrong things in the wrong way, and we now have the technology to fix that. Generative AI built on large language models, which are trained on massive data sets, lets us ask richer questions that illuminate student thinking. Instead of relying on items that are easy to grade, we can ask higher-order questions in extended tasks. If we can change how we assess students, we can transform how we teach.
I tested this out during this year’s science fair. All of my students complete a science fair project as a capstone, conducting experiments and creating display boards to present to our school community. But students need wildly different amounts of help, and at different points in the arc of the assignment. Some take weeks to collect measurements and data, others finish data collection in minutes but need help with analysis, and still others are confounded by the very basics of research. It’s almost impossible for me to give 128 students adequate feedback every day, at least not by myself.
So, I gave ChatGPT my rubric. Then I had students start submitting their project outlines to ChatGPT for feedback. Within seconds, the AI had processed everything and was crafting targeted, constructive feedback. It flagged plagiarism. It offered tweaks to improve replicability and validity. It complimented innovative and unique ideas. It even summarized all of its feedback with lots of “glow and grow” phrasing.
This is the kind of feedback I want to give constantly, but it takes hours. With AI, students don’t have to wait on me. They can edit, tweak, and resubmit over and over, continually getting formative feedback. Is it cheating? I don’t think so. It’s real-time feedback, and it’s richer and more complex than what I could provide alone. It accelerates learning.
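My setup was nothing fancier than the ChatGPT web app: paste in the rubric once, then paste in each outline. For teachers who want to automate that loop, something like the sketch below could work. It assumes the OpenAI Python client, and the rubric text, model name, and prompt wording are placeholders for illustration, not what I actually used.

```python
# Hypothetical sketch: automating a rubric-based feedback loop with the
# OpenAI Python client. The rubric, model name, and prompt wording are
# placeholders; in class I simply used the ChatGPT web app.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """Science fair outline rubric (placeholder):
- Testable question and clear hypothesis
- Replicable procedure with controlled variables
- Data analysis plan that can support a conclusion
"""

def rubric_feedback(outline: str) -> str:
    """Return 'glow and grow' style feedback on a student's project outline."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a middle school science teacher. Give constructive "
                    "'glow and grow' feedback on the student's project outline, "
                    "judged against this rubric:\n" + RUBRIC
                ),
            },
            {"role": "user", "content": outline},
        ],
    )
    return response.choices[0].message.content

# Students can revise and resubmit as many times as they like.
print(rubric_feedback("I will test which paper airplane design flies farthest..."))
```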
I tried one more experiment, this time through a language arts lens. I gave ChatGPT test questions from my upcoming science test on force and motion and then gave the machine-generated answers to my students. “I gave some test questions to a robot,” I told them. “Can you make the answers better?”
I had never seen such motivation—indignation, really. Students were offended at the notion that a robot could be smarter than they are and worked collaboratively to find any way to strengthen the otherwise very strong responses.
“I think the robot’s word choice would be confusing to people,” Kalya told me. “So, I’m going to swap these words out for easier to understand words that mean the same thing.”
“The robot wasn’t specific at all when it mentioned force,” Aleiyah said. “There are lots of different kinds of forces, so I’m going to specifically name them.”
“It seems like the robot is just describing what happened without explaining why,” Le’shawn noted. “I’m going to add a sentence after its description to explain that.”
When I brought generative AI into my teaching, it took less work to increase the rigor, collaboration, and depth of thinking in my classroom.
So, how do we as teachers embrace AI in the classroom?
First, validate the world students actually live in, and question rigid attachments to pedagogies that don’t fit the world they’ll inherit. As teachers, it is our responsibility to open ourselves up to the challenges students will have to face. If we focus our time and energy on preparing them for those challenges, we’ll do it better. It’s OK to let go of the rest.
Second, change the relationship among students, teachers, and technology. Teachers often think students are using technology against them—distracted by phones, cheating with search. But every student I know wants to be better than a robot. Challenge the students to form an alliance with you, to create content and express knowledge better than a generative AI tool like ChatGPT.
Third, we have to change the way we assess students and the role those assessments play in school accountability. Our assessments are mostly designed to test student thinking on items that are easy to ask and measure on a test. But just because they’re easy to measure doesn’t mean we’re measuring the right things.
Let’s move toward a future where teachers and assessments focus on collaborative, real-world performance rather than answers to narrow skill or fact questions. And let’s embrace ChatGPT and other AI software to help us get there.