AI in schools could be a disaster, but it doesn’t have to be
The history of education is littered with technologies that were going to change everything.
Just in the past few decades, we have had MOOCs (widely accessible online university courses), which were going to revolutionize higher education, and a one-laptop-per-student spending spree in K-12. Both of these were ultimately flops, as were so many educational technologies of the past. The backlash has become so complete that some states are now mulling screen bans in school for some children.
Generative artificial intelligence (AI) is the newest technology that’s supposed to revolutionize teaching and learning. Billions are already being spent to rush AI tools into the classroom, and teachers in K-12 and higher education are training on how to incorporate AI in their teaching. We don’t yet know the ultimate impact, but the early data suggest that AI is going to break the pattern of prior educational technologies.
It’s not going to have no impact; it’s going to make things much worse.
Experts have been warning about generative AI in education, but the train keeps barreling down the tracks. A report from Brookings earlier this year weighing the risks and potential benefits concluded that the risks were far greater. This is because generative AI has the potential to “undermine children’s foundational development”: not just their ability to learn content, but their social and emotional development, their autonomy and agency as learners, and even their trust in important institutions like schools. These risks extend beyond individuals, too; experts worry that our collective knowledge as a society is also in jeopardy.
We can already see AI’s negative effects in the data. Recent survey data from Pew show that more than half of teens are already using AI for their schoolwork, and about 10% report that almost all of their schoolwork is done with AI. About 60% say that students at their schools often use AI to cheat. Teens in that survey recognize that AI can help them complete tasks, but their biggest worry is that overreliance will undermine their ability to think for themselves. New analyses of students’ in-school AI usage show that a full 20% of interactions involve potentially troubling behavior like cheating, bullying or self-harm.
My own research team at the University of Southern California has also recently surveyed representative samples of teens and parents about AI, and our findings mirror Pew’s but deepen the concern. For starters, parents don’t realize the depth of teens’ use. We found that only 7% of parents thought their teens were using AI for schoolwork multiple times a week or more, but 27% of teens said they were (a figure that is itself surely an undercount). Parents are also overwhelmingly unaware of the AI policies at their teens’ schools. Both parents and teens are more likely to agree than disagree that AI causes more harm than good, and more than a third of teens say AI is making their ability to think for themselves worse.
These results are alarming, and they’re likely to worsen as AI becomes more integrated in society and in our education systems. But they shouldn’t be surprising. AI helps individuals complete tasks; there is no doubt about that. I have used it to help me write cover letters or references, for instance, and it’s very efficient and does a fine job. But learning is rarely about the product itself; it’s almost always about the process of getting there. Shortcutting the struggle to master new skills and make connections will have long-term negative consequences for individuals and society.
It’s not too late to avert the impending disaster. To start, we need leaders at all levels, but especially at the state level where policy is usually made, to offer districts clearer support and guidelines about appropriate and inappropriate uses of AI in schools. It simply cannot be left to 1,000 individual school districts to figure out effective AI policy on their own.
State policy recommendations should start from an understanding of good and bad uses of AI for both teachers and students.
For teachers, the rule of thumb should be that AI makes their jobs easier and more efficient without sacrificing quality. AI tools should also target areas where teachers often struggle. This could mean tools that handle repetitive grading, summarize student misconceptions across a classroom, or generate materials to differentiate instruction for high- and low-achievers, for instance.
Students are all but certain to use AI regardless of school policy, so schools need to figure out how to ensure that kids are still doing the hard work necessary to learn in spite of AI’s temptations. This probably means moving more student work and assessment into the classroom, getting rid of busywork assignments that AI can easily complete, and coaching both children and parents on the dangers of the technology.
Even these solutions, I fear, will not be enough to fend off what’s coming: a generation of children whose educational experiences are diluted by generative AI, leaving them unprepared for the future. Neither the children themselves nor we as a society can afford that level of disruption.
•••
Morgan Polikoff is a professor of education at USC Rossier School of Education.
The opinions expressed in this commentary represent those of the author. EdSource welcomes commentaries representing diverse points of view. If you would like to submit a commentary, please review our guidelines and contact us at commentary@edsource.org.