AI researchers are to blame for serving up slop

I’m not surprised to read that the field of artificial intelligence research is being overwhelmed by the very slop it pioneered (Artificial intelligence research has a slop problem, academics say: ‘It’s a mess’, 6 December). But this is a bit like bears getting indignant about all the shit in the woods. It serves AI researchers right for the irresponsible innovations they’ve unleashed on the world without ever bothering to ask the rest of us whether we wanted them.

But what about the rest of us? The problem is not confined to AI research: slop generators have flooded other disciplines that bear no blame for this revolution. As a peer reviewer for top ethics journals, I’ve had to point out that submissions are AI-generated slop. But many academic experts are not well versed enough in slop to spot it quickly. Nor, understandably, are they inclined to immerse themselves in the “genre” to get up to speed. This means that weeding out the slop will be slow and will clog up peer review. AI allows the volume of rubbish to be scaled to unmanageable levels, so that traditional quality-control mechanisms such as peer review are overwhelmed.

We are on the verge of academic virtues and standards falling away, the “signal” being drowned out by the “noise” across the board, at which point research may descend into a downward-spiralling bad imitation of itself, from which there is no obvious escape. Those who continue to look away from this problem must not later be allowed to claim that they weren’t warned about, and couldn’t have foreseen, the consequences of their neglect.

Dr Craig Reeves
School of Law, Birkbeck, University of London