The real crisis of literacy – we’re reading too much

Not long ago, in a bid to become even more insufferable, I finally got rid of my smartphone. I’ve already proselytised enough to friends about the benefits – increased concentration span, better sleep, bizarrely thick hair growth – so I won’t go into all of that here. Something less obvious I’ve noticed, though, is just how much less dominated my life now is by language.

A great deal of fuss has been made lately about modern technology’s supposedly deleterious effects on literacy, from a viral Substack post by James Marriott to an essay in this month’s Baffler. But the last few weeks have made me wonder if, paradoxically, we should be more worried about the opposite: our smartphones bombarding us with too many words, and thus making us, in an admittedly limited sense, hyper-literate, with no real sensitivity to the subtleties of human experience beyond that which can be cleanly expressed in code.

Before ditching my iPhone, my days were a nonstop blizzard of words: news, podcasts, YouTube videos, Twitter, WhatsApp, email, Wikipedia, books, magazines. True, even without it, I still spend a decent chunk of my time reading, writing, listening, and thinking “linguistically”. But what made the smartphone particularly pernicious was that it plugged every last gap in the day where I might otherwise have had a brief moment of respite – from words. Even walking between rooms in my flat, I’d reflexively get out my phone and find some kind of text to scroll through. Some days, the longest I’d go without engaging with language was a few seconds in the shower.

Things are rather different now: I can happily take a long bus or train journey without having to find some text to hook my mind onto. This has taught me a couple of things. First, just how much language reshapes our sense of reality, by peeling us away from the here and now – very simply, we confuse the map for the landscape.
Second, and by extension, that this confusion distorts our understanding of intelligence – and thus blinds us to how different human intelligence is from current approaches in AI.

It’s no revelation that language, and the written word in particular, abstracts us from the world around us. This, to bastardise the philosopher Gottlob Frege, is because the most fundamental aspect of language – what the words actually mean – is not something actually situated in time or space. A “meaning” has no physical shape or location. Nor is it restricted to any particular moment or event: if I write you a letter, the words don’t stop meaning what they mean while it sits unread in the postal van between our houses. The meaning is something quite separate from the ink the words are written in, or the mind deciphering them.

For philosophers, this has represented a longstanding puzzle. For everyone else, it has concrete psychological implications. The more time we spend with words, the less we’re actually “in” the world around us: our minds are, as it were, operating on a different plane, navigating an invisible network – a bit like those diagrams that show satellite beams or internet data floating phantasmically over the Earth. We can think of language, in fact, as the very first social network – a parallel world we retreat to in our heads, ostensibly to connect with others, but one which makes us gradually lose contact with the world immediately in front of us.

In the age of LLMs, this seems like a double problem. We have become deadened to the uniqueness of non-linguistic experience at the precise moment when machines are beginning convincingly to mimic our expertise with words. As AI gets better, something that still genuinely sets us humans apart is the way we experience the physicality of the world around us, and feel the strange, continuous, seamless passing of time. There’s good reason to suspect, indeed, that our intelligence emerges from these worldly facts.
But we are increasingly blinded to this aspect of our existence by the unending assault of words, words, words. And so we forget something that ought to be very obvious: that our capacity to understand the sentences that AI produces is a non-linguistic capacity. That is, our recognising something as true or interesting or relevant is not itself something we do by shuffling words into the right order, but by using some prior and broader faculty that “observes” language, and makes judgements about it, from without. It seems highly likely that this capacity comes from, and depends on, our more expansive, pre-linguistic experience of the world.

Of course, there are some AI theorists who think we can, and will, “embody” artificial minds in physical robots that can be taught to “experience” the actual world as humans can. In one sense, they’re right. Obviously you can hook up an LLM to a camera, say, get it to analyse the pixels it receives, link these to a set of instructions about when to move its wheels forward, and so on. But in no sense does it have any real awareness of the physical world – all it has is a bunch of digital data which it can interpret to behave in broadly human-like ways.

Indeed, if LLMs really could think, it would be nigh on impossible to convince them of the reality of the material world – anything that falls in the gaps between words and numbers, as our sensed experience of physical objects does, would be quite literally unimaginable. And how would you get a robot to “feel” time passing, rather than just logging a series of timestamps? True, I suppose we can’t categorically rule out that our own continuous sense of time will, one day, turn out to be nothing but the net result of binary data sparks in our brains creating the illusion of seamless forward-moving experience (though it strikes me as profoundly unlikely).
But we can say with confidence that any AI which doesn’t start with the non-semantic aspects of intelligence, from which human linguistic reflection itself emerges, almost certainly won’t be anything like us.

Now, it’s true that humanity’s reification of language didn’t exactly start with smartphones. Already in the 19th century, philosophers were warning us not to confuse word with world. Nietzsche, for instance, wrote: “mankind set up in language a separate world beside the other, a place it took to be so firmly set that, standing upon it, it could lift the rest of the world off its hinges and make itself master of it”. True, part of his concern stemmed from an anti-democratic distaste for mass literacy and the rise of print media: men, he wrote, “have lost the last remnant not only of a philosophical but also of a religious mode of thinking, and in their place have acquired not even optimism but journalism, the spirit and spiritlessness of our day and our daily papers”.

But beneath Nietzsche’s elitism was also a genuine concern that language rides roughshod over raw experience – that it makes philosophical problems seem too tangible and easily solvable. And so the story arguably goes right back to our species’ very first words. Language, by dividing up experience into discrete things, sets in motion our process of mechanising the world – and seems now to be leading to the final transformation of the mind into a computer.

I suspect that the temptation of mistaking language for reality is perennial – and that modern technology simply makes that temptation harder to resist. But it also offers us a curious opportunity: abandoning your phone, you get something that would have been very difficult for previous generations to achieve – a sudden, dramatic, and therefore radically vivid return to the underlying substratum of human experience before we placed the grid of language on top of it.

Want to know why AI isn’t going to replace you?
Turn off your phone and take a walk.

[Further reading: English literature’s last stand]