Last week, I wrote an essay defending Katie Whittemore’s use of ChatGPT in her translation of Attila by Aliocha Coll. The long and short of it was that, since LLMs have become, whether we want them or not, a feature of every website and piece of software, we might as well use them, provided they give us accurate information, which proved to be the case with ChatGPT. I tested this with an example from my own translation: “Fleckerlschuh”, an obscure word from Bavarian dialect that I encountered in a play by the Munich cabaretist Karl Valentin. ChatGPT provided me with the correct answer, a sort of rustic handmade shoe made from fabric and leather scraps, which I confirmed by reading some articles about mountaineering in the Alps. In any case, ChatGPT was more accurate than Google.
So why am I saying that translators shouldn’t use the technology? Shouldn’t we just be pragmatic and use all the tools available to us? Yes, but then I wouldn’t be writing about translation problems in obscure literature if I were a pure pragmatist. I’m only a pragmatist insofar as I want to be a responsible custodian of the literature I care about, and it’s for that reason, for custodianship, that I’m reconsidering my position.
It’s no secret that basic literacy and numeracy are in sharp decline, especially among young people. The use of LLMs is a major, though not the sole, cause of this decline. The health of any kind of literature, no matter how rarefied and erudite, is downstream from a culture’s basic capacity to teach reading and writing. In the United States, we’re seeing an abdication of that responsibility at every level of society, from parents to teachers to administrators to politicians. This is especially bad with foreign languages, which are being eliminated wholesale from school curricula. Language instruction, precarious as it is, provides the employment basis for many fellow translators and scholars.
The situation is as annoying as it is troubling. AI has become a constant irritant, a retarding force, in every possible sense of the word, on economic activity and social relations. The vacant, unhappy stares that often attend unscripted social situations are proof enough that the unfettered use of this technology is antithetical to human flourishing. It is fun to play around with LLMs, to have them draw cute little pictures based on prompts, and they have some usefulness in translation. But their use has to be accompanied by some heavy caveats.
In the case of translation, there’s a danger of rendering the activity pointless. If we’re not engaged with the material at hand, considering and reconsidering the work as it passes between languages, and letting the difficulties of the process shape, in turn, the way we engage with language more generally, then I’m not sure why we should even bother. At that point, it’s more or less data entry, and poorly remunerated data entry at that. Automated methods still produce books, but without care and attention these would be little different from the unedited, barely intelligible drivel that gets churned out by the cubic yard every second.
To use a metaphor from land management, using ChatGPT is like using herbicide to restore degraded habitat. When a patch of forest becomes overwhelmed with invasive species, empress trees for example, often the best option is to use glyphosate, applying it to cuts made in the trunk. It’s usually a less intrusive option than clearing the land. To be sure, this kind of chemically enhanced horticulture is what brought over the invasive plants to begin with, but clean hands are a luxury. Targeted applications of LLMs, just like herbicide, can improve the linguistic environment.
Again, this sounds like an argument for the use of AI; the difference, I suppose, is that I’m now quite ambivalent about promoting the use of ChatGPT outside of a narrow professional context. Part of the reason Whittemore wrote her introduction to Attila was to raise awareness of the technology, but awareness has already been raised; it’s inescapable online. The technology itself is widely discussed and widely adopted, and further awareness raising seems superfluous.
Part of why printed books continue to have value, for me at least, is that they provide a necessary refuge from the sensory onslaught that greets me every time I unlock a phone or log onto a computer. Even though Whittemore’s introduction was helpful, I can imagine how demoralizing it might be for others who share my preferences but not my vocation to crack open a book and find a screenshot of ChatGPT inside.
As I was thinking about this problem, the old cliché about “how the sausage is made” kept reappearing in my mind. Even in Bavaria, a culture that values the sausage, it serves as a metaphor for ugly or unnecessary details. “Mir is Wurst” (“it’s all sausage to me”) is a common expression of indifference throughout southern Germany. I don’t think it’s much of a stretch to think that many readers feel that way about AI technology in writing: something like making sausage or spraying for weeds, best discussed among those involved.