So you use AI to write…
You're probably using AI to write. Both ChatGPT and Google AI Studio construct their sentences in specific, characteristic ways, and anyone who's been a commissioning editor for at least a few years will find the signs of their influence hard to miss, even if you personally think it's undetectable.
You're probably going to continue using AI to write. There's nothing I can do about that. In fact, as AI models improve, my ability to prove you used AI to write some text will only decline. And as a commissioning editor focusing on articles about science, health, and environment research, I'm also not entirely at liberty to stop working with a writer if I notice AI-written text in their reports. I can only warn them. If I threaten to stop, I'm certain that one day I'll have to impose that ultimate sanction, and after that I'll have to look for new writers from a pool that's already quite small. That's a bad proposition for me.
To be sure, I realise I'm in a difficult position. People, especially those without a good grasp of English writing and those without the means to attain that grasp, should have the opportunity to be understood as precisely as they wish to be by English-speaking readers. If immaculate writing and grammar also allow their opinions to reach audiences that previously shunned them for being too difficult to understand, all the more reason, no?
This has been something of an issue with the way The New Yorker wields its grammar rulebook, with its commas, hyphens, accents, and umlauts in just the right places. As the author Kyle Paoletta argued in a 2017 critique in The Baffler:
For The New Yorker, a copy editor’s responsibility to avoid altering the substance of a writer’s prose is twisted into an utter disinterest in content writ large. … Content must be subordinated—thoughtfully, of course!—to the grammatical superstructure applied to it. Not only does this attitude treat the reader as somewhat dim, it allows the copy editor to establish a position of privilege over the writer. …
[Famed NYer copy-editor Mary] Norris frets over whether or not some of James Salter’s signature descriptive formulations (a "stunning, wide smile," a "thin, burgundy dress") rely on misused commas. When she solicits an explanation, he answers, "I sometimes ignore the rules about commas… Punctuation is for clarity and also emphasis, but I also feel that, if the writing warrants it, punctuation can contribute to the music and rhythm of the sentences." Norris begrudgingly accepts this defence, but apparently only because a writer of no lesser stature than Salter is making it. Even in defeat, Norris, as the tribune of The New Yorker’s style, is cast as a grammatical arbiter that must be appealed to by even the most legendary writers.
I shouldn't stand in judgment of when and how a writer wishes to wield the English language as they understand and have adopted it. But with AI in the picture, that could mean trusting the writer to a degree that overlooks whether they've used AI. Put another way: if it's levelling the playing field for you, it's erasing another sign of authenticity for me.
Perhaps the larger point is that as long as you make sure your own writing improves, we're on the right track. To be clear: whose English should be improved? I'm saying that's the contributor's call. For one, if they send a poorly written piece that makes good points, some of what I'd be doing to their piece is what AI would do as well (and what tools like Grammarly have been doing for years). For another, as AI use becomes more sophisticated and I become less able to tell original and AI-composed copy apart, I'll have to accept at some point that there's nothing I can do. In that context, all I can say is: "Use it to improve your English at least."
Or perhaps, in the words of Northern Illinois University professor David J. Gunkel, "LLMs may well signal the end of the author, but this isn’t a loss to be lamented. … these machines can be liberating: they free both writers and readers from the authoritarian control and influence of this thing we call the 'author'." For better or for worse, however, I don't see how a journalistic publication can adopt this line wholesale.
* * *
As a commissioning editor in science journalism specifically, I've come across many people who have useful and clever things to say but whose English isn't good enough. And from where I'm sitting, using AI to make their ideas clearer in English only seems like a good thing… That's why I say it levels the playing field. But when you level a playing field, the people who already have a skill because they trained to acquire it lose their advantages, while the new entrants lose their disadvantages. The former is of course unfair.
In the same vein, while I know AI tools have been enormously useful for data journalists, asking a programmer how they feel about that would likely elicit the same sort of reaction I and many of my peers have vis-à-vis people writing well without having studied or trained to do so. That would make the data journalists' use problematic as well, by virtue of depriving a trained programmer of their advantages in the job market, no?
Short of the world's Big AI companies coming together and deciding to stamp their products' outputs with some kind of indelible watermark, we have to deal with the inevitability of being unable to tell AI-made and human-made content apart.
Yes, AI is hurting what it means to be human, at least in terms of our creative expression. But another way to frame all this is that the AI tools we have represent the near-totality of human knowledge collected on the internet. Isn't it wonderful that we can dip into that pool and come away more skilled, more knowledgeable, and perhaps better people ourselves?
Science communication, insofar as it focuses on relaying what scientists have found, might become redundant and low-value, at least among those who can afford AI subscriptions (whose prices will fall even as access widens), because people can simply feed a model a paper and ask for a summary. Hallucinations will always be with us, but it's possible they'll become less frequent. (And it seems somewhat precarious to found our own futures as journalists on the persistence of hallucinations.) Where we can pull ahead instead, and perhaps stay ahead, is by focusing on a journalism of new ideas, arguments, and conversations. There, the playing field will be less about whether you can write better and more about what, and how, you've learnt to think about the world and its people.
We all agree democratisation is good. Education facilitated that earlier and, to a large degree, still does. Now the entry of AI models is short-circuiting that route, and presenting some of us with significant disadvantages. Yes, many employers will try to take advantage of this state of affairs to bleed their workforce and take the short way out. But it remains that, setting aside the models' tendency to come up with BS and to equivocate on highly politicised issues, they are also democratising knowledge. Our challenge is to square that fact (which can't be un-facted in future) with preserving the advantages of those who 'deserve' them.
As if mirroring the question of who gets to decide whose English should be improved: who gets to decide who gets to have advantages?