This may be my algorithm talking, and others may not have noticed it so far, but my LinkedIn timeline has seen a surge of posts pointing out the incompetence of AI detectors that flag the posters’ writing as machine-generated. Most have tested these detectors with articles written before 2022, when OpenAI launched ChatGPT. Many of these writers are native English speakers.
I have wanted to write about this for at least two years now. Back then, though, my article would have lacked authenticity. This is because I started observing this bias through the lens of writers from the Global South, whose apprehensions are usually dismissed. I can’t recall the exact year, maybe late 2023 or 2024, but there was an article on the use of LLMs in writing by research students. Apparently, the word ‘delve’ – yes, before the em dash, the word ‘delve’ was the centre of hate – indicated the use of LLMs.
A tech expert from the US then posted on X that he used the word to filter out AI-generated emails. This led to an outcry on X, with most users pointing out that the word is quite common. The expert argued that language has to be simple and close to everyday use. By contrast, those from countries where English is a second language explained that they had learned the language through books and movies and would naturally adopt that style. This, however, they said, did not imply that they had used LLMs to write something.
Since we were in the middle of a genocide at that time and a lot was happening around the world, I could not follow up and do not know how the debate ended. But what users on X debated was relatable. From 2019 to 2020, I worked as a digital media specialist at a local software house. My job included writing content for US-based clients. During the 18 months I spent there, not once was my writing flagged for plagiarism. Honestly, when writing is your job and when you are told what to write, it becomes easy. Writer’s block happens when you have to switch gears for something more creative, more personal, maybe. Just like this article. It could not have happened two years ago because I was not sure how to make my argument stronger.
In late 2022/early 2023, I briefly worked as a content writer as a side hustle. I could not hold on to the job, mostly because clients would use AI detectors and ask us to botch a perfectly written article. At that time, articles written in the third person would almost always get flagged as AI-generated. Clients would not accept anything less than a 95 per cent ‘human’ score, and it was torture. The frustration mounted and, ultimately, I couldn’t manage the undue pressure.
In some respects, I agree with the tech expert about the language. You don’t need to beat about the bush to express your thoughts. But what most people either ignore or fail to understand is that written language is also different from spoken language, and one has to be careful with one’s words. While blogs did free written language from formality, how we write remains nuanced. And whether something is AI- or human-generated is best detected by a human eye, not by machines. This argument would have fallen flat two years ago, when most AI companies were more interested in proving that machines were fast becoming superintelligent.
I asked my friend, who was working as a content writer, about this, and she told me how complicated her work had become ever since these detectors came into play. Writers from the Global South have few ways to prove their credibility and integrity. Of course, how can they master a language they do not hear 24/7? This bias may not be apparent, but it does leave its signs. For example, rates for writers in the Global South are criminally low, mostly because they are non-native. If their writing required heavy editing, then yes, the rates could be justified. But if an article is published on a client’s website, LinkedIn timeline, etc, without a single change, what justification is there to give peanuts to writers here?
Now, all these native speakers are clutching their pearls over the inaccuracy of AI tools. This discourse never happened when writers from the Global South were being accused of using LLMs and when studies would be published on the use of ‘delve’. The idea that machines could be wrong was non-existent then. People who are not English speakers may have a different writing style, but they still have a distinct and original voice. Languages cannot be gatekept, and we can only hope that this example will help our friends in the Global North realise that our reality could be different from theirs, and that they do not necessarily have to wait for things to get murkier before they accept our words.
PS: I am a writer from the Global South who ran this piece through three AI detectors. One said it was 7.5 per cent AI-generated; the other two said the use of AI was zero. Now, naturally, readers will be guessing what that 7.5 per cent could be. If you can identify it, do let me know, because I also want to learn where the machine copied me.
The writer heads the Business Desk at The News. She tweets/posts @manie_sid and can be reached at: [email protected]