AI’s potential for undermining the culture of writing and teaching
An ominously quiet threat is sweeping across Pakistani universities. AI and its various manifestations pose a challenge unlike any experienced in recent times. While campuses across the world are deluged by the new technologies and their undeniable threat to creativity, especially writing, there is at least a growing conversation abroad on how to create meaningful barriers against them. In contrast, our universities reveal disarray and confusion, stuck in an erratic process of stocktaking.
Pakistan’s elite campuses have grasped the new beast’s destructive potential for undermining the culture of writing and teaching. Hence some on-again, off-again but largely muffled conversation is under way. We need to sit up and take remedial measures on an emergency basis. At stake is our youth’s capacity for cognition, which means understanding concepts and making judgments. At the fundamental level, it has to do with expression and argumentation. Professors have the responsibility of conveying these skills to young people.
Viewed closely, all this has to do with our capacity to think. When machines take on the role of thinking for human beings, we risk losing track of complex arguments. Ultimately, this will diminish our capacity to exercise judgment. Before we extend this conversation, let’s look at the pervasive AI cheating now under way across world universities, including the elite ones at home and abroad. While data is available about the massive cheating across foreign campuses, which threatens to reduce the bulk of students to sub-cognitive creatures, we are still assessing the immediate consequences. Imagine the repercussions for a culture like ours, where students’ linguistic potential is already at a minimal level. This could be destructive for the microscopic minority of people focused on reading and writing in what is largely an oral civilisation. Critical thinking is already scarce here. The ability to construct sentences is being threatened as never before.
Here’s what universities such as the Lahore University of Management Sciences have devised thus far to stave off the latest threat of cheating. It is, at best, a coping mechanism and not a comprehensive strategy. Professors have been forced to revise their evaluation strategies. They are now reverting to actual exam venues to assess students’ knowledge in diverse fields, especially writing and communication. So-called take-home graded assignments and exams are giving way to the real-world exam centres of earlier times. This has been an effective evaluation method for decades. Hence, the climb back is justified, at least for the time being, until new technological safeguards arrive.
It is hugely interesting, and shocking as well, to know what has forced professors to embrace traditional exam-centred student evaluations. For instance, this professor’s in-class experience of teaching writing to students for some 15 years at LUMS has revealed a pervasive use of AI technologies for English essay writing over the last three years.
Notwithstanding the university’s acquisition of modern anti-plagiarism software, students have found ways to skirt around the filtering devices. Not all students are driven by AI technologies; there are exceptions, no doubt. Even in a smaller writing class, such as those at LUMS, the chances of an AI-driven assignment or essay are abundant. The only way a teacher can be a hundred per cent sure of an AI-assisted assignment or exam is if a poorly performing student produces something magical at the end of term. A short post-exam conversation with the offending student reveals more than they suspect. The going gets tough when students use humanising software to bedevil a soft, unsuspecting academic.
Here’s the crux of the problem: unlike well-heeled foreign universities, we lack data about the extent of AI-assisted cheating across our institutions; how, then, can we as a culture adapt assessment systems to the increasing pressure of artificial technologies, especially AI-propelled writing machines?
That is a daunting challenge.
We have yet to think in terms of recording AI misuse as a clear manifestation of misconduct. Just how many students or scholars have been penalised thus far for misuse of artificial technologies is hard to tell. Again, no data is available.
To be fair, some students do admit to using AI technologies. They maintain that it has more to do with brainstorming a topic for writing and triggering ideas. It has been a surprise that some writing students use AI for summarising long chapters or texts. Maybe there are other clandestine uses. Professors and universities are learning fast.
On the other hand, technology companies are massively focused on students worldwide as a vast profit-generating population. A lot will naturally depend on how we, as a society, reduce the risks for students as they opt for technologies that will dent their creativity and competence in the real world. Pakistan’s public academic environment has struggled with what has historically been a plagiarism rush. Viewed against this background, the massive AI inroads into academia over the last three years are definitely a cause for worry.
The writer teaches media writing and communication at the Lahore University of Management Sciences. He is a former editor of The Frontier Post.