Tell a student not to use AI and you will be lecturing against the tide. Walk into any workplace today and you’ll find people using smart tools to draft, calculate, design and decide.
So why do we still insist that classrooms pretend those tools don’t exist? Banning AI from exams is like telling pilots to ignore navigation instruments because ‘real flying’ once used only maps.
A bright student confessed, honest and worried: “I used AI to draft my essay. It’s better than what I could write in two hours. What should I do?” That question deserves an answer that reflects reality, not denial.
The old model treats exams as a way to catch cheaters and reward memory. It assumes knowledge is a stack of facts you either possess or don’t. AI changes the game: rote recall is cheap, instantly available, and often more polished than what most students can produce unaided. But that doesn’t mean learning is dead. It means assessments must test what AI cannot replace: judgment, context, ethics, synthesis and the messy craft of making sense when data is imperfect.
So what should change? First, move from asking ‘What do you know?’ to ‘What can you do with what you know?’ Project-based assessments, long-term portfolios and real-world problems push students to apply knowledge over time. Ask them to design a small research project, run a user test or solve a local community problem – tasks where process, reflection and iteration matter more than a polished one-time answer.
Second, test the human edge. Oral exams, vivas and supervised presentations reveal reasoning under pressure, the ability to defend a position and the skill of thinking on your feet. These formats are not trickery; they’re honest checks of understanding. If a student can explain why they chose a method, what went wrong and how they would change it, that tells you far more than a flawless AI-generated essay.
Third, evaluate AI literacy itself. In the real world, people who use AI well are not those who blindly copy outputs, but those who can prompt, critique and correct machine-generated suggestions. Design assessments where students must use an AI tool to generate options and then critically evaluate its mistakes, biases and blind spots. That tests judgment – a deeply human skill.
Fourth, prioritise collaborative and interpersonal skills. Group projects, peer review and client-style briefs measure negotiation, communication and responsibility – indispensable in modern workplaces. AI can draft an email or a plan; it cannot be accountable to teammates in the same way a person is.
Fifth, redesign grading rubrics. Weight process – drafts, design decisions, peer feedback and reflective journals – not just the final product. Make academic honesty a learning conversation rather than a punitive trap. Teach students how to cite AI, when to rely on it, and how to explain their contribution versus the tool’s.
Teachers and institutions need support to make this shift. That means training in assessment design, smaller class sizes that allow meaningful feedback and exam boards that accept varied, continuous assessment rather than a single high-stakes test. Boards and accreditation bodies should pilot alternative formats and publish clear guidance; students and parents need that roadmap.
Parents, too, should shift their expectations. The goal isn’t to make exams harder so we can ‘catch’ students, but to make school more relevant to life after school. A child who can use AI responsibly, spot its errors and defend their decisions will be more employable and more ethical than one who can memorise pages of textbook facts.
There will be pushback. Change is messy. Standardised tests are easy to grade and hard to replace; universities and employers rely on snapshots as selection tools. But clinging to snapshots that test what AI does best – regurgitation and polishing – is short-sighted. We must invest in assessments that measure resilience, creativity and character – qualities an algorithm doesn’t have.
We can start small. Introduce a portfolio course that counts for credit. Require one oral defence per semester. Ask students to submit a brief critique of any AI output alongside their work. These are low-cost, high-value steps that signal a serious shift.
AI is not the enemy of education; it’s a mirror. It shows us which parts of learning a machine can manage and which remain human. If we want schools to prepare students for the real world, we must stop pretending that exams are islands untouched by technology. It’s time to rewrite assessment so it rewards what matters – not what a machine can echo back at us.
The writer is an MBA student at IBA. He writes on urban policy and transport. He can be reached at: [email protected]