A couple of years ago, I co-founded an AI startup in Pakistan. We build practical, accessible solutions for the local market, grappling with low digital literacy, basic smartphones and patchy internet connectivity. We use AI not as a grand disruptor but as a quiet enabler, trying to make it work where it was never really designed to.
So, when I say that the fear of AI making humans obsolete is overstated, I am not speaking from a position of distant optimism. Rather, I am speaking in view of the constraints on the ground.
Technologists predict the wholesale displacement of human labour; traditionalists dismiss AI as a passing disruption. Both miss what is actually happening – not a replacement, but stratification. AI is not eliminating the value of human effort. It is clarifying it, concentrating it, and in many markets, dramatically inflating its price.
Years ago, while sourcing goods in the luxury textile trade, I noticed something that has stayed with me. The most expensive Pashmina shawls were not the ones produced with surgical precision. They were the ones with the slight, almost imperceptible irregularities of hand embroidery – patterns that no algorithm could replicate, precisely because their value lay in the fact that they could not be replicated at all. The wealthy client was not buying a shawl. She was buying proof that another human being had spent hours making something exclusively for her.
The same logic governs the desi ghee that urban elites now pay a premium for, even though commercially produced ghee is, in the kitchen, functionally identical. It governs the hand-stitched shoe, the handwritten letter and the in-person consultation with a physician when a telemedicine app would suffice. These are not irrational preferences. They are a coherent economic signal: as automation commoditises the output, the process becomes the product.
This is the Pashmina Paradox. The more perfectly a machine can replicate an output, the more valuable the ‘imperfect’ human original becomes.

To understand what is truly at stake, we need precision about what AI is actually good at. Large language models excel at pattern completion – they synthesise existing knowledge rapidly and fluently. They are extraordinary at the middle tier of cognitive work: drafting standard contracts, writing code, summarising documents and generating first-cut analyses.
The latest developments in AI also show that models can now operate as autonomous agents – planning multi-step tasks, using tools, browsing the web, running calculations and course-correcting when they encounter obstacles, all without human intervention. They work in parallel across thousands of instances simultaneously, a feat no human team can match. This is genuinely transformative. It is also, critically, not the whole of human value.
What AI cannot do – and what I observe daily in my own company – is exercise judgment in conditions of genuine ambiguity, take moral responsibility for outcomes or build trust through shared vulnerability. When we produced a documentary screened at an international forum, the script was written by a human expert, not because AI could not generate words, but because our audience needed to sense that a real person had staked their credibility on every claim. The AI could approximate the sentiment. It could not bear the weight.
Responsibility is not a feature you can train into a model. It is a distinctly human liability – and as it turns out, in a world of AI-generated noise, liability is enormously valuable.
The more honest concern about AI is not mass unemployment but mass stratification. As AI handles routine cognitive labour efficiently and cheaply, two tiers of the economy will diverge sharply.
One tier will interact almost exclusively with AI interfaces: automated customer service, AI-generated legal templates and algorithmic health screening. The other tier will pay a steep premium for human attention: a lawyer who not only knows the law but also understands your particular fear; a physician who treats your illness alongside your dignity; a consultant who has the authority to say, with their name on the line, that they believe this is the right decision.
This is already visible in how Pakistan’s elite consume services. The wealthy do not want telemedicine; they want their doctor’s mobile number. They do not want an algorithm to manage their portfolio; they want a banker who will answer on a Sunday. The human is not merely a preference here – the human is the product, and the scarcity of genuine human attention is what commands the price.
The risk, then, is not that AI will take jobs. It is that AI will divide the labour market into those who can offer irreplaceable human value – empathy, judgment, responsibility, creativity rooted in lived experience – and those who are left competing with machines on machines’ own terms. The latter is a race no human should want to run.
There is a deeper psychology here worth naming. Social scientists call it effort justification – our tendency to value things more highly when we perceive real effort behind them. A student from Shakargarh who scores 1300 on the SAT is celebrated in ways that a well-resourced Aitchisonian with the same score is not, because the journey differs even when the result does not. As AI floods every channel with frictionless, effortless output, the perception of genuine human struggle will become a quality signal in its own right – one with measurable economic consequences.
For a country with Pakistan’s demographic profile – a young, largely under-employed population with enormous potential – the AI debate carries particular stakes. The instinct of many policymakers is to either dismiss AI as irrelevant to our immediate development challenges or to fear it as an imported disruption. Both responses are flawed.
The correct response is to invest urgently in the capacities AI cannot replicate: deep domain expertise, cultural and contextual intelligence, the ability to exercise ethical judgment and the interpersonal skills that build institutional trust. A generation of Pakistanis trained to think critically, communicate with authority and take professional responsibility for outcomes will not be displaced by AI but amplified by it.
The professionals who have invested in genuine depth of knowledge and judgment are not at risk. At risk are those who have spent careers performing the appearance of expertise – generating reports, attending meetings, producing documents – without developing the irreplaceable core underneath. For them, AI is indeed a threat, because AI can perform the appearance of expertise rather well.
Humans are not going out of business. We are, for those willing to do the harder work of becoming irreplaceable, going upmarket. The question is not whether there will be a place for human talent in an AI world. The question is whether we have what it takes to deserve that place.
The writer is a Chevening scholar and co-founder & CEO of DAT. He can be reached at: https://www.linkedin.com/in/abubakarumer