Understanding the AI revolution

Nida Mujahid Hussain
February 8, 2026

Can artificial intelligence be prepared for all the biases humans are vulnerable to?

The concept of artificial intelligence (AI) in a newsroom seems pleasantly futuristic on the surface. It is a bandwagon many want to jump on. However, the complexity and magnitude of the problems associated with AI development and use require a level of understanding that is yet to be achieved.

From the unsteady dissemination of news and the instability of the internet to self-censorship, journalism in Pakistan is fighting battles on several fronts. Teaching a machine to help in these varying contexts will always remain a work in progress.

When it comes to ‘sensitive’ stories - ranging from security operations and political bias to gender-related issues - a journalist’s lived experience is highly valuable. The question arises: can one prepare AI for all the biases humans are vulnerable to?

One certainly cannot dis-invent AI. It is here to stay. The question is: how can it be used to help the average newsroom staffer without creating more problems than it solves?

AI relies on deep learning: systems that learn from data and share knowledge, which allows it to make predictions. Many users now proactively consume news through AI.

A 2024 study showed that ChatGPT-4 produced false information 28.6 per cent of the time. This was an improvement over ChatGPT-3.5, whose hallucination (making things up) rate was 39.6 per cent.

According to a BBC Science Focus article, AI learning models follow a specific mimicking pattern that at times has trouble adapting. The article mentioned something we have all encountered several times: “The AI confidently invents facts, cites non-existent research or asserts something completely false.”

So how will information be verified in developing-country news outlets fraught with political and economic instability, and how will the hazards of misinformation, disinformation, deep fakes and embedded political agendas be avoided?

The other challenge is for humans to train the machine to reliably connect the dots using quality learned experiences.

Here, the question that first comes to a journalist’s mind is: can individually searched queries or phrases capture the context?

To put things into perspective, The News on Sunday spoke to a mix of newsroom leaders and technology experts to understand how AI operates in the Pakistani news context.

With regard to how Pakistani users will see websites and navigate their searches, Asad Baig from Media Matters for Democracy says that the shift to AI has hit home. However, he warns that most Pakistani newsrooms are ill-equipped to deal with it.

“The easier kind of content that websites in Pakistan rely on for quick and steady traffic, such as weather and exchange rates, will be gone quickly as AI can do that faster than it has been done traditionally. What generative AI cannot do is field reporting or quality feature reporting. Pakistani newsrooms are highly dependent on disposable content such as what is happening on social media. We need to pause and focus on what content to produce now.”

Reflecting on MMFD’s work vis-à-vis training Pakistani newsrooms and equipping them with the capacity to deal with the AI shift, Baig says that the first phase is about getting acquainted with the shift; the second phase is strategic.

“We will work with newsroom managers and editors to embrace the impending shift. In the third phase, we will focus on monetization; how their business models will change.”

According to digital content strategist Shaheryar Popalzai, the picture is not so black and white. “AI isn’t going to ‘dominate’ newsrooms in some dramatic takeover. It is quietly becoming infrastructure. It’s embedded into every layer of the editorial process across video, print, social, digital, distribution, strategy and planning.”

“Wire services report that automation has freed up significant reporter time for higher-impact investigative work. Many media organisations are using AI to search archived videos in minutes rather than hours.”

He says that the problem with the hype cycle is that it makes some people think AI in journalism means robot reporters writing copy. That, he says, is the least interesting application and often the worst one. The technology is evolving aggressively toward multimodal capabilities: automated creation of text from video and video from text, systematic monitoring of government activities, even agents that can break down complex investigative tasks into actionable steps.

Where does this leave the spirit of good old journalism?

BBC Urdu News Editor Zeeshan Haider says: “I believe that it cannot undermine the core value of human journalists because the very heart of journalism relies on skills no machine can replicate. This idea of AI domination in newsrooms still feels more theoretical than anything we are seeing on the ground. I think AI sits in the background as a super helpful tool rather than a substitute for editorial judgment. You can speed up things like transcription and the identification of patterns in big data sets, along with routine bits like headline suggestions or quick copy checks.

“It is a noticeable shift. However, the heart of journalism remains stubbornly human. It is about checking facts, deciding what truly matters, reading context and asking the questions no machine knows should be asked.”

Given the ground realities, will AI adequately address the queries?

Haider says that AI’s training data often excludes unpublished accounts, suppressed narratives and the emotional texture that comes from being on the ground. It can still help with things like spotting coordinated suppression, tracing digital blackouts and highlighting anomalies in official narratives.

Popalzai is of the opinion that self-censorship remains a challenge.

“LLMs inherit the biases of their training data—research shows that models remain heavily skewed toward English-speaking, high-income countries, failing on regional knowledge that global datasets miss. Researchers in the Global South are building alternative models to fill these gaps.”

A Reuters report on the top trends for 2026 explicitly stated that, to keep up with readers primarily consuming AI-based information, newsrooms will have to scale up and build the relevant infrastructure.

The report highlighted how the advent of AI will specifically empower the cadre of data journalists.

It also reflected expert opinion on how AI breaks up articles rather than presenting an entire coherent publication.

Will AI become the ultimate assistant the average Pakistani newsroom worker hopes it might? The answer depends on how well one wields the sword. ChatGPT, Gemini, Grammarly, Otter.ai, Claude and Perplexity offer fine assistance for the day-to-day tasks of an average newsroom staffer.

Popalzai says that Pakistani newsrooms should explore the possibility of building their own tools and agents. This may be good for learning, offer complete control and prove cost-effective.

“Real power comes from combining local LLMs with your own knowledge base - your archives, source materials and institutional memory. Newsrooms worldwide are doing this. If building from scratch isn’t feasible, free tools like Google Pinpoint for document analysis work well. For Pakistan specifically, local-first matters because commercial AI under-represents South Asian contexts. Also, when you’re working on sensitive stories, you shouldn’t be uploading to servers you don’t control.”

Working with an AI-led venture on news, digital journalist Ovais Jaffar voiced similar thoughts. “AI is an excavator. It can help you dig in minutes what would otherwise take hours with a shovel. But if you’re careless, you can just as easily dig your own grave - faster. The judgment, accountability and consequences still belong to the human.”

“Newsrooms are built on judgment. That judgment shifts constantly because societies shift constantly — what we thought was right yesterday is no longer right today. AI can process information at scale, but it cannot interpret consequence, nuance or cultural sensitivity in real time,” he says.


The writer is a bilingual journalist with a decade-long career in national and international media. She can be reached at @nidamujahid.