Losing agency to thinking machines

Naveed Rafaqat Ahmad
March 29, 2026

The growing role of artificial intelligence also raises questions about who may control the future



Only a few years ago, artificial intelligence was something that most people barely noticed. It worked quietly in the background—suggesting songs, correcting spelling or helping businesses sort their data. It was seen as a tool, something humans used to make work easier and faster. Today, that is changing. AI is no longer just assisting humans; in many areas, it is starting to make decisions on its own. This shift raises an important question: are we handing over too much control to machines?

AI was designed to assist us. Early systems helped doctors analyse medical images, helped banks detect fraud and helped businesses understand customer behaviour. The decisions always rested with human beings. In many cases, that is no longer true. Consider the hiring process. Many companies today use AI systems to screen job applications. These systems scan hundreds or thousands of resumes and decide which candidates should move forward. A human manager may only see a shortlist generated by AI. While this makes the process faster, it also means the ‘system’ is effectively deciding who gets an opportunity and who does not. There have already been cases where such systems were found to be biased. A well-known company had to abandon its AI hiring tool after it was found to favour male candidates over female ones because it had learned from past hiring patterns that were themselves biased. The system did not create the bias, but it repeated and strengthened it.

AI is also playing a growing role in financial decisions. Banks and financial institutions now use AI to decide whether someone qualifies for a loan or a credit card. These systems analyse income, spending habits and behavioural patterns. In many cases, the decision is made instantly, without any human review. While this speeds up the process, it also creates a new problem. If someone is denied a loan, they may not even know why. The decision is made by a system that is often too complex to explain in simple terms. This lack of transparency can feel unfair. A person’s financial future may depend on a decision made by an algorithm they cannot question. At the same time, AI is shaping everyday choices in less obvious ways. Social media platforms and streaming services use AI to decide what content users see. These systems are designed to keep people engaged, often by showing content similar to what they have already watched or liked. Over time, this can limit exposure to different viewpoints and create narrow information bubbles. While this may not look like a direct decision, it still influences thinking and behaviour in a powerful way.

In healthcare, AI offers both promise and risk. It is now being used to analyse medical images, detect diseases early and suggest treatment options. AI tools, for example, can identify signs of cancer in scans with a high level of accuracy, sometimes catching details that human doctors might miss in early stages. This has the potential to save lives and improve outcomes. However, it also raises difficult questions. If a doctor relies heavily on AI and the system makes a mistake, who is responsible? There have already been situations where AI systems produced incorrect results because they were trained on incomplete or biased data. In healthcare such errors can have serious consequences. This is why many experts argue that AI should support doctors rather than replace them. Human judgment, experience and responsibility remain essential.

One reason AI is becoming so influential is speed. Machines can process information far faster than humans. They can analyse large amounts of data in seconds and produce results instantly. This is useful, but it can also be risky. When decisions are made quickly, there is less time to question or review them. In the past, decisions such as hiring or approving loans took days or weeks, allowing for discussion and reconsideration. Today, the same decisions can be made almost instantly. This creates a situation where efficiency is often valued more than fairness or careful judgment.

The growing role of AI also raises questions about control. On the surface, humans still appear to be in charge because people design these systems and decide how they are used. In reality, many decisions are now delegated to machines. In large organisations, decision-makers may rely on AI recommendations without fully understanding how those recommendations are produced. Over time, this can reduce human oversight. There is also the issue of accountability. When a human makes a decision, responsibility is clear. When an AI system makes a decision, it becomes hard to assign responsibility. This lack of clarity is one of the major concerns surrounding the increased use of AI.

This shift is already visible in real-world situations. In some countries, AI is being used in parts of the justice system to assess the likelihood that a person will re-offend. These assessments can influence decisions about bail or sentencing. While this may help manage large caseloads, it has also been criticised for reinforcing existing biases in the system. Studies have shown that such tools can produce different outcomes for different groups even when the underlying facts are similar. When decisions about personal freedom are influenced by algorithms, the consequences are significant.

The issue, therefore, is not whether AI is useful—it clearly is—but how it is used. AI can improve efficiency, reduce costs and provide insights that would be difficult for humans to generate on their own. It can help governments deliver services more effectively, help businesses operate more efficiently and help professionals make better-informed decisions. However, it should not replace human judgment entirely. A balanced approach is necessary, where AI supports decision-making but does not control it. Humans must remain involved, particularly in decisions that have serious consequences for people’s lives.

Transparency is equally important. People should have the ability to understand how decisions affecting them are made. Whether it is a loan application, a job opportunity or a medical diagnosis, individuals should not be left in the dark about the reasoning behind outcomes. Without transparency, trust becomes difficult to maintain. Education also plays a role. As AI becomes more common, it is important for people to understand its strengths and its limitations. Without this understanding, there is a risk of placing too much trust in systems that are not perfect.

AI has clearly moved beyond being a simple assistant. It is now influencing decisions in hiring, finance, healthcare and even law. This does not mean that AI is inherently harmful, but it does mean that its growing influence needs careful management. The real concern is not that AI is becoming powerful, but that humans may become too passive in relying on it. If decisions are accepted without question simply because they come from a machine, control can gradually shift without anyone noticing. The challenge is to use AI responsibly, making full use of its benefits while staying aware of its limitations. In the end, decisions that shape human lives should still be guided by human judgment, not left entirely to algorithms.


The writer is a chartered accountant and a business analyst.
