
The end of our intelligence

December 17, 2025
An illustration showing a robotic figure against a backdrop with 'AI' written on it, May 4, 2023. — Reuters

Are we approaching a moment when human intelligence begins to fade away? Humans are the only mammals capable of developing complex notions grounded in thought and supported by knowledge. These range from new scientific inventions to philosophical ideas, questions about the universe, and much, much more.

Yet if AI endures rather than collapsing, as the dot-com boom of the early 2000s did, there is a troubling possibility that it could strip humans of the ability to develop nuanced ideas and think independently.

People worldwide are turning to AI chatbots such as ChatGPT, Meta AI, Gemini and many others to answer basic questions about life. They ask AI to solve everyday problems that any person beyond toddler age should be able to solve on their own. Some examples send out warning signals. A case of poisoning linked to ChatGPT emerged in the US a few months ago, when a man in his 60s asked the chatbot how he could eliminate chloride, one half of the sodium chloride that constitutes table salt, from his diet. ChatGPT suggested replacing chloride with bromide, perhaps inferring that the question concerned a task such as cleaning a swimming pool rather than anything meant for ingestion by a living being. The man followed the suggestion and ended up in hospital with hallucinations and other psychotic symptoms. All he had really needed was a brief visit to his GP or a nurse. Instead, he underwent a panel of tests before doctors determined what had happened and corrected the AI's error.

Other examples of similar use of AI are emerging, with people turning to chatbots as replacements for therapists, subject-matter experts or even romantic partners, even though the technology is still too immature to be relied on to solve any medical problem or the other complex issues that inevitably crop up in every person's life.

Of course, AI may improve and be given better information to draw on when people hit the search button. But its output rests essentially on the many books and documents it has been fed, without the ability to recognise whether the information they contain is relevant, appropriate and suited to the situation for which it is sought. People using AI, often with great enthusiasm, do not always recognise this.

There may be additional risks ahead as AI develops rapidly. But at present, the main danger is the way it is snatching away the human capacity to reason, to conduct research grounded in judgment and relevance, and to use a mind evolved to solve multiple problems and puzzles. Particularly at a time when most people are thought to use only a fraction of their cognitive potential, findings such as those from an MIT study, which associated the outsourcing of tasks to AI with lower critical cognitive ability, should give us pause.

In this situation, the increasing use of AI by students, researchers, journalists and others to produce work submitted for examination or publication is dangerous. Even in journalism, the use of AI is likely inevitable, given that journalists have too few resources and work for media houses that are rapidly cutting staff and time. Students, too, will use it to meet essay deadlines and save themselves a few hours in a library. For now, most good educational institutions have teachers and reviewers with the skills to detect AI-generated text. It is often shallow, simply reiterates paragraphs from previous material and lacks any sense of grey areas, presenting a black-and-white picture of reality. But the fact that students see this as the right way to produce a body of work is in itself dangerous.

We already have a generation which has grown up with social media determining many of its actions. In some cases, this is beneficial. Social media, after all, has given us access to a vast array of information and handed to young people knowledge they may never have acquired in the recent past. But it has stolen from them, and from all other users, the ability to think as individuals and to develop opinions and ideas which are badly needed in today’s world.

This is a world in which an increasing number of school-aged children, particularly in developed countries, report that they wish to be YouTubers and media influencers. The realm of ‘influencing’ has dangers of its own. Too often, people are persuaded that only a certain way of looking, specific ideas, limited travel spots or particular activities are ‘right’ for them. This is far from true, given the magnitude of human capacity and human ability.

AI, used properly, can be a valuable tool. Used improperly, it poses many dangers, leading more and more people to believe that whatever a computer produces in answer to a question is the only truth. The world, of course, has many truths and many ways of thinking. We need to ensure that increasing reliance on AI does not erode that plurality.

Of course, AI has the potential to bring us huge advantages. It can perform tasks in a few seconds that would take humans months or even years. However, we should remember that, while AI has the potential to change the world for the better, it also has the potential to destroy it forever. For now, in its current form, it needs to be used with caution, guided by the wisdom and rationality that human beings have been given since their creation.


The writer is a freelance columnist and former newspaper editor. She can be reached at: [email protected]