
IS AI MISOGYNIST AND RACIST?

By  Lubna Jerar Naqvi
28 November, 2025

Do you think artificial intelligence (AI) is misogynist and racist?


COVER STORY

This is debatable, and many people will disagree. They are not wrong: everyone using AI writes different prompts, makes different asks, or phrases questions differently, leading to many different kinds of generated content.

Before I delve into my experience of using AI over the past two years, it should be clear that everyone’s experience is different, and the outcome also depends on the biases, opinions, and questions each user brings to AI. Another important aspect is that many creators of these systems are probably male and European (Caucasian), or live with or are influenced by Caucasian ideas of how people look, and they have unintentionally added perspectives and biases that are reflected in the outcomes.

Don’t get me wrong, I love the idea of using AI and how it is integrated into our lives and work, but there is always another side to everything.

I realised AI had an issue with misogyny and racism when I was studying artificial intelligence in a journalism course a few years ago. Realising the importance of AI in content creation, especially journalism (I have discussed this in my book, to be printed in early 2026), I began trying different prompts to see what content, especially images, they would produce.

Our instructor told us that, as fascinating as AI is, it still needs human intervention. With advances in AI, the human element may decrease, but it may never be eliminated completely.

As an experiment, I used different prompts, asking different AI apps – including the AI mode on Google – to create images of beautiful women. The results were interesting: all the images generated on the first attempt were of Caucasian (white-skinned, of European origin) women. The second attempt included images of women of other races, but also from a particular perspective.

Next, I asked for images of Muslims, and AI generated images of a man, two women, and a young boy, all wearing what are considered traditional clothes associated with Muslims. The women’s heads were draped in scarves or hijabs, the man and boy wore white knitted prayer caps with a shirt or kurta, and the child was even shown reading the Holy Quran.

The third ask was to generate images of a journalist, and once again a green-eyed, fair-skinned man was shown holding a microphone and wearing a headset. Changing the query to women journalists led to images of pretty women with long hair draped over their shoulders, wearing low-cut T-shirts with a jacket, holding a microphone.


The female images carried a caption that was absent from the male image: “Expressions: They are often shown smiling or looking confidently at the camera, projecting an image of trustworthiness and professionalism”. The AI thought it necessary to add this explanation, as if to lend the images more credibility.

Another thing that is evident is that all the women journalists are fair-skinned, Caucasian women, as if there are no women in other regions of the world with different skin, hair, and eye colours, or who wear clothes other than the ones shown. The same is mostly true if you ask for images of women lawyers, doctors, teachers, etc.

It is interesting that AI reflects the ideas of those who have programmed it. This is not to say that all of them are deliberately biased. AI tries to imitate human intelligence and reasoning, whereas automation simply uses a direct, rules-based approach to make decisions and act. The former is – somewhat like organic intelligence – subject to whims, errors, and occasional outright hallucination. The latter does exactly the specific tasks it is programmed to do, every single time. The result is that AI can make undesigned mistakes, whereas automation makes mistakes only when its designers fail to consider all the possible circumstances their systems will encounter.


I asked my instructor about this, and he agreed that AI could seem racist, misogynist, or culturally blind when viewed from outside the fishbowl of its creators, who were influenced by their own environments. He was confident that, as more people used AI, things would change to cater to that usage, adding that the AI most people use is still (at least for now) dependent on the questions and prompts it is given, and it only generates from what is already available. It does not think completely independently for now. AI learns but does not understand – yet.

In the UN Women’s frequently asked questions section, one question focuses on AI-powered online abuse, how AI is amplifying violence against women, and what can stop it. It elaborates on a serious problem, ‘misogynistic online content [leading] to real life harm’, which should be addressed quickly. AI is helping to amplify online misogyny, threatening the vulnerable sections of society. And since AI imitates, or tries to imitate, human intelligence and reasoning, it should not be difficult to change the way AI creates content that lacks gender, racial, and religious sensitivity. AI can evolve to become gender sensitive and learn that there are different kinds of people who all look different. AI needs to begin creating inclusive and diverse content to break the mould of misogyny and racism.

A good start would be to include gender-sensitive organisations, like UN Women, in developing AI that learns gender, racial, and religious equality. Experts are divided on this, with some saying that AI-human integration will improve and can enhance human effectiveness, but may adversely impact human autonomy and capabilities. Others say that as the use of AI increases, humans will become more dependent on it.


Although AI is a buzzword nowadays, it is not a new phenomenon. Groundwork on AI began way back in the early 1900s. The phrase artificial intelligence (AI) was coined in 1956 at the Dartmouth Summer Research Project at Dartmouth College in New Hampshire, USA, where AI was first recognised as a field; the AI@50 event, held in 2006, celebrated 50 years of work in the field.

The 1980s are considered a decade of AI boom, with many great achievements in the field. Most notably, Rollo Carpenter, a computer programmer, created the chatbot Jabberwacky in 1988, programmed to provide interesting and entertaining conversation.

By the 1990s and early 2000s, AI was being used in cars and phones, in systems like GPS navigation and, later, Google Maps; more popularly, it has powered the Siri virtual assistant for more than a decade now (since 2011).

According to a study of attitudes towards AI (Forbes, 2023), most of the survey respondents said they did not think AI is ready to take over. 67% don’t want AI to make life or death decisions in war; 64% don’t want AI as a jury in a trial; and 57% don’t want AI to fly airplanes.


A high percentage of people believe that humans will do a better job in a wide range of activities, like investigating corruption (65%), choosing gifts (67%), deciding on a raise at work (69%), teaching a morality course (73%), administering medicine (73%), picking work outfits (75%), writing laws (76%), voting (79%), and doing our jobs (86%).

As exciting as it seems to some that AI will help improve certain sectors, like the availability of healthcare, there is a downside as well. Many people, including teenagers, are taking advice from AI applications. Seeking help online to understand and even cure ailments began with browsers, with people asking questions like “how to cure…”, “what medicines…”, and “symptoms of…”.

Over the years, people began seeking medical advice from online ‘help’ or virtual assistants, which was a bad idea. In October 2024, CNN published an article titled ‘There are no guardrails. This mom believes an AI chatbot is responsible for her son’s suicide’. The Florida mother said the online “platform lets users have in-depth conversations with artificial intelligence chatbots” and believes an AI chatbot was responsible for her 14-year-old son’s suicide.

Other cases of teenagers and people in their early 20s ending their lives have also surfaced: one was “using the artificial intelligence chatbot as a substitute for human companionship”, and another family asserted that an “AI chatbot pushed their 22-year-old son to suicide, claiming the tool offered methods and emotional support”.

With technology evolving at the rate it is, it is futile to try to live under a rock and avoid using it. Humans should embrace it, help it improve and evolve, and customise it for global use. And this can only be done when people begin using different technologies, especially AI, and understand their potential and how they can be harnessed to improve human life.
