Deepfake Danger: Protecting Society from AI Misinformation

The Holy Quran says, “O believers, if a troublemaker brings you news, verify it first, lest you harm others unknowingly and later regret what you have done” (49:6–8). Most commentators state that this verse was revealed in relation to Walid bin ʿUqbah. After the tribe of Bani al-Mustaliq accepted Islam, the Prophet Muhammad (PBUH) sent Walid to collect zakat from them. For some reason, Walid became afraid, returned to Madinah without meeting the tribe, and falsely reported that they had refused to pay zakat and had even attempted to kill him. The Prophet (PBUH) became angry and was about to send a force against them. Before this could happen, the tribe’s leader, Harith bin Dirar, arrived in Madinah and clarified that they had never seen Walid, remained firm in Islam, and had no intention of refusing zakat. Upon this clarification, the verse was revealed.

The Holy Prophet (PBUH) said: “It is enough of a lie for a person to narrate everything he hears” (Sahih Muslim). On another occasion, he said: “Truthfulness leads to righteousness, and righteousness leads to Paradise… and falsehood leads to wickedness, and wickedness leads to the Fire” (Sahih al-Bukhari, Sahih Muslim).

Centuries later, the same warning applies to the digital age. The World Economic Forum (2025) has ranked misinformation and disinformation among the world’s top global risks, particularly highlighting the increasingly thin line between artificial intelligence (AI)-generated and human-generated content. These risks pose serious threats to both individuals and businesses. Well-known public figures with a strong presence in print and electronic media are especially vulnerable to deepfake technology. While the list of risks is extensive, the most significant include identity theft and impersonation, reputational damage, emotional and psychological harm, harassment and exploitation, loss of privacy, and financial fraud.
A few days ago, Ashley St Clair, the mother of one of Elon Musk’s children, filed a lawsuit against his company xAI over sexually explicit deepfake images of her circulated on the social media platform X. Last year in Baltimore, USA, a racist AI-generated deepfake audio clip of a local school principal making derogatory remarks caused widespread division within the community; the principal also received death threats, prompting intervention by local authorities and police. Similarly, Irish politician Cara Hunter was targeted during her 2022 election campaign with a pornographic deepfake video that nearly ended her political career. Despite the damage the propaganda did to her campaign, she won the election by just 14 votes and has since become a leading advocate for legislation against deepfake intimate-image abuse.

Developed countries are grappling with this challenge, but the cultural context of the risk is significantly different in Pakistan. Pakistani society is highly sensitive, particularly regarding religious matters and issues related to women, and this sensitivity is even more pronounced in rural areas, which are home to a large portion of the population. Incidents of honour killings are frequently reported in the national media, though many cases go unreported. Femicide, whether through domestic or family violence, discrimination, or contempt toward women, remains a harsh reality in rural Pakistan. Matters touching religion or women’s honour often provoke immediate reactions without fact-finding, and once the facts emerge, society may either overreact or disregard legal authority and the rule of law. People tend to believe what they see or hear and respond impulsively, without verification. Even schoolchildren and teachers are not spared.
There has been a steady rise in reports of AI-generated deepfake images and videos being maliciously used against pupils and teachers, often involving false, manipulated, or sexually explicit content shared without consent.

Countries around the world are introducing laws to counter deepfake abuse. In England and Wales, legislation such as the Online Safety Act and the Data (Use and Access) Act 2025 has criminalised the creation, sharing, and solicitation of deepfake intimate images. France has updated its criminal code to prohibit the sharing of AI-generated visuals or audio without the subject’s consent, with penalties including fines and imprisonment. In the United States, Texas has passed laws criminalising the possession and promotion of AI-generated obscene images of minors. South Korea has gone further still, criminalising not only the production and distribution of deepfake pornography and non-consensual AI-generated images but also the possession and viewing of such material.

In Pakistan, the Prevention of Electronic Crimes Act (PECA) 2016 addresses cyber harassment, cyberstalking, identity theft, and the distribution of unlawful or defamatory content. Section 21 specifically deals with the unauthorised use and transmission of intimate or explicit content, which may cover deepfakes depending on the circumstances. Under the newly introduced Section 26A, anyone who intentionally spreads false information likely to cause fear, panic, or unrest may face up to three years’ imprisonment, a fine of up to PKR 2 million, or both. Some provinces have also enacted defamation laws to address fake news and defamatory content on electronic and social media, such as the Punjab Defamation Act 2024. Even so, deepfakes, that is, AI-generated manipulated images, videos, and audio, are on the rise in Pakistan, targeting women, politicians, teachers, and students.
However, Pakistan still lacks a law that explicitly defines or criminalises “deepfake” content as a distinct offence; instead, deepfakes are addressed under general cybercrime and defamation provisions. Pakistan must therefore take both administrative and social-awareness initiatives to tackle the growing misuse of deepfake technology. At the administrative level, the government should follow international examples and introduce legislation that clearly defines, regulates, and penalises the creation and distribution of harmful deepfake images and videos.

Recent surveys indicate that Pakistan’s national literacy rate has reached approximately 63%. While digital literacy, the ability to use digital tools safely and effectively, is not measured separately, related indicators show increasing digital access: household internet access stands at around 70%, individual internet usage at 57%, and smartphone ownership at about 96%. These figures suggest improved digital inclusion, but access alone does not equate to digital literacy. Many people still lack basic digital skills, making it imperative to launch large-scale digital literacy programmes, particularly in remote rural areas.

At the social level, there is an urgent need to teach society to ask whether what it sees or hears is genuine or the product of deepfake technology before reacting. Alongside strict and enforceable laws against the misuse of deepfake technology, media literacy programmes should be incorporated into educational curricula so that children learn that “seeing and hearing” are no longer reliable indicators of truth. Likewise, mosque imams, in both urban and rural areas, should address the spreading of false information in their Friday sermons, drawing guidance from the Qur’an and Sunnah and linking these teachings to the modern challenges posed by deepfake technology.
In homes, schools, mosques, and workplaces, educated individuals should openly discuss the dangers of deepfakes and raise awareness among those around them. Deepfakes pose a real and growing threat to individuals and society, blurring the line between truth and fabrication. Combating this danger requires a combination of strong legal frameworks, widespread digital literacy, and public awareness, so that people can question what they see and hear online. Only by educating citizens, enforcing laws, and fostering the responsible use of technology can Pakistan protect its society from the harmful impacts of AI-generated misinformation.
