It started innocently enough—a viral “Saree Portrait” trend sweeping across South Asia. Women from Pakistan, India, Bangladesh, and Sri Lanka uploaded their photos to AI apps that transformed them into digital saree portraits: glowing faces, soft backdrops, and perfectly draped sarees. For many, it was playful, even empowering—a chance to see themselves through a lens of beauty and culture.

But for one Pakistani woman, the experience quickly turned unsettling. After submitting her photo to Google’s Gemini AI, she received a generated image showing a mole on her arm—one that existed in real life but wasn’t visible in the original photo. What seemed like a mere technical coincidence left her deeply unnerved. Could the AI somehow “know” details she hadn’t shared? Could it infer private information from patterns she never consented to disclose?

This isn’t just the story of one woman or one image. It’s a window into how technology, when unregulated, can become a tool of fear—especially for women in patriarchal societies.

Generative AI systems like Gemini and Midjourney are trained on massive datasets of images, videos, and text from the internet. They don’t “see” like humans; instead, they detect and replicate patterns, analyzing faces, shapes, and cultural cues, allowing them to predict or recreate details—even those not visible in the original image—based on prior data.

In countries with robust data protection laws, this is already a major concern. In Pakistan, where data privacy legislation is still only a draft, it’s a crisis waiting to happen. The Personal Data Protection Bill, modeled after Europe’s GDPR, promises user rights and transparency—but years later, it remains unimplemented. Meanwhile, women navigate a legal vacuum where neither AI-generated images nor AI-based inferences are adequately addressed.

Digital rights advocate Sadaf Khan explains, “Even without AI-specific policy, Pakistan’s Peca law can address harms like deepfakes, but data protection laws don’t fully cover AI-related issues. Since most AI platforms operate abroad, holding them accountable is difficult, though individuals in Pakistan who misuse AI can still face prosecution.” Yet justice in cases of gendered digital abuse remains elusive.

Across Pakistan and its neighbours, the weaponization of women’s images is escalating at alarming speed. Deepfake pornography, once a fringe threat, has become widespread. Victims often wake up to find fabricated nude images of themselves circulating on Telegram groups or being used for blackmail.

Earlier this year, a young Pakistani content creator fell victim to digital manipulation when her Instagram photos were altered to create fake explicit images. The doctored visuals spread rapidly online, leading to public shaming and harassment. Despite being the victim, she faced severe backlash and character attacks—showing how technology-enabled abuse is compounded by a culture that blames women instead of protecting them.

Across the border, in India, a similar nightmare unfolded when women journalists, activists and even students found their photos listed in a mock ‘auction’ on an app that digitally placed their faces on pornographic images. The creators, young men, called it a joke. For the victims, it was a violation that went beyond the digital realm; it entered their homes, their families, and their safety.
In conservative societies, where women’s reputations are fragile currency, the damage is not limited to the internet; it can lead to social ostracism, professional ruin, or even physical danger. Sadaf Khan, who is also the founder of a leading media development organisation, Media Matters for Democracy (MMfD), highlights that “deepfakes blur real and fake, exposing women to safety threats and stigma. Although Peca criminalises such acts, legal protections often fail to prevent harm or stigma, underscoring the need for a deeper societal response”.

These synthetic images spread faster than truth can catch up. Algorithms reward virality, not accuracy. Once a deepfake is online, the burden shifts to the victim to prove that what people are seeing is not real. The psychological toll of that inversion is immense.

While men are also targeted by digital manipulation, the harm is not gender neutral. In Pakistan and the broader region, where women’s honour and privacy are bound to societal expectations, such violations become instruments of control. They reinforce silence, shame and withdrawal from digital spaces. Women stop posting, stop engaging, stop existing online. The cost is not just personal, it’s political. It erases their voices from public discourse.

Seen in this light, the Saree Portrait trend feels far less harmless. Every upload, every viral challenge adds to the pool of high-resolution female imagery feeding global AI systems. While most platforms claim to delete or anonymise data, transparency is rare and accountability nonexistent.

Sadaf Khan further points out that “holding major tech and AI platforms accountable is a global challenge. Initiatives like the UN’s High-Level Body on AI and the Global Digital Compact are shaping governance around AI and women’s safety, emphasising that true protection requires embedding safety and accountability into AI systems from the design stage”.

Even if Gemini or other major platforms act responsibly, their datasets are not isolated. Once personal photos exist online, they can be scraped, traded, or used to train other, less regulated models. The next generation of deepfakes won’t need hacking; it will need only imagination.

Education is the first line of defence, but it must go beyond basic digital literacy. Women in Pakistan and across South Asia need to understand how AI works, how it learns, infers and deceives, in order to protect themselves in the digital age.

Legal reform is urgent. Pakistan needs to pass the Personal Data Protection Bill and clearly define AI-related offences, as outdated laws like Peca no longer suffice. Regional cooperation is also vital, through shared protocols, hotlines and tech partnerships, to combat deepfakes that easily cross borders.

And lastly, individual precautions are crucial. Women should think carefully before joining AI trends, avoid sharing high-resolution or identifiable photos, and use blurred or cropped versions instead. If a deepfake or altered image appears, they should report it immediately and keep records such as screenshots, timestamps, and links.

The Saree Portrait trend may pass, but its warning remains: in societies where images can define a woman’s fate, AI’s ability to ‘see’ too much is dangerous. The real concern is not what AI knows, but whether we are ready to face the consequences of allowing it to learn from us.

