Why actress Rashmika Mandanna’s deepfake video raises serious concerns around Artificial Intelligence

Different sectors have been working to address the deepfake problem. Recently, Indian film actresses Rashmika Mandanna and Katrina Kaif fell prey to deepfake incidents, prompting their peers to call for legal measures

by Subhi Vishwakarma
Nov 8, 2023, 09:00 pm IST
in Bharat
Screengrab taken from the viral video (Moneycontrol)


A widely circulated video showing a woman dressed in black entering an elevator has garnered millions of views across social media platforms. What grabbed attention is the uncanny resemblance between the woman in the clip and Rashmika Mandanna, the actress from the film Pushpa, who has a following of over 40 million on Instagram and a substantial fan base.

What raises concern about this video of Rashmika is that the woman depicted is not actually her. This incident highlights the alarming prevalence of deepfake technology, which enables the creation of convincing likenesses of individuals through digital manipulation.

Notably, Katrina Kaif, who is gearing up for the release of “Tiger 3,” has also fallen victim to this digital threat. This issue, however, extends beyond Indian actresses. Hollywood star Emily Blunt was among the first to experience her likeness being used without consent in unauthorised productions. A deepfake video of Emily portraying Black Widow in the Marvel Cinematic Universe circulated widely, causing a stir among fans and falsely suggesting an audition for the role originally played by Scarlett Johansson.

Rashmika’s video goes viral

Both Rashmika and the woman featured in the video, Zara, have expressed being “deeply disturbed” and “upset” by this incident.

Rashmika wrote in a post on X (formerly Twitter), “I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary, not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused.”

The British-Indian influencer Zara Patel said in an Instagram story, “Hi all, it has come to my attention that someone created a deepfake video using my body and a popular Bollywood actress’s face. I had no involvement with the deepfake video, and I’m deeply disturbed and upset by what is happening.”

This article aims to shed light on the software and technologies freely available in the public domain, and on the potential consequences they entail in relation to the video in question.

What is a deepfake?

It is crucial for readers to be aware of the numerous websites where manipulated images and fake videos of celebrities, often in highly inappropriate contexts, can be found. These include depictions of Hollywood stars with altered appearances, ranging from an Indian aesthetic to a less affluent look, and various other alterations. While replacing faces and manipulating images of celebrities is not new, what sets these instances apart is the use of artificial intelligence (AI). There are even mobile applications that facilitate face-swapping, along with a plethora of computer programs that produce even more convincing results.

The term “deepfake” derives from “deep learning,” a form of machine learning in which data is processed through many layers of artificial neural networks; building such systems requires a solid understanding of software and computation.

This technology enables the creation of remarkably realistic faces of people you may recognise. To generate such a face, the software needs multiple images of the person from different angles, which for celebrities are often readily available in the public domain. Drawing on these images and a provided library, the software constructs a profile of the subject as specified by the user. It then pits two neural networks against each other: a generator that produces candidate images and a discriminator that tries to distinguish them from genuine photographs. After some fine-tuning, the final result is produced. This architecture is known as a Generative Adversarial Network (GAN).
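
To make the generator-discriminator idea concrete, here is a minimal, illustrative sketch in Python using the PyTorch library. It trains a toy GAN on a simple two-dimensional point cloud rather than on faces; the network sizes, learning rates and step counts are arbitrary choices for illustration, not a recipe for the face-generation systems discussed above.

# Toy GAN sketch (PyTorch). Illustrative only; hyperparameters are arbitrary.
import torch
import torch.nn as nn

# "Real" data: a cluster of 2-D points the generator must learn to imitate
real_data = torch.randn(1000, 2) * 0.5 + torch.tensor([2.0, -1.0])

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones
    noise = torch.randn(64, 8)
    fake = generator(noise).detach()
    real = real_data[torch.randint(0, 1000, (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator
    noise = torch.randn(64, 8)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)))  # samples should drift towards the "real" cluster

Face-swapping deepfake systems follow the same adversarial pattern, only with image-processing networks and vastly more data and compute.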

AI tools with potential for misuse

With the prevalence of social media, individuals with malicious intent find it much easier to exploit personal information, particularly for deepfake videos, for which there is a wealth of source material available. The concerns extend beyond deepfake videos: the burgeoning field of AI presents significant worries for the general populace, and without proper regulation this could lead to serious problems in the near future.

It’s important for readers to be aware of a software program called DALL-E, a text-based tool that enables users to generate images from written instructions. For example, if someone wants an image of a one-horned dog with seven different colors, wings and a particular body shape, they can simply type the instruction and the software will generate the image. The tool is particularly valuable for those involved in template design and other visual projects.
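
Such tools are also scriptable, which is what makes bulk generation easy. The sketch below is a hedged illustration, assuming access to OpenAI’s image-generation endpoint through its official Python client; the model name and parameters are assumptions and change between versions.

# Hedged sketch: generating an image from a text prompt.
# Assumes the official "openai" Python package and an API key in the
# OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
result = client.images.generate(
    model="dall-e-3",
    prompt="A one-horned dog with seven different colors, wings and a slender body",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # link to the generated image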

Another notable tool is ChatGPT, frequently utilised for writing and research. One of its selling points is the ability to draw on internet searches, though the accuracy of its output still cannot be entirely verified. ChatGPT is also not without its biases: it may, for instance, make jokes about Bhagwan Ram, but refrain from doing so about Prophet Mohammad in Islam or Jesus in Christianity.
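
The same class of model can be driven from a script, which lowers the cost of producing text at scale. A minimal, hedged sketch follows, again assuming the official OpenAI Python client; the model name is an assumption.

# Hedged sketch: scripted text generation with the official "openai" client.
# The model name below is an assumption; substitute whatever is currently offered.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Explain in two sentences what a deepfake is."},
    ],
)
print(reply.choices[0].message.content)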

Additionally, Microsoft has introduced an AI model called VALL-E (X), a language and speech-synthesis model. It listens to a human voice for as little as three seconds to analyse the language, sound features, accent and other characteristics, and can then replicate that voice, with its particular pauses and texture, mimicking human speech. Users can input written text, and the software generates audio in the cloned voice in a contextually appropriate manner.

Another notable AI model is Nural Sync, which operates akin to deepfake technology. There are also tools such as Hazen.Ai, HeyGen and Verbatik AI, which can generate speech in a staggering 142 languages. Many of these tools are accessible with minimal or no subscription fees.

It’s crucial for readers to be informed about Verbatik, an AI-driven text-to-speech platform that converts written text into natural-sounding speech. This platform boasts an extensive repertoire of over 600 realistic voices across 142 languages and accents. It also offers unlimited voiceover revisions to ensure impeccable audio outputs. Users have the ability to fine-tune the voice output, including adjustments in tone, emotion, and speech rate, enabling them to achieve the ideal voiceover to suit their specific needs.

Verbatik allows users to export the generated speech in both MP3 and WAV formats, ensuring compatibility with a wide array of audio playback devices. Whether producing a podcast, video tutorial, or presentation, these lifelike voices streamline the process, saving both time and resources while delivering top-notch audio quality.
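
The basic text-to-speech workflow such platforms offer can be reproduced with freely available libraries. The sketch below uses the open-source gTTS package purely as a stand-in; it is not Verbatik’s own interface, and the voice quality is far more basic than that of the commercial services described above.

# Minimal text-to-speech sketch using the open-source gTTS library
# (a stand-in for commercial platforms; not Verbatik's API).
from gtts import gTTS

text = "Synthetic voiceovers can be generated from plain text in seconds."
tts = gTTS(text=text, lang="en")  # language codes cover dozens of languages
tts.save("voiceover.mp3")         # MP3 output, playable on most devices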

Potential threats of AI

When readers connect the dots and consider the collective capabilities of these AI tools, it can be truly disconcerting. These tools have the capacity to substitute faces in videos, replicate and manipulate voices, and even converse in multiple languages. If utilised with malicious intent and combined effectively, these tools could become exceedingly difficult to combat.

For example, if a video surfaces of a politician appearing to discuss illicit activities, in what sounds like their own voice but is synthesised through AI, it could rapidly gain traction. By the time lawmakers address the issue, the reputation of that individual could already be irreparably damaged.

Despite the potential for misuse, these software applications do have positive applications, such as aiding in age progression and regression, as well as seamlessly dubbing audio in films without disrupting the overall mood.

A few months ago, images of Bhagwan Ram circulated on social media, generated by an AI model known as Midjourney. Midjourney is a generative artificial intelligence program and service developed and hosted by the San Francisco-based independent research lab Midjourney, Inc. The program generates images from natural-language descriptions, known as “prompts,” much like OpenAI’s DALL-E and Stability AI’s Stable Diffusion.

Sextortion through AI tools

While there are certainly positive applications for AI, it’s imperative to recognise the potential for harm. There have been reported cases worldwide of victims, often young individuals, taking their own lives due to AI-generated content. Numerous articles and research studies have shed light on the detrimental effects of AI misuse. The New York Post, for instance, has published an article addressing the tragic consequences of teens falling victim to the misuse of AI tools, serving as a guide for parents to be vigilant against the rise of sextortion crimes facilitated by such tools.

While there are established regulations to govern such crimes, in the age of the internet, the time required for identifying such instances is often sufficient for significant damage to occur. In response to the Rashmika video incident, the IT Minister issued an advisory through his social media account.

Regulatory norms

The Ministry of Electronics and Information Technology (MeitY) promptly dispatched advisories to social media platforms, including Facebook, Instagram, and YouTube, instructing them to remove fake content generated through artificial intelligence (AI) within a 24-hour timeframe.

Sources indicate that the advisory reemphasised the existing legal provisions that these platforms must adhere to as online intermediaries. It reaffirmed the rules outlined in the Information Technology Act of 2000, which include penalties for fraudulent impersonation using computer resources, carrying a potential imprisonment term of up to three years and a fine of up to Rs 1 lakh.

The advisory also cited “IT Intermediary Rules: Rule 3(1)(b)(vii),” which states that social media intermediaries must exercise due diligence, ensuring that their rules, regulations, privacy policies, and user agreements explicitly prohibit the hosting of content that impersonates another individual.

It is worth noting that a more rigorous approach may be required to prevent potential major crises in the future.
