Why actress Rashmika Mandanna’s deepfake video raises serious concerns around Artificial Intelligence

Published by
Subhi Vishwakarma

A widely circulated video showing a woman dressed in black entering an elevator has garnered millions of views across social media platforms. What makes the clip attention-grabbing is the woman’s uncanny resemblance to Pushpa actress Rashmika Mandanna, who has an impressive following of over 40 million on Instagram and a substantial fan base.

What raises concern about this video of Rashmika is that the woman depicted is not actually her. This incident highlights the alarming prevalence of deepfake technology, which enables the creation of convincing likenesses of individuals through digital manipulation.

Notably, Katrina Kaif, who is gearing up for the release of “Tiger 3,” has also fallen victim to this digital threat. This issue, however, extends beyond Indian actresses. Hollywood star Emily Blunt was among the first to experience her likeness being used without consent in unauthorised productions. A deepfake video of Emily portraying Black Widow in the Marvel Cinematic Universe circulated widely, causing a stir among fans and falsely suggesting an audition for the role originally played by Scarlett Johansson.

Rashmika’s video goes viral

Both Rashmika and the woman featured in the video, Zara, have expressed being “deeply disturbed” and “upset” by this incident.

In a post on X (formerly Twitter), Rashmika wrote, “I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary, not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused.”

Zara Patel, the British-Indian influencer whose body appears in the clip, said in an Instagram story, “Hi all, it has come to my attention that someone created a deepfake video using my body and a popular Bollywood actress’s face. I had no involvement with the deepfake video, and I’m deeply disturbed and upset by what is happening.”

This article examines the software and technologies available in the public domain that make such videos possible, and the potential consequences they carry in light of the clip in question.

What is a deepfake?

Readers should be aware that numerous websites host manipulated images and fake videos of celebrities, often in highly inappropriate contexts. These include depictions of Hollywood stars with altered appearances, ranging from an Indian aesthetic to a less affluent one, and various other modifications. Replacing faces and manipulating images is nothing new for celebrities; what sets these instances apart is the use of artificial intelligence (AI). There are even mobile applications that facilitate face-swapping, along with a plethora of computer programs that produce even more convincing results.

The term “deepfake” derives from “deep learning,” a form of machine learning in which data is processed through many layers of artificial neural networks.

This technology enables the creation of remarkably realistic faces of people you may recognise. To generate such a face, the software needs multiple images of the person from different angles, which for celebrities are often readily available in the public domain. The most common technique is the Generative Adversarial Network (GAN), in which two neural networks are trained against each other: a generator produces candidate images, while a discriminator judges them against real photographs. After many rounds of this contest, and some fine-tuning, the generator’s output becomes difficult to distinguish from genuine images.
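To make the generator-versus-discriminator idea concrete, here is a minimal, illustrative sketch in Python, not any production deepfake system: a two-parameter “generator” learns to imitate a simple number distribution (numbers near 4.0 standing in for real images), while a logistic “discriminator” tries to tell its output apart from real samples. All names and numbers here are invented for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: numbers near 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: turns random noise z into a sample using two parameters.
def generate(z, w, b):
    return w * z + b

# Discriminator: logistic score for "probability this sample is real".
def discriminate(x, dw, db):
    return 1.0 / (1.0 + np.exp(-(dw * x + db)))

w, b = 0.1, 0.0      # generator parameters
dw, db = 0.0, 0.0    # discriminator parameters
lr, n = 0.05, 64

for step in range(2000):
    z = rng.normal(size=n)
    fake = generate(z, w, b)
    real = real_batch(n)

    # Discriminator step: push scores toward 1 on real data, 0 on fakes.
    pr = discriminate(real, dw, db)
    pf = discriminate(fake, dw, db)
    dw -= lr * (np.mean((pr - 1.0) * real) + np.mean(pf * fake))
    db -= lr * (np.mean(pr - 1.0) + np.mean(pf))

    # Generator step: adjust w and b so the discriminator scores fakes as real.
    pf = discriminate(fake, dw, db)
    grad_fake = -(1.0 - pf) * dw  # gradient of -log D(fake) w.r.t. fake
    w -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# After training, generated samples should cluster near the real mean (4.0).
print(float(np.mean(generate(rng.normal(size=10000), w, b))))
```

Real deepfake systems apply this same adversarial contest to millions of image pixels with deep networks on both sides, which is why they demand large photo collections and heavy computation rather than two numbers and a loop.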

AI tools with the potential for misuse

With the prevalence of social media, individuals with malicious intent find it much easier to exploit information, particularly for deepfake videos, for which there is a wealth of source material. The concerns extend beyond deepfake videos: the burgeoning field of AI presents significant worries for the general public, and without proper regulation it could lead to serious problems in the near future.

Readers should also be aware of DALL-E, a text-based tool that generates images from written instructions. For example, to create an image of a one-horned dog with seven different colours, wings, and a particular body shape, a user simply describes it and the software generates the picture. This tool is particularly valuable for template design and other visual projects.
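As an illustration of how such text-to-image tools are typically driven, the sketch below composes a prompt like the one described above and shows how it could be submitted through OpenAI’s official Python SDK. The model name, image size, and account setup are assumptions; an API key is required, so the network call is only defined here, not executed.

```python
def build_prompt(subject, features):
    # Compose a natural-language prompt from a subject and a list of features.
    return f"A {subject} with " + ", ".join(features)

prompt = build_prompt(
    "one-horned dog",
    ["seven different colours", "wings", "a slender body"],
)

def generate_image(prompt, size="1024x1024"):
    # Requires the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI()
    response = client.images.generate(
        model="dall-e-3", prompt=prompt, size=size, n=1
    )
    return response.data[0].url  # URL of the generated image

print(prompt)
```

The key point is that no artistic or technical skill is involved: a plain-language sentence is the entire input, which is precisely what makes these tools so accessible, for good and for ill.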

Another notable tool is ChatGPT, frequently used for writing and research. It can also search the internet, though its answers cannot always be verified for accuracy. ChatGPT is not without its biases either: for instance, it may make jokes about Bhagwan Ram, but refrains from doing so about Prophet Mohammad in Islam or Jesus in Christianity.

Additionally, Microsoft has introduced an AI model called VALL-E (and its multilingual variant, VALL-E X), a language and speech-synthesis system. Given as little as three seconds of a person’s voice, it analyses the language, sound features, accent, and other characteristics, and can then replicate that voice, complete with natural pauses and texture. Users supply written text, and the model generates matching audio in the speaker’s voice, in a contextually appropriate manner.

Another notable AI model is Nural Sync, which operates much like deepfake technology. There are also tools such as Hazen.Ai, HeyGen, and Verbatik, which can synthesise speech in a staggering 142 languages. Many of these tools are accessible for little or no subscription fee.

Verbatik itself is an AI-driven text-to-speech platform that converts written text into natural-sounding speech. It offers over 600 realistic voices across 142 languages and accents, along with unlimited voiceover revisions to ensure impeccable audio output. Users can fine-tune the voice, adjusting tone, emotion, and speech rate to achieve the ideal voiceover for their specific needs.

Verbatik allows users to export the generated speech in both MP3 and WAV formats, ensuring compatibility with a wide array of audio playback devices. Whether producing a podcast, video tutorial, or presentation, these lifelike voices streamline the process, saving both time and resources while delivering top-notch audio quality.

Potential threats of AI

When readers connect the dots and consider the collective capabilities of these AI tools, it can be truly disconcerting. These tools have the capacity to substitute faces in videos, replicate and manipulate voices, and even converse in multiple languages. If utilised with malicious intent and combined effectively, these tools could become exceedingly difficult to combat.

For example, if a video surfaces of a politician discussing illicit activities, with their own voice synthesised through AI, it could rapidly gain traction. By the time lawmakers address the issue, the individual’s reputation could already be irreparably damaged.

Despite the potential for misuse, these software applications do have positive applications, such as aiding in age progression and regression, as well as seamlessly dubbing audio in films without disrupting the overall mood.

A few months ago, images of Bhagwan Ram circulated on social media, generated by an AI model known as Midjourney. Midjourney is a generative artificial intelligence program and service developed and hosted by the San Francisco-based independent research lab Midjourney, Inc. It generates images from natural-language descriptions, known as “prompts,” much like OpenAI’s DALL-E and Stability AI’s Stable Diffusion.

Sextortion through AI tools

While there are certainly positive applications for AI, it’s imperative to recognise the potential for harm. There have been reported cases worldwide of victims, often young individuals, taking their own lives due to AI-generated content. Numerous articles and research studies have shed light on the detrimental effects of AI misuse. The New York Post, for instance, has published an article addressing the tragic consequences of teens falling victim to the misuse of AI tools, serving as a guide for parents to be vigilant against the rise of sextortion crimes facilitated by such tools.

While there are established regulations to govern such crimes, in the age of the internet, the time required for identifying such instances is often sufficient for significant damage to occur. In response to the Rashmika video incident, the IT Minister issued an advisory through his social media account.

Regulation norms

The Ministry of Electronics and Information Technology (MeitY) promptly dispatched advisories to social media platforms, including Facebook, Instagram, and YouTube, instructing them to remove fake content generated through artificial intelligence (AI) within a 24-hour timeframe.

Sources indicate that the advisory reemphasised the existing legal provisions that these platforms must adhere to as online intermediaries. It reaffirmed the rules outlined in the Information Technology Act of 2000, which include penalties for fraudulent impersonation using computer resources, carrying a potential imprisonment term of up to three years and a fine of up to Rs 1 lakh.

The advisory also cited “IT Intermediary Rules: Rule 3(1)(b)(vii),” which states that social media intermediaries must exercise due diligence, ensuring that their rules, regulations, privacy policies, and user agreements explicitly prohibit the hosting of content that impersonates another individual.

It is worth noting that a more rigorous approach may be required to prevent potential major crises in the future.
