Fraudsters are using remarkably convincing AI voice-cloning tools, publicly accessible online, to impersonate family members and steal from victims in a new generation of schemes that has alarmed US authorities.
Cybercriminals create deepfakes, a term covering the "broad range of generated or manipulated digital media (e.g., images, videos, audio, or text; collectively referred to as 'synthetic content' or 'synthetic media') created using artificial intelligence and machine learning processes. Deepfakes can depict the alteration or impersonation of a person's identity to make it appear as if they are doing or saying things they never did".
Experts said artificial intelligence's capacity to blur the line between reality and fiction poses the greatest threat, as it gives cybercriminals a simple and efficient tool for spreading misinformation.
Speaking to the media about AI-enabled voice-cloning scams, Wasim Khaled, chief executive of Blackbird.AI, said, "AI voice cloning, now almost indistinguishable from human speech, allows threat actors like scammers to extract information and funds from victims more effectively".
He said, "With a small audio sample, an AI voice clone can be used to leave voicemails and voice texts. It can even be used as a live voice changer on phone calls". Khaled added, "Scammers can employ different accents, genders, or even mimic the speech patterns of loved ones. [The technology] allows for the creation of convincing deepfakes".
In a report titled "The Artificial Imposter", online security firm McAfee found that survey participants were unable to reliably distinguish a real voice from a cloned one.
In the global study of 7,054 people from nine nations, including the United States and India, a quarter of respondents indicated they had fallen victim to an AI voice-cloning fraud themselves or knew someone who had. Researchers say these scams pose a growing risk, with voice cloning now a major tool in cybercriminals' arsenal.
Officials in the United States have expressed concern about an increase in the “grandparent scam,” in which a fraudster assumes the identity of a grandchild who is in desperate need of money.
In March, the US Federal Trade Commission described the modus operandi of the AI voice scam: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble: he wrecked the car and landed in jail. But you can help by sending money". To tell a real voice from a cloned one, the advisory recommends hanging up and calling the person directly to verify the story.
In India, 1,010 respondents took part in the survey. The report said, "More than half (69 per cent) of Indians think they don't know or cannot tell the difference between an AI voice and real voice".
About 83 per cent of Indian victims reported financial losses from these new AI-based voice scams. The report said, "About half (47 per cent) of Indian adults have experienced or know someone who has experienced some kind of AI voice scam, which is almost double the global average (25 per cent). 83 per cent of Indian victims said they had a loss of money, with 48 per cent losing over Rs 50,000".
More than 80 per cent of Indians share their voice data online at least once a week, mainly through recorded audio notes on social media, voice notes, and other channels. Audio from these sources can fall into the hands of cybercriminals, who need as little as three seconds of it to clone a person's voice.
The report found that respondents were most likely to respond to a voice message asking for money, "particularly if they thought the request had come from their parent (46 per cent), partner or spouse (34 per cent), or child (12 per cent)". It added, "Messages most likely to elicit a response were those claiming that the sender had been robbed (70 per cent), was involved in a car incident (69 per cent), lost their phone or wallet (65 per cent) or needed help while travelling abroad (62 per cent)".
McAfee CTO Steve Grobman said, “Artificial Intelligence brings incredible opportunities, but with any technology, there is always the potential for it to be used maliciously in the wrong hands. This is what we’re seeing today with the access and ease of use of AI tools helping cybercriminals to scale their efforts in increasingly convincing ways”.