Microsoft President Brad Smith cited the use of deepfake technology, which can produce realistic-looking false content, as his main worry regarding artificial intelligence. Malicious users have gone so far as to use the technology to create celebrity pornographic videos and revenge porn.
In a speech in Washington on May 25 on how to regulate AI, a question that went from wonky to widespread with the arrival of OpenAI’s ChatGPT, Smith called for measures to ensure that people can tell when a photo or video is real and when it has been generated by AI. He also wrote the foreword to Microsoft’s report “Governing AI: A Blueprint for the Future”.
Microsoft says, “A deepfake is a fraudulent piece of content—typically audio or video—that has been manipulated or created using artificial intelligence. This content replaces a real person’s voice, image, or both with similar looking and sounding artificial likenesses”.
Deepfakes are frequently employed to disseminate false information, and they can be used in scams, election rigging, social engineering attacks, and other types of fraud. Artificial intelligence and machine learning are the key components in creating a deepfake.
The Microsoft President said, “We’re going to have to address the issues around deepfakes. We’re going to have to address, in particular, what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians”. He added, “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI”.
Smith further called for the most critical forms of AI to be licensed, with an obligation “to protect the security, physical security, cybersecurity, national security”. In addition, he stated, “We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements”.
Smith also argued that people must be held responsible for any AI-related issues, and he encouraged lawmakers to make sure that AI used to manage the water supply, the electric grid, and other crucial infrastructure has safety brakes installed so that humans remain in control.
He advocated implementing a “Know Your Customer”-style system for developers of powerful AI models, requiring them to monitor how their technology is used and to label AI-generated content so that the general public can recognise fake videos.
Recently, while speaking to the media on May 24 during the Wall Street Journal CEO Council conference, Eric Schmidt, the former CEO of Google, expressed his apprehensions about artificial intelligence. He stated, “My concern with AI is actually existential, and existential risk is defined as many, many, many, many people harmed or killed. And there are scenarios not today but reasonably soon, where these systems will be able to find zero-day exploits, cyber issues or discover new kinds of biology”.
Schmidt’s comments came as European Union industry chief Thierry Breton stated that Alphabet, Google’s parent company, and the European Commission plan to create an artificial intelligence (AI) pact to govern the technology.
Last week, on May 16, Sam Altman, CEO of OpenAI, which created ChatGPT, testified before the US Congress. According to Altman, regulation of artificial intelligence’s “increasingly powerful models” is “critical” to reducing the threats the technology presents. He also said that the potential use of AI to tamper with elections is a “significant area of concern”.
AI poses security risks if it falls into the wrong hands. For example, malicious actors could use AI to create sophisticated phishing attacks or to break into secure systems. This highlights the need for strong cybersecurity measures to protect AI systems and data.
India, too, is considering a legal framework for regulating technology and AI-enabled platforms, such as ChatGPT, at a time when authorities in many places, including Europe and the US, have advocated for AI regulation.