The advent of Artificial Intelligence (AI) has changed the world to a large extent, and debates abound on whether AI will replace humans in the years to come.
While those debates continue, there is also a dangerous side to AI: terror groups have started using the technology extensively. The rise and exploitation of generative AI has led to urgent policy debates.
AI has become a problem for counterterrorism officials because outfits such as the Islamic State have intensified their commitment to using the media to wage Jihad. The Islamic State in particular has been very media savvy and has exploited the medium to such an extent that its propaganda material is sufficient to turn radicals into lone wolves who carry out attacks.
Terror and the use of AI
The Islamic State has encouraged its operatives to exploit AI for content creation. In 2023, the Islamic State put out a guide on how to use generative AI effectively. A post was identified which included a guide to memetic warfare and the use of AI to make propaganda memes.
A larger push came in May this year, however, when the Islamic State doubled down on using AI to expand the reach of its public content. The first experiment involved a news bulletin in which an AI-generated presenter read out Islamic State propaganda and claims.
In another incident, an Islamic State operative used Rocket.Chat, an encrypted platform, to disseminate a video news bulletin. This accompanied a broader propaganda campaign against Russia. The video featured AI-generated characters designed to emulate news broadcasters, with aesthetics that mimicked the style of mainstream media broadcasting. The entire package was produced using text-to-speech AI to convert written material into audio with a human-sounding voice.
Beyond media propaganda
The threat of terrorists using AI is not restricted to media propaganda. Experts worry that terrorist groups will use AI in operational strategy. Their tactics would eventually shift, helping them maximise the impact of their strikes while minimising their chances of being identified.
Terror groups have also started using AI-powered chatbots to radicalise individuals. AI systems would give these groups an effective mechanism for virtual recruitment and planning, particularly for individuals being groomed to carry out lone-wolf attacks.
Terror groups are also exploring the possibility of using automated vehicles to carry out attacks by exploiting vulnerabilities in the AI-powered systems that control them. They are likewise capable of hacking into traffic-management systems and manipulating them to cause loss of life.
A report by Tech Against Terrorism said that AI tools can be used to auto-translate propaganda into multiple languages and to create personalised messages that facilitate recruitment efforts.
The multiple uses of AI
Analysts have found that, for now, terrorists could use AI for six different purposes. The first is media spawning: with a single image or video, a terrorist could generate scores of manipulated variants capable of circumventing hash-matching and automated detection mechanisms.
The second is automated multilingual translation, which would let terrorists render text-based propaganda in multiple languages. The third is fully synthetic content, which could include speeches, images and interactive environments.
The fourth is designing variants of propaganda specially engineered to bypass existing moderation techniques. The fifth is customising messaging and media to scale up targeted recruitment of specific demographics.
Lastly, groups such as the Islamic State could use AI to repurpose old propaganda into new versions.
UNICRI Report
A report by the United Nations Interregional Crime and Justice Research Institute (UNICRI) said that new technologies such as AI can be extremely powerful tools. While they have several positive uses, they can also be put to malicious purposes if they fall into the wrong hands.
The report says that although terrorist organisations have to a certain degree relied on low-tech means such as firearms, blades and vehicles, terrorism itself is not a stagnant threat. As AI becomes more widespread, the barriers to entry will be lowered, since less skill and technical expertise will be needed to employ it.
UNICRI says the report should serve as an early warning of potential malicious uses and abuses of AI by terrorists, and should help the global community, industry and governments think proactively about what can be done collectively to ensure new technologies bring good rather than harm.