Islamic State supporters turn to AI to bolster online support

Days after a deadly Islamic State (IS) attack on a Russian concert hall in March, a man clad in military fatigues and a helmet appeared in an online video, celebrating the assault in which more than 140 people were killed.

“The Islamic State delivered a strong blow to Russia with a bloody attack, the fiercest that hit it in years,” the man said in Arabic, according to the SITE Intelligence Group, an organisation that tracks and analyses such online content.

But the man in the video, which the Thomson Reuters Foundation was not able to view independently, was not real — he was created using artificial intelligence, according to SITE and other online researchers.

Federico Borgonovo, a researcher at the Royal United Services Institute, a London-based think tank, traced the AI-generated video to an IS supporter active in the group’s digital ecosystem.

This person had combined statements, bulletins, and data from IS’ official news outlet to create the video using AI, Borgonovo explained.

Although IS has been using AI for some time, Borgonovo said the video was an "exception to the rule" because of its high production quality, even if its content was less violent than other online posts.

“It’s quite good for an AI product. But in terms of violence and the propaganda itself, it’s average,” he said, noting however that the video showed how IS supporters and affiliates can ramp up production of sympathetic content online.

Digital experts say groups like IS and far-right movements are increasingly using AI online and testing the limits of safety controls on social media platforms.

A January study by the Combating Terrorism Center at West Point said AI could be used to generate and distribute propaganda, to recruit using AI-powered chatbots, to carry out attacks using drones or other autonomous vehicles, and to launch cyber-attacks.

“Many assessments of AI risk, and even of generative AI risks specifically, only consider this particular problem in a cursory way,” said Stephane Baele, professor of international relations at UCLouvain in Belgium.

“Major AI firms, who genuinely engaged with the risks of their tools by publishing sometimes lengthy reports mapping them, pay scant attention to extremist and terrorist uses.”

Regulation governing AI is still being crafted around the world and pioneers of the technology have said they will strive to ensure it is safe and secure.

Tech giant Microsoft, for example, has developed a Responsible AI Standard that aims to base AI development on six principles including fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

In a special report earlier this year, SITE Intelligence Group’s founder and executive director Rita Katz wrote that a range of actors from members of militant group al Qaeda to neo-Nazi networks were capitalising on the technology.

“It’s hard to understate what a gift AI is for terrorists and extremist communities, for which media is lifeblood,” she wrote.
