Radicalization through the use of generative AI represents a “major risk to UK national security”, experts have told ITPro.
Josh Boer, director at tech consultancy VeUP, said the use of generative AI tools for nefarious purposes poses a serious threat to the country, adding that the technology could be used by cyber criminals to inflict serious harm.
Boer’s comments come in the wake of a report by Jonathan Hall, the independent reviewer of terrorism legislation, which warned the technology could be used to fuel radicalization.
Writing in the Daily Telegraph, Hall described his conversation with a chatbot which “did not stint in its glorification of Islamic State”.
The chatbot in question was created using Character.ai, a platform that lets users build chatbots and train them to adopt specific personalities.
Software like this is currently subject to little legislation, meaning extremists can easily train chatbots to radicalize online communities, the report warned.
Jake Moore, Global Security Advisor at ESET, told ITPro that the risk of generative AI being used for harmful purposes, such as misinformation, radicalization, or cyber crime, should be a key concern for AI developers moving forward.
Moore said developers should focus heavily on “baking in the right level of principles” in AI platforms to reduce long-term risks.
“The majority of AI is still taught by the building blocks it was designed from and therefore, the right tweaks can be adopted to steer the outputs away from becoming a beast,” he said.
“Legislation is difficult with this constantly evolving technology but a basic structure designed to reduce the risk of recruiting extremists doesn’t have to be problematic.”
To prevent extremists gearing AI towards their own ends, models need to be trained against certain forms of interaction at an algorithmic level, Moore added.
The criminal applications of generative AI
AI isn’t just gaining traction as a tool for extremists. Over the last year, concern has grown over the use of generative AI among cyber criminals, some of whom are using the technology to support their operations, fine-tune attack methods, and target a growing number of organizations globally.
Generative AI is being used to create ransomware and other malware, as well as to generate fraudulent phishing content.
A few months ago, for example, threat actors used AI to generate deepfake videos of several celebrities in an attempt to lure users into fraudulent purchases.
Cyber criminals are constantly developing new ways to incorporate AI into their operations, and regulators will have to take clear action to curb the rise of AI-enabled cyber crime.
“The issue is how to address this issue without stifling innovation,” Boer told ITPro.