7 Ways Terrorists can Exploit AI

7 ways terrorists can exploit AI shows the real-world impact AI may have on terrorist’s ability. Changes include:

  • Spoken passcodes for safety and security critical instructions to counter AI imposters
  • Security vulnerability for risk assessments that use LLM
  • Step change in terrorist propaganda

Click on the slideshow to view the SIRV visualisation

Contents

1. Risk Assessments

Large Language Models (LLM) are increasingly used to assess threats in and around physical assets.

LLMs have a number of security flaws for example, prompt injections. A prompt injection can lead to LLMs ignoring alerts, releasing confidential asset locations etc.

2. Hackbots

Hackbots are AI systems that can autonomously hack:

  • Identify vulnerabilities
  • Select the best tools to use
  • Identify previous successful attacks (a)

“If an enemy nation-state has a highly competent hackbot, that’s a serious national security issue.” Joseph Hacker

3. How to guides

AI can provide step-by-step guides on how to:

  • Build weapons
  • Optimise attack procedures
  • Simulate attacks (b)

4. Misalignment

Developers of AI claim a lot of effort goes into ensuring AI models are aligned with humanity’s best interests. However, with open source models available, these models can be aligned to meet anyone’s aims, including terrorists. (a)

5. Critical systems

AI promises huge opportunities through interoperability of different systems. For example, Large Action Models execute workflows from a spoken instruction. Although critical systems will have high safety standards, linked non-critical systems may not.

6. Propaganda

Terrorists can use AI to:

  • Translate text across languages
  • Produce optical illusions that fool social media moderators
  • Operate human like chat bots that give users the impression there’s support for extreme ideologies
  • Rapidly generate huge amounts of new content (b)

7. AI is a great Imposter

AI enables high quality voice cloning. False instructions can be relayed by terrorists to misdirect and spread panic. (c)

References

(a) https://josephthacker.com/ai/2024/05/19/defining-real-ai-risks.html

(b) https://www.poolre.co.uk/report-terrorist-exploitation-of-artificial-intelligence-current-risks-and-future-applications/

(c) https://www.scientificamerican.com/article/a-safe-word-can-protect-against-ai-impostor-scams/

"SIRV helped us move beyond basic reporting into a system that actively supports decision-making". Les O'Gorman, Director of Facilities, UCB - Pharma and Life Sciences

css.php