7 Ways Terrorists can Exploit AI

7 ways terrorists can exploit AI shows the real-world impact AI may have on terrorist’s ability. Changes include:

  • Spoken passcodes for safety and security critical instructions to counter AI imposters
  • Security vulnerability for risk assessments that use LLM
  • Step change in terrorist propaganda

Click on the slideshow to view the SIRV visualisation

Contents

1. Risk Assessments

Large Language Models (LLM) are increasingly used to assess threats in and around physical assets.

LLMs have a number of security flaws for example, prompt injections. A prompt injection can lead to LLMs ignoring alerts, releasing confidential asset locations etc.

2. Hackbots

Hackbots are AI systems that can autonomously hack:

  • Identify vulnerabilities
  • Select the best tools to use
  • Identify previous successful attacks (a)

“If an enemy nation-state has a highly competent hackbot, that’s a serious national security issue.” Joseph Hacker

3. How to guides

AI can provide step-by-step guides on how to:

  • Build weapons
  • Optimise attack procedures
  • Simulate attacks (b)

4. Misalignment

Developers of AI claim a lot of effort goes into ensuring AI models are aligned with humanity’s best interests. However, with open source models available, these models can be aligned to meet anyone’s aims, including terrorists. (a)

5. Critical systems

AI promises huge opportunities through interoperability of different systems. For example, Large Action Models execute workflows from a spoken instruction. Although critical systems will have high safety standards, linked non-critical systems may not.

6. Propaganda

Terrorists can use AI to:

  • Translate text across languages
  • Produce optical illusions that fool social media moderators
  • Operate human like chat bots that give users the impression there’s support for extreme ideologies
  • Rapidly generate huge amounts of new content (b)

7. AI is a great Imposter

AI enables high quality voice cloning. False instructions can be relayed by terrorists to misdirect and spread panic. (c)

References

(a) https://josephthacker.com/ai/2024/05/19/defining-real-ai-risks.html

(b) https://www.poolre.co.uk/report-terrorist-exploitation-of-artificial-intelligence-current-risks-and-future-applications/

(c) https://www.scientificamerican.com/article/a-safe-word-can-protect-against-ai-impostor-scams/

    Company

    Contact

    About

    Privacy Policy

    Address: SIRV Systems Limited, 85 Great Portland street, First Floor, London, UK W1W 7LT

    Follow

    SIRV Page om Linkedin Logo

     

    SIRV Facebook logo

     

    Twitter X SIRV Logo

     

    Youtube SIRV Logo

    Get in touch

    Text & WhatsApp: (0) 7984 884404

    Email: info@sirv.co.uk

    Web Chat: Use the web chat pop-up

    Awards

    2016 Communication Product (Winner)

    2017 Communication Product (Finalist)

    2018 Start-up of the Year (Finalist)

    Awards

    2019 Innovation of the Year (Finalist)

    2020 Innovation of the Year (Finalist)

    2021 Innovation of the Year (Finalist)

    2022 Innovation of the Year (Finalist)

    2023 Innovation of the Year (Finalist)

    css.php
    SIRV email list subscribe

    Straight to your inbox

    Sign-up to receive updates and thought leadership.

    GDPR Consent

    Terms and Conditions

    You have Successfully Subscribed!