Document Type

Report

Publication Date

3-2025

Abstract

This set of studies examined how large language models (LLMs) are used in the malevolent creativity problem-solving process. Terrorist innovation, also referred to as tactical innovation or malevolent innovation, centers on terrorists' use of new methods of violence. Following the advent of accessible LLM tools, such as generative AI (e.g., ChatGPT), the current effort examined how individuals use these tools to gather information when generating ideas that are both novel and harmful. Put differently, LLM tools synthesize large amounts of information, overcoming what would previously be a limitation of an individual's knowledge base, to help them ideate and plan attacks. This set of studies offered several advances in our understanding of novel, intentionally harmful ideation and of LLM tools in terrorist organizations. The findings underscore the role AI constraints play in the generation of harmful and novel ideas in response to a social threat. We found that the constrained AI tool suppressed the generation of harmful ideas, suggesting that ethical safeguards are broadly helpful and can prevent attacks from being planned. However, this also points to a broader implication: AI's impact on the novelty and harmfulness of ideas is contingent on the extent to which the AI is constrained by guidelines.
