Malware that leverages artificial intelligence (AI) marks a shift in cyber-threat methodology: rather than relying on purely static payloads, these threats incorporate generative or adaptive AI (such as large language models) into their execution logic. While traditional malware evolution (polymorphism, obfuscation) has progressed incrementally over decades, embedding AI into malware adds a qualitatively new layer of capability.
AI-driven malware is malicious software that embeds generative or adaptive artificial intelligence logic to dynamically produce or modify code, commands or payloads at runtime, enabling the threat to adapt its behavior, choose targets or evade detection without relying solely on fixed, static components.
Main Characteristics
AI-driven malware typically exhibits the following traits:
- Runtime invocation of large language models (LLMs) or other generative AI mechanisms to create or transform malicious logic instead of using only pre-written code.
- Dynamic decision-making capabilities where the malware assesses its environment (for example OS version, sandbox presence, file types) and alters its actions accordingly.
- Self-modification, polymorphism or "just-in-time" code generation, making each instance of the malware differ in structure or behavior from previous instances.
- Lowered development barrier for attackers because the malware leverages AI to generate variants, obfuscation or commands rather than manually coding each variant.
- Increased difficulty for defenders because signature-based detection becomes less effective when the malware footprint changes dynamically, and behavior becomes more adaptive and less predictable.
Typology
We can categorize AI-driven malware by the way it leverages AI within its lifecycle:
- Adaptive/Polymorphic variants: malware that uses AI to rewrite or mutate its own code or obfuscation logic over time to evade detection.
- Runtime script-generation variants: malware that invokes an AI model during execution to generate commands or scripts tailored to the compromised environment rather than relying on static modules.
- Intelligence-driven decision variants: malware that uses AI to analyze the target environment (files, system state, network) and decide whether to exfiltrate data, encrypt files, propagate further or lie dormant.
- Cross-platform and multi-vector variants: because of the flexibility of generated logic, attacks may span multiple operating systems or adapt payloads accordingly, enabled by AI generating different modules for each target.
- Hybrid models: combining several of the above, e.g., runtime generation plus environment-aware decision-making plus self-modification.
Variants of AI-Driven Malware
- PROMPTFLUX: This variant is a VBScript dropper discovered in mid-2025 that interacts with the Gemini API to request new VBScript code and obfuscation techniques on demand. It uses a module dubbed "Thinking Robot" which periodically queries the model for new code that evades antivirus detection, writes the regenerated script into the Windows Startup folder for persistence and attempts to spread via removable drives and network shares.
- PROMPTSTEAL: This variant is a data-mining malware that uses the Qwen2.5-Coder-32B-Instruct model via the Hugging Face API to dynamically generate one-line Windows commands rather than hard-coded commands. The malware executes those generated commands to harvest system information or user documents, then exfiltrates the results.
- PROMPTLOCK: This variant is an AI-powered ransomware proof-of-concept written in Golang that uses a locally hosted LLM (e.g., gpt-oss:20B via the Ollama API) to generate Lua scripts at runtime for cross-platform execution (Windows, Linux, macOS). The generated scripts perform file system enumeration, analyze which files to encrypt or exfiltrate, and then apply encryption using algorithms such as SPECK 128-bit. Although currently a research prototype, it illustrates how ransomware logic may shift from static modules to AI-orchestrated runtime generation.
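PROMPTLOCK's reported use of SPECK is notable because this lightweight ARX block cipher, published by the NSA in 2013, fits in a few lines of code, which suits dynamically generated scripts. The following is a minimal educational sketch of Speck128/128 (two 64-bit words per block, 32 rounds); it is an illustration of the primitive only, with no mode of operation, padding, or hardening, and it is not taken from PROMPTLOCK itself:

```python
# Minimal Speck128/128 sketch: 64-bit words, rotation amounts 8 and 3,
# 32 rounds. Educational illustration only; no mode of operation.
MASK = (1 << 64) - 1

def ror(x, r):  # rotate a 64-bit word right by r bits
    return ((x >> r) | (x << (64 - r))) & MASK

def rol(x, r):  # rotate a 64-bit word left by r bits
    return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x, y, k):
    """One forward round: mix-add-xor on x, then rotate-xor on y."""
    x = ((ror(x, 8) + y) & MASK) ^ k
    y = rol(y, 3) ^ x
    return x, y

def speck_round_inv(x, y, k):
    """Exact inverse of speck_round."""
    y = ror(x ^ y, 3)
    x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

def expand_key(k1, k0, rounds=32):
    """Key schedule: the (l, k) pair evolves via the round function,
    using the round index as the round key."""
    ks, l, k = [k0], k1, k0
    for i in range(rounds - 1):
        l, k = speck_round(l, k, i)
        ks.append(k)
    return ks

def encrypt(x, y, key_words):
    for k in expand_key(*key_words):
        x, y = speck_round(x, y, k)
    return x, y

def decrypt(x, y, key_words):
    for k in reversed(expand_key(*key_words)):
        x, y = speck_round_inv(x, y, k)
    return x, y
```

The compactness is the point for defenders: a generated Lua or Python payload can carry the entire cipher inline, so detection cannot rely on the presence of a crypto library import.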
Threat Power and Defensive Measures
Threat Power:
AI-driven malware poses enhanced risk because it can evade traditional defenses more easily, vary its footprint across infections, adapt to defensive measures, and execute dynamically generated payloads tailored to the environment. Real-time code generation and mutation increase the agility of attack campaigns, raise the cost and complexity of defense, and reduce the predictability of malicious behavior. As adversaries adopt these techniques, detection mechanisms built around known signatures or static behaviors will increasingly fail.
Defensive Measures:
To defend effectively against AI-driven malware, organizations must deploy layered, technically rigorous controls. Key defensive measures include:
- Behavior-based endpoint monitoring and EDR/XDR deployment: Use an endpoint detection and response (EDR) or extended detection and response (XDR) platform that continuously tracks process creation, file system changes, memory injection, scripting engine activity (e.g., VBScript, Lua, Python) and outbound LLM-API calls. Integrate behavioral analytics to detect anomalous sequences of API calls or script generation not present in the normal baseline.
- Anomaly detection and AI/ML-driven telemetry analysis: Leverage machine-learning models to analyze network traffic flows, DNS resolutions, TLS connections, command-and-control (C2) channels, and large language model API key usage (e.g., unusual "sk-"-style keys or tokens embedded in binaries) to surface deviations from expected patterns.
- Strict API access controls and model-invocation governance: Implement role-based access controls (RBAC) and network segmentation for endpoints calling out to LLMs or hosting local models. Audit endpoint-initiated LLM calls and treat them as a potential indicator of compromise. Use model gateways or proxies to filter, validate, and monitor request patterns (prompt lengths, new endpoints).
- Application whitelisting and script execution control: Use application control tools or Windows AppLocker / Linux SELinux policies to restrict execution of scripts from unexpected locations (e.g., Startup folder, removable media, network shares). Block or alert on new script generation, especially when combined with process-hollowing or payload drop.
- Immutable backups and secure data architecture: For ransomware-style AI-driven malware, implement write-once, read-many (WORM) or immutable backup storage, enforce point-in-time restoration capability, and segregate backup access so that encryption or exfiltration endpoints cannot reach backup stores.
- Zero-Trust network architecture and least-privilege enforcement: Employ Zero-Trust network principlesโno implicit trust for any user or device, continuous verification of identity, device posture, and application hygiene. Limit lateral movement via micro-segmentation, restrict use of shared network drives, and control administrative privileges tightly.
- Deception technology / honeypots / canary tokens: Deploy decoy systems (honeynets) and fake credentials or tokens (canary files, dummy environment variables) to lure AI-driven malware into interacting with traps and reveal its logic or prompt invocation behavior.
- Adversarial training and threat-hunting using AI models: For security operations teams, use adversarial machine-learning techniques to harden detection models by injecting simulated AI-driven malware behaviors and retraining classifiers, thereby increasing robustness to model-based evasion.
- Exposure and supply-chain management: Audit and secure third-party vendor access, restrict and monitor API keys for LLM services, enforce secure software-update pipelines, and perform regular penetration testing that simulates AI-driven payloads, to surface weaknesses before adversaries exploit them.
- Incident response with automation (SOAR) and isolation workflows: Ensure that once anomalous behavior (e.g., LLM API access, dynamic script generation) is detected, the incident response process can automatically isolate the infected host, block malicious IPs/domains, terminate rogue processes, revoke compromised credentials, and launch forensic collection, all with minimal delay.
- Continuous monitoring of LLM usage and model-invocation telemetry: Maintain logs of model invocations (endpoint, prompt content metadata, output patterns), inspect binaries for embedded prompts or AI-model calls, and flag unusual prompt patterns or repeated generation of new script modules. This helps identify malware that uses generative logic during runtime.
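Three of the telemetry-side measures above (behavioral correlation of script engines with LLM traffic, scanning binaries for embedded API keys, and flagging generation-style prompt patterns) can be sketched in a few lines. The event shape, LLM hostnames, key regexes, and prompt markers below are illustrative assumptions for the sketch, not any vendor's EDR API:

```python
import re
from collections import Counter

# Illustrative LLM API hosts and scripting engines; a real deployment
# would consume curated, regularly updated lists.
LLM_API_HOSTS = {"generativelanguage.googleapis.com",
                 "api-inference.huggingface.co"}
SCRIPT_ENGINES = {"wscript.exe", "cscript.exe", "powershell.exe"}

def correlate_script_llm(events):
    """Flag hosts where a scripting engine starts and later connects
    out to a known LLM API endpoint. Each event is a dict with keys
    'host', 'kind' ('process_start' | 'net_connect') and 'detail'."""
    script_hosts, alerts = set(), []
    for ev in events:
        if ev["kind"] == "process_start" and ev["detail"].lower() in SCRIPT_ENGINES:
            script_hosts.add(ev["host"])
        elif (ev["kind"] == "net_connect" and ev["host"] in script_hosts
              and ev["detail"].lower() in LLM_API_HOSTS):
            alerts.append((ev["host"], ev["detail"]))
    return alerts

# Heuristic patterns for embedded credentials: OpenAI-style "sk-" keys
# and Hugging Face "hf_" tokens. Thresholds need tuning per estate.
KEY_PATTERNS = [re.compile(rb"sk-[A-Za-z0-9]{20,}"),
                re.compile(rb"hf_[A-Za-z0-9]{30,}")]

def scan_for_embedded_keys(blob: bytes):
    """Return API-key-like tokens found in a binary blob."""
    return [m.group(0).decode()
            for pat in KEY_PATTERNS for m in pat.finditer(blob)]

# Substrings suggesting a caller is asking a model to emit executable
# logic; a heuristic assumption, not a vetted ruleset.
GENERATION_MARKERS = ("generate vbscript", "write a script",
                      "obfuscate", "one-line command")

def flag_prompt_abuse(invocations, threshold=3):
    """invocations: (host, prompt) pairs from model-gateway logs.
    Return hosts whose generation-style prompts meet the threshold."""
    counts = Counter()
    for host, prompt in invocations:
        if any(m in prompt.lower() for m in GENERATION_MARKERS):
            counts[host] += 1
    return sorted(h for h, c in counts.items() if c >= threshold)
```

In practice these checks would run inside the EDR/XDR pipeline over live telemetry; the value of the sketch is showing that each signal is cheap to compute and easy to correlate.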
By combining these technical controls (endpoint behavioral analytics, network anomaly detection, model-access governance, script control, Zero-Trust segmentation, deception, and incident-response automation), organizations can build a robust defensive posture against AI-driven malware.
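On the policy and response side, the model-gateway validation, script-location control, and automated containment measures might look like the following sketch. Every endpoint URL, path marker, drive letter, and step name here is a hypothetical placeholder standing in for a real gateway, AppLocker/SELinux rule set, or EDR/firewall/IAM integration:

```python
from pathlib import PureWindowsPath

# Hypothetical internal gateway endpoint and policy thresholds.
ALLOWED_MODEL_ENDPOINTS = {"https://llm-gateway.internal/v1/chat"}
MAX_PROMPT_CHARS = 4000

def validate_model_request(endpoint, prompt, caller_role,
                           allowed_roles=("ml-engineer", "approved-service")):
    """Gateway-side policy: deny unknown endpoints, oversized prompts,
    and callers outside the approved RBAC roles."""
    if endpoint not in ALLOWED_MODEL_ENDPOINTS:
        return (False, "endpoint not on allowlist")
    if len(prompt) > MAX_PROMPT_CHARS:
        return (False, "prompt exceeds policy length")
    if caller_role not in allowed_roles:
        return (False, "caller role not authorized")
    return (True, "ok")

# Illustrative script-location policy: deny removable drives and any
# Startup folder. Real enforcement belongs in AppLocker/SELinux rules.
REMOVABLE_DRIVES = {"E:", "F:"}  # would be enumerated at runtime
STARTUP_MARKER = "\\start menu\\programs\\startup\\"

def script_execution_allowed(path_str: str) -> bool:
    if PureWindowsPath(path_str).drive.upper() in REMOVABLE_DRIVES:
        return False
    return STARTUP_MARKER not in path_str.lower()

def containment_playbook(host, iocs):
    """Ordered containment steps for an LLM-abuse detection. Steps are
    recorded rather than executed; a real playbook would call
    EDR/firewall/IAM APIs at each stage."""
    log = [("isolate_host", host)]
    log += [("block_ip", ip) for ip in iocs.get("ips", [])]
    log += [("kill_process", p) for p in iocs.get("processes", [])]
    log += [("revoke_credential", c) for c in iocs.get("credentials", [])]
    log.append(("collect_forensics", host))
    return log
```

The deliberate design choice in the playbook is ordering: isolation first, so that blocking, process termination, and credential revocation happen after the host can no longer reach the model API or the C2 channel.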
Conclusion
AI-driven malware represents a meaningful evolution in malicious software because its defining characteristic is not simply automation but the embedding of generative or adaptive AI logic within the malware's execution workflow. Key characteristics include runtime code generation, adaptive decision-making, self-modification, and reduced reliance on static payloads. The typology spans adaptive/polymorphic variants, runtime script generation, environment-aware decision malware, and hybrid models. The three variants examined (PROMPTFLUX, PROMPTSTEAL and PROMPTLOCK) illustrate how attackers are beginning to adopt AI to mutate attack logic, exfiltrate data with generated commands, and perform ransomware tasks via locally hosted models. The threat power of this class lies in its unpredictability, its ability to evade traditional defenses, and its lower barrier to entry for attackers. Defenders must adapt with behavior-based detection, model-access monitoring, script-control policies and education around the LLM-enabled threat model. Looking ahead, as AI models become more capable and more widely accessible, we can expect escalation in the scale, automation and sophistication of AI-driven malware, making preparation now essential.