Threat Intelligence

Real-time monitoring of AI-specific threat landscape

Threat Level: ELEVATED
Last Updated: 21:04:31

12

Active Threat Campaigns

+25% from last week

47

New Vulnerabilities (7d)

8 critical, 15 high severity

23

Tracked Threat Actors

7 nation-state, 16 cybercrime

34

ML-Specific Threats

19 adversarial, 15 poisoning
Active Threat Campaigns Targeting AI Systems
Campaign Name
Threat Actor
Target Sector
Attack Type Severity First Seen
Status
SHADOW SERPENT APT-42 Financial Services
Model Extraction
Critical
2025-10-28
POISON IVY Unknown Healthcare
Data Poisoning
High
2025-11-22
NEURAL STORM Cybercrime Group X Technology
Adversarial Attack
Medium
2025-10-13
Threat Actor Attribution

APT-42

Nation-State | Unknown
Advanced


DARKML

Cybercrime | Eastern Europe
Intermediate


SILENT GRADIENT

Nation-State | Asia-Pacific
Advanced


Attack Pattern Evolution & Trends
020406080 JulAugSepOctNovDec
FGSM Attacks
PGD Attacks
C&W Attacks
Recent Techniques

FGSM (Fast Gradient Sign Method)

PGD (Projected Gradient Descent)

C&W (Carlini & Wagner)

AutoAttack Framework

Recent Vulnerability Disclosures

CVE-2024-1234

Critical vulnerability in TensorFlow allowing arbitrary code execution

CVSS: 9.8 | TensorFlow < 2.14.0
Dec 10

CVE-2024-5678

Model extraction vulnerability in PyTorch serving

CVSS: 7.5 | PyTorch Serve < 0.9.0
Dec 07

CVE-2024-9012

Prompt injection in LangChain framework

CVSS: 6.3 | LangChain < 0.1.5
Dec 05
Industry-Specific Threat Intelligence
Financial
Healthcare
Technology
Government
Other

Financial Services

Model extraction attacks targeting trading algorithms

Healthcare

Data poisoning in medical diagnosis systems

Technology

Supply chain attacks on ML frameworks
Early Warning Indicators & Predictive Analytics

LLM Jailbreak Surge

350% increase in prompt injection attempts detected

Confidence: 85% | Impact: High

New Poisoning Technique

Novel clean-label attack method observed in wild

Confidence: 72% | Impact: Critical

Supply Chain Risk

Suspicious packages in PyPI targeting ML workflows

Confidence: 91% | Impact: Medium
Live Threat Intelligence Feeds
Time
Source
Type Description
IOCs Action
21:02:31 MITRE ATLAS
Adversarial
New evasion technique documented for vision transformers
5
20:59:31 NIST NVD
Vulnerability
Critical RCE in popular ML framework
3
20:54:31 Industry ISAC
Campaign
Coordinated model extraction campaign detected
12
An unhandled error has occurred. Reload 🗙