Explore our findings
Threat Intelligence Reports
Monthly data-driven analysis of the AI threat landscape. Covers attack volume trends, emerging techniques, threat family distribution, and actionable recommendations for security teams and developers.
- Quantitative analysis from real-world telemetry
- Month-over-month trend tracking
- MITRE ATLAS technique mapping
- Audience-specific recommendations
Security Advisories
CVE-tracked vulnerability disclosures for AI frameworks, agents, and model serving infrastructure. Each advisory includes technical analysis, reproduction steps, detection signatures, and mitigation guidance.
- CVE-assigned disclosures
- YAML detection signatures included
- CVSS scoring and severity classification
- Coordinated disclosure with vendors
Research Radar
Bi-weekly digest translating arXiv AI security papers into practitioner summaries. Each paper is rated across four dimensions and classified as act now, watch, or horizon.
Cold Validation
Independent AI change assurance for CISOs and engineering leaders. Separation of duties, release governance, structured audit evidence, and human risk acceptance before anything ships.
Where we focus
Research organized into four streams, each targeting a distinct category of AI security threats.
Adversarial ML
Jailbreaking, prompt injection, model behaviour manipulation
Agent Security
Tool-call abuse, MCP exploits, agent manipulation
Supply Chain
Framework vulnerabilities, model registry threats
Prompt Injection
Injection taxonomy, encoding tricks, evasion patterns
515+ YAML-based signatures
Every research finding produces detection signatures. Open, auditable, mapped to MITRE ATLAS, and continuously updated.
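To illustrate the idea, here is a minimal sketch of what a YAML detection signature with MITRE ATLAS mapping might look like. All field names, IDs, and patterns below are hypothetical for illustration; the published signature set may use a different schema.

```yaml
# Hypothetical signature sketch -- not the actual published schema.
id: PI-0042
title: Instruction-override prompt injection attempt
severity: high
category: prompt-injection
mitre_atlas:
  - AML.T0051            # LLM Prompt Injection (MITRE ATLAS technique)
detection:
  # Flag user input matching a common instruction-override phrase,
  # in plain text or after base64 decoding.
  patterns:
    - regex: "(?i)ignore (all )?previous instructions"
    - decoded_base64_regex: "(?i)ignore (all )?previous instructions"
false_positives:
  - Security research content quoting known jailbreak strings
references:
  - https://atlas.mitre.org/techniques/AML.T0051
```

Keeping signatures in an open, declarative format like this is what makes them auditable: anyone can review the exact patterns, severity rationale, and ATLAS mapping behind a detection.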
Upcoming publications
MCP Server Attack Surface Analysis
Comprehensive analysis of Model Context Protocol server vulnerabilities, including tool-call injection, permission escalation, and cross-server data exfiltration.
AI Agent Exploitation Taxonomy
A systematic classification of attack techniques targeting autonomous AI agents, including multi-turn manipulation, sandbox escapes, and inter-agent trust exploitation.
Prompt Injection Evasion Benchmark
Benchmarking prompt injection detection systems against adversarial evasion techniques, including encoding, obfuscation, and multi-language attacks.
LLM Supply Chain Risk Report
Analysis of dependency risks in the AI/ML ecosystem, covering model registries, framework vulnerabilities, and training pipeline integrity.
Stay ahead of AI threats
Our research feeds directly into RAXE platform detections. Deploy the platform to turn every finding into automated protection.