RAXE LABS
Research Radar
Translating academic AI security research into practitioner-focused intelligence. Every paper verified, every claim sourced.
#5
Apr 13, 2026
4 papers
This Week's Signal
- Your LLM API router may be stealing your credentials and rewriting your tool calls.
- Skill documentation is the new attack surface, and it bypasses alignment where explicit instructions fail.
- Even benign, unmodified skills are exploitable through adversarial prompting.
#4
Apr 5, 2026
4 papers
This Week's Signal
- Your agents are contaminating themselves, and no attacker is required.
- MCP server detection is now possible, but the attack surface is worse than expected.
- The gap between "safe model" and "safe agent" is quantified: 40-75% of attacks succeed.
#3
Mar 29, 2026
4 papers
This Week's Signal
- AI agents are vulnerable before the attacker even tries.
- MCP client security is a lottery, and most developers are losing.
- Lightweight LLM judges beat purpose-built guardrails, but ensembling makes them worse.
#2
Mar 22, 2026
7 papers
This Week's Signal
- The agent skill supply chain is broken, and automated scanners cannot tell you how.
- Single-source telemetry has structural limits that no detection tuning can overcome.
- Mechanistic understanding of AI safety failures is catching up to the attacks.
#1
Mar 17, 2026
3 papers
This Week's Signal
- Compound AI systems may inherit the full CVE attack surface.
- Autonomous agent frameworks need execution-layer security, not just prompt filters.
- LLMs automate adversarial attacks against ML classifiers.
From research to runtime protection
Every Research Radar finding informs RAXE detection signatures and platform defences. Deploy the platform that enforces what we discover.
Stay Current
Subscribe to RAXE Labs research digests. New radar issues delivered to your inbox.