RAXE DEFEND

Secure Any
AI Agent Framework

Drop-in protection for LangChain, CrewAI, AutoGen, and more.
2-line integration. <10ms overhead. No code changes to your agent logic.

Agent A output → RAXE Scan → Agent B input → RAXE Scan → LLM

RAXE scans agent inputs, outputs, tool calls, and inter-agent messages—all in <10ms.

Framework Integrations

Native support for the most popular AI agent frameworks

🦜

LangChain

from langchain_openai import ChatOpenAI
from raxe.sdk.integrations.langchain import create_callback_handler

handler = create_callback_handler()
llm = ChatOpenAI(callbacks=[handler])  # every call through this LLM is scanned
  • Chains & Agents
  • Tool call scanning
  • Memory protection
View Documentation →
🚀

CrewAI

from raxe import Raxe
from raxe.sdk.integrations import RaxeCrewGuard

guard = RaxeCrewGuard(Raxe())
protected_crew = guard.protect_crew(crew)
  • Multi-agent crews
  • Task handoff scanning
  • Crew-level policies
View Documentation →
🔄

AutoGen

from raxe import Raxe
from raxe.sdk.integrations import RaxeConversationGuard

guard = RaxeConversationGuard(Raxe())
guard.register(assistant)
  • Conversational agents
  • Multi-agent chat
  • Function scanning
View Documentation →

LiteLLM

100+ LLMs
import litellm

from raxe.sdk.integrations import RaxeLiteLLMCallback

litellm.callbacks = [RaxeLiteLLMCallback()]
# All providers now protected
  • 100+ LLM providers
  • Single callback
  • Provider-agnostic
View Documentation →
🦙

LlamaIndex

from raxe import Raxe
from raxe.sdk.integrations import RaxeAgentCallback

callback = RaxeAgentCallback(Raxe())
agent = ReActAgent(callbacks=[callback])
  • ReAct agents
  • RAG retrieval
  • Query engines
View Documentation →

More frameworks:

SDK Patterns

Multiple ways to integrate—choose what fits your architecture

🔍

Direct Scan

Full Control
from raxe import Raxe

raxe = Raxe()
result = raxe.scan(user_input)

if result.has_threats:
    block(result.severity)  # block() is your own handler
  • Full control over flow
  • Custom threat handling
  • Access to detections
View Documentation →

Decorator

Zero Code
from raxe import Raxe

raxe = Raxe()

@raxe.protect  # scans user_input before the function body runs
def process(user_input: str):
    return llm.generate(user_input)
  • Auto-scan inputs
  • Configurable blocking
  • No code changes
View Documentation →
🤖

OpenAI Wrapper

Drop-in
from raxe import RaxeOpenAI

client = RaxeOpenAI()  # That's it!
response = client.chat.completions.create(...)
  • 1-line migration
  • Chat & Assistants API
  • Blocks before API call
View Documentation →

Same API Everywhere

Consistent scanning interface across all frameworks. Learn once, apply anywhere.
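Because every integration surfaces the same result shape, threat policy can live in one place. A sketch, assuming the `has_threats` / `severity` fields from the Direct Scan example; the severity names and actions here are illustrative, not the SDK's.

```python
from dataclasses import dataclass


@dataclass
class ScanResult:
    # Mirrors the fields used in the Direct Scan example;
    # the severity values below are illustrative only.
    has_threats: bool
    severity: str = "none"


# One policy table, shared by every framework integration.
ACTIONS = {"low": "log", "medium": "redact", "high": "block"}


def decide(result: ScanResult) -> str:
    """Map a scan result to an action, failing closed on unknown severities."""
    if not result.has_threats:
        return "allow"
    return ACTIONS.get(result.severity, "block")
```

The same `decide()` can sit behind a LangChain callback, a CrewAI guard, or a plain decorator, since all of them hand back the same result object.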

<10ms Overhead

On-device scanning adds minimal latency. Your agents stay fast.

No Code Changes

Wrap your existing agents. No modifications to your agent logic required.

Works With Any LLM

OpenAI, Anthropic, Gemini, local models—RAXE protects them all.

Get Started in 60 Seconds

pip install raxe