RAXE DEFEND
Secure Any AI Agent Framework
Drop-in protection for LangChain, CrewAI, AutoGen, and more.
2-line integration. <10ms overhead. No code changes to your agent logic.
Agent A Output → RAXE Scan → Agent B Input → RAXE Scan → LLM
RAXE scans agent inputs, outputs, tool calls, and inter-agent messages—all in <10ms.
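The scan points in the pipeline above can be sketched with a self-contained toy: the `ScanResult` fields (`has_threats`, `severity`) mirror the Direct Scan pattern documented on this page, but the `scan` matching logic and the `handoff` helper are invented for illustration and are not RAXE's actual detection.

```python
from dataclasses import dataclass

# Simplified stand-in for a scan result; field names follow the
# Direct Scan pattern (result.has_threats, result.severity).
@dataclass
class ScanResult:
    has_threats: bool
    severity: str = "none"

# Toy phrase list for illustration only.
SUSPICIOUS = ("ignore previous instructions", "reveal your system prompt")

def scan(message: str) -> ScanResult:
    """Toy scanner: flags known prompt-injection phrases."""
    for phrase in SUSPICIOUS:
        if phrase in message.lower():
            return ScanResult(has_threats=True, severity="high")
    return ScanResult(has_threats=False)

def handoff(message: str, next_agent) -> str:
    """Scan an inter-agent message before delivering it."""
    result = scan(message)
    if result.has_threats:
        raise ValueError(f"blocked: {result.severity}-severity threat")
    return next_agent(message)

def echo_agent(m: str) -> str:
    return f"processed: {m}"

print(handoff("summarize Q3 revenue", echo_agent))  # clean message passes
```

The same gate sits on agent inputs, outputs, and tool calls; an injected message raises instead of reaching the next agent.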
Framework Integrations
Native support for the most popular AI agent frameworks
LangChain
from langchain_openai import ChatOpenAI
from raxe.sdk.integrations.langchain import create_callback_handler

handler = create_callback_handler()
llm = ChatOpenAI(callbacks=[handler])
- Chains & Agents
- Tool call scanning
- Memory protection
CrewAI
from raxe import Raxe
from raxe.sdk.integrations import RaxeCrewGuard

guard = RaxeCrewGuard(Raxe())
protected_crew = guard.protect_crew(crew)
- Multi-agent crews
- Task handoff scanning
- Crew-level policies
AutoGen
from raxe import Raxe
from raxe.sdk.integrations import RaxeConversationGuard

guard = RaxeConversationGuard(Raxe())
guard.register(assistant)
- Conversational agents
- Multi-agent chat
- Function scanning
LiteLLM
100+ LLMs
import litellm
from raxe.sdk.integrations import RaxeLiteLLMCallback

litellm.callbacks = [RaxeLiteLLMCallback()]
# All providers now protected
- 100+ LLM providers
- Single callback
- Provider-agnostic
LlamaIndex
from raxe import Raxe
from raxe.sdk.integrations import RaxeAgentCallback

callback = RaxeAgentCallback(Raxe())
agent = ReActAgent(callbacks=[callback])
- ReAct agents
- RAG retrieval
- Query engines
SDK Patterns
Multiple ways to integrate—choose what fits your architecture
Direct Scan
Full Control
from raxe import Raxe

raxe = Raxe()
result = raxe.scan(user_input)
if result.has_threats:
    block(result.severity)
- Full control over flow
- Custom threat handling
- Access to detections
Decorator
Zero Code
from raxe import Raxe

raxe = Raxe()

@raxe.protect
def process(user_input: str):
    return llm.generate(user_input)
- Auto-scan inputs
- Configurable blocking
- No code changes
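Conceptually, a protect-style decorator is the standard scan-before-call pattern: inspect the input, raise if it is flagged, otherwise invoke the wrapped function unchanged. A minimal self-contained sketch of that pattern (the `protect` factory and `naive_scan` check here are illustrative, not RAXE's implementation):

```python
import functools

def protect(scan_fn):
    """Decorator factory: scan the first argument before the call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user_input: str, *args, **kwargs):
            if scan_fn(user_input):
                raise PermissionError("input blocked by scan")
            return fn(user_input, *args, **kwargs)
        return wrapper
    return decorator

def naive_scan(text: str) -> bool:
    # Toy check standing in for real threat detection.
    return "ignore previous instructions" in text.lower()

@protect(naive_scan)
def process(user_input: str) -> str:
    return f"LLM response to: {user_input}"

print(process("hello"))  # clean input passes straight through
```

Because the check lives in the wrapper, the decorated function body needs no changes, which is what makes the pattern "zero code."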
OpenAI Wrapper
Drop-in
from raxe import RaxeOpenAI

client = RaxeOpenAI()  # That's it!
response = client.chat.completions.create(...)
- 1-line migration
- Chat & Assistants API
- Blocks before API call
Same API Everywhere
Consistent scanning interface across all frameworks. Learn once, apply anywhere.
<10ms Overhead
On-device scanning adds minimal latency. Your agents stay fast.
No Code Changes
Wrap your existing agents. No modifications to your agent logic required.
Works With Any LLM
OpenAI, Anthropic, Gemini, local models—RAXE protects them all.