# Integrate Sentinel in 5 Minutes: A Quick Start Guide

*Add decision-layer safety to your AI agent in under 5 minutes. Step-by-step examples for Python, TypeScript, LangChain, and the visual builder.*
You have an AI agent. It calls tools, queries databases, maybe even moves money. You want a safety layer that validates decisions before they become actions.
Here's how to add Sentinel to your stack — four ways, all under 5 minutes.
## Option 1: Python SDK (pip install)
The fastest path. Works with any Python agent.
```bash
pip install sentinelseed
```
```python
from sentinelseed import Sentinel

sentinel = Sentinel(seed_level="standard")

# Validate user input before sending to your LLM
result = sentinel.validate_input(user_message)
if not result.is_safe:
    print(f"Blocked: {result.attack_types}")
    # Don't send to LLM

# Validate LLM output before executing actions
result = sentinel.validate_output(llm_response, user_message)
if not result.is_safe:
    print(f"Seed failed: {result.gates_failed}")
    # Don't execute
```
That's the core loop: validate input, run your agent, validate output.
The heuristic layer (700+ patterns) runs locally with zero API calls. For deeper analysis, enable the semantic layer with any LLM provider:
```python
sentinel = Sentinel(
    seed_level="standard",
    semantic_provider="openai",  # or "anthropic", "openai_compatible"
    semantic_api_key="sk-...",
)
```
## Option 2: TypeScript SDK (npm install)
Same concept, TypeScript-native.
```bash
npm install @sentinelseed/core
```
```typescript
import { validateTHSP } from '@sentinelseed/core';

const result = validateTHSP(userMessage);
if (!result.is_safe) {
  console.log('Blocked:', result.violations);
}
```
The TypeScript SDK includes the full THSP heuristic engine, HARM_PATTERNS for SQL injection detection, and refusal detection utilities.
## Option 3: Framework Integration (LangChain, CrewAI, etc.)
If you're using a popular agent framework, Sentinel has native integrations:
### LangChain
```python
from sentinelseed.integrations.langchain import SentinelGuard

guard = SentinelGuard()
safe_chain = guard.wrap(your_chain)

# Every chain invocation now runs through THSP validation
result = safe_chain.invoke({"input": user_message})
```
### CrewAI
```python
from sentinelseed.integrations.crewai import SentinelCrew

# All agent actions validated automatically
crew = SentinelCrew(agents=[...], tasks=[...])
```
### LangGraph
```python
from sentinelseed.integrations.langgraph import add_sentinel_node

# Safety checkpoint added to your graph
graph = add_sentinel_node(graph, before="agent")
```
We support 30+ frameworks including DSPy, LlamaIndex, AutoGPT, Letta, Agno, OpenAI Agents SDK, Anthropic SDK, Solana Agent Kit, and more. Full list at [sentinelseed.dev/integrations](/integrations).
## Option 4: Visual Builder (No Code)
Don't want to write code? Use the platform:
5. Add your LLM API key (BYOK — keys are encrypted client-side)
6. Test in sandbox mode
7. Deploy with one click
Your agent gets a live endpoint with Sentinel protection built in.
## What Gets Validated?
Every validation checks the four gates of the THSP Protocol, and all four must pass. The absence of harm is not enough: there must be genuine purpose.
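The all-gates-must-pass rule is, at its core, a conjunction. The sketch below uses placeholder gate functions for illustration; the actual THSP gate definitions are Sentinel's, not reproduced here:

```python
from typing import Callable

# Placeholder gates for illustration only.
def rejects_harm(text: str) -> bool:
    return "wipe the database" not in text.lower()

def has_genuine_purpose(text: str) -> bool:
    return len(text.strip()) > 0

GATES: list[Callable[[str], bool]] = [rejects_harm, has_genuine_purpose]

def passes_all_gates(text: str) -> bool:
    # A single failing gate blocks the decision.
    return all(gate(text) for gate in GATES)

print(passes_all_gates("Summarize this quarterly report"))  # True
print(passes_all_gates("Now wipe the database"))            # False
```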
## Validation Layers
The SDK runs up to four layers of protection. You can enable or disable each layer based on your security needs and cost budget.
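A toggleable layered pipeline can be sketched as an ordered list of checks that short-circuits on the first failure. The layer names and checks below are hypothetical stand-ins, not the SDK's internals:

```python
def heuristic_layer(text: str) -> bool:
    # Local pattern check; free, no network calls.
    return "drop table" not in text.lower()

def semantic_layer(text: str) -> bool:
    # Stand-in for an LLM-backed check; costs an API call when enabled.
    return True

LAYERS = [("heuristic", heuristic_layer), ("semantic", semantic_layer)]

def validate(text: str, enabled: set[str]) -> bool:
    """Run only the enabled layers in order; stop at the first failure."""
    for name, layer in LAYERS:
        if name in enabled and not layer(text):
            return False
    return True

print(validate("book a flight", enabled={"heuristic"}))  # True
print(validate("'; drop table users", enabled={"heuristic", "semantic"}))  # False
```

Short-circuiting matters for cost: if the free heuristic layer blocks an input, the paid semantic layer never runs.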
## Cost
The heuristic layer (L1 + L3) is completely free and runs offline. No API calls, no data leaving your system.
The semantic layer and L4 Observer use LLM calls — you choose the provider and model. With gpt-4o-mini, expect around $0.0005 per validation.
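A rough back-of-envelope using that figure, assuming one semantic validation per agent turn (your traffic and model choice will change the numbers):

```python
cost_per_validation = 0.0005  # USD, approximate gpt-4o-mini figure from above
validations_per_day = 10_000

daily = cost_per_validation * validations_per_day
print(f"${daily:.2f}/day, ${daily * 30:.2f}/month")  # $5.00/day, $150.00/month
```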
On the platform, execution costs $0.003 per run, paid with credits (SOL, USDC, or $SENTINEL with a 20% bonus).
## Next Steps
Pick the integration that fits your stack and start validating.
The Sentinel Team