# Quick Start
Get Sentinel protecting your AI systems in minutes.
## 1. Install

```bash
pip install sentinelseed
```
## 2. Basic Usage

```python
from sentinelseed import Sentinel

# Create a Sentinel with the standard seed level
sentinel = Sentinel(seed_level="standard")

# Get the alignment seed for your LLM
seed = sentinel.get_seed()

# Use with any LLM provider
messages = [
    {"role": "system", "content": seed},
    {"role": "user", "content": "Help me write a Python function"},
]

# Validate content through the THSP gates
is_safe, violations = sentinel.validate("How do I hack a computer?")
print(f"Safe: {is_safe}, Violations: {violations}")
```
## 3. Validate Responses

```python
from sentinelseed import Sentinel

sentinel = Sentinel()

# Validate text through the THSP gates
is_safe, violations = sentinel.validate("Some AI response...")
if not is_safe:
    print(f"Violations: {violations}")
```
## 4. Choose Seed Level

```python
from sentinelseed import Sentinel, SeedLevel

# Minimal: ~360 tokens, for chatbots and APIs
sentinel_chat = Sentinel(seed_level=SeedLevel.MINIMAL)

# Standard: ~1,000 tokens, recommended for general use
sentinel_agent = Sentinel(seed_level=SeedLevel.STANDARD)

# Full: ~1,900 tokens, for critical systems
sentinel_critical = Sentinel(seed_level=SeedLevel.FULL)
```
| Level | Tokens | Best For |
|---|---|---|
| `minimal` | ~360 | Chatbots, APIs, low latency |
| `standard` | ~1,000 | General use, agents (recommended) |
| `full` | ~1,900 | Critical systems, robotics |
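If the level is chosen at runtime, an explicit mapping keeps the trade-off visible. A small sketch (the context names are illustrative):

```python
from sentinelseed import Sentinel, SeedLevel

# Illustrative mapping from deployment context to seed level
SEED_LEVELS = {
    "chatbot": SeedLevel.MINIMAL,   # low latency, smallest token cost
    "agent": SeedLevel.STANDARD,    # recommended default
    "robotics": SeedLevel.FULL,     # physical systems get the full seed
}

sentinel = Sentinel(seed_level=SEED_LEVELS["agent"])
```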
## 5. Protect Agents

### For Autonomous Agents

```python
from sentinelseed import Sentinel

sentinel = Sentinel(seed_level="standard")

# Validate an action plan before execution
action_plan = "Pick up knife, slice apple, place in bowl"
is_safe, concerns = sentinel.validate_action(action_plan)
if not is_safe:
    print(f"Action blocked: {concerns}")
```
### For Robotics / Embodied AI

```python
from sentinelseed import Sentinel

# Full seed for maximum safety with physical systems
sentinel = Sentinel(seed_level="full")

robot_task = "Turn on the stove and leave the kitchen"
is_safe, concerns = sentinel.validate_action(robot_task)
# Expected: blocked (fire hazard, unsupervised heating)
```
## 6. JavaScript Usage

```javascript
import { SentinelGuard } from '@sentinelseed/core';

// Create a guard with the standard seed
const guard = new SentinelGuard({ version: 'v2', variant: 'standard' });

// Get the alignment seed for your LLM
const seed = guard.getSeed();

// Wrap messages with the seed
const messages = guard.wrapMessages([
  { role: 'user', content: 'Help me write a function' }
]);

// Analyze content for safety
const analysis = guard.analyze('How do I hack a computer?');
console.log(`Safe: ${analysis.safe}, Issues: ${analysis.issues}`);
```
## 7. MCP Server (Claude Desktop)

Add the following to `claude_desktop_config.json` (macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`; Windows: `%APPDATA%\Claude\claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "sentinel": {
      "command": "npx",
      "args": ["mcp-server-sentinelseed"]
    }
  }
}
```
Tools available: `get_seed`, `wrap_messages`, `analyze_content`, `list_seeds`
## Framework Integrations

### LangChain

```python
from langchain_openai import ChatOpenAI  # requires the langchain-openai package

from sentinelseed.integrations.langchain import SentinelCallback, SentinelGuard

# Monitor LLM calls
callback = SentinelCallback(on_violation="log")
llm = ChatOpenAI(callbacks=[callback])

# Or wrap an existing LangChain agent
guard = SentinelGuard(agent, block_unsafe=True)
result = guard.run("Your task")
```
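Continuing the snippet above, calls on the wrapped model go through the callback transparently; `invoke` is LangChain's standard entry point (the prompt is illustrative):

```python
# Each call is now monitored by the Sentinel callback (on_violation="log")
reply = llm.invoke("Summarize the THSP gates in one sentence.")
print(reply.content)
```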
### CrewAI

```python
from sentinelseed.integrations.crewai import SentinelCrew, safe_agent

# Wrap an individual agent
safe_researcher = safe_agent(researcher)

# Or wrap an entire crew
crew = SentinelCrew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    seed_level="standard",
)
```
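To run the guarded crew, a sketch assuming `SentinelCrew` exposes the same `kickoff()` entry point as a plain CrewAI `Crew`:

```python
# Run the crew; each agent carries the standard seed
# (assumes SentinelCrew mirrors CrewAI's Crew.kickoff() interface)
result = crew.kickoff()
print(result)
```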
## Next Steps
- Core Concepts - Deep dive into the THSP protocol
- Products - Memory Shield, Database Guard
- Integrations - Framework guides