# LangChain Integration
Safety validation for LangChain applications via callbacks, guards, and chain wrappers.
## Installation

```bash
pip install "sentinelseed[langchain]"
```
## Components

| Component | Description |
|---|---|
| `SentinelCallback` | Callback handler for LLM monitoring |
| `SentinelGuard` | Wrapper for agents with validation |
| `SentinelChain` | Chain/LLM wrapper with safety checks |
| `inject_seed` | Add the safety seed to any message list |
| `wrap_llm` | Wrap an LLM with safety features |
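`inject_seed` and `wrap_llm` are lighter-weight alternatives to the full wrappers. A minimal sketch; their exact signatures are not documented in this section, so the message-list argument and the `seed_level` keyword below are assumptions modeled on the other components:

```python
from langchain_openai import ChatOpenAI

from sentinelseed.integrations.langchain import inject_seed, wrap_llm

# Assumed usage: inject_seed prepends the safety seed to a message list.
messages = inject_seed(
    [{"role": "user", "content": "Your prompt"}],
    seed_level="standard",  # assumed keyword, mirroring the wrappers
)

# Assumed usage: wrap_llm returns the LLM with safety features attached.
safe_llm = wrap_llm(ChatOpenAI(), seed_level="standard")
response = safe_llm.invoke(messages)
```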
## Quick Start

### Option 1: Callback Handler
```python
from langchain_openai import ChatOpenAI

from sentinelseed.integrations.langchain import SentinelCallback

callback = SentinelCallback(
    seed_level="standard",
    on_violation="log",  # log, raise, block, flag
    validate_input=True,
    validate_output=True,
)

llm = ChatOpenAI(callbacks=[callback])
response = llm.invoke("Your prompt")

print(callback.get_stats())
print(callback.get_violations())
```
### Option 2: Agent Wrapper
```python
from langchain.agents import create_react_agent

from sentinelseed.integrations.langchain import SentinelGuard

# llm, tools, and prompt are defined as in any ReAct agent setup
agent = create_react_agent(llm, tools, prompt)

guard = SentinelGuard(
    agent=agent,
    seed_level="standard",
    block_unsafe=True,
)

result = guard.invoke({"input": "Your task"})
```
### Option 3: Chain Wrapper
```python
from langchain_openai import ChatOpenAI

from sentinelseed.integrations.langchain import SentinelChain

chain = SentinelChain(
    llm=ChatOpenAI(),
    seed_level="standard",
    inject_seed=True,
    validate_input=True,
    validate_output=True,
)

result = chain.invoke("Help me with something")
```
## Configuration

### SentinelCallback
```python
SentinelCallback(
    seed_level="standard",    # minimal, standard, full
    on_violation="log",       # log, raise, block, flag
    validate_input=True,
    validate_output=True,
    max_violations=1000,
    sanitize_logs=False,
    max_text_size=50 * 1024,  # 50 KB
    validation_timeout=30.0,
    fail_closed=False,
)
```
> **Important:** Callbacks MONITOR but do NOT BLOCK execution. Use `SentinelGuard` or `SentinelChain` for blocking.
### Violation Handling

| Mode | Behavior |
|---|---|
| `log` | Log a warning, continue |
| `raise` | Raise `SentinelViolationError` |
| `block` | Log as blocked for monitoring |
| `flag` | Silent recording only |
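In `raise` mode a detected violation aborts the call, so invocations should be wrapped in `try`/`except`. A short sketch; the import path for `SentinelViolationError` is an assumption, adjust it to wherever the package actually exports the exception:

```python
from langchain_openai import ChatOpenAI

from sentinelseed.integrations.langchain import SentinelCallback
# Assumed import path for the exception; adjust if the package
# exports it elsewhere (e.g. from sentinelseed directly).
from sentinelseed.integrations.langchain import SentinelViolationError

callback = SentinelCallback(seed_level="standard", on_violation="raise")
llm = ChatOpenAI(callbacks=[callback])

try:
    llm.invoke("Your prompt")
except SentinelViolationError as err:
    print(f"Blocked: {err}")
    print(callback.get_violations())  # documented accessor for details
```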
## Response Format

### `SentinelGuard.invoke()`
```python
# Safe response
{"output": "...", "sentinel_blocked": False}

# Blocked response
{
    "output": "Request blocked by Sentinel: [...]",
    "sentinel_blocked": True,
    "sentinel_reason": [...],
}
```
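Since `sentinel_blocked` is present in both shapes, calling code can branch on it directly. A small usage sketch, reusing the `guard` from Option 2:

```python
result = guard.invoke({"input": "Your task"})

if result["sentinel_blocked"]:
    # sentinel_reason holds the validation details shown above
    print("Refused:", result["sentinel_reason"])
else:
    print(result["output"])
```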
## Advanced Features

- **Streaming Support**: Incremental validation during streaming
- **Async Operations**: Full async support via `ainvoke`, `astream`, and `abatch` (see the sketch below)
- **Thread Safety**: Thread-safe data structures for all components
- **Fail-Closed Mode**: `fail_closed=True` for stricter security
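A short async sketch using `SentinelChain.ainvoke`, assuming it mirrors the synchronous `invoke` shown in Quick Start:

```python
import asyncio

from langchain_openai import ChatOpenAI

from sentinelseed.integrations.langchain import SentinelChain

chain = SentinelChain(
    llm=ChatOpenAI(),
    seed_level="standard",
)

async def main() -> None:
    # ainvoke is assumed to mirror invoke; astream and abatch
    # follow the same naming pattern per the list above.
    result = await chain.ainvoke("Help me with something")
    print(result)

asyncio.run(main())
```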
## Links