A deterministic security sidecar for LLM agent frameworks: drop-in, open-source guardrails that stop prompt injection, tool poisoning, and runaway capability abuse before they happen. Auditable, self-hosted, and compatible with any agent framework.