LLM guardrails & prompt injection detection for Python. Auto-instruments LangChain, CrewAI, OpenAI, LiteLLM + 8 more frameworks. PII masking, toxicity detection, policy CI/CD. One line, zero code changes.
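The PII masking mentioned above can be sketched in plain Python. This is not Aegis's actual API; the function name and regex patterns below are illustrative assumptions showing one common approach (regex span replacement with typed placeholders):

```python
import re

# Hypothetical sketch of regex-based PII masking (not Aegis's real API).
# Patterns are illustrative, not exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach me at jane@example.com or 555-123-4567."))
# → Reach me at [EMAIL] or [PHONE].
```

A production masker would typically layer NER models on top of regexes to catch names and addresses that patterns alone miss.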
Aegis governance integration for LangChain — add policy enforcement, risk assessment, and audit logging to any LangChain tool.
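Aegis's real integration surface is not documented here, so the following is a minimal self-contained sketch of what tool-level governance could look like: a decorator that enforces a policy, scores risk, and appends an audit record around any tool-like callable. Every name in it (`governed`, `AuditRecord`, `AUDIT_LOG`, the policy/risk callables) is a hypothetical stand-in, not the library's API:

```python
import time
from dataclasses import dataclass, field
from functools import wraps
from typing import Any, Callable

# Hypothetical sketch of tool governance (not Aegis's real API).

@dataclass
class AuditRecord:
    tool: str
    allowed: bool
    risk: float
    timestamp: float = field(default_factory=time.time)

AUDIT_LOG: list[AuditRecord] = []

def governed(policy: Callable[[str], bool], risk_score: Callable[[str], float]):
    """Wrap a tool-like callable with policy enforcement and audit logging."""
    def decorator(tool_fn: Callable[[str], Any]):
        @wraps(tool_fn)
        def wrapper(tool_input: str):
            allowed = policy(tool_input)
            # Audit every invocation, allowed or not, before executing.
            AUDIT_LOG.append(
                AuditRecord(tool_fn.__name__, allowed, risk_score(tool_input))
            )
            if not allowed:
                raise PermissionError(f"Policy blocked {tool_fn.__name__!r}")
            return tool_fn(tool_input)
        return wrapper
    return decorator

# Example: block inputs that look like destructive shell commands.
@governed(policy=lambda s: "rm -rf" not in s,
          risk_score=lambda s: 0.9 if "rm -rf" in s else 0.1)
def run_shell(command: str) -> str:
    return f"ran: {command}"

print(run_shell("ls -la"))  # allowed; an audit record is appended
try:
    run_shell("rm -rf /")   # blocked; audited, then raises
except PermissionError as e:
    print(e)
```

Because LangChain tools are ultimately callables over string (or structured) input, a wrapper of this shape can be applied without changing the tool's own code, which is consistent with the "zero code changes" claim above.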