Open-source prompt injection detector — 5 layers, 91.7% F1, ~27ms, offline, Apache 2.0
Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.
Offensive AI red-team tool: multi-turn 'innocent question' sequences for system prompt reconstruction.
Universal Prompt Security Standard - Python Implementation
CloakPrompt is a CLI tool that redacts secrets (passwords, API keys, credentials, etc.) before sending data to AI models.