Open-source prompt injection attack console - Test AI systems for prompt injection vulnerabilities
Buzur is an open-source 25-phase scanner that protects AI agents and LLM applications from indirect prompt injection attacks (OWASP LLM Top 10 #1).
Basilisk — Open-source AI red teaming framework with genetic prompt evolution. Automated LLM security testing for GPT-4, Claude, Grok, Gemini. OWASP LLM Top 10 coverage. 32 attack modules.
Vocabulary-Based Adversarial Fuzzing (VB-AF) framework for Large Language Models (LLMs)