PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics.
uptrain-ai
uptrain

UpTrain is an open-source unified platform to evaluate and improve Generative AI applications. We provide grades for 20+ preconfigured checks (covering language, code, embedding use-cases), perform root cause analysis on failure cases and give insights on how to resolve them.

3K downloads · 2K stars · 203 forks
mattijsmoens
sovereign-shield

AI security framework: deterministic, immutable input filtering; adaptive rule learning; optional LLM veto verification. Zero dependencies. Works without an LLM. Patent pending.

3K downloads · 19 stars · 7 forks
maheshmakvana
llm-injection-guard

Drop-in prompt injection defense for LLM apps and AI agents — detect, sanitize, block, and audit injection attacks in real time. Includes multi-turn session scanning, allow-lists, rate-abuse detection, multi-layer scanner, FastAPI and Flask middleware.

2K downloads · 0 stars · 0 forks
killertcell428
pyaigis

The open-source firewall for AI agents. Block prompt injections, jailbreaks, and data leaks before they reach your LLM. Multi-layer defense, agent-era security (MCP/Capability), US/CN/JP/EU compliance. Zero-dependency core.

2K downloads · 3 stars · 0 forks
stef41
injectionguard

Prompt injection detection for LLM applications and MCP servers. Detects jailbreaks, instruction overrides, and encoded attacks. A defense against OWASP Top 10 for LLM Applications #1 (prompt injection).

1K downloads · 1 star · 0 forks
mattijsmoens
intentshield

Pre-execution intent verification for AI agents

575 downloads · 19 stars · 5 forks
Priyrajsinh
p1-hybrid-jailbreak-detector

Defense-in-depth input safety for LLMs — perplexity gate + FAISS + ModernBERT + LoRA + Llama Guard 3, behind a deterministic policy gate. 99.88% accuracy, 99.47% jailbreak recall, calibrated confidence, ONNX-optimized. Live demo on HF Spaces.

515 downloads · 0 stars · 0 forks
mattijsmoens
sovereign-shield-adaptive

AI security framework: deterministic, immutable input filtering; adaptive rule learning; optional LLM veto verification. Zero dependencies. Works without an LLM. Patent pending.

364 downloads · 19 stars · 7 forks
Adxzer
pydefend

AI security guardrails for LLM applications — scan inputs and check outputs with Claude, OpenAI, Gemini, Azure, or Ollama.

341 downloads · 0 stars · 0 forks
lockllm
lockllm

Official Python SDK for LockLLM

331 downloads · 0 stars · 0 forks
DmitrL-dev
sentinel-llm-security

AI Security Platform: Defense (61 Rust engines + Micro-Model Swarm) + Offense (39K+ payloads)

314 downloads · 104 stars · 16 forks
vpdeva
blackwall-llm-shield-python

Security middleware for Python LLM apps and services. Blocks prompt injection, masks PII, inspects outputs, and gates agent tools.

306 downloads · 1 star · 0 forks
DmitrL-dev
rlm-toolkit

Recursive Language Models Toolkit for processing unlimited context

299 downloads · 104 stars · 16 forks
akshaymagapu
aisafeguard

Open-source LLM safety guardrails: prompt injection protection, PII redaction, toxicity filtering, and OpenAI-compatible AI proxy

291 downloads · 0 stars · 0 forks
SoubhikGhosh
soweak

LLM security and prompt injection detection library: an OWASP Top 10 for LLM Applications 2025 vulnerability scanner for AI/ML pipelines, with LangChain, OpenAI, and Google ADK integrations.

287 downloads · 7 stars · 0 forks
dronefreak
promptscreen

Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use Python package with multiple detection methods, CLI tool, and FastAPI integration.

118 downloads · 9 stars · 4 forks
uptrain-ai
vellum-uptrain-fork

Vellum UpTrain Fork

68 downloads · 2K stars · 203 forks
    • Data from PyPI, GitHub, ClickHouse, and BigQuery