PyPI Stats

Search Packages

Find Python packages by name, description, GitHub topic, or filter by metrics
vstorm-co
pydantic-ai-middleware

Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII filtering, secret redaction, tool permissions, and async guardrails. Built on pydantic-ai's native capabilities API.

139K 59 8
vstorm-co
pydantic-ai-shields

Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII filtering, secret redaction, tool permissions, and async guardrails. Built on pydantic-ai's native capabilities API.

93K 59 8
Lips7
matcher-py

A high-performance matcher designed to solve LOGICAL and TEXT VARIATIONS problems in word matching, implemented in Rust.

92K 18 1
MaxMLang
pytector

Easy to use LLM Prompt Injection Detection and Prompt Input Sanitization / Detector Python Package with support for local models, API-based safeguards, and LangChain guardrails.

6K 40 23
corefrg
lexicont

Policy-driven agent for real-time text moderation

3K 1 1
GLINCKER
glin-profanity

Open-source ML-powered profanity filter with TensorFlow.js toxicity detection, leetspeak & Unicode obfuscation resistance. 21M+ ops/sec, 23 languages, React hooks, LRU caching. npm & PyPI.

3K 51 8
anthalehq
anthale

Anthale's official Python SDK

2K 1 0
Data-ScienceTech
forcefield

Zero-dependency AI security library -- prompt-injection detection, PII redaction, content safety, rate limiting, abuse detection, tool governance, and security evals for LLMs in 3 lines of Python.

927 0 0
stef41
vibesafex

AI safety guardrails - content filtering, toxicity detection, and safety scoring.

645 2 0
Yegmina
toxic-detection

Intelligent AI Agent for Real-time Content Moderation 97.5% accuracy | Multi-stage ML pipeline | Production-ready Zero-tier filtering + Embeddings + Fine-tuned BERT + RAG

554 1 0
FlacSy
badwords-py

High-performance profanity filter

482 13 3
viddexa
moderators

One package to moderate them all

429 5 0
apparebit
shantay

Investigating the EU's DSA transparency database

299 3 1
Data-ScienceTech
llama-index-forcefield

ForceField Python SDK -- AI security in 3 lines of code. Prompt injection detection, PII redaction, security evals, tool governance. GitHub Action, pre-commit hook, Homebrew, VS Code extension.

241 0 0
MOB-sys
llm-medical-guard

Guardrails for LLM-generated medical and health content

225 0 0
Data-ScienceTech
langchain-forcefield

ForceField Python SDK -- AI security in 3 lines of code. Prompt injection detection, PII redaction, security evals, tool governance. GitHub Action, pre-commit hook, Homebrew, VS Code extension.

208 0 0
ymrohit
openscenesense

OpenSceneSense is a Python library that harnesses AI for advanced video analysis, offering customizable frame and audio insights for dynamic applications in media, education, and content moderation.

196 22 1
SafeNestSDK
safenest

Official Python SDK for SafeNest - AI-powered child safety API

190 0 0
RAILethicsHub
rail-score

DEPRECATED — use rail-score-sdk instead. This package redirects to rail-score-sdk.

181 2 1
Tuteliq
tuteliq

Official Python SDK for Tuteliq — AI-powered child safety API for detecting bullying, grooming, and unsafe content

169 0 0
isaac-rnd
sentraiq-core

A Python package for configurable content moderation.

108 0 0
safelyx
safelyx

API client for Safelyx.

90 0 0
chigwell
headline-parser

A new package that processes news headlines or short text snippets to generate structured summaries of current events. It uses an LLM to extract key entities, topics, and sentiment, ensuring the output…

87 1 0
moderyo
moderyo

Official Python SDK for Moderyo Content Moderation API

72 1 0
    • Data from PyPI, GitHub, ClickHouse, and BigQuery