PyPI Stats

Search Packages

Find Python packages by name, description, GitHub topic, or filter by metrics
• corefrg / lexicont: Policy-driven agent for real-time text moderation. (3K · 1 · 1)
• GLINCKER / glin-profanity: Open-source ML-powered profanity filter with TensorFlow.js toxicity detection and resistance to leetspeak and Unicode obfuscation. 21M+ ops/sec, 23 languages, React hooks, LRU caching. Available on npm and PyPI. (3K · 51 · 8)
• Yegmina / toxic-detection: Intelligent AI agent for real-time content moderation. 97.5% accuracy, multi-stage ML pipeline, production-ready. Zero-tier filtering, embeddings, fine-tuned BERT, and RAG. (554 · 1 · 0)
• akshaymagapu / aisafeguard: Open-source LLM safety guardrails: prompt-injection protection, PII redaction, toxicity filtering, and an OpenAI-compatible AI proxy. (291 · 0 · 0)
• antrixsh / trusteval-ai: Enterprise LLM evaluation and responsible-AI framework for benchmarking bias, hallucination, PII leakage, and toxicity across the healthcare, BFSI, retail, and legal industries. Supports OpenAI, Anthropic, Gemini, and HuggingFace. Python SDK + CLI + web dashboard. 191 tests. Compliance-ready reports. (252 · 7 · 5)
• DanielYakubov / rfwc: Keyword-based abuse/hate detection software. (90 · 0 · 1)
    • Data from PyPI, GitHub, ClickHouse, and BigQuery