PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics.
unitaryai / detoxify

Trained models and code to predict toxic comments on all three Jigsaw Toxic Comment Challenges. Built using ⚡ PyTorch Lightning and 🤗 Transformers. For API access, email contact@unitary.ai.

85K · 1K · 142
MilaNLProc / honest

A Python package to compute HONEST, a score measuring hurtful sentence completions in language models. Published at NAACL 2021.

216 · 21 · 4
imdiptanu / mad-fw

Multitask Aggression Detection (MAD)

189 · 0 · 0
DanielJDufour / hatebase

A Python version of Andrew Welter's Hatebase wrapper

144 · 10 · 5
Master-Project-Hate-Speech / stitched

No description available

107 · 0 · 0
    • Data from PyPI, GitHub, ClickHouse, and BigQuery