PyPI Stats

Bias Detection Python Packages

Python packages tagged with the GitHub topic bias-detection, sorted by relevance. Each entry lists monthly downloads, GitHub stars, and forks.
Trusted-AI/aif360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
41K monthly downloads, 3K stars, 914 forks

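Among the group-fairness metrics aif360 implements are statistical parity difference and disparate impact. A plain-Python sketch of what those two metrics measure, on toy data with hand-rolled helpers rather than the aif360 dataset/metric classes:

```python
# Illustration of two group-fairness metrics that aif360 implements:
# statistical parity difference and disparate impact.
# Toy data only; aif360 wraps these in dataset and metric classes.

def selection_rate(labels):
    """Fraction of favorable (positive) outcomes in a group."""
    return sum(labels) / len(labels)

def statistical_parity_difference(unpriv, priv):
    """P(y=1 | unprivileged) - P(y=1 | privileged); 0 means parity."""
    return selection_rate(unpriv) - selection_rate(priv)

def disparate_impact(unpriv, priv):
    """Ratio of selection rates; the 'four-fifths rule' flags values below 0.8."""
    return selection_rate(unpriv) / selection_rate(priv)

# Hypothetical hiring outcomes (1 = hired) split by a protected attribute.
unprivileged = [1, 1, 0, 0, 0, 0, 0, 0]   # selection rate 0.25
privileged   = [1, 1, 1, 1, 0, 0, 0, 0]   # selection rate 0.5

print(statistical_parity_difference(unprivileged, privileged))  # -0.25
print(disparate_impact(unprivileged, privileged))               # 0.5
```

A disparate impact of 0.5 here would fail the four-fifths rule, which is the kind of finding aif360's mitigation algorithms then try to correct.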
aria-ml/dataeval
Python library for analyzing data quality and its impact on model performance across classification and object-detection tasks.
4K monthly downloads, 17 stars, 6 forks

aria-ml/dataeval-plots
Python library for analyzing data quality and its impact on model performance across classification and object-detection tasks.
2K monthly downloads, 17 stars, 6 forks

lorentzenchr/model-diagnostics
Tools for diagnostics and assessment of (machine learning) models.
2K monthly downloads, 45 stars, 5 forks

cvs-health/langfair
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.
2K monthly downloads, 257 stars, 43 forks

Khanz9664/trustlens
Open-source Python library for evaluating ML model reliability beyond accuracy — with calibration, failure, and fairness diagnostics for informed deployment decisions.
1K monthly downloads, 10 stars, 12 forks

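A standard calibration diagnostic of the kind trustlens advertises is expected calibration error (ECE). A minimal sketch of the metric itself; the function name here is illustrative, not trustlens's actual API:

```python
# Expected calibration error (ECE): bucket predictions by confidence and
# compare each bucket's mean confidence with its observed accuracy.
# Illustrative implementation, not the trustlens API.

def expected_calibration_error(confidences, correct, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        # Weight each bucket's confidence/accuracy gap by its share of samples.
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# A perfectly calibrated toy model: 75%-confident predictions right 3 times in 4.
confs = [0.75, 0.75, 0.75, 0.75]
hits  = [1, 1, 1, 0]
print(expected_calibration_error(confs, hits))  # 0.0
```

An overconfident model (say, 90% confidence but only 50% accuracy) would score an ECE near 0.4, which is the sort of gap such a diagnostic is meant to surface before deployment.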
peremartra/optipfair
Structured pruning and bias visualization for Large Language Models. Tools for LLM optimization and fairness analysis.
1K monthly downloads, 38 stars, 9 forks

ankurpand3y/judicator
Who evaluates the evaluator? Judicator audits LLM-as-a-Judge systems for 7 documented bias types. Zero config. Works with any LLM.
1K monthly downloads, 5 stars, 1 fork

NahuelGiudizi/ai-safety-tester
LLM security testing framework with CVE-style severity scoring and multi-model benchmarking.
949 monthly downloads, 0 stars, 0 forks

aria-ml/daml
Python library for analyzing data quality and its impact on model performance across classification and object-detection tasks.
715 monthly downloads, 17 stars, 6 forks

dccuchile/wefe
The Word Embedding Fairness Evaluation Framework.
691 monthly downloads, 182 stars, 14 forks

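Fairness tests of the kind wefe implements (such as WEAT) score how strongly a target word associates, by cosine similarity, with one attribute word set versus another. A toy plain-Python sketch of that association score, using hand-made 2-d vectors rather than wefe's API or real embeddings:

```python
# WEAT-style association score of the kind wefe implements: does a target
# word sit closer (by cosine similarity) to attribute set A than to set B?
# Hand-made 2-d toy vectors; wefe runs this on real word embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    mean_a = sum(cosine(word, a) for a in attrs_a) / len(attrs_a)
    mean_b = sum(cosine(word, b) for b in attrs_b) / len(attrs_b)
    return mean_a - mean_b

# Toy "embeddings": the target vector aligns with attribute set A.
career = [1.0, 0.0]
set_a = [[1.0, 0.0], [0.9, 0.1]]   # e.g. one demographic's attribute terms
set_b = [[0.0, 1.0], [0.1, 0.9]]   # e.g. the other demographic's terms

score = association(career, set_a, set_b)
print(score > 0)  # a positive score: "career" leans toward set A
```

On real embeddings, a consistently positive score across many target words is the signal of encoded social bias that frameworks like wefe quantify and compare across embedding models.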
SolomonB14D3/rho-eval
Behavioral auditing toolkit for LLMs — audit any model across 8 dimensions (factual, toxicity, bias, sycophancy, reasoning, refusal, deception, over-refusal) using teacher-forced confidence probes.
551 monthly downloads, 4 stars, 0 forks

mishi93999/seatbelt
Responsible AI auditing for LLMs and SLMs — deception, fairness, sociotechnical risk, and regulatory compliance.
550 monthly downloads, 0 stars, 0 forks

VectorInstitute/fairsense-agentix
An agentic fairness and AI-risk analysis platform for detecting bias in text and images.
411 monthly downloads, 2 stars, 1 fork

SolomonB14D3/knowledge-fidelity
Compress LLMs while auditing whether they still know truth vs. myths. SVD compression and false-belief detection in one toolkit.
359 monthly downloads, 4 stars, 0 forks

antrixsh/trusteval-ai
Enterprise LLM evaluation and responsible-AI framework — benchmarks bias, hallucination, PII leakage, and toxicity across the healthcare, BFSI, retail, and legal industries. Supports OpenAI, Anthropic, Gemini, and HuggingFace; ships a Python SDK, CLI, and web dashboard with 191 tests and compliance-ready reports.
254 monthly downloads, 7 stars, 5 forks

VectorInstitute/unbias-plus
Python package that finds biased language in text, explains why, suggests neutral wording, and rewrites the whole text. Usable from the CLI, an API, or Python.
246 monthly downloads, 2 stars, 1 fork

TaimoorKhan10/ai-fairness-toolkit
AI Fairness and Explainability Toolkit (AFET) is an open-source project aimed at providing tools and frameworks to assess, visualize, and mitigate bias in machine learning models. It supports multiple ML frameworks and offers a comprehensive suite of metrics and visualization components to enhance model transparency and fairness.
225 monthly downloads, 0 stars, 2 forks

ethical-spectacle/the-fairly-project
Bias detection toolkit: Chrome extension, Python package, and SOTA research-paper docs.
209 monthly downloads, 4 stars, 0 forks

IQTLabs/aiscan
Scan your AI/ML models for problems before you put them into production.
207 monthly downloads, 11 stars, 7 forks

jorgeMFS/phenoqc
PhenoQC is a lightweight, efficient, and user-friendly toolkit designed to perform comprehensive quality control (QC) on phenotypic datasets. It ensures that data adheres to standardized formats, maintains consistency, and is harmonized with recognized ontologies.
184 monthly downloads, 0 stars, 0 forks

whis-19/whis-ethical-ai
Ethical AI Validator detects bias and assesses fairness in AI models with statistical parity analysis, real-time monitoring, and automated GDPR/AI Act compliance reporting. Python 2.7+ compatible.
183 monthly downloads, 0 stars, 0 forks

Trusted-AI/aif360-fork2
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
139 monthly downloads, 3K stars, 914 forks

bws82/biasclear
Structural bias detection engine built on Persistent Influence Theory (PIT). Detect framing, anchoring, false consensus, and 30+ rhetorical distortion patterns.
108 monthly downloads, 1 star, 0 forks

Data from PyPI, GitHub, ClickHouse, and BigQuery.