PyPI Stats

Fairness Python Packages

Python packages tagged with the GitHub topic fairness, sorted by relevance. Each entry shows monthly downloads, GitHub stars, and forks.
mlco2
codecarbon

Track emissions from Compute and recommend ways to reduce their impact on the environment.

200K 2K 277
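Trackers like codecarbon estimate emissions from power draw, runtime, and the local grid's carbon intensity. A minimal from-scratch sketch of that arithmetic (this is not codecarbon's API, and the wattage and intensity figures below are illustrative assumptions, not its defaults):

```python
# Sketch of the carbon-estimate arithmetic behind compute-emissions trackers:
# emissions ~ power draw x runtime x grid carbon intensity.

def estimate_co2_kg(watts: float, hours: float, kg_co2_per_kwh: float) -> float:
    """Energy consumed in kWh multiplied by the grid's carbon intensity."""
    energy_kwh = watts * hours / 1000.0
    return energy_kwh * kg_co2_per_kwh

# A 250 W GPU running for 8 hours on a grid emitting ~0.4 kgCO2 per kWh:
print(round(estimate_co2_kg(250, 8, 0.4), 3))  # 0.8
```

Real trackers refine this by sampling actual hardware power and looking up regional grid intensity, rather than using fixed constants.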
fairlearn
fairlearn

A Python package to assess and improve fairness of machine learning models.

175K 2K 501
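A representative group-fairness metric of the kind fairlearn reports is demographic parity difference: the gap in positive-prediction rates across groups. A standalone sketch of the underlying arithmetic (not fairlearn's API; fairlearn's own implementation lives in its metrics module):

```python
# Demographic parity difference, computed from scratch:
# the largest gap in positive-prediction rate between any two groups.

def demographic_parity_difference(y_pred, groups):
    """Max minus min selection rate across the groups present."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Group "a" is predicted positive 3/4 of the time, group "b" only 1/4:
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A value of 0 means all groups receive positive predictions at the same rate.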
microsoft
raiutils

Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.

72K 2K 476
Trusted-AI
aif360

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

41K 3K 914
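One of the bias-mitigation algorithms aif360 implements is Reweighing, which assigns each (group, label) cell the weight P(group)·P(label) / P(group, label), so that group membership and outcome become statistically independent under the weighted distribution. A from-scratch sketch of that idea (illustrative only, not aif360's API):

```python
# Reweighing weights computed from raw counts: each (group, label) pair
# gets P(group) * P(label) / P(group, label), pushing the weighted data
# toward independence between group and label.

from collections import Counter

def reweighing_weights(groups, labels):
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }

# Group "a" is over-represented among positive labels, so its positive
# examples are down-weighted (0.75) and group "b"'s are up-weighted (1.5):
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing_weights(groups, labels)
```

Training on the weighted samples is a preprocessing-style mitigation: the model itself is unchanged, only the data distribution it sees.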
ModelOriented
dalex

moDel Agnostic Language for Exploration and eXplanation

37K 1K 170
microsoft
erroranalysis

Part of the Responsible AI Toolbox suite (see raiutils above).

26K 2K 476
microsoft
responsibleai

Part of the Responsible AI Toolbox suite (see raiutils above).

21K 2K 476
microsoft
raiwidgets

Part of the Responsible AI Toolbox suite (see raiutils above).

9K 2K 476
wearepal
ethicml

Package for evaluating the performance of methods that aim to increase fairness, accountability, and/or transparency.

4K 24 3
IBM
infairness

PyTorch package to train and audit ML models for Individual Fairness

3K 67 8
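Individual fairness, the property this package audits, is usually formalized as a Lipschitz condition: similar individuals should receive similar model outputs, i.e. |f(x) − f(x′)| ≤ L·d(x, x′) for a chosen similarity metric d and bound L. A minimal sketch of checking that condition over a sample (all names and numbers here are illustrative, not infairness's API):

```python
# Brute-force audit of the individual-fairness Lipschitz condition:
# flag every pair of inputs whose score gap exceeds L times their distance.

import math

def lipschitz_violations(points, scores, L):
    """Return index pairs (i, j) violating |f(x_i) - f(x_j)| <= L * d(x_i, x_j)."""
    violations = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            distance = math.dist(points[i], points[j])
            if abs(scores[i] - scores[j]) > L * distance:
                violations.append((i, j))
    return violations

# Inputs 0 and 1 are nearly identical but score very differently:
points = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0)]
scores = [0.2, 0.9, 0.95]
print(lipschitz_violations(points, scores, L=1.0))  # [(0, 1)]
```

In practice the hard part is choosing d; a metric learned from data is one common approach, and the brute-force pairwise scan above is only workable for small samples.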
fair-software
howfairis

Command line tool to analyze a GitHub or GitLab repository's compliance with the fair-software.eu recommendations

3K 73 28
apple
dnikit

A Python toolkit for analyzing machine learning models and datasets.

3K 79 8
microsoft
rai-test-utils

Part of the Responsible AI Toolbox suite (see raiutils above).

3K 2K 476
WwZzz
flgo

An experimental platform for federated learning.

3K 628 102
WaterFutures
water-futures-battle

Python project for the Battle of the Water Futures, developed for the WDSA/CCWI 2026 conference (May 18-21, Paphos, Cyprus).

2K 16 2
cvs-health
langfair

LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.

2K 257 43
BAder82t
fairlearn-fhe

Drop-in encrypted Fairlearn metrics over CKKS. Same API surface; ciphertext arithmetic via TenSEAL or OpenFHE.

2K 2 0
microsoft
responsibleai-vision

SDK API to assess image Machine Learning models.

2K 2K 476
burning-cost
insurance-fairness

Proxy discrimination auditing for insurance pricing — FCA EP25/2, Consumer Duty, bias metrics

1K 0 0
oracle-samples
oracle-automlx

This repository contains demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs.

1K 30 6
Khanz9664
trustlens

Open-source Python library for evaluating ML model reliability beyond accuracy — with calibration, failure, and fairness diagnostics for informed deployment decisions.

1K 10 12
socialfoundations
folktexts

Use LLMs to get classification risk scores on tabular tasks.

1K 25 5
peremartra
optipfair

Structured pruning and bias visualization for Large Language Models. Tools for LLM optimization and fairness analysis.

1K 38 9
EFS-OpenSource
thetiscore

Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines) for uncertainty consistency (calibration), fairness, and other safety-relevant aspects.

1K 6 1
    • Data from PyPI, GitHub, ClickHouse, and BigQuery