
Fairness AI Python Packages

Python packages tagged with the GitHub topic fairness-ai, sorted by relevance. Each entry shows monthly downloads, GitHub stars, and forks.
fairlearn/fairlearn

A Python package to assess and improve fairness of machine learning models.

175K downloads/month · 2K stars · 501 forks
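The group-fairness metrics packages like fairlearn compute mostly reduce to comparing model behavior across groups defined by a sensitive feature. As a rough, standalone sketch (plain Python, no fairlearn dependency, so nothing here is that library's actual implementation), the demographic parity difference is the gap between the highest and lowest per-group selection rate; fairlearn exposes a ready-made version of this metric in `fairlearn.metrics`.

```python
# Demographic parity difference: largest gap in selection rates across groups.
# Standalone sketch; fairlearn provides a comparable metric in fairlearn.metrics.

def demographic_parity_difference(y_pred, sensitive):
    # Bucket predictions by sensitive-attribute value.
    groups = {}
    for yhat, s in zip(y_pred, sensitive):
        groups.setdefault(s, []).append(yhat)
    # Selection rate per group: fraction of positive (1) predictions.
    rates = [sum(preds) / len(preds) for preds in groups.values()]
    return max(rates) - min(rates)

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, sensitive))  # 0.5 (0.75 vs 0.25)
```

A value of 0 means every group is selected at the same rate; larger values indicate greater disparity.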
microsoft/raiutils

Part of the Responsible AI Toolbox, a suite of user interfaces and libraries for model and data exploration and assessment that helps developers and stakeholders understand, monitor, and act on AI systems more responsibly.

72K downloads/month · 2K stars · 476 forks
Trusted-AI/aif360

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

41K downloads/month · 3K stars · 914 forks
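One of the classic metrics in toolkits of this kind is disparate impact: the ratio of the unprivileged group's selection rate to the privileged group's, with values below 0.8 commonly flagged under the "four-fifths rule". A minimal plain-Python sketch (not AIF360's actual API; the function name and `privileged` parameter are illustrative):

```python
# Disparate impact: unprivileged selection rate / privileged selection rate.
# Ratios below 0.8 are commonly flagged under the "four-fifths rule".

def disparate_impact(y_pred, sensitive, privileged):
    priv = [y for y, s in zip(y_pred, sensitive) if s == privileged]
    unpriv = [y for y, s in zip(y_pred, sensitive) if s != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact(y_pred, sensitive, privileged="a"))  # 0.25 / 0.75 ≈ 0.33
```

A ratio of 1.0 indicates parity; here 0.33 would fail the four-fifths threshold.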
Giskard-AI/giskard

🐢 Open-source evaluation and testing library for LLM agents.

40K downloads/month · 5K stars · 446 forks
microsoft/erroranalysis

Part of the Responsible AI Toolbox suite (same repository as raiutils above).

26K downloads/month · 2K stars · 476 forks
microsoft/responsibleai

Part of the Responsible AI Toolbox suite (same repository as raiutils above).

21K downloads/month · 2K stars · 476 forks
microsoft/raiwidgets

Part of the Responsible AI Toolbox suite (same repository as raiutils above).

9K downloads/month · 2K stars · 476 forks
fidelity/jurity

[ACM 2024] Jurity: Fairness & Evaluation Library.

4K downloads/month · 58 stars · 12 forks
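Fairness-evaluation libraries like this also cover error-rate-based metrics. One example is the equal opportunity difference: the gap in true-positive rates between the unprivileged and privileged groups. The following is a standalone plain-Python sketch (function names and the `privileged` parameter are illustrative, not any listed library's API):

```python
# Equal opportunity difference: TPR(unprivileged) - TPR(privileged).
# 0 means both groups' positive cases are detected at the same rate.

def tpr(y_true, y_pred):
    # True-positive rate: correctly predicted positives over actual positives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(y_true)

def equal_opportunity_difference(y_true, y_pred, sensitive, privileged):
    def group_tpr(is_privileged):
        pairs = [(t, p) for t, p, s in zip(y_true, y_pred, sensitive)
                 if (s == privileged) == is_privileged]
        ts, ps = zip(*pairs)
        return tpr(ts, ps)
    return group_tpr(False) - group_tpr(True)

y_true = [1, 1, 0, 1, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 1]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" TPR = 2/3, group "b" TPR = 1/2, difference = -1/6.
print(equal_opportunity_difference(y_true, y_pred, sensitive, privileged="a"))
```

A negative value here means the unprivileged group's positives are recognized less often than the privileged group's.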
wearepal/ethicml

Package for evaluating the performance of methods which aim to increase fairness, accountability, and/or transparency.

4K downloads/month · 24 stars · 3 forks
IBM/infairness

PyTorch package to train and audit ML models for individual fairness.

3K downloads/month · 67 stars · 8 forks
microsoft/rai-test-utils

Part of the Responsible AI Toolbox suite (same repository as raiutils above).

3K downloads/month · 2K stars · 476 forks
cvs-health/langfair

LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.

2K downloads/month · 257 stars · 43 forks
microsoft/responsibleai-vision

SDK for assessing image machine learning models.

2K downloads/month · 2K stars · 476 forks
oracle-samples/oracle-automlx

Demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs.

1K downloads/month · 30 stars · 6 forks
EFS-OpenSource/thetiscore

Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines) for uncertainty consistency (calibration), fairness, and other safety-relevant aspects.

1K downloads/month · 6 stars · 1 fork
credo-ai/credoai-lens

Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment and acts as a central gateway to assessments created in the open-source community.

845 downloads/month · 49 stars · 12 forks
dccuchile/wefe

The Word Embedding Fairness Evaluation Framework.

691 downloads/month · 182 stars · 14 forks
microsoft/responsibleai-text

SDK for assessing text machine learning models.

533 downloads/month · 2K stars · 476 forks
microsoft/nlp-feature-extractors

NLP feature extractors.

500 downloads/month · 2K stars · 476 forks
fixouttech/fixout

Algorithmic inspection for trustworthy ML models.

463 downloads/month · 4 stars · 0 forks
RexYuan/fairness-checker

Fairness checker.

356 downloads/month · 0 stars · 0 forks
matus-pikuliak/genderbench

GenderBench: evaluation suite for gender biases in LLMs.

343 downloads/month · 5 stars · 1 fork
EFS-OpenSource/thetis

Service to examine data processing pipelines for uncertainty consistency (calibration), fairness, and other safety-relevant aspects (same description as thetiscore above).

339 downloads/month · 6 stars · 1 fork
cylynx/verifyml

Open-source toolkit to help companies implement responsible AI workflows.

268 downloads/month · 23 stars · 2 forks
Data from PyPI, GitHub, ClickHouse, and BigQuery.