PyPI Stats

Search Packages

Find Python packages by name or description, by GitHub topic, or by filtering on metrics. Each result shows the repository owner, the package name, its description, and three figures: downloads, GitHub stars, and forks.
  • fairlearn/fairlearn — A Python package to assess and improve fairness of machine learning models. (174K · 2K · 501)
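The core operation behind fairness-assessment libraries like fairlearn is disaggregating a metric across sensitive groups and comparing the results. The sketch below illustrates that idea in plain Python; it is a minimal, hypothetical example of the concept, not fairlearn's actual API, and the labels and group assignments are made up.

```python
# Minimal sketch of group-wise metric disaggregation, the idea underlying
# fairness toolkits such as fairlearn. All data here is hypothetical.

from collections import defaultdict

def groupwise_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each sensitive group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b"]

acc = groupwise_accuracy(y_true, y_pred, groups)
print(acc)                                     # accuracy per group
print(max(acc.values()) - min(acc.values()))   # gap between best and worst group
```

A large gap between groups is the kind of signal such toolkits surface; dedicated libraries additionally compute selection rates, equalized-odds differences, and offer mitigation algorithms.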
  • microsoft/raiutils — Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions. (72K · 2K · 476)
  • microsoft/erroranalysis — Part of the Responsible AI Toolbox (same description as raiutils above). (27K · 2K · 476)
  • microsoft/responsibleai — Part of the Responsible AI Toolbox (same description as raiutils above). (22K · 2K · 476)
  • microsoft/raiwidgets — Part of the Responsible AI Toolbox (same description as raiutils above). (9K · 2K · 476)
  • wearepal/ethicml — Package for evaluating the performance of methods which aim to increase fairness, accountability, and/or transparency. (4K · 24 · 3)
  • apple/dnikit — A Python toolkit for analyzing machine learning models and datasets. (3K · 79 · 8)
  • microsoft/rai-test-utils — Part of the Responsible AI Toolbox (same description as raiutils above). (3K · 2K · 476)
  • tensorflow/fairness-indicators — TensorFlow's Fairness Evaluation and Visualization Toolkit. (2K · 357 · 88)
  • cvs-health/langfair — LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments. (2K · 257 · 43)
  • BAder82t/fairlearn-fhe — Drop-in encrypted Fairlearn metrics over CKKS. Same API surface; ciphertext arithmetic via TenSEAL or OpenFHE. (2K · 2 · 0)
  • microsoft/responsibleai-vision — SDK API to assess image machine learning models. (1K · 2K · 476)
  • EFS-OpenSource/thetiscore — Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines) for uncertainty consistency (calibration), fairness, and other safety-relevant aspects. (1K · 6 · 1)
  • Khanz9664/trustlens — Open-source Python library for evaluating ML model reliability beyond accuracy, with calibration, failure, and fairness diagnostics for informed deployment decisions. (995 · 10 · 12)
  • feedzai/fairgbm — Train gradient boosting models that are both high-performance *and* fair! (990 · 108 · 8)
  • credo-ai/credoai-lens — Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open-source community. (874 · 49 · 12)
  • dccuchile/wefe — The Word Embedding Fairness Evaluation Framework. (662 · 182 · 14)
  • tensorflow/tensorboard-plugin-fairness-indicators — Fairness Indicators TensorBoard plugin. (559 · 357 · 88)
  • microsoft/responsibleai-text — SDK API to assess text machine learning models. (498 · 2K · 476)
  • microsoft/nlp-feature-extractors — NLP feature extractors. (409 · 2K · 476)
  • EFS-OpenSource/thetis — Same description as thetiscore above. (347 · 6 · 1)
  • mkduong-ai/fairdo — Fairness-agnostic data optimization. (284 · 13 · 1)
  • jackblandin/research — Utility modules used for research and/or learning. (251 · 7 · 4)
  • microsoft/genbit — A tool for gender bias identification in text. Part of Microsoft's Responsible AI Toolbox. (206 · 50 · 13)
Data from PyPI, GitHub, ClickHouse, and BigQuery.