PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics. Each result below lists the owner, the package name, its description, and three counts (downloads · GitHub stars · forks).
fairlearn/fairlearn: A Python package to assess and improve fairness of machine learning models. (174K · 2K · 501)
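fairlearn's group-fairness metrics quantify disparities such as demographic parity. As a dependency-free illustration of the idea (a conceptual sketch, not fairlearn's actual implementation), the demographic parity difference is the largest gap in positive-prediction rate between sensitive groups:

```python
from collections import defaultdict

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction rate between groups (0 = parity)."""
    pos = defaultdict(int)  # positive predictions per group
    tot = defaultdict(int)  # samples per group
    for yhat, group in zip(y_pred, sensitive):
        pos[group] += yhat
        tot[group] += 1
    rates = [pos[g] / tot[g] for g in tot]
    return max(rates) - min(rates)

# Group "a" is predicted positive at rate 0.5, group "b" at rate 0.25.
gap = demographic_parity_difference(
    [1, 0, 1, 0, 1, 0, 0, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(gap)  # 0.25
```

fairlearn exposes the same quantity through its `fairlearn.metrics` module, alongside mitigation algorithms that constrain models toward parity.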
microsoft/raiutils: Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions. (72K · 2K · 476)

Giskard-AI/giskard: 🐢 Open-source evaluation and testing library for LLM agents. (40K · 5K · 446)

Trusted-AI/aif360: A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. (40K · 3K · 914)
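Among the metrics aif360 popularized is the disparate impact ratio: the positive-prediction rate of the unprivileged group divided by that of the privileged group, where values below 0.8 fail the "four-fifths rule". A minimal sketch of the computation (illustrative only, independent of aif360's dataset-centric API):

```python
def disparate_impact_ratio(y_pred, sensitive, privileged):
    """Unprivileged-to-privileged ratio of positive-prediction rates.

    The "four-fifths rule" flags values below 0.8 as adverse impact.
    """
    priv_pos = priv_n = unpriv_pos = unpriv_n = 0
    for yhat, group in zip(y_pred, sensitive):
        if group == privileged:
            priv_n += 1
            priv_pos += yhat
        else:
            unpriv_n += 1
            unpriv_pos += yhat
    return (unpriv_pos / unpriv_n) / (priv_pos / priv_n)

# Privileged group "a" selected at rate 0.5, unprivileged "b" at 0.25.
ratio = disparate_impact_ratio(
    [1, 1, 0, 0, 1, 0, 0, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
    privileged="a",
)
print(ratio)  # 0.5
```

A ratio of 1.0 means both groups are selected at the same rate; the 0.5 here would fail the four-fifths threshold.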
microsoft/erroranalysis: Part of the Responsible AI Toolbox (same repository description as raiutils above). (27K · 2K · 476)

microsoft/responsibleai: Part of the Responsible AI Toolbox (same repository description as raiutils above). (22K · 2K · 476)

microsoft/raiwidgets: Part of the Responsible AI Toolbox (same repository description as raiutils above). (9K · 2K · 476)

fidelity/jurity: [ACM 2024] Jurity: Fairness & Evaluation Library. (4K · 58 · 12)
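Fairness evaluation libraries such as jurity and fairlearn also report error-rate criteria like equalized odds, which asks that true-positive and false-positive rates be equal across groups. A self-contained sketch of the corresponding difference metric (a conceptual illustration, not any library's implementation):

```python
from collections import defaultdict

def equalized_odds_difference(y_true, y_pred, sensitive):
    """Largest between-group gap in TPR or FPR (0 = equalized odds)."""
    tp = defaultdict(int); pos = defaultdict(int)  # true positives / actual positives
    fp = defaultdict(int); neg = defaultdict(int)  # false positives / actual negatives
    for yt, yhat, group in zip(y_true, y_pred, sensitive):
        if yt == 1:
            pos[group] += 1
            tp[group] += yhat
        else:
            neg[group] += 1
            fp[group] += yhat
    tprs = [tp[g] / pos[g] for g in pos]
    fprs = [fp[g] / neg[g] for g in neg]
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Group "a": TPR 0.5, FPR 0.0; group "b": TPR 1.0, FPR 0.5 -> gap 0.5.
gap = equalized_odds_difference(
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 1, 1, 1, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(gap)  # 0.5
```

Unlike demographic parity, this criterion conditions on the true label, so a classifier can satisfy one while violating the other.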
wearepal/ethicml: Package for evaluating the performance of methods that aim to increase fairness, accountability, and/or transparency. (4K · 24 · 3)

IBM/infairness: PyTorch package to train and audit ML models for individual fairness. (3K · 67 · 8)

microsoft/rai-test-utils: Part of the Responsible AI Toolbox (same repository description as raiutils above). (3K · 2K · 476)

cvs-health/langfair: LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments. (2K · 257 · 43)

microsoft/responsibleai-vision: SDK API to assess image machine learning models. (1K · 2K · 476)

oracle-samples/oracle-automlx: Demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs. (1K · 30 · 6)

EFS-OpenSource/thetiscore: Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines) for uncertainty consistency (calibration), fairness, and other safety-relevant aspects. (1K · 6 · 1)

credo-ai/credoai-lens: Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open-source community. (874 · 49 · 12)

dccuchile/wefe: The Word Embedding Fairness Evaluation Framework. (662 · 182 · 14)
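WEFE wraps embedding bias tests such as WEAT, whose core quantity is a target word's association with two attribute word sets, measured by mean cosine similarity. A toy sketch of that association score, using hypothetical hand-made 2-d vectors rather than real embeddings (and not WEFE's actual API):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attrs_a, attrs_b):
    """WEAT-style association: mean cos(word, A) minus mean cos(word, B)."""
    return (sum(cosine(word, a) for a in attrs_a) / len(attrs_a)
            - sum(cosine(word, b) for b in attrs_b) / len(attrs_b))

# Hypothetical 2-d vectors: the target leans strongly toward attribute set A.
target = (1.0, 0.1)
attrs_a = [(1.0, 0.0), (0.9, 0.1)]
attrs_b = [(0.0, 1.0), (0.1, 0.9)]
score = association(target, attrs_a, attrs_b)
print(round(score, 3))  # positive: target is closer to A than to B
```

A score near zero indicates no preference between the attribute sets; WEAT aggregates such scores over target sets and adds a permutation test for significance.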
fixouttech/fixout: Algorithmic inspection for trustworthy ML models. (534 · 4 · 0)

microsoft/responsibleai-text: SDK API to assess text machine learning models. (498 · 2K · 476)

microsoft/nlp-feature-extractors: NLP feature extractors. (409 · 2K · 476)

RexYuan/fairness-checker: Fairness checker. (383 · 0 · 0)

EFS-OpenSource/thetis: Same pipeline-examination service as thetiscore (shared repository description). (347 · 6 · 1)

matus-pikuliak/genderbench: GenderBench, an evaluation suite for gender biases in LLMs. (322 · 5 · 1)

cylynx/verifyml: Open-source toolkit to help companies implement responsible AI workflows. (254 · 23 · 2)
    • Data from PyPI, GitHub, ClickHouse, and BigQuery