PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics.
fairlearn
fairlearn

A Python package to assess and improve fairness of machine learning models.

174K 2K 501
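fairlearn's description mentions assessing model fairness; one standard assessment it supports is the demographic parity difference (the gap in positive-prediction rates across groups, exposed in fairlearn as `fairlearn.metrics.demographic_parity_difference`). As a plain-Python sketch of what that quantity measures, not fairlearn's implementation:

```python
# Demographic parity difference: the gap between the highest and lowest
# positive-prediction (selection) rate across sensitive-feature groups.
# Plain-Python illustration of the metric fairlearn computes.

def demographic_parity_difference(y_pred, sensitive):
    """Max minus min selection rate across sensitive-feature groups."""
    counts = {}  # group -> (n_samples, n_positive_predictions)
    for pred, group in zip(y_pred, sensitive):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + (pred == 1))
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

# Group "a" is selected 2/3 of the time, group "b" only 1/3.
preds = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(round(demographic_parity_difference(preds, groups), 3))  # 0.333
```

A value of 0 means every group is selected at the same rate; fairlearn's mitigation algorithms then try to drive this gap down subject to accuracy constraints.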
microsoft
raiutils

Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.

72K 2K 476
Giskard-AI
giskard

🐢 Open-Source Evaluation & Testing library for LLM Agents

40K 5K 446
ModelOriented
dalex

moDel Agnostic Language for Exploration and eXplanation

36K 1K 170
microsoft
erroranalysis

Part of the Responsible AI Toolbox suite (see the raiutils description above).

27K 2K 476
microsoft
responsibleai

Part of the Responsible AI Toolbox suite (see the raiutils description above).

22K 2K 476
microsoft
raiwidgets

Part of the Responsible AI Toolbox suite (see the raiutils description above).

9K 2K 476
ashutoshrana
regulated-ai-governance

Policy enforcement for AI agents in regulated environments (FERPA, HIPAA, GLBA, GDPR): framework adapters for CrewAI, AutoGen, LangChain, Semantic Kernel, Haystack

9K 0 0
getaxonflow
axonflow

Official Python SDK for AxonFlow — runtime control, MCP policy enforcement, approvals, and audit trails for production AI

8K 1 0
wearepal
ethicml

Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency

4K 24 3
encypherai
encypher-ai

Embed invisible metadata in AI-generated text using zero-width characters.

4K 30 3
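encypher-ai's description refers to embedding invisible metadata in text using zero-width characters. A from-scratch sketch of that general technique (this is an illustration of the idea, not the encypher-ai API): encode each payload bit as a zero-width space or zero-width non-joiner and append it to the visible text.

```python
# Hide a payload in text as zero-width Unicode characters: U+200B encodes
# a 0 bit, U+200C a 1 bit. Illustrative sketch only, not encypher-ai's API.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(cover: str, payload: str) -> str:
    """Append payload bytes, encoded as zero-width characters, to cover text."""
    bits = "".join(f"{byte:08b}" for byte in payload.encode("utf-8"))
    return cover + "".join(ZW1 if bit == "1" else ZW0 for bit in bits)

def extract(text: str) -> str:
    """Recover the hidden payload from the zero-width characters."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stamped = embed("Perfectly ordinary sentence.", "source=demo")
print(extract(stamped))  # source=demo
```

The stamped string renders identically to the cover text in most fonts, which is what makes the scheme useful for provenance marking; it is also why such marks are fragile, since any pipeline that strips or normalizes zero-width characters destroys the payload.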
IBM
infairness

PyTorch package to train and audit ML models for Individual Fairness

3K 67 8
Pacific-AI-Corp
langtest

Pacific AI provides a library for delivering safe & effective NLP models.

3K 556 49
holistic-ai
holisticai

An open-source tool to assess and improve the trustworthiness of AI systems.

3K 104 30
microsoft
rai-test-utils

Part of the Responsible AI Toolbox suite (see the raiutils description above).

3K 2K 476
rhesis-ai
rhesis-sdk

The testing platform for AI teams. Bring engineers, PMs, and domain experts together to generate tests, simulate (adversarial) conversations, and trace every failure to its root cause.

2K 317 24
cvs-health
langfair

LangFair is a Python library for conducting use-case level LLM bias and fairness assessments

2K 257 43
JohnSnowLabs
nlptest

Deliver safe & effective language models

2K 556 49
aiexponenthq
riskforge

RiskForge — EU AI Act Article 9 Risk Management File generator. 8-dimension assessment, hash-chained audit trail, 30-min CLI workflow. Apache 2.0.

2K 0 0
hupe1980
aisploit

🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.

2K 26 5
trustyai-explainability
llama-stack-provider-trustyai-garak

Out-Of-Tree Llama Stack Eval Provider for Red Teaming LLM Systems with Garak

1K 1 8
microsoft
responsibleai-vision

SDK for assessing machine learning models on image data.

1K 2K 476
rhesis-ai
rhesis

The testing platform for AI teams (same project as rhesis-sdk above; see its description).

1K 317 24
CAIIVS
raitap

Fully integrated pipeline to assess the transparency & robustness of AI models

1K 1 0
    • Data from PyPI, GitHub, ClickHouse, and BigQuery