PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics.
  • shap/shap: A game theoretic approach to explain the output of any machine learning model. (14.5M · 25K · 4K)
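The "game theoretic approach" in shap's description refers to Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution over all feature coalitions. As a concept sketch only (not shap's actual implementation, which uses fast model-specific approximations), here is an exact brute-force Shapley computation for a hypothetical 3-feature linear model:

```python
from itertools import combinations
from math import factorial

# Toy "model": for a linear model f(x) = 2*x0 + 3*x1 + 5*x2 evaluated at
# x = (1, 1, 1) with a baseline of 0 for absent features, a coalition's
# value is the sum of the present features' contributions.
WEIGHTS = [2.0, 3.0, 5.0]

def value(coalition):
    """Model output when only the features in `coalition` are present."""
    return sum(WEIGHTS[i] for i in coalition)

def shapley_values(n):
    """Exact Shapley values by enumerating all coalitions (O(2^n))."""
    players = range(n)
    phi = [0.0] * n
    for i in players:
        others = [p for p in players if p != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

print(shapley_values(3))  # each feature's attribution; sums to f(x) - baseline
```

For a linear model the Shapley value of each feature reduces to its weight, and the attributions sum to the prediction minus the baseline (the efficiency axiom) — a useful sanity check for any Shapley implementation.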
  • interpretml/interpret-core: Fit interpretable models. Explain blackbox machine learning. (957K · 7K · 783)
  • interpretml/interpret: Fit interpretable models. Explain blackbox machine learning. (387K · 7K · 783)
  • wisent-ai/wisent: An open-source version of the representation engineering framework for stopping harmful outputs or hallucinations at the activation level. 100% free, self-hosted, and open-source. (167K · 341 · 32)
  • microsoft/raiutils: Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions. (72K · 2K · 476)
  • microsoft/erroranalysis: Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions. (27K · 2K · 476)
  • mmschlk/shapiq: Shapley Interactions and Shapley Values for Machine Learning. (26K · 722 · 58)
  • microsoft/responsibleai: Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions. (22K · 2K · 476)
  • iancovert/sage-importance: For calculating global feature importance using Shapley values. (13K · 287 · 33)
  • salimamoukou/acv-dev: A Python library that provides local rule-based explanations for any machine learning model or data, and several Shapley value estimators for tree-based models. (11K · 103 · 11)
  • microsoft/raiwidgets: Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions. (9K · 2K · 476)
  • MAIF/shapash: 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models. (9K · 3K · 383)
  • keisen/tf-keras-vis: Neural network visualization toolkit for tf.keras. (8K · 338 · 47)
  • givasile/effector: A Python package for global and regional effect methods. (8K · 119 · 2)
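Global effect methods of the kind effector's description refers to summarize how one feature influences predictions on average over a dataset. The simplest such method is a partial-dependence curve; a minimal plain-Python sketch (hypothetical toy model, not effector's API) shows the idea and also why purely global summaries can hide interactions:

```python
import random

# Hypothetical model with an interaction term; we probe the global
# effect of x0 while averaging out x1.
def model(x0, x1):
    return 2.0 * x0 + x0 * x1

random.seed(0)
# background dataset: x1 drawn uniformly from [-1, 1]
background = [random.uniform(-1.0, 1.0) for _ in range(1000)]

def partial_dependence(x0):
    """Average prediction over the background while fixing x0 (PD curve)."""
    return sum(model(x0, x1) for x1 in background) / len(background)

# The PD curve of x0 is approximately 2*x0: since x1 averages to ~0,
# the x0*x1 interaction cancels globally even though it matters locally.
for x0 in (-1.0, 0.0, 1.0):
    print(x0, partial_dependence(x0))
```

That cancellation is precisely the motivation for *regional* effect methods: splitting the data into regions (here, by the sign of x1) recovers effects that the global average washes out.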
  • kgd-al/abrain: ES-HyperNEAT Python implementation with C++ computations for NeuroEvolution, Reinforcement Learning and VfMRI. (7K · 6 · 0)
  • snehankekre/streamlit-shap: A wrapper to display SHAP plots in Streamlit. (6K · 91 · 10)
  • hi-paris/xper: A methodology designed to measure the contribution of the features to the predictive performance of any econometric or machine learning model. (5K · 18 · 1)
  • chr5tphr/zennit: A high-level Python framework built on PyTorch for explaining/exploring neural networks with attribution methods like LRP. (4K · 243 · 35)
  • hinanohart/yuragi: LLM Confidence Fragility Analyzer: measure how fragile your AI's confidence really is. (3K · 0 · 0)
  • salimamoukou/acv-exp: A Python library that provides local rule-based explanations for any machine learning model or data, and several Shapley value estimators for tree-based models. (3K · 103 · 11)
  • designer-coderajay/glassbox-mech-interp: Mechanistic interpretability + EU AI Act Annex IV compliance. 21/21 frameworks: ACDC edge-circuit discovery, multi-arch GQA/RMSNorm adapter (Llama-3/Mistral/Phi-3), cross-model comparison, causal scrubbing, DAS, Hessian bounds, BH FDR, folded LayerNorm, SAE polysemanticity, multi-corruption, held-out validation. Dual-licensed (MIT core + BSL 1.1 compliance engine). (3K · 1 · 0)
  • microsoft/rai-test-utils: Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions. (3K · 2K · 476)
  • csinva/imodelsx: Interpret text data with LLMs (sklearn compatible). (3K · 175 · 27)
  • CodeBoarding/codeboarding: Interactive architecture diagrams for codebases. (3K · 1K · 109)
Data from PyPI, GitHub, ClickHouse, and BigQuery.