A Python package to assess and improve the fairness of machine learning models.
A package for evaluating the performance of methods that aim to increase the fairness, accountability, and/or transparency of machine learning models.
Algorithmic inspection for trustworthy ML models.
A Python package that finds biased language in text, explains why it is biased, suggests neutral wording, and can rewrite the whole text. Usable from the CLI, an API, or directly in Python.