PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter them by metrics.
confident-ai / deepeval — The LLM Evaluation Framework
  Downloads: 3.5M · Stars: 15K · Forks: 1K

cvs-health / langfair — LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments
  Downloads: 2K · Stars: 257 · Forks: 43

nhsengland / evalsense — Tools for systematic large language model evaluations
  Downloads: 724 · Stars: 4 · Forks: 1

mr-gpt / llmevals — Eval
  Downloads: 166 · Stars: 15K · Forks: 1K

mr-gpt / deepevals — The LLM Evaluation Framework
  Downloads: 162 · Stars: 15K · Forks: 1K

mr-gpt / testllm — Deep eval provides an evaluation platform to accelerate the development of LLMs and agents
  Downloads: 81 · Stars: 15K · Forks: 1K
  • Data from PyPI, GitHub, ClickHouse, and BigQuery