16 dependents
| Description | Downloads/month |
|---|---|
| A Python framework for performing information retrieval experiments, building on... | 13K |
| Package for LLM Evaluation | 4K |
| The lsr-benchmark aims to support holistic evaluations of the lexical sparse ret... | 1K |
| ENCOURAGE | 1K |
| Experimaestro common module for IR experiments | 635 |
| CLIR version of ColBERT | 542 |
| A tool to quantify the replicability and reproducibility of system-oriented IR e... | 477 |
| PyTerrier RAG pipelines | 467 |
| Sorbonne University Master MIND - Large Language Models course plugin | 445 |
| Your one-stop shop for fine-tuning and running neural ranking models. | 439 |
| Framework for LLM evaluation, guardrails and security | 393 |
| Training Neural Retrievers | 284 |
| Library for working with TREC run files | 193 |
| A Python framework for Technology-Assisted Review experiments. | 146 |
| A tool for automatically inferring query relevance assessments (qrels) | 90 |
| Implementation of the measure Probability of Equal Expected Rank | 67 |