58 dependents
| Description | Downloads/month |
|---|---|
| SGLang is a high-performance serving framework for large language models and mul... | 287.7M |
| A high-performance API server that provides OpenAI-compatible endpoints for MLX ... | 22K | |
| Optimizing inference proxy for LLMs | 12K | |
| Curate High Quality Datasets, Train, Evaluate and Ship | 7K | |
| This repository provides a simulation platform based on LLM agents grounded in m... | 5K | |
| LLM-powered security log analyzer: detect threats & anomalies with zero regex — ... | 5K | |
| The official zero-trust, high-throughput kinetic execution engine for the coreas... | 4K | |
| SGLang is a high-performance serving framework for large language models and mul... | 4K | |
| MLX Omni Server is a local inference server powered by Apple's MLX framework, sp... | 3K | |
| Plug-and-play document AI with zero-shot models. | 2K | |
| TruthTorchLM is an open-source library designed to assess truthfulness in langua... | 2K | |
| A pipeline that runs tests, generates documentation, and automatically updates AGENTS.md on every commit | 1K |
| Parse, extract, and analyze documents with ease | 1K | |
| Medical cOmputational Suite for Advanced Intelligent eXtraction | 811 | |
| Guided Infilling Modeling Toolkit | 713 | |
| General Information, model certifications, and benchmarks for nm-vllm enterprise... | 666 | |
| Sorbonne University Master MIND - Large Language Models course plugin | 445 | |
| An Open-Source AGI Server for Open-Source LLMs | 413 | |
| A pipeline and package to implement and evaluate LLM chat bot tutors in educatio... | 413 | |
| A client library for interacting with LLMs | 393 |
| A python package for serving LLM on OpenAI-compatible API endpoints with prompt ... | 364 | |
| Use `outlines` generators with Haystack. | 359 | |
| A simple library for generating instruction tuning datasets locally | 358 | |
| A high-throughput and memory-efficient inference and serving engine for LLMs | 345 | |
| A high-throughput and memory-efficient inference and serving engine for LLMs | 344 | |
| A Python framework designed for both generating and evaluating hints. | 301 | |
| Super fast local inferencing for common NLP tasks on technical text | 296 | |
| A high-throughput and memory-efficient inference and serving engine for LLMs | 273 | |
| tools for detecting bias patterns of LLMs | 234 | |
| A high-throughput and memory-efficient inference and serving engine for LLMs | 209 | |
| SGLang fork for ppc64le with CUDA 12.4 and Torch Triton support | 186 | |
| Local MLX Engine | 183 | |
| A high-throughput and memory-efficient inference and serving engine for LLMs | 176 | |
| A library for Grammatical Error Correction evaluation. | 160 | |
| Structured AI service with Outlines for JSON output | 153 | |
| Probabilistic Generative Model Programming | 138 | |
| A Python package for repairing broken JSON using multiple backends: LLMs and FSM... | 133 | |
| A high-throughput and memory-efficient inference and serving engine for LLMs | 132 | |
| Train small LLMs and deploy them for fast structured extraction on CPU | 127 | |
| Outlines custom provider plugin for LangExtract | 121 | |
| An open-source pipeline for training natural language understanding models | 110 | |
| Structure context for a code project | 110 |
| A Python library for structured information extraction with LLMs. | 109 | |
| A cli tool for generating conventional commit messages using LLM models | 107 | |
| A high-throughput and memory-efficient inference and serving engine for LLMs | 107 | |
| Structure context for a code project | 105 |
| A production-grade, OpenAI-compatible API layer for local LLMs with guaranteed s... | 87 | |
| Large Scale Topic based Synthetic Data Generation for LLM Fine-Tuning & Training | 83 | |
| A high-throughput and memory-efficient inference and serving engine for LLMs | 80 | |
| Call LLM-powered NPCs from your game, at runtime. | 79 |