PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics.
Figures in brackets for each entry are downloads · GitHub stars · forks (repository stats are shared by packages published from the same repository).

  • sgl-project / sglang: SGLang is a high-performance serving framework for large language models and multimodal models. [287.7M · 27K · 6K]
  • vllm-project / vllm: A high-throughput and memory-efficient inference and serving engine for LLMs. [9.4M · 79K · 16K]
  • flashinfer-ai / flashinfer-python: FlashInfer: Kernel Library for LLM Serving. [4M · 6K · 948]
  • flashinfer-ai / flashinfer-cubin: FlashInfer: Kernel Library for LLM Serving. [2.7M · 6K · 948]
  • sgl-project / sglang-kernel: SGLang is a high-performance serving framework for large language models and multimodal models. [264K · 27K · 6K]
  • sgl-project / sgl-kernel: SGLang is a high-performance serving framework for large language models and multimodal models. [256K · 27K · 6K]
  • modelscope / ms-swift: Use PEFT or full-parameter training to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.6, DeepSeek-R1, GLM-5.1, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, GLM4.5v, Gemma4, Llava, Phi4, ...) (AAAI 2025). [171K · 14K · 1K]
  • vllm-project / vllm-tpu: A high-throughput and memory-efficient inference and serving engine for LLMs. [143K · 79K · 16K]
  • hiyouga / llamafactory: Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). [29K · 71K · 9K]
  • NVIDIA / tensorrt-llm: TensorRT LLM provides an easy-to-use Python API to define large language models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs; it also includes components for building Python and C++ runtimes that orchestrate inference execution performantly. [16K · 14K · 2K]
  • sgl-project / sglang-kt: SGLang is a high-performance serving framework for large language models and multimodal models. [4K · 27K · 6K]
  • theoddden / terradev-cli: Cross-Cloud Compute Optimization Platform with Migration & Evaluation (v4.0.12). [3K · 10 · 1]
  • inclusionAI / awex: A high-performance RL training-inference weight-synchronization framework, designed to enable second-level parameter updates from training to inference in RL workflows. [3K · 150 · 17]
  • wuwangzhang1216 / abliterix: Automated alignment adjustment for LLMs: direct steering, LoRA, and MoE expert-granular abliteration, optimized via multi-objective Optuna TPE. [2K · 215 · 42]
  • hiyouga / llmtuner: Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). [2K · 71K · 9K]
  • szibis / mlx-flash: Run AI models too large for your Mac's memory: expert caching, speculative execution, and 15+ research techniques for MoE inference on Apple Silicon. [1K · 2 · 0]
  • uccl-project / uccl: UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV-cache transfer, RL weight transfer), and EP (e.g., GPU-driven). [1K · 1K · 144]
  • kyegomez / switch-transformers: Implementation of Switch Transformers from the paper "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity". [755 · 139 · 17]
  • SuperInstance / plato-edge: Edge-optimized Cocapn fleet packages for ARM64: pure Python, zero deps, <100KB. [684 · 2 · 0]
  • sgl-project / dblcsgen: DBLC Fast Structured Generation. [570 · 27K · 6K]
  • vllm-project / vllm-hust: A high-throughput and memory-efficient inference and serving engine for LLMs. [437 · 79K · 16K]
  • hiyouga / lazyllm-llamafactory: Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). [394 · 71K · 9K]
  • vllm-project / wxy-test: A high-throughput and memory-efficient inference and serving engine for LLMs. [375 · 2K · 1K]
  • holdjun / kmoe: Command-line manga downloader for kxx.moe / kzz.moe / koz.moe. [347 · 2 · 0]
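The figures above use compact K/M abbreviations (e.g. "287.7M", "27K", "948"). A small stdlib-only helper like the following reproduces that style of formatting; it is an illustrative sketch, not this site's actual code, and the thresholds and rounding rules are assumptions:

```python
def abbreviate(count: int) -> str:
    """Format a raw count in the compact style used by the listing.

    Examples: 948 -> '948', 27_000 -> '27K', 287_700_000 -> '287.7M'.
    Illustrative only; the site's real rounding rules are not known.
    """
    for divisor, suffix in ((1_000_000_000, "B"), (1_000_000, "M"), (1_000, "K")):
        if count >= divisor:
            value = count / divisor
            # Render one decimal place, then drop a trailing ".0" so
            # 27_000 becomes "27K" rather than "27.0K".
            text = f"{value:.1f}".rstrip("0").rstrip(".")
            return f"{text}{suffix}"
    return str(count)
```

For example, `abbreviate(287_700_000)` returns `"287.7M"`, matching the sglang download figure, and `abbreviate(948)` returns `"948"` unchanged.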
    • Data from PyPI, GitHub, ClickHouse, and BigQuery