PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter results by metrics.
| Org | Package | Description | Downloads | Stars | Forks |
|---|---|---|---|---|---|
| vllm-project | vllm | A high-throughput and memory-efficient inference and serving engine for LLMs | 9.4M | 79K | 16K |
| vllm-project | vllm-tpu | A high-throughput and memory-efficient inference and serving engine for LLMs | 143K | 79K | 16K |
| InternLM | xtuner | A Next-Generation Training Engine Built for Ultra-Large MoE Models | 3K | 5K | 419 |
| vllm-project | vllm-hust | A high-throughput and memory-efficient inference and serving engine for LLMs | 437 | 79K | 16K |
| vllm-project | wxy-test | A high-throughput and memory-efficient inference and serving engine for LLMs | 375 | 2K | 1K |
| vllm-project | vllm-xft | A high-throughput and memory-efficient inference and serving engine for LLMs | 345 | 79K | 16K |
| vllm-project | ai-dynamo-vllm | A high-throughput and memory-efficient inference and serving engine for LLMs | 344 | 79K | 16K |
| vllm-project | vllm-acc | A high-throughput and memory-efficient inference and serving engine for LLMs | 342 | 79K | 16K |
| vllm-project | nextai-vllm | A high-throughput and memory-efficient inference and serving engine for LLMs | 273 | 79K | 16K |
| vllm-project | vllm-consul | A high-throughput and memory-efficient inference and serving engine for LLMs | 219 | 79K | 16K |
| vllm-project | vllm-npu | A high-throughput and memory-efficient inference and serving engine for LLMs | 209 | 79K | 16K |
| vllm-project | vllm-musa | A high-throughput and memory-efficient inference and serving engine for LLMs | 194 | 79K | 16K |
| vllm-project | vllm-rocm | A high-throughput and memory-efficient inference and serving engine for LLMs | 176 | 79K | 16K |
| vllm-project | vllm-emissary | A high-throughput and memory-efficient inference and serving engine for LLMs | 132 | 79K | 16K |
| vllm-project | vllm-usf | A high-throughput and memory-efficient inference and serving engine for LLMs | 115 | 79K | 16K |
| vllm-project | tilearn-infer | A high-throughput and memory-efficient inference and serving engine for LLMs | 107 | 79K | 16K |
| vllm-project | vllm-online | A high-throughput and memory-efficient inference and serving engine for LLMs | 82 | 79K | 16K |
| vllm-project | vllm-test-tpu | A high-throughput and memory-efficient inference and serving engine for LLMs | 80 | 79K | 16K |
| vllm-project | hive-vllm | A high-throughput and memory-efficient inference and serving engine for LLMs | 53 | 79K | 16K |
| vllm-project | tilearn-test01 | A high-throughput and memory-efficient inference and serving engine for LLMs | 51 | 79K | 16K |
| vllm-project | vllm-fixed | A high-throughput and memory-efficient inference and serving engine for LLMs | 42 | 79K | 16K |
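The counts in the listing use abbreviated notation (9.4M downloads, 79K stars, 16K forks). The "filter by metrics" idea above can be sketched client-side by parsing those abbreviations back into integers; this is a minimal illustration with hypothetical helper names, not this site's actual implementation:

```python
def parse_count(s: str) -> int:
    """Parse an abbreviated count such as '9.4M', '79K', or '437' into an int."""
    multipliers = {"K": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    s = s.strip()
    if s and s[-1].upper() in multipliers:
        return int(float(s[:-1]) * multipliers[s[-1].upper()])
    return int(s)

# A few rows from the listing above: (package, downloads, stars, forks)
results = [
    ("vllm", "9.4M", "79K", "16K"),
    ("vllm-tpu", "143K", "79K", "16K"),
    ("xtuner", "3K", "5K", "419"),
    ("vllm-hust", "437", "79K", "16K"),
]

# "Filter by metrics": keep packages above a downloads threshold.
popular = [name for name, downloads, *_ in results
           if parse_count(downloads) >= 100_000]
print(popular)  # ['vllm', 'vllm-tpu']
```

The same threshold test could be applied to the stars or forks column by picking a different tuple field.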
    • Data from PyPI, GitHub, ClickHouse, and BigQuery