PyPI Stats

Deepseek V3 Python Packages

Python packages tagged with the GitHub topic deepseek-v3, sorted by relevance. Each entry lists the owning GitHub organization, the PyPI package name, its description, and three metrics: monthly PyPI downloads, GitHub stars, and GitHub forks (the star and fork counts are repository-level, which is why forks of the same repository share identical values).
  • vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (8.9M downloads, 79K stars, 16K forks)
  • vllm-project/vllm-tpu: A high-throughput and memory-efficient inference and serving engine for LLMs (145K downloads, 79K stars, 16K forks)
  • InternLM/xtuner: A Next-Generation Training Engine Built for Ultra-Large MoE Models (3K downloads, 5K stars, 419 forks)
  • vllm-project/vllm-xft: A high-throughput and memory-efficient inference and serving engine for LLMs (485 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-acc: A high-throughput and memory-efficient inference and serving engine for LLMs (484 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-hust: A high-throughput and memory-efficient inference and serving engine for LLMs (480 downloads, 79K stars, 16K forks)
  • vllm-project/ai-dynamo-vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (437 downloads, 79K stars, 16K forks)
  • vllm-project/wxy-test: A high-throughput and memory-efficient inference and serving engine for LLMs (394 downloads, 2K stars, 1K forks)
  • vllm-project/nextai-vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (378 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-consul: A high-throughput and memory-efficient inference and serving engine for LLMs (306 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-npu: A high-throughput and memory-efficient inference and serving engine for LLMs (281 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-musa: A high-throughput and memory-efficient inference and serving engine for LLMs (279 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-emissary: A high-throughput and memory-efficient inference and serving engine for LLMs (188 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-usf: A high-throughput and memory-efficient inference and serving engine for LLMs (166 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-rocm: A high-throughput and memory-efficient inference and serving engine for LLMs (138 downloads, 79K stars, 16K forks)
  • vllm-project/tilearn-infer: A high-throughput and memory-efficient inference and serving engine for LLMs (118 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-test-tpu: A high-throughput and memory-efficient inference and serving engine for LLMs (116 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-online: A high-throughput and memory-efficient inference and serving engine for LLMs (110 downloads, 79K stars, 16K forks)
  • vllm-project/hive-vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (66 downloads, 79K stars, 16K forks)
  • vllm-project/tilearn-test01: A high-throughput and memory-efficient inference and serving engine for LLMs (57 downloads, 79K stars, 16K forks)
  • vllm-project/vllm-fixed: A high-throughput and memory-efficient inference and serving engine for LLMs (56 downloads, 79K stars, 16K forks)
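Every name above is an installable PyPI distribution; most of the vllm-project entries are hardware- or vendor-specific forks of the main vllm package, so the flagship package is usually the right starting point. A minimal sketch of installing it and serving a model (the model ID below is just an illustrative placeholder, and a supported GPU or accelerator environment is assumed):

```shell
# Install the main vLLM distribution from PyPI
pip install vllm

# Launch an OpenAI-compatible HTTP server on port 8000.
# "Qwen/Qwen2.5-0.5B-Instruct" is an example Hugging Face model ID;
# substitute any model your hardware can host.
vllm serve Qwen/Qwen2.5-0.5B-Instruct --port 8000
```

The forks (vllm-tpu, vllm-rocm, vllm-npu, and so on) install the same way under their own package names; check each package's PyPI page for its hardware requirements before choosing one over the mainline release.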
Data from PyPI, GitHub, ClickHouse, and BigQuery