PyPI Stats

AMD Python Packages

Python packages whose GitHub repositories carry the topic "amd", sorted by relevance and shown with monthly downloads and GitHub stars.
  • vllm-project / vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (8.9M · 79K · 16K)
  • inducer / pyopencl: OpenCL integration for Python, plus shiny features (190K · 1K · 249)
  • dstackai / dstack: Vendor-agnostic orchestration for training, inference, and agentic workloads across NVIDIA, AMD, TPU, and Tenstorrent on clouds, Kubernetes, and bare metal (169K · 2K · 224)
  • vllm-project / vllm-tpu: A high-throughput and memory-efficient inference and serving engine for LLMs (145K · 79K · 16K)
  • LMCache / lmcache: Supercharge Your LLM with the Fastest KV Cache Layer (112K · 8K · 1K)
  • stackav-oss / conch-triton-kernels: A "standard library" of Triton kernels (45K · 24 · 3)
  • amd / amd-gaia: Build AI agents for your PC (6K · 1K · 92)
  • mcgillij / amdfan: AMD fan-control utility, forked from amdgpu-fan and updated (2K · 38 · 9)
  • grillcheese-ai / grilly: GPU-accelerated neural network operations using Vulkan compute shaders (1K · 26 · 1)
  • uccl-project / uccl: UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g., GPU-driven) (1K · 1K · 144)
  • nabla-ml / nabla-ml: Nabla: High-Performance Scientific Computing (1K · 335 · 13)
  • last9 / l9gpu: GPU telemetry with workload attribution. One OTLP agent per node ties hardware metrics (NVIDIA, AMD, Intel Gaudi) to the K8s pod or Slurm job burning the GPU, so you know who's paying for that idle H100 (1K · 10 · 2)
  • MedVisBonn / eyepie: A Python package to read, analyse, and visualize OCT and fundus data from various sources (1K · 88 · 17)
  • MedVisBonn / eyepy: A Python package to read, analyse, and visualize OCT and fundus data from various sources (843 · 88 · 17)
  • intexcor / gputop: A simple real-time GPU monitoring tool for NVIDIA, AMD, and Intel GPUs (800 · 9 · 0)
  • ssube / onnx-web: Web UI for running ONNX models (667 · 234 · 30)
  • vllm-project / vllm-xft: A high-throughput and memory-efficient inference and serving engine for LLMs (485 · 79K · 16K)
  • vllm-project / vllm-acc: A high-throughput and memory-efficient inference and serving engine for LLMs (484 · 79K · 16K)
  • vllm-project / vllm-hust: A high-throughput and memory-efficient inference and serving engine for LLMs (480 · 79K · 16K)
  • vllm-project / ai-dynamo-vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (437 · 79K · 16K)
  • vllm-project / wxy-test: A high-throughput and memory-efficient inference and serving engine for LLMs (394 · 2K · 1K)
  • vllm-project / nextai-vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (378 · 79K · 16K)
  • vllm-project / vllm-consul: A high-throughput and memory-efficient inference and serving engine for LLMs (306 · 79K · 16K)
  • vllm-project / vllm-npu: A high-throughput and memory-efficient inference and serving engine for LLMs (281 · 79K · 16K)
    • Data from PyPI, GitHub, ClickHouse, and BigQuery