PyPI Stats

Inference Server Python Packages

Python packages whose GitHub repositories carry the inference-server topic, sorted by relevance. Each entry lists monthly downloads, GitHub stars, and forks.
  • roboflow/inference-gpu (1M downloads/mo, 2K stars, 260 forks): Turn any computer or edge device into a command center for your computer vision projects.
  • roboflow/inference-cli (827K downloads/mo, 2K stars, 260 forks): Turn any computer or edge device into a command center for your computer vision projects.
  • basetenlabs/truss (648K downloads/mo, 1K stars, 102 forks): The simplest way to serve AI/ML models in production.
  • basetenlabs/truss-transfer (300K downloads/mo, 1K stars, 102 forks): The simplest way to serve AI/ML models in production.
  • basetenlabs/baseten-performance-client (191K downloads/mo, 1K stars, 102 forks): The simplest way to serve AI/ML models in production.
  • roboflow/inference-sdk (161K downloads/mo, 2K stars, 260 forks): Turn any computer or edge device into a command center for your computer vision projects.
  • roboflow/inference (116K downloads/mo, 2K stars, 260 forks): Turn any computer or edge device into a command center for your computer vision projects.
  • containers/ramalama (11K downloads/mo, 3K stars, 337 forks): RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for inference in production, all through the familiar language of containers.
  • roboflow/inference-core (9K downloads/mo, 2K stars, 260 forks): Turn any computer or edge device into a command center for your computer vision projects.
  • roboflow/inference-cpu (7K downloads/mo, 2K stars, 260 forks): Turn any computer or edge device into a command center for your computer vision projects.
  • friendliai/friendli-client (5K downloads/mo, 50 stars, 7 forks): [⛔️ DEPRECATED] Friendli: the fastest serving engine for generative AI.
  • notAI-tech/fastdeploy (4K downloads/mo, 103 stars, 17 forks): Deploy DL/ML inference pipelines with minimal extra code.
  • coconut-labs/infergrid (2K downloads/mo, 1 star, 1 fork): Tenant-fair LLM inference orchestration on a single GPU. No Kubernetes.
  • basetenlabs/baseten-inference-client (2K downloads/mo, 1K stars, 102 forks): The simplest way to serve AI/ML models in production.
  • geniusrise/geniusrise-vision (1K downloads/mo, 7 stars, 1 fork): Hugging Face bolts for geniusrise.
  • coconut-labs/kvwarden (1K downloads/mo, 2 stars, 1 fork): Tenant-fair LLM inference orchestration on a single GPU. No Kubernetes.
  • notAI-tech/fdclient (753 downloads/mo, 103 stars, 17 forks): fastDeploy Python client.
  • pipeless-ai/pipeless-ai (721 downloads/mo, 850 stars, 52 forks): An open-source computer vision framework to create and deploy computer vision applications that scale in minutes.
  • pipeless-ai/pipeless-ai-cli (648 downloads/mo, 850 stars, 52 forks): An open-source computer vision framework to build and deploy apps in minutes.
  • geniusrise/geniusrise-audio (644 downloads/mo, 2 stars, 1 fork): Audio bolts for geniusrise.
  • geniusrise/geniusrise-text (563 downloads/mo, 5 stars, 2 forks): Text components powering LLMs & SLMs for the geniusrise framework.
  • underneathall/pinferencia (479 downloads/mo, 545 stars, 83 forks): Python + Inference: a model deployment library in Python. Simplest model inference server ever.
  • friendliai/periflow-client (477 downloads/mo, 50 stars, 7 forks): [⛔️ DEPRECATED] Friendli: the fastest serving engine for generative AI.
  • roboflow/smart-reid (298 downloads/mo, 2K stars, 260 forks): Turn any computer or edge device into a command center for your computer vision projects.
    • Data from PyPI, GitHub, ClickHouse, and BigQuery