PyPI Stats

Kimi Python Packages

Python packages tagged with the GitHub topic "kimi", sorted by relevance. Each entry lists monthly PyPI downloads, GitHub stars, and forks.
  • vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (8.9M downloads · 79K stars · 16K forks)
  • vllm-project/vllm-tpu: A high-throughput and memory-efficient inference and serving engine for LLMs (145K downloads · 79K stars · 16K forks)
  • ThreeFish-AI/coding-proxy: A high-availability, transparent, and smart multi-vendor proxy for Claude Code. Supports Claude Plans, GitHub Copilot, Google Antigravity, ZAI/GLM, MiniMax, Qwen, Xiaomi, Kimi, Doubao... (14K downloads · 15 stars · 1 fork)
  • Shelpuk-AI-Technology-Consulting/kitty-bridge: Universal LLM bridge for AI agents. Use Claude Code with MiniMax, Codex with GLM, or Gemini CLI with OpenRouter; one command, any provider. Works with coding agents, OpenClaw, Hermes, and others. (8K downloads · 8 stars · 2 forks)
  • Amanbig/devorch: A terminal-native, multi-provider assistant that plans, executes, and tracks developer tasks rather than just answering prompts, similar to Claude Code and Gemini CLI. (579 downloads · 4 stars · 0 forks)
  • LLMPages/llm-onesdk: OneSDK is a Python library that provides a unified interface for interacting with various Large Language Model (LLM) providers. (492 downloads · 2 stars · 0 forks)
  • vllm-project/vllm-xft: A high-throughput and memory-efficient inference and serving engine for LLMs (485 downloads · 79K stars · 16K forks)
  • vllm-project/vllm-acc: A high-throughput and memory-efficient inference and serving engine for LLMs (484 downloads · 79K stars · 16K forks)
  • vllm-project/vllm-hust: A high-throughput and memory-efficient inference and serving engine for LLMs (480 downloads · 79K stars · 16K forks)
  • vllm-project/ai-dynamo-vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (437 downloads · 79K stars · 16K forks)
  • vllm-project/wxy-test: A high-throughput and memory-efficient inference and serving engine for LLMs (394 downloads · 2K stars · 1K forks)
  • vllm-project/nextai-vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (378 downloads · 79K stars · 16K forks)
  • vllm-project/vllm-consul: A high-throughput and memory-efficient inference and serving engine for LLMs (306 downloads · 79K stars · 16K forks)
  • vllm-project/vllm-npu: A high-throughput and memory-efficient inference and serving engine for LLMs (281 downloads · 79K stars · 16K forks)
  • vllm-project/vllm-musa: A high-throughput and memory-efficient inference and serving engine for LLMs (279 downloads · 79K stars · 16K forks)
  • SertraFurr/kimi4free: Simple API wrapper for Kimi (215 downloads · 4 stars · 1 fork)
  • vllm-project/vllm-emissary: A high-throughput and memory-efficient inference and serving engine for LLMs (188 downloads · 79K stars · 16K forks)
  • vllm-project/vllm-usf: A high-throughput and memory-efficient inference and serving engine for LLMs (166 downloads · 79K stars · 16K forks)
  • shibing624/chatpilot: ChatPilot: Chat Agent Web UI that implements a chat frontend with Google search, file/URL conversation (RAG), and code-interpreter features; a reproduction of Kimi Chat (drag a file in, paste a URL). (158 downloads · 599 stars · 59 forks)
  • vllm-project/vllm-rocm: A high-throughput and memory-efficient inference and serving engine for LLMs (138 downloads · 79K stars · 16K forks)
  • AirTouch666/aether-cli: A command-line interface for interacting with various AI models. (121 downloads · 0 stars · 0 forks)
  • vllm-project/tilearn-infer: A high-throughput and memory-efficient inference and serving engine for LLMs (118 downloads · 79K stars · 16K forks)
  • vllm-project/vllm-test-tpu: A high-throughput and memory-efficient inference and serving engine for LLMs (116 downloads · 79K stars · 16K forks)
  • vllm-project/vllm-online: A high-throughput and memory-efficient inference and serving engine for LLMs (110 downloads · 79K stars · 16K forks)
  • Data from PyPI, GitHub, ClickHouse, and BigQuery