PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics.
arogozhnikov/einops
  Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
  23.2M downloads · 9K stars · 396 forks

ml-explore/mlx-lm
  Run LLMs with MLX
  1.5M downloads · 5K stars · 641 forks

ml-explore/mlx
  MLX: An array framework for Apple silicon
  927K downloads · 26K stars · 2K forks

ml-explore/mlx-metal
  MLX: An array framework for Apple silicon
  686K downloads · 26K stars · 2K forks

Blaizzy/mlx-vlm
  MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.
  349K downloads · 5K stars · 506 forks

ml-explore/mlx-whisper
  Examples in the MLX framework
  117K downloads · 9K stars · 1K forks

Blaizzy/mlx-audio
  A text-to-speech (TTS), speech-to-text (STT), and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speech analysis on Apple Silicon.
  78K downloads · 7K stars · 578 forks

transformerlab/transformerlab
  The open source research environment for AI researchers to seamlessly train, evaluate, and scale models from local hardware to GPU clusters.
  54K downloads · 5K stars · 510 forks

filipstrand/mflux
  MLX native implementations of state-of-the-art generative image models
  40K downloads · 2K stars · 141 forks

jjang-ai/vmlx
  vMLX - Home of JANG_Q - Cont Batch, Prefix, Paged, KV Cache Quant, VL - Powers MLX Studio. Image gen/edit, OpenAI/Anth
  36K downloads · 441 stars · 54 forks

ml-explore/mlx-cpu
  MLX: An array framework for Apple silicon
  24K downloads · 26K stars · 2K forks

AlexsJones/llmfit
  Hundreds of models & providers. One command to find what runs on your hardware.
  23K downloads · 25K stars · 1K forks

cubist38/mlx-openai-server
  A high-performance API server that provides OpenAI-compatible endpoints for MLX models. Developed in Python on the FastAPI framework, it offers an efficient, scalable, and user-friendly way to run MLX-based vision and language models locally behind an OpenAI-compatible interface.
  22K downloads · 325 stars · 58 forks

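Several servers in this list advertise "OpenAI-compatible endpoints," which means any OpenAI SDK or plain HTTP client can talk to them over the standard `/v1/chat/completions` wire format. A minimal stdlib sketch of that request shape — the base URL and model name below are placeholders, and nothing is actually sent:

```python
import json
from urllib.request import Request

def build_chat_request(base_url: str, model: str, prompt: str) -> Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder host and model name — substitute whatever the local server reports.
req = build_chat_request("http://localhost:8080", "my-local-model", "Hello!")
```

Because the payload and path follow the OpenAI convention, switching an existing client from the hosted API to a local MLX server is typically just a base-URL change.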
raullenchai/rapid-mlx
  The fastest local AI engine for Apple Silicon. 4.2x faster than Ollama, 0.08s cached TTFT, 100% tool calling. 17 tool parsers, prompt cache, reasoning separation, cloud routing. Drop-in OpenAI replacement. Works with Claude Code, Cursor, Aider.
  18K downloads · 635 stars · 74 forks

ml-explore/mlx-cuda-13
  MLX: An array framework for Apple silicon
  15K downloads · 26K stars · 2K forks

transformerlab/transformerlab-cli
  The open source research environment for AI researchers to seamlessly train, evaluate, and scale models from local hardware to GPU clusters.
  14K downloads · 5K stars · 510 forks

ARahim3/mlx-tune
  Fine-tune LLMs on your Mac with Apple Silicon. SFT, DPO, GRPO, Vision, TTS, STT, Embedding, and OCR fine-tuning — natively on MLX. Unsloth-compatible API.
  14K downloads · 1K stars · 79 forks

jjang-ai/jang
  JANG — GGUF for MLX. YOU MUST USE JANG_Q RUNTIME. Adaptive Mixed-Precision Quantization + Runtime for Apple Silicon
  9K downloads · 142 stars · 20 forks

tanavc1/llm-autotune
  Zero-config local LLM optimization for Ollama, LM Studio, and Apple Silicon MLX. Reduces TTFT by 40%, wall time for local agents by 46%, and RAM usage by 3x.
  7K downloads · 24 stars · 1 fork

ml-explore/mlx-cuda-12
  MLX: An array framework for Apple silicon
  7K downloads · 26K stars · 2K forks

lucasnewman/f5-tts-mlx
  Implementation of F5-TTS in MLX
  5K downloads · 626 stars · 62 forks

lucasnewman/vocos-mlx
  Implementation of 'Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis', in MLX
  4K downloads · 24 stars · 2 forks

ml-explore/mlx-cuda
  A framework for machine learning on Apple silicon.
  4K downloads · 26K stars · 2K forks

madroidmaq/mlx-omni-server
  MLX Omni Server is a local inference server powered by Apple's MLX framework, designed specifically for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
  3K downloads · 713 stars · 87 forks

    • Data from PyPI, GitHub, ClickHouse, and BigQuery