PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics.
huggingface
peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

10.8M downloads · 21K stars · 2K forks
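The core idea behind parameter-efficient fine-tuning libraries like PEFT can be sketched in a few lines. This is a conceptual, pure-Python illustration of a LoRA linear layer, not the peft API: the base weight W stays frozen and only a low-rank update A @ B is trained, scaled by alpha / r. B is initialized to zero, so the adapter is a no-op before training.

```python
# Conceptual LoRA sketch (illustrative only, not the peft API).

def matmul(a, b):
    """Multiply matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def lora_forward(x, W, A, B, alpha=2.0, r=1):
    """y = x @ W + (alpha / r) * (x @ A) @ B."""
    base = matmul(x, W)                      # frozen base path
    delta = matmul(matmul(x, A), B)          # trainable low-rank path
    scale = alpha / r
    return [[b + scale * d for b, d in zip(br, dr)]
            for br, dr in zip(base, delta)]

x = [[1.0, 1.0]]                 # 1 x d_in input
W = [[1.0, 2.0], [3.0, 4.0]]     # frozen d_in x d_out base weight
A = [[0.5], [0.5]]               # d_in x r adapter (r = 1)
B_init = [[0.0, 0.0]]            # r x d_out adapter, starts at zero
```

With `B_init` at zero the output equals the frozen `x @ W`; training moves only A and B, i.e. `r * (d_in + d_out)` parameters instead of `d_in * d_out`.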
modelscope
ms-swift

Use PEFT or full-parameter training for CPT/SFT/DPO/GRPO across 600+ LLMs (Qwen3.6, DeepSeek-R1, GLM-5.1, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, GLM4.5v, Gemma4, Llava, Phi4, ...) (AAAI 2025).

171K downloads · 14K stars · 1K forks
ModelCloud
gptqmodel

LLM quantization (compression) toolkit with hardware acceleration support for Nvidia, AMD, and Intel GPUs and Intel/AMD/Apple CPUs via HF, vLLM, and SGLang.

38K downloads · 1K stars · 185 forks
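For context on what quantization toolkits like this do, here is the simplest baseline they build on: round-to-nearest symmetric quantization with one per-tensor scale. This is a hedged, pure-Python sketch of the concept only, not the gptqmodel API; GPTQ-style methods improve on it with error-compensating weight updates.

```python
# Round-to-nearest symmetric quantization (illustrative baseline only).

def quantize_symmetric(weights, bits=4):
    """Map floats to signed `bits`-bit ints with one per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1                 # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from ints and scale."""
    return [qi * scale for qi in q]

w = [0.1, -0.7, 0.3, 0.02]
q, s = quantize_symmetric(w)
w_hat = dequantize(q, s)
```

The reconstruction error of each unclipped weight is bounded by half the scale, which is why the per-group scale choice matters so much at 4 bits.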
hiyouga
llamafactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

29K downloads · 71K stars · 9K forks
ARahim3
mlx-tune

Fine-tune LLMs on your Mac with Apple Silicon. SFT, DPO, GRPO, Vision, TTS, STT, Embedding, and OCR fine-tuning — natively on MLX. Unsloth-compatible API.

14K downloads · 1K stars · 79 forks
modelscope
mcore-bridge

MCore-Bridge: Providing Megatron-Core model definitions for state-of-the-art large models and making Megatron training as simple as Transformers.

13K downloads · 55 stars · 11 forks
MakazhanAlpamys
soup-cli

Soup turns the pain of LLM fine-tuning into a simple workflow. One config, one command, done.

11K downloads · 53 stars · 7 forks
wuwangzhang1216
abliterix

Automated alignment adjustment for LLMs — direct steering, LoRA, and MoE expert-granular abliteration, optimized via multi-objective Optuna TPE.

2K downloads · 215 stars · 42 forks
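"Abliteration" in this sense usually means identifying a behavioral direction (e.g. a refusal direction) in a model's activation space and projecting it out. The core step can be sketched in pure Python; this is a conceptual illustration, not the abliterix API:

```python
import math

# Project a chosen direction out of a hidden activation:
# h' = h - (h . d) d, where d is a unit vector (illustrative only).

def unit(v):
    """Normalize a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def project_out(h, d):
    """Remove the component of h along unit direction d."""
    dot = sum(hi * di for hi, di in zip(h, d))
    return [hi - dot * di for hi, di in zip(h, d)]

d = unit([1.0, 0.0, 0.0])        # assumed "refusal" direction
h = [3.0, 4.0, 5.0]              # assumed hidden activation
h_clean = project_out(h, d)
```

After the projection, the activation has zero component along `d`; tools in this space differ mainly in how they find `d` and at which layers or experts they apply the edit.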
hiyouga
llmtuner

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

2K downloads · 71K stars · 9K forks
TUDB-Labs
moe-peft

An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT

1K downloads · 139 stars · 18 forks
shuhulx
finetunecheck

Automated base vs fine-tuned LLM comparison with forgetting detection, capability retention scoring, and visual diff reports.

884 downloads · 0 stars · 0 forks
stochasticai
xturing

Build, personalize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6

627 downloads · 3K stars · 212 forks
recursia-lab
anchor-vision

Python client for Anchor — PaliGemma2 multi-LoRA vision inference

559 downloads · 0 stars · 0 forks
tenseleyFlow
document-language-model

Document-first local LLM training, preference mining, retraining, and multi-target export from .dlm docs, codebases, and multimodal sources.

533 downloads · 0 stars · 0 forks
ARahim3
unsloth-mlx

Fine-tune LLMs on your Mac with Apple Silicon. SFT, DPO, GRPO, Vision, TTS, STT, Embedding, and OCR fine-tuning — natively on MLX. Unsloth-compatible API.

528 downloads · 1K stars · 79 forks
aletheiaprotocol-ai
aletheia-lora

Gradient-guided layer selection for efficient LoRA fine-tuning across architectures

474 downloads · 0 stars · 0 forks
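The general idea of gradient-guided layer selection can be shown in a few lines: score each layer by the norm of its gradient on a calibration batch, then attach adapters only to the top-k layers. A hypothetical sketch of that selection step, with illustrative names that are not the aletheia-lora API:

```python
# Pick the k layers with the largest gradient norms (illustrative only).

def select_layers(grad_norms, k):
    """Return the names of the k layers with the largest gradient norms."""
    ranked = sorted(grad_norms, key=grad_norms.get, reverse=True)
    return sorted(ranked[:k])    # sorted for stable, readable output

grad_norms = {"layers.0": 0.02, "layers.1": 0.31,
              "layers.2": 0.07, "layers.3": 0.25}
targets = select_layers(grad_norms, 2)
```

Adapting only high-gradient layers spends the LoRA parameter budget where fine-tuning signal is strongest, rather than spreading it uniformly.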
Aradhye2002
selective-optimizers

Official implementation of the paper "Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models"

416 downloads · 9 stars · 1 fork
ShadowLLM
shadow-peft

ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning

408 downloads · 35 stars · 3 forks
hiyouga
lazyllm-llamafactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

394 downloads · 71K stars · 9K forks
hiyouga
glmtuner

Fine-tuning ChatGLM-6B with PEFT

328 downloads · 4K stars · 464 forks
simplifine-llm
simplifine-alpha

An easy-to-use, open-source LLM fine-tuning library that handles the complexities of the process for you.

281 downloads · 96 stars · 4 forks
jackyoung27
s0-tuning

Tune the initial recurrent state of hybrid models. Zero inference overhead.

280 downloads · 4 stars · 1 fork
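Tuning only the initial recurrent state is easy to illustrate on a toy scalar recurrence h_t = a*h_{t-1} + x_t: treat h0 as the sole trainable parameter and fit it by gradient descent. Since the forward pass is unchanged, inference costs nothing extra. This is a toy conceptual sketch, not the s0-tuning API:

```python
# Toy initial-state tuning on a scalar linear recurrence (illustrative).

def run(h0, xs, a=0.5):
    """h_t = a * h_{t-1} + x_t; returns the final state."""
    h = h0
    for x in xs:
        h = a * h + x
    return h

def tune_h0(h0, xs, target, a=0.5, lr=2.0, steps=100):
    """Gradient descent on h0 alone; d(run)/d(h0) = a ** len(xs)."""
    grad_scale = a ** len(xs)            # sensitivity of output to h0
    for _ in range(steps):
        err = run(h0, xs, a) - target
        h0 -= lr * 2 * err * grad_scale  # d/dh0 of (run - target)**2
    return h0

h0 = tune_h0(0.0, [1.0, 0.0], target=1.0)
```

The large toy learning rate is safe here because the loss is quadratic in h0; the point is only that the recurrence itself never changes, so tuned and untuned models run at identical speed.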
alexsuw
easylora

Batteries-included toolkit for LoRA / QLoRA fine-tuning with Hugging Face Transformers

234 downloads · 0 stars · 0 forks
TUDB-Labs
mlora-cli

CLI tools for the mLoRA system.

228 downloads · 376 stars · 66 forks
    • Data from PyPI, GitHub, ClickHouse, and BigQuery