PyPI Stats

PEFT Python Packages

Python packages tagged with the GitHub topic peft, sorted by relevance. Each entry shows monthly PyPI downloads, then GitHub stars and forks for the backing repository.
huggingface
peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

10.9M downloads · 21K stars · 2K forks
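The low-rank adaptation (LoRA) idea behind peft and many of the packages on this page can be sketched without any libraries: instead of updating a frozen weight matrix W, two small matrices A (r × in) and B (out × r) are trained, and the effective weight is W + (α/r)·B·A. The function names below are illustrative, not the actual peft API.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for small examples."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, i.e. the merged LoRA weight."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Toy 2x2 frozen weight with a rank-1 adapter.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]        # r=1, in_features=2
B = [[0.5], [0.25]]     # out_features=2, r=1
W_merged = lora_effective_weight(W, A, B, alpha=2, r=1)
# B @ A = [[0.5, 1.0], [0.25, 0.5]], scaled by 2 and added to W:
# W_merged == [[2.0, 2.0], [0.5, 2.0]]
```

Because A and B together have far fewer entries than W, only a small fraction of parameters is trained, which is what makes the method "parameter-efficient."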
modelscope
ms-swift

Use PEFT or full-parameter training to run CPT/SFT/DPO/GRPO on 600+ LLMs (Qwen3.6, DeepSeek-R1, GLM-5.1, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, GLM4.5v, Gemma4, Llava, Phi4, ...) (AAAI 2025).

176K downloads · 14K stars · 1K forks
ModelCloud
gptqmodel

LLM quantization (compression) toolkit with hardware acceleration for NVIDIA, AMD, and Intel GPUs and Intel/AMD/Apple CPUs via HF, vLLM, and SGLang.

39K downloads · 1K stars · 185 forks
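To make "quantization (compression)" concrete, here is a round-to-nearest symmetric int4 scheme for one weight row. This is deliberately much simpler than GPTQ's Hessian-guided method, which gptqmodel implements; it only illustrates what storing weights in 4 bits means numerically.

```python
def quantize_int4(weights):
    """Map floats to integers in [-8, 7] using one shared scale.
    Returns (scale, list of int4 values)."""
    scale = max(abs(w) for w in weights) / 7  # 7 = largest positive int4
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return scale, q

def dequantize(scale, q):
    """Reconstruct approximate floats from the quantized ints."""
    return [scale * v for v in q]

row = [0.12, -0.7, 0.33, 0.02]
scale, q = quantize_int4(row)
approx = dequantize(scale, q)
# q == [1, -7, 3, 0]; each reconstructed value is within one
# quantization step (= scale, here 0.1) of the original.
```

Real toolkits quantize per group or per channel and choose rounding to minimize layer output error, but the storage format — a scale plus small integers — is the same idea.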
hiyouga
llamafactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

28K downloads · 71K stars · 9K forks
modelscope
mcore-bridge

MCore-Bridge: Providing Megatron-Core model definitions for state-of-the-art large models and making Megatron training as simple as Transformers.

14K downloads · 55 stars · 11 forks
ARahim3
mlx-tune

Fine-tune LLMs on your Mac with Apple Silicon. SFT, DPO, GRPO, Vision, TTS, STT, Embedding, and OCR fine-tuning — natively on MLX. Unsloth-compatible API.

13K downloads · 1K stars · 79 forks
MakazhanAlpamys
soup-cli

Soup turns the pain of LLM fine-tuning into a simple workflow. One config, one command, done.

10K downloads · 53 stars · 7 forks
wuwangzhang1216
abliterix

Automated alignment adjustment for LLMs — direct steering, LoRA, and MoE expert-granular abliteration, optimized via multi-objective Optuna TPE.

2K downloads · 215 stars · 42 forks
hiyouga
llmtuner

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

2K downloads · 71K stars · 9K forks
TUDB-Labs
moe-peft

An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT

1K downloads · 139 stars · 18 forks
shuhulx
finetunecheck

Automated base vs fine-tuned LLM comparison with forgetting detection, capability retention scoring, and visual diff reports.

800 downloads · 0 stars · 0 forks
stochasticai
xturing

Build, personalize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6

675 downloads · 3K stars · 212 forks
recursia-lab
anchor-vision

Python client for Anchor — PaliGemma2 multi-LoRA vision inference

631 downloads · 0 stars · 0 forks
tenseleyFlow
document-language-model

Document-first local LLM training, preference mining, retraining, and multi-target export from .dlm docs, codebases, and multimodal sources.

608 downloads · 0 stars · 0 forks
ARahim3
unsloth-mlx

Fine-tune LLMs on your Mac with Apple Silicon. SFT, DPO, GRPO, Vision, TTS, STT, Embedding, and OCR fine-tuning — natively on MLX. Unsloth-compatible API.

547 downloads · 1K stars · 79 forks
hiyouga
lazyllm-llamafactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

501 downloads · 71K stars · 9K forks
aletheiaprotocol-ai
aletheia-lora

Gradient-guided layer selection for efficient LoRA fine-tuning across architectures

498 downloads · 0 stars · 0 forks
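"Gradient-guided layer selection" can be illustrated in a few lines: score each layer by the magnitude of its gradients on a probe batch, then attach adapters only to the top-k layers. This is generic pseudologic under that assumption, not the aletheia-lora API.

```python
def top_k_layers(grad_norms, k):
    """Given {layer_name: gradient norm on a probe batch}, return the k
    layers with the largest norms -- the ones where fine-tuning capacity
    is most likely to pay off."""
    ranked = sorted(grad_norms, key=grad_norms.get, reverse=True)
    return ranked[:k]

# Hypothetical per-layer gradient norms from one probe forward/backward pass.
grads = {
    "layers.0.attn": 0.8,
    "layers.0.mlp": 0.2,
    "layers.1.attn": 1.5,
    "layers.1.mlp": 0.4,
}
selected = top_k_layers(grads, 2)
# selected == ['layers.1.attn', 'layers.0.attn']
```

Restricting adapters to high-gradient layers trims trainable parameters further than uniform LoRA placement, at the cost of an extra probe pass to collect the gradient statistics.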
ShadowLLM
shadow-peft

ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning

434 downloads · 35 stars · 3 forks
Aradhye2002
selective-optimizers

Official implementation of the paper "Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models"

413 downloads · 9 stars · 1 fork
hiyouga
glmtuner

Fine-tuning ChatGLM-6B with PEFT

316 downloads · 4K stars · 464 forks
simplifine-llm
simplifine-alpha

An easy-to-use, open-source LLM fine-tuning library that handles all the complexities of the process for you.

271 downloads · 96 stars · 4 forks
jackyoung27
s0-tuning

Tune the initial recurrent state of hybrid models. Zero inference overhead.

248 downloads · 4 stars · 1 fork
TUDB-Labs
mlora-cli

CLI tools for the mLoRA system.

235 downloads · 376 stars · 66 forks
alexsuw
easylora

Batteries-included toolkit for LoRA / QLoRA fine-tuning with Hugging Face Transformers

233 downloads · 0 stars · 0 forks
    • Data from PyPI, GitHub, ClickHouse, and BigQuery