PyPI Stats

Search Packages

Find Python packages by name, description, GitHub topic, or filter by metrics
intel
neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

22K 3K 304
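Low-bit quantization of the kind neural-compressor automates maps floating-point weights onto a small integer grid. A minimal, library-free sketch of symmetric per-tensor INT8 quantization — the function names and the round-trip example are illustrative, not the neural-compressor API:

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: map floats to [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # guard against all-zero input
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map the integers back to approximate floats."""
    return [qi * scale for qi in q]

# Round-trip a toy weight vector: error is bounded by scale / 2 per element.
weights = [0.42, -1.31, 0.07, 0.98]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Production toolkits add per-channel scales, zero points for asymmetric ranges, and calibration over real activation data, but the core map-to-grid step is the same.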
yzhao062
suod

(MLSys '21) An acceleration system for large-scale unsupervised heterogeneous outlier detection (anomaly detection)

16K 394 46
PaddlePaddle
paddleclas

A treasure chest for visual classification and recognition powered by PaddlePaddle

5K 6K 1K
wpferrell
resonance-layer

Emotional intelligence layer for AI - detects emotion, tracks wellbeing, and gives any LLM the context to respond to how people actually feel.

4K 0 0
langformers
langformers

🚀 Unified NLP Pipelines for Language Models

2K 19 1
yoshitomo-matsubara
torchdistill

A coding-free framework built on PyTorch for reproducible deep learning studies. Part of the PyTorch Ecosystem. 🏆 26 knowledge distillation methods presented at venues such as TPAMI, CVPR, ICLR, ECCV, NeurIPS, ICCV, and AAAI are implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.

2K 2K 144
intel
neural-compressor-pt

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

1K 3K 304
MedAliAdlouni
ssondo

Official SSONDO implementation and trained model weights (ICASSP 2026)

1K 4 0
GeoffreyWang1117
uni-layer

30+ layer-contribution metrics from 7 theoretical categories for PyTorch model compression, with bridges to Torch-Pruning and PEFT/LoRA.

996 0 0
open-mmlab
mmrazor

OpenMMLab Model Compression Toolbox and Benchmark.

901 2K 243
intel
neural-compressor-tf

Repository of Intel® Neural Compressor

739 3K 304
aquvitae
aquvitae

The easiest knowledge distillation library for lightweight deep learning

629 88 10
Zakk-Yang
ollama-rag

A programming framework for knowledge management

503 7 4
intel
neural-compressor-full

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

476 3K 304
waking95
easy-zh-bert

easy-bert is a Chinese NLP toolkit that provides many BERT variants with ready-made invocation and tuning methods for quick adoption; its clear design and code comments also make it well suited for learning.

456 83 14
SforAiDL
kd-lib

A PyTorch knowledge distillation library for benchmarking and extending work in the domains of knowledge distillation, pruning, and quantization.

450 649 61
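Knowledge distillation, the technique that KD-Lib, torchdistill, and aquvitae package up, trains a small student to match a large teacher's temperature-softened output distribution. A dependency-free sketch of the Hinton-style soft-target loss — the function names here are illustrative, not any library's API:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T yields a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in Hinton et al. (2015)."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

In practice this soft-target term is mixed with the ordinary cross-entropy on hard labels via a weighting coefficient; the loss is zero exactly when the student's logits reproduce the teacher's distribution.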
burning-cost
insurance-distill

GBM-to-GLM distillation for insurance pricing - surrogate factor tables for Radar/Emblem rating engines

418 0 0
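The "surrogate factor table" idea behind insurance-distill can be illustrated without any GBM: distill a black-box pricing function into per-level relativities by averaging its predictions within each level of a rating factor. Everything below (function names, the toy model, the `age_band` factor) is hypothetical:

```python
def factor_table(predict, rows, factor):
    """Distill a black-box predict(row) into a one-way factor table:
    mean predicted price per level, expressed relative to the overall mean."""
    levels = {}
    for row in rows:
        levels.setdefault(row[factor], []).append(predict(row))
    total = sum(sum(v) for v in levels.values())
    count = sum(len(v) for v in levels.values())
    overall = total / count
    return {lvl: (sum(v) / len(v)) / overall for lvl, v in levels.items()}

# Toy black-box model: young drivers cost 50% more than the base premium.
def model(row):
    return 300.0 * (1.5 if row["age_band"] == "young" else 1.0)

rows = [{"age_band": "young"}, {"age_band": "young"},
        {"age_band": "adult"}, {"age_band": "adult"}]
table = factor_table(model, rows, "age_band")  # relativities around 1.0
```

A real distillation would fit all factors jointly (e.g. a log-link GLM on the black box's predictions) so the table captures multiplicative structure rather than one-way averages, but the one-way version shows the shape of the output a rating engine consumes.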
intel
neural-solution

Repository of Intel® Neural Compressor

384 3K 304
intel
lpot

Repository of Intel® Low Precision Optimization Tool

384 3K 302
intel
neural-insights

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

340 3K 304
yoshitomo-matsubara
sc2bench

[TMLR] "SC2 Benchmark: Supervised Compression for Split Computing"

335 35 11
alibaba
easytransfer

EasyTransfer is designed to make the development of transfer learning in NLP applications easier.

276 862 161
zhangyikaii
zhijian

ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse

136 48 2
intel
ilit

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

94 3K 304
    • Data from PyPI, GitHub, ClickHouse, and BigQuery