PyPI Stats

Search Packages

Find Python packages by name, description, GitHub topic, or filter by metrics
thu-ml / sageattention

[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.

156K 3K 403
lucidrains / colt5-attention

Implementation of the conditionally routed attention from the CoLT5 architecture, in PyTorch.

110K 231 14
lucidrains / ring-attention-pytorch

Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch.

5K 548 35
Data from PyPI, GitHub, ClickHouse, and BigQuery