[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.
Measure and optimize the energy consumption of your AI applications!
NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference
A scalable & efficient active learning/data selection system for everyone.
FedScale is a scalable and extensible open-source federated learning (FL) platform.