Many LoRA adapters. One shared basis. Up to 122× compression at scale.
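The 122× figure and the shared-basis mechanism are this project's claims; the sketch below only illustrates the underlying idea under assumed sizes and an assumed joint-SVD construction, not the actual pipeline. Many independently trained LoRA updates are re-expressed as small per-adapter cores over one shared pair of bases, so per-adapter storage shrinks as the fleet of adapters grows.

```python
import torch

torch.manual_seed(0)
d, rank, n_adapters, shared_rank = 256, 8, 100, 16    # illustrative sizes, not the project's

# Simulate a collection of independently trained LoRA adapters: dW_i = B_i @ A_i.
adapters = [(torch.randn(d, rank) * 0.02, torch.randn(rank, d) * 0.02)
            for _ in range(n_adapters)]
updates = torch.stack([B @ A for B, A in adapters])                      # (n, d, d)

# One shared pair of bases, here obtained from a joint truncated SVD over all updates.
U_full, _, _ = torch.linalg.svd(updates.permute(1, 0, 2).reshape(d, -1), full_matrices=False)
_, _, Vh_full = torch.linalg.svd(updates.reshape(-1, d), full_matrices=False)
U, V = U_full[:, :shared_rank], Vh_full[:shared_rank].T                  # (d, r_s) each

# Each adapter now keeps only a tiny (r_s x r_s) core: dW_i ~= U @ C_i @ V.T.
cores = [U.T @ dW @ V for dW in updates]

orig = n_adapters * rank * 2 * d                            # per-adapter B and A matrices
new = 2 * d * shared_rank + n_adapters * shared_rank ** 2   # shared bases + per-adapter cores
print(f"storage compression: {orig / new:.1f}x")
```

With these toy sizes the ratio is far below 122×; the point is only that the shared-basis cost is paid once while each additional adapter costs a small core.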
This repository implements the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.12410).
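As a rough orientation to the mixture-of-adaptations idea (a sketch, not this repository's code or API), the example below assumes a bottleneck-adapter form: several adaptation modules are trained with stochastic routing, and at inference they are merged by weight averaging so no routing cost remains. AdaMix's consistency regularization and weight-sharing details are omitted; all module names and sizes are illustrative.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfAdapters(nn.Module):
    def __init__(self, d_model=768, bottleneck=16, num_experts=4):
        super().__init__()
        self.down = nn.ModuleList([nn.Linear(d_model, bottleneck) for _ in range(num_experts)])
        self.up = nn.ModuleList([nn.Linear(bottleneck, d_model) for _ in range(num_experts)])
        self.act = nn.GELU()
        self.num_experts = num_experts

    def forward(self, x):
        if self.training:
            # Stochastic routing: one randomly chosen adaptation module per forward pass.
            i = random.randrange(self.num_experts)
            return x + self.up[i](self.act(self.down[i](x)))
        # Inference: merge the modules by averaging their weights (single adapter cost).
        down_w = torch.stack([m.weight for m in self.down]).mean(0)
        down_b = torch.stack([m.bias for m in self.down]).mean(0)
        up_w = torch.stack([m.weight for m in self.up]).mean(0)
        up_b = torch.stack([m.bias for m in self.up]).mean(0)
        h = self.act(F.linear(x, down_w, down_b))
        return x + F.linear(h, up_w, up_b)

adapter = MixtureOfAdapters()
x = torch.randn(2, 10, 768)
adapter.train(); y_train = adapter(x)   # routed through one random module
adapter.eval();  y_eval = adapter(x)    # merged, deterministic
print(y_train.shape, y_eval.shape)
```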
BiDoRA: Bi-Level Optimization for Parameter-Efficient Fine-Tuning of LLMs - Optimized for 3D Code Generation
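A minimal sketch of the bi-level idea, assuming a DoRA-style split of each weight into a low-rank direction component and a magnitude vector, and simple alternating first-order updates: the direction is fit on training batches (lower level) and the magnitude on held-out batches (upper level). Layer sizes, data, and the regression loss are placeholders, not the repository's 3D-code-generation setup or its actual hypergradient scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    def __init__(self, d_in=64, d_out=64, rank=8):
        super().__init__()
        # Frozen pretrained weight stand-in.
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.02, requires_grad=False)
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)   # direction (low-rank)
        self.B = nn.Parameter(torch.zeros(d_out, rank))         # direction (low-rank)
        self.m = nn.Parameter(self.weight.norm(dim=0, keepdim=True))  # per-column magnitude

    def forward(self, x):
        directed = self.weight + self.B @ self.A
        directed = directed / directed.norm(dim=0, keepdim=True)  # unit-norm columns
        return F.linear(x, self.m * directed)

layer = DoRALinear()
lower_opt = torch.optim.AdamW([layer.A, layer.B], lr=1e-3)   # direction, training split
upper_opt = torch.optim.AdamW([layer.m], lr=1e-3)            # magnitude, held-out split

target_w = torch.randn(64, 64) * 0.1   # placeholder regression target
def batch():
    x = torch.randn(32, 64)
    return x, x @ target_w

for step in range(100):
    # Lower level: fit the direction component on a training batch.
    x, y = batch()
    lower_opt.zero_grad(); F.mse_loss(layer(x), y).backward(); lower_opt.step()
    # Upper level: fit the magnitude on a held-out batch (first-order approximation).
    xv, yv = batch()
    upper_opt.zero_grad(); F.mse_loss(layer(xv), yv).backward(); upper_opt.step()
```

Splitting the two parameter groups across different data is what makes the procedure bi-level rather than plain joint fine-tuning; the held-out upper level acts as a check on the magnitudes learned over the training-fit directions.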