GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning • Paper 2505.20355 • Published May 26, 2025
QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference • Paper 2402.10076 • Published Feb 15, 2024