When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models • arXiv:2406.07368 • Published Jun 11, 2024
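The title refers to the core appeal of linear attention for decoding: the key/value history can be folded into a fixed-size running state, so each generated token costs O(d²) regardless of sequence length, instead of softmax attention's O(t·d) over a growing KV cache. Below is a minimal sketch of that generic recurrence, not the paper's specific method; the feature map (elu+1) and the names `feature_map` and `LinearAttentionState` are illustrative assumptions.

```python
import numpy as np

def feature_map(x):
    # A common positive feature map for linear attention: elu(x) + 1.
    # (Illustrative choice; the paper may use a different kernel.)
    return np.where(x > 0, x + 1.0, np.exp(x))

class LinearAttentionState:
    """Running KV summary: decoding cost per step is O(d^2),
    independent of how many tokens have been generated."""
    def __init__(self, d):
        self.S = np.zeros((d, d))  # accumulates phi(k_i) v_i^T
        self.z = np.zeros(d)       # accumulates phi(k_i) for normalization

    def step(self, q, k, v):
        phi_k = feature_map(k)
        self.S += np.outer(phi_k, v)   # rank-1 update of the state
        self.z += phi_k
        phi_q = feature_map(q)
        # Output: phi(q) S / (phi(q) . z), the linear-attention readout.
        return (phi_q @ self.S) / (phi_q @ self.z + 1e-6)

d = 64
state = LinearAttentionState(d)
rng = np.random.default_rng(0)
for _ in range(5):  # five autoregressive decode steps, constant cost each
    q, k, v = rng.standard_normal((3, d))
    out = state.step(q, k, v)
print(out.shape)  # (64,)
```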
ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization • arXiv:2406.05981 • Published Jun 10, 2024
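The idea named in the title is to reparameterize pretrained weights so that multiplications become cheap bit shifts and additions, e.g. by rounding each weight to a signed power of two so w·x reduces to a shift of x plus a sign flip. The sketch below shows that generic power-of-two quantization pattern only; it is not ShiftAddLLM's actual post-training algorithm, and `to_shift_add` and `shift_add_matvec` are hypothetical names.

```python
import numpy as np

def to_shift_add(w, p_min=-8, p_max=0):
    """Round each weight to the nearest signed power of two: w ~ sign * 2**p."""
    sign = np.sign(w)
    p = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), p_min, p_max).astype(int)
    return sign, p

def shift_add_matvec(sign, p, x):
    """Approximate W @ x with W reparameterized as sign * 2**p.
    ldexp(x, p) = x * 2**p manipulates the float exponent directly,
    mirroring the integer bit shift a multiplier-free kernel would use;
    the sign application is just a negation folded into the accumulate."""
    shifted = np.ldexp(x[None, :], p)   # per-weight "shift" of the input
    return (sign * shifted).sum(axis=1)  # sign flip + adds only

# Quick check against the exact multiply-based matvec.
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 8))
x = rng.standard_normal(8)
sign, p = to_shift_add(W)
print(shift_add_matvec(sign, p, x))  # shift-add approximation
print(W @ x)                         # exact reference
```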