Attention
- System 2 Attention (is something you might need too) • Paper 2311.11829 • Published Nov 20, 2023 • 39
- Transformers are Multi-State RNNs • Paper 2401.06104 • Published Jan 11, 2024 • 35
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits • Paper 2402.17764 • Published Feb 27, 2024 • 602
Mamba+Transformers
- Jamba: A Hybrid Transformer-Mamba Language Model • Paper 2403.19887 • Published Mar 28, 2024 • 104