Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient Paper • 2502.05172 • Published Feb 7, 2025 • 1
Mixture of Tokens: Efficient LLMs through Cross-Example Aggregation Paper • 2310.15961 • Published Oct 24, 2023 • 1
Writing in the Margins: Better Inference Pattern for Long Context Retrieval Paper • 2408.14906 • Published Aug 27, 2024 • 143
MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts Paper • 2401.04081 • Published Jan 8, 2024 • 72