- Extending LLMs' Context Window with 100 Samples (arXiv:2401.07004, published Jan 13, 2024)
- LLM in a flash: Efficient Large Language Model Inference with Limited Memory (arXiv:2312.11514, published Dec 12, 2023)