GroveMoE-Base

🤗 Models   |    📑 Paper    |    🔗 GitHub

Highlights

We introduce GroveMoE, a new sparse architecture that uses adjugate experts for dynamic computation allocation. Key highlights:

  • Architecture: Novel adjugate experts grouped with ordinary experts; shared computation is executed once and then reused across the group, cutting FLOPs (see the sketch after this list).
  • Sparse Activation: 33B total parameters, with only 3.14–3.28B active per token.
  • Training: Mid-training + SFT, up-cycled from Qwen3-30B-A3B-Base; preserves prior knowledge while adding new capabilities.
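As a rough intuition only, the toy PyTorch sketch below groups ordinary experts and attaches one small "adjugate" expert to each group; that shared part is computed at most once per token and reused by every routed expert from the same group, which is why the active compute per token varies. All module shapes, the routing, and the combination rule here are illustrative assumptions, not the released GroveMoE implementation (see the paper for the actual architecture).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroveMoELayerSketch(nn.Module):
    """Toy illustration of grouped adjugate experts (not the released implementation)."""

    def __init__(self, d_model=64, d_ff=256, d_adj=32,
                 num_experts=8, experts_per_group=2, top_k=2):
        super().__init__()
        assert num_experts % experts_per_group == 0
        self.top_k = top_k
        self.experts_per_group = experts_per_group
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Ordinary experts: standard feed-forward blocks.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        # One small adjugate expert per group, shared by that group's ordinary experts.
        self.adjugate = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_adj), nn.GELU(), nn.Linear(d_adj, d_model))
            for _ in range(num_experts // experts_per_group)
        ])

    def forward(self, x):
        # x: (num_tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        weights, indices = gates.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):
            adj_cache = {}  # group id -> adjugate output, computed at most once per token
            for w, e in zip(weights[t], indices[t]):
                e = int(e)
                g = e // self.experts_per_group
                if g not in adj_cache:
                    # The shared computation runs once and is reused by every
                    # routed expert from the same group, which cuts FLOPs.
                    adj_cache[g] = self.adjugate[g](x[t])
                out[t] += w * (self.experts[e](x[t]) + adj_cache[g])
        return out


# Example: 4 tokens routed through the toy layer.
layer = GroveMoELayerSketch()
y = layer(torch.randn(4, 64))
print(y.shape)  # torch.Size([4, 64])
```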

Model Downloads

| Model | #Total Params | #Activated Params | Download |
|---|---|---|---|
| GroveMoE-Base | 33B | 3.14–3.28B | 🤗 HuggingFace |
| GroveMoE-Inst | 33B | 3.14–3.28B | 🤗 HuggingFace |
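A minimal loading sketch with Hugging Face Transformers is shown below. The model ID comes from this repository; `trust_remote_code=True` and the generation settings are assumptions for a custom MoE architecture and may need adjusting to your environment.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/GroveMoE-Base"

# trust_remote_code is assumed to be needed for the custom GroveMoE modeling code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # spread the 33B parameters across available devices
    trust_remote_code=True,
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```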

Citation

@article{GroveMoE,
title = {GroveMoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts},
author = {Wu, Haoyuan and Chen, Haoxing and Chen, Xiaodong and Zhou, Zhanchao and Chen, Tieyuan and Zhuang, Yihong and Lu, Guoshan and Zhao, Junbo and Liu, Lin and Huang, Zenan and Lan, Zhenzhong and Yu, Bei and Li, Jianguo},
journal = {arXiv preprint arXiv:2508.07785},
year = {2025}
}