Marco-MoE: Open Multilingual Mixture-of-Expert Language Models with Efficient Upcycling
arXiv cs.AI
arXiv:2604.25578v1 Announce Type: cross Abstract: We present Marco-MoE, a suite of fully open multilingual sparse Mixture-of-Experts (MoE) models. Marco-MoE features a highly sparse design in which only around 5% of the total parameters are activated per input token. This extreme sparsity, combined with upcycling from dense models, enables efficient pre-training on 5T tokens. Our models surpass similarly sized competitors on English and multilingual benchmarks, achieving best-in-class performance.
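To make the sparsity claim concrete, the sketch below shows a generic top-k routed MoE feed-forward layer. It is not the Marco-MoE implementation; all dimensions, the expert count, and the top_k value are illustrative assumptions chosen so that only a small fraction of expert parameters is active per token, and the comment on upcycling reflects the common practice of initializing experts from a dense model's feed-forward weights rather than the paper's specific recipe.

```python
# Illustrative sketch of top-k MoE routing (assumed sizes, not Marco-MoE's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, num_experts=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is a small feed-forward block; only top_k of them run
        # for any given token, so most parameters stay inactive per token.
        # For "upcycling", each expert could instead be initialized by copying
        # a pre-trained dense model's feed-forward weights (assumption).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (num_tokens, d_model)
        logits = self.router(x)                # (num_tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            # Group tokens by which expert they were routed to in this slot.
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out

layer = TopKMoE()
tokens = torch.randn(8, 512)
print(layer(tokens).shape)   # torch.Size([8, 512]); 2 of 64 experts used per token
```

With 2 of 64 experts active per token, only a few percent of the expert parameters participate in each forward pass, which is the kind of extreme sparsity the abstract describes.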