LIBMoE: A Library for comprehensive benchmarking Mixture of Experts in Large Language Models
Abstract
Mixture of Experts (MoE) plays an important role in the development of more efficient and effective large language models (LLMs). Due to the enormous resource requirements, studying large-scale MoE algorithms remains inaccessible to many researchers. This work develops LibMoE, a comprehensive and modular framework that streamlines the research, training, and evaluation of MoE algorithms. Built upon three core principles: (i) modular design, (ii) efficient training, and (iii) comprehensive evaluation, LibMoE makes MoE in LLMs more accessible to a wide range of researchers by standardizing the training and evaluation pipelines. Using LibMoE, we extensively benchmarked five state-of-the-art MoE algorithms over three different LLMs and 11 datasets under the zero-shot setting. The results show that, despite their unique characteristics, all MoE algorithms perform roughly similarly when averaged across a wide range of tasks. With its modular design and extensive evaluation, we believe LibMoE will be invaluable for researchers seeking to make meaningful progress towards the next generation of MoE and LLMs. Project page: https://fsoft-aic.github.io/fsoft-LibMoE.github.io.
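For readers new to the topic, the snippet below is a minimal sketch of the sparse MoE layer the abstract refers to: a learned router scores each token, dispatches it to its top-k feed-forward experts, and combines their outputs with the routing weights. It is written in plain PyTorch for illustration only and does not use LibMoE's API; all names (`SparseMoE`, `d_model`, `top_k`, etc.) are assumptions.

```python
# Illustrative sparse MoE layer with top-k routing (NOT LibMoE's API).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # One feed-forward "expert" per slot.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> flatten tokens for routing.
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.router(tokens)                        # (num_tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)  # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)


# Example: route a dummy batch through the layer.
layer = SparseMoE(d_model=64, d_hidden=256)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```

In practice, MoE implementations add load-balancing losses and capacity limits on top of this basic routing scheme; those details vary across the algorithms LibMoE benchmarks.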
Community
GitHub link: https://github.com/Fsoft-AIC/LibMoE
We have noticed new activity in the field of Mixture of Experts (MoE) and are excited to share the LibMoE framework.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- $\gamma$-MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models (2024)
- Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging (2024)
- MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router (2024)
- POINTS: Improving Your Vision-language Model with Affordable Strategies (2024)
- MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More (2024)