- Scaling Vision with Sparse Mixture of Experts
  Paper • 2106.05974 • Published • 3
- Routers in Vision Mixture of Experts: An Empirical Study
  Paper • 2401.15969 • Published • 2
- Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts
  Paper • 2206.02770 • Published • 3
- Experts Weights Averaging: A New General Training Scheme for Vision Transformers
  Paper • 2308.06093 • Published • 2
Collections including paper arxiv:2308.06093

- Robust Mixture-of-Expert Training for Convolutional Neural Networks
  Paper • 2308.10110 • Published • 2
- Experts Weights Averaging: A New General Training Scheme for Vision Transformers
  Paper • 2308.06093 • Published • 2
- ConstitutionalExperts: Training a Mixture of Principle-based Prompts
  Paper • 2403.04894 • Published • 2
- Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models
  Paper • 2403.03432 • Published • 1

- Adaptive sequential Monte Carlo by means of mixture of experts
  Paper • 1108.2836 • Published • 2
- Convergence Rates for Mixture-of-Experts
  Paper • 1110.2058 • Published • 2
- Multi-view Contrastive Learning for Entity Typing over Knowledge Graphs
  Paper • 2310.12008 • Published • 2
- Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts
  Paper • 2308.11793 • Published • 2

- Non-asymptotic oracle inequalities for the Lasso in high-dimensional mixture of experts
  Paper • 2009.10622 • Published • 1
- MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
  Paper • 2401.15947 • Published • 48
- MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts
  Paper • 2401.04081 • Published • 70
- MoE-Infinity: Activation-Aware Expert Offloading for Efficient MoE Serving
  Paper • 2401.14361 • Published • 2

- Mixtral of Experts
  Paper • 2401.04088 • Published • 157
- MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
  Paper • 2401.15947 • Published • 48
- MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts
  Paper • 2401.04081 • Published • 70
- EdgeMoE: Fast On-Device Inference of MoE-based Large Language Models
  Paper • 2308.14352 • Published

- What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning
  Paper • 2312.15685 • Published • 17
- mistralai/Mixtral-8x7B-Instruct-v0.1
  Text Generation • Updated • 706k • 4.19k
- microsoft/phi-2
  Text Generation • Updated • 235k • 3.24k
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
  Text Generation • Updated • 1.25M • 1.09k

- QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models
  Paper • 2310.16795 • Published • 26
- Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference
  Paper • 2308.12066 • Published • 4
- Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference
  Paper • 2303.06182 • Published • 1
- EvoMoE: An Evolutional Mixture-of-Experts Training Framework via Dense-To-Sparse Gate
  Paper • 2112.14397 • Published • 1

- Experts Weights Averaging: A New General Training Scheme for Vision Transformers
  Paper • 2308.06093 • Published • 2
- Platypus: Quick, Cheap, and Powerful Refinement of LLMs
  Paper • 2308.07317 • Published • 23
- Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers
  Paper • 2211.11315 • Published • 1
- LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
  Paper • 2307.13269 • Published • 31

- Experts Weights Averaging: A New General Training Scheme for Vision Transformers
  Paper • 2308.06093 • Published • 2
- Weight Averaging Improves Knowledge Distillation under Domain Shift
  Paper • 2309.11446 • Published • 1
- SWAMP: Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning
  Paper • 2305.14852 • Published • 1
- Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging
  Paper • 2306.16788 • Published • 1