OrionZheng committed on
Commit 201c930 · 1 Parent(s): 9077ad5

Upload README.md

Files changed (1)
  1. README.md +21 -13
README.md CHANGED
@@ -8,36 +8,44 @@ license: apache-2.0
  </p>
  <hr>

- # OpenMoE-8B-Chat
  OpenMoE is a project aimed at igniting the open-source MoE community! We are releasing a family of open-source Mixture-of-Experts (MoE) Large Language Models.

  Our project began in the summer of 2023. On August 22, 2023, we released the first batch of intermediate checkpoints (OpenMoE-base&8B), along with the data and code [[Twitter]](https://twitter.com/xuefz/status/1693696988611739947?s=61&t=Xc2k2W7vU_hlpNizGDCmOw). OpenMoE-8B training was subsequently completed in November 2023. After that, we embarked on exploring a 34B-scale model, which is still ongoing.

  As a small student team, instead of pursuing the best model with more data, computation, and manpower, we are devoted to fully sharing our training data, strategies, model architecture, weights, and everything we have with the community. We hope this project will promote research in this promising field and invite more contributors to work on open-source MoE projects together!

- [2024.01.12] The paper for the project and more evaluations are underway. For the latest information about the model, training, and evaluations, please visit our GitHub [repository](https://github.com/XueFuzhao/OpenMoE/tree/main).

  ## Model Weights
- Currently, three models are released in total: OpenMoE-base, OpenMoE-8B (and its chat version), and OpenMoE-34B (intermediate checkpoint at 200B tokens).
-
- We provide all these checkpoints on Huggingface (in PyTorch) and Google Cloud Storage (in JAX).

  | Model Name | Description | #Param | Huggingface |
  |----------------|-------------------------------------------------|----------|-------------|
- | OpenMoE-base | A small MoE model for debugging only | 637M | [Link](https://huggingface.co/OrionZheng/openmoe-base) |
- | OpenLLaMA-base | A dense counterpart of OpenMoE-base | 310M | [Link](https://huggingface.co/fuzhao/OpenLLaMA_Base) |
- | OpenMoE-8B-200B | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-200B/tree/main) |
- | OpenMoE-8B-890B | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-890B) |
- | **OpenMoE-8B-1.1T** | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b) |
  | **OpenMoE-8B-Chat (1.1T+SFT)** | OpenMoE-8B-1.1T supervised finetuned on the [WildChat GPT-4 Subset](https://huggingface.co/datasets/allenai/WildChat-nontoxic) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-chat) |
- | **OpenMoE-34B/32E (200B)** | 34B MoE with FLOPs comparable to a 7B LLaMA (no SFT) | 34B | [Link](https://huggingface.co/OrionZheng/openmoe-34b-200B) |

  The base models, which were trained on 128 billion tokens, served primarily for debugging purposes. After validating the effectiveness of our model architecture, we did not pursue further training. Consequently, their performance might not be very good, and the checkpoints are not suitable for practical applications. Better performance can be observed in our 8B or 34B versions.

- The OpenMoE-8B, with 4 MoE layers and 32 experts, has been trained on 1.1T tokens. The SFT version has also been released after we finetuned OpenMoE-8B-1.1T on the GPT-4 subset of the [WildChat](https://huggingface.co/datasets/allenai/WildChat-nontoxic) dataset. Besides, we also provide some intermediate checkpoints at 200B and 890B tokens for research purposes.

  We are still training our OpenMoE-34B, which is a MoE model with 8 MoE layers and 32 experts. We released the intermediate checkpoint trained on 200B tokens on Huggingface. If you are interested in the latest checkpoint, please feel free to drop Fuzhao an email ([email protected]).

 
  </p>
  <hr>

+ # OpenMoE-8B (890B tokens)
  OpenMoE is a project aimed at igniting the open-source MoE community! We are releasing a family of open-source Mixture-of-Experts (MoE) Large Language Models.

  Our project began in the summer of 2023. On August 22, 2023, we released the first batch of intermediate checkpoints (OpenMoE-base&8B), along with the data and code [[Twitter]](https://twitter.com/xuefz/status/1693696988611739947?s=61&t=Xc2k2W7vU_hlpNizGDCmOw). OpenMoE-8B training was subsequently completed in November 2023. After that, we embarked on exploring a 34B-scale model, which is still ongoing.

  As a small student team, instead of pursuing the best model with more data, computation, and manpower, we are devoted to fully sharing our training data, strategies, model architecture, weights, and everything we have with the community. We hope this project will promote research in this promising field and invite more contributors to work on open-source MoE projects together!

+ [2024.01.12] The paper for the project and more evaluations are underway. For more information about the model, training, and evaluations, please visit our GitHub [repository](https://github.com/XueFuzhao/OpenMoE/tree/main).

  ## Model Weights
+ Currently, three models are released in total: OpenMoE-base, OpenMoE-8B/8B-Chat, and OpenMoE-34B (at 200B tokens).

+ The table below lists the 8B/8B-Chat models, which have completed training on 1.1T tokens; a minimal loading sketch follows the table.

  | Model Name | Description | #Param | Huggingface |
  |----------------|-------------------------------------------------|----------|-------------|
+ | **OpenMoE-8B (1.1T)** | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b) |
  | **OpenMoE-8B-Chat (1.1T+SFT)** | OpenMoE-8B-1.1T supervised finetuned on the [WildChat GPT-4 Subset](https://huggingface.co/datasets/allenai/WildChat-nontoxic) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-chat) |
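
The PyTorch checkpoints above can typically be loaded through the `transformers` Auto classes. The snippet below is a minimal sketch rather than the official inference script: it assumes the Huggingface repositories ship custom OpenMoE modeling code (hence `trust_remote_code=True`), and the prompt format is only illustrative; refer to the GitHub repository linked above for the project's own usage instructions.

```python
# Minimal loading sketch (assumptions: the repo provides custom OpenMoE modeling
# code, so trust_remote_code=True is needed; the prompt format is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OrionZheng/openmoe-8b-chat"  # any "Huggingface" link from the tables

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    device_map="auto",  # requires `accelerate`; spreads the 8B parameters across available devices
)

prompt = "Question: What is a Mixture-of-Experts model?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
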
+
+ In addition, we provide all our intermediate checkpoints (base, 8B, 34B) for research purposes.
+
+ | Model Name | Description | #Param | Huggingface |
+ |----------------|-------------------------------------------------|----------|-------------|
+ | **OpenMoE-34B-200B** | 34B MoE with FLOPs comparable to a 7B LLaMA (no SFT) | 34B | [Link](https://huggingface.co/OrionZheng/openmoe-34b-200B) |
+ | OpenMoE-8B-200B | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-200B) |
+ | OpenMoE-8B-400B | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-400B) |
+ | OpenMoE-8B-600B | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-600B) |
+ | OpenMoE-8B-800B | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-800B) |
+ | OpenMoE-8B-1T | 8B MoE with FLOPs comparable to a 1.6B LLaMA (no SFT) | 8B | [Link](https://huggingface.co/OrionZheng/openmoe-8b-1T) |
+ | OpenMoE-base (128B) | A small MoE model for debugging only | 637M | [Link](https://huggingface.co/OrionZheng/openmoe-base) |
+ | OpenLLaMA-base (128B) | A dense counterpart of OpenMoE-base | 310M | [Link](https://huggingface.co/fuzhao/OpenLLaMA_Base) |
+
 
  The base models, which were trained on 128 billion tokens, served primarily for debugging purposes. After validating the effectiveness of our model architecture, we did not pursue further training. Consequently, their performance might not be very good, and the checkpoints are not suitable for practical applications. Better performance can be observed in our 8B or 34B versions.

+ The OpenMoE-8B, with 4 MoE layers and 32 experts, has been trained on 1.1T tokens. The SFT version has also been released after we finetuned OpenMoE-8B-1.1T on the GPT-4 subset of the [WildChat](https://huggingface.co/datasets/allenai/WildChat-nontoxic) dataset. The intermediate checkpoints at 200B, 400B, 600B, 800B, and 1T tokens can be used to study the training dynamics of the MoE architecture.
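
As an illustration of that use, one simple probe of training dynamics is to score the same held-out text with each intermediate checkpoint and track the language-modeling loss. The sketch below is hypothetical rather than part of the released code; it assumes the checkpoints follow the standard `transformers` causal-LM interface (loss computed from `labels`) and that `trust_remote_code=True` is enough to load them.

```python
# Hypothetical sketch: compare language-modeling loss of the intermediate
# OpenMoE-8B checkpoints on one fixed text. Each 8B model is loaded in turn and
# freed afterwards, which still requires substantial RAM/VRAM.
import gc
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoints = [
    "OrionZheng/openmoe-8b-200B",
    "OrionZheng/openmoe-8b-400B",
    "OrionZheng/openmoe-8b-600B",
    "OrionZheng/openmoe-8b-800B",
    "OrionZheng/openmoe-8b-1T",
]
text = "Mixture-of-Experts models route each token to a small subset of experts."

for ckpt in checkpoints:
    tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        ckpt, trust_remote_code=True, torch_dtype=torch.bfloat16
    )
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss  # standard causal-LM loss
    print(f"{ckpt}: loss = {loss.item():.3f}")
    del model
    gc.collect()
```
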

  We are still training our OpenMoE-34B, which is a MoE model with 8 MoE layers and 32 experts. We released the intermediate checkpoint trained on 200B tokens on Huggingface. If you are interested in the latest checkpoint, please feel free to drop Fuzhao an email ([email protected]).