Liangtai Sun committed
Commit 0cd0d00
1 Parent(s): d89aafa

upload files

Files changed (2):
  1. README.md +88 -167
  2. smiles.model +3 -0

README.md CHANGED

---
license: agpl-3.0
language:
- en
tags:
- AI4S
- MoE
---

# SciDFM: Dialogue Foundation Model for Science

SciDFM is a pioneering open-source dialogue foundation model tailored for science. It integrates a mixture-of-experts architecture into a transformer-based framework to strengthen its scientific reasoning and understanding capabilities. SciDFM achieves strong performance on general scientific benchmarks such as SciEval and SciQ, and reaches state-of-the-art performance on domain-specific benchmarks among models of similar size.

## News

* **2024-06-28** The parameters of SciDFM-MoE-A5.6B-v1.0 are open-sourced! A technical report is coming soon.

## Model Details

SciDFM is based on a transformer architecture and follows the modifications introduced by Llama, i.e. RMSNorm, RoPE, and SwiGLU. SciDFM uses the same hyper-parameters as OpenLLaMa-3B. To better model knowledge from different disciplines, we replace the feed-forward blocks with Mixture-of-Experts (MoE) layers.
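
For readers unfamiliar with the MoE design, the sketch below shows what a top-k routed MoE feed-forward block looks like in principle. It is illustrative only: the expert count, the routing scheme, and the plain SiLU feed-forward used here are assumptions, not SciDFM's actual configuration, and the sizes are placeholders echoing OpenLLaMa-3B.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Toy top-k routed MoE feed-forward block (illustrative, not SciDFM's exact layer)."""

    def __init__(self, d_model=3200, d_ff=8640, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)  # token -> expert scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                                   # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)            # routing probabilities
        weights, chosen = probs.topk(self.top_k, dim=-1)     # top-k experts per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = chosen[:, k] == e                     # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoEFeedForward()
y = layer(torch.randn(4, 3200))  # 4 tokens in, 4 tokens out
```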
 
## Training Details

SciDFM is pre-trained for two epochs on a large corpus containing ~300B science tokens and ~270B general tokens, consuming about 1.1T tokens in total (≈ (300B + 270B) × 2). We further fine-tune SciDFM on ~9.3M instruction-following samples for 5 epochs to improve performance on downstream benchmarks.

## Usage Details

### Local Inference

Here is an example of loading and running SciDFM locally:

```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM, GenerationConfig

model_name_or_id = "OpenDFM/SciDFM-MoE-A5.6B-v1.0"
tokenizer = LlamaTokenizer.from_pretrained(model_name_or_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)

chat_template = "<|user|>:{instruction}<|assistant|>:"
input_text = "What is Mixture-of-Experts (MoE) in computer science?"
input_text = chat_template.format(instruction=input_text)

inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
generation_config = GenerationConfig(
    do_sample=True,
    top_k=20,
    top_p=0.9,
    temperature=0.9,
    max_new_tokens=1024,
    eos_token_id=tokenizer.eos_token_id
)

outputs = model.generate(**inputs, generation_config=generation_config)
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0][len(input_text):]
print(generated_text.strip())
```
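
Optionally, if you would rather stream tokens to stdout as they are generated instead of waiting for the full completion, the standard `transformers` streamer can be attached to the same call; a minimal sketch reusing the `model`, `tokenizer`, `inputs`, and `generation_config` defined above:

```python
from transformers import TextStreamer

# Prints decoded tokens as they are generated, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, generation_config=generation_config, streamer=streamer)
```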

### SMILES preprocess

When your input involves SMILES notation, we recommend preprocessing it with the `rdkit` package to canonicalize the SMILES. Here is an example:

```python
from rdkit import Chem

def canonicalize_smiles(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return Chem.MolToSmiles(mol, isomericSmiles=True, kekuleSmiles=False)
```

or directly:

```python
from rdkit import Chem

def canonicalize_smiles(smiles):
    return Chem.CanonSmiles(smiles, useChiral=True)
```
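
As a quick check, both helpers should agree on the canonical form; for glycine the expected output is shown below (exact strings can vary slightly between RDKit versions):

```python
print(canonicalize_smiles("C(C(=O)O)N"))  # -> NCC(=O)O (glycine)
```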

### Special Tokens preprocess

If there is a SMILES expression in your input, please first process it with the following function:

```python
import sentencepiece as spm

smiles_model = spm.SentencePieceProcessor(model_file="smiles.model")

def convert_smiles(smiles_str):
    pieces = smiles_model.encode_as_pieces(smiles_str)[1:]
    smiles = "".join([f"[ChemDFM_Start_SMILES_Unit]{piece}[ChemDFM_End_SMILES_Unit]" for piece in pieces])
    return smiles

convert_smiles("C(C(=O)O)N")
```

And if there is a protein sequence in your input, please first process it with the following function:

```python
def convert_protein(p_str):
    res = [f"<<protein>>{s}" for s in p_str]
    return "".join(res)

convert_protein("MIRLGAPQTL")
```
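
Putting the helpers together, here is a minimal sketch of how a SMILES-containing prompt might be assembled before generation. It assumes the `model`, `tokenizer`, `chat_template`, and `generation_config` from the Local Inference example are already defined, and the question text is only an illustration:

```python
smiles = canonicalize_smiles("C(C(=O)O)N")                     # canonicalize with RDKit first
question = f"Describe the molecule {convert_smiles(smiles)}."  # wrap SMILES pieces in special tokens
prompt = chat_template.format(instruction=question)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, generation_config=generation_config)
generated = outputs[0][inputs["input_ids"].shape[1]:]          # keep only newly generated tokens
print(tokenizer.decode(generated, skip_special_tokens=True).strip())
```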

## Evaluation

We briefly compare SciDFM-MoE-A5.6B-v1.0 with similar-sized instruction-tuned LLMs on scientific evaluation benchmarks. The results are shown below:

| Model | SciEval | SciQ | ARC\_c | ARC\_e | GSM8K | MATH | MedQA | MMCQA | PMQA | Avg |
|--------------------|---------|-------|--------|--------|-------|-------|-------|-------|-------|-------|
| LLaMa2-7B | 27.06 | 57.00 | 36.43 | 46.59 | 3.94 | 3.96 | 26.32 | 29.84 | 66.80 | 32.95 |
| Galactica-6.7B | 46.28 | 74.20 | 44.28 | 61.83 | 2.80 | 6.32 | 30.48 | 36.46 | 48.80 | 38.91 |
| LLaMa2-13B | 33.88 | 78.10 | 56.66 | 72.35 | 22.82 | 3.90 | 32.68 | 34.28 | 77.80 | 45.45 |
| ChatGLM2-6B | 54.25 | 75.80 | 57.08 | 73.57 | 25.09 | 7.18 | 27.42 | 34.21 | 60.40 | 45.94 |
| Galactica-30B | 54.24 | 83.10 | 57.85 | 75.04 | 13.65 | 8.66 | 37.71 | 48.43 | 58.80 | 48.35 |
| LLaMa3-8B | 59.70 | 90.00 | 71.16 | 84.05 | 5.91 | 7.00 | 48.78 | 52.74 | 26.60 | 49.59 |
| ChatGLM3-6B | 51.13 | 77.60 | 60.84 | 75.97 | 60.27 | 23.52 | 24.59 | 31.39 | 51.80 | 50.53 |
| SciGLM-6B | 61.22 | 88.70 | 77.47 | 86.57 | 42.23 | 16.40 | 42.81 | 44.94 | 73.60 | 59.12 |
| SciDFM | 62.48 | 88.00 | 64.76 | 81.48 | 59.14 | 27.28 | 44.54 | 53.10 | 78.00 | 61.56 |
| ChatGLM3-6B-base | 60.34 | 89.00 | 78.58 | 87.37 | 59.82 | 22.64 | 42.73 | 45.14 | 74.40 | 61.96 |
| Llama3-8B-Instruct | 64.91 | 91.60 | 76.45 | 87.33 | 76.57 | 26.26 | 56.48 | 59.31 | 72.00 | 67.44 |

## Citation

```
coming soon...
```
smiles.model ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:b405d2e3ec0b31e44daa3831a1345e80ca7aa7362f6f99e02f118cc0b46468d6
size 6641
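
The `smiles.model` tokenizer file referenced in the Special Tokens preprocess section is stored in this repository via Git LFS; one way to fetch it programmatically is with `huggingface_hub` (a sketch, assuming the repository id used in the Local Inference example):

```python
from huggingface_hub import hf_hub_download

# Downloads smiles.model into the local Hugging Face cache and returns its path.
smiles_model_path = hf_hub_download(repo_id="OpenDFM/SciDFM-MoE-A5.6B-v1.0", filename="smiles.model")
```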