Update README.md
# Wenzhong-GPT2-110M
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
善于处理NLG任务,中文版的GPT2-Small。

Good at handling NLG tasks; the Chinese version of GPT2-Small.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言生成 NLG | 闻仲 Wenzhong | GPT2 | 110M | - |
## 模型信息 Model Information
类似于Wenzhong2.0-GPT2-3.5B-chinese,我们实现了一个small版本的12层的Wenzhong-GPT2-110M,并且在悟道(300G版本)上面进行预训练。

Similar to Wenzhong2.0-GPT2-3.5B-chinese, we implemented a small, 12-layer Wenzhong-GPT2-110M and pre-trained it on the Wudao Corpus (300G version).
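As a quick sanity check of the figures above (a minimal sketch, not part of the original card; it assumes the checkpoint is published under the hub id `IDEA-CCNL/Wenzhong-GPT2-110M`), the depth and parameter count can be read off the loaded model:

```python
from transformers import GPT2LMHeadModel

# Load the checkpoint and report its depth and total parameter count.
model = GPT2LMHeadModel.from_pretrained('IDEA-CCNL/Wenzhong-GPT2-110M')
n_params = sum(p.numel() for p in model.parameters())
print(f'layers: {model.config.n_layer}, parameters: {n_params / 1e6:.0f}M')
```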
## 使用 Usage
### 加载模型 Loading Models
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Model id on the Hugging Face Hub (the repo this card belongs to).
hf_model_path = 'IDEA-CCNL/Wenzhong-GPT2-110M'
tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path)
model = GPT2LMHeadModel.from_pretrained(hf_model_path)
```
### 使用示例 Usage Examples
```python
question = "北京是中国的"
inputs = tokenizer(question, return_tensors='pt')

# Sampling settings are illustrative; adjust them to taste.
generation_output = model.generate(**inputs,
                                   return_dict_in_generate=True,
                                   max_length=150,
                                   do_sample=True,
                                   top_p=0.6,
                                   eos_token_id=50256,  # GPT-2's <|endoftext|>
                                   pad_token_id=0,
                                   num_return_sequences=5)

for idx, sentence in enumerate(generation_output.sequences):
    # Decode each sample and cut it at the end-of-text marker.
    print('next sentence %d:\n' % idx,
          tokenizer.decode(sentence).split('<|endoftext|>')[0])
    print('*' * 40)
```
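For quick experiments, the same checkpoint can also be driven through the `text-generation` pipeline. This is a minimal sketch rather than part of the original card, and it assumes the hub id `IDEA-CCNL/Wenzhong-GPT2-110M` used above:

```python
from transformers import pipeline

# One-call wrapper around the tokenizer and model shown above.
generator = pipeline('text-generation', model='IDEA-CCNL/Wenzhong-GPT2-110M')
print(generator("北京是中国的", max_length=50, do_sample=True, top_p=0.6))
```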
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):

If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):

```text
@article{fengshenbang,
  author  = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
  title   = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal = {CoRR},
  volume  = {abs/2209.02970},
  year    = {2022}
}
```

也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|