Commit 00d6848 (parent: 03a376c) by Waterhorse: Update README.md

Files changed (1): README.md (+16, -138)
---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
- anon8231489123/ShareGPT_Vicuna_unfiltered
- OpenAssistant/oasst1
- vicgalle/alpaca-gpt4
---

# Chessgpt-Chat-v1

Chessgpt-Chat-v1 is the supervised fine-tuned (SFT) version of Chessgpt-Base-v1.

- Base Model: [Chessgpt-Base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
- Chat Version: [Chessgpt-Chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)
 

## Model Details

- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model for chess.

## GPU Inference

```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Turing’s contributions to the development of the modern computer were made in ...
"""
```
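The inference snippet feeds a bare prompt to the model. For chat-style chess questions, it can help to wrap the question in a fixed template before tokenization. The Q/A format and the `build_chess_prompt` helper below are illustrative assumptions, not a prompt format documented for this model; adjust them to match the format used during SFT.

```python
# NOTE: this Q/A template is a hypothetical example -- the exact chat
# format used during SFT is not documented in this card.
def build_chess_prompt(question: str) -> str:
    """Wrap a chess question in a simple Q/A-style template."""
    return f"Q: {question}\nA:"

prompt = build_chess_prompt("1.e4 e5 2.Nf3 Nc6 3.Bb5 a6. What should White play?")
print(prompt)
```

The resulting string can then be passed to `tokenizer(...)` exactly as in the GPU inference example.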

# Uses

Excluded uses are described below.

### Direct Use

`chessgpt-chat-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling.

#### Out-of-Scope Use

`chessgpt-chat-v1` is a language model trained on chess-related data and may not perform well on use cases outside the chess domain.
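A recurring chore in the policy-learning research mentioned under Direct Use is extracting the move a free-form chat response actually recommends. Below is a minimal sketch assuming answers embed a move in standard algebraic notation; the regex, the `first_move` helper, and the sample response are illustrative assumptions, not part of this model card.

```python
import re

# Rough SAN matcher: castling, or an optional piece letter / disambiguation,
# an optional capture marker, a destination square, optional promotion/check.
SAN_PATTERN = re.compile(
    r"\b(O-O(?:-O)?|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](?:=[QRBN])?[+#]?)"
)

def first_move(response):
    """Return the first SAN-looking token in a model response, or None."""
    match = SAN_PATTERN.search(response)
    return match.group(1) if match else None

print(first_move("The best move here is Nf3, developing the knight."))  # Nf3
```

A full evaluation would additionally check the extracted move for legality in the current position, e.g. with a chess library.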

#### Bias, Risks, and Limitations

Just as with any language model, chessgpt-chat-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.