Update README.md
README.md CHANGED
@@ -30,10 +30,6 @@ img {
Palmyra was primarily pretrained with English text, though a trace amount of non-English data from CommonCrawl remains in the training corpus. Like GPT-3, Palmyra belongs to the family of decoder-only models and was therefore pretrained with a self-supervised causal language modeling (CLM) objective (a brief illustrative sketch follows after this hunk). For evaluation, Palmyra uses the prompts and general experimental setup of GPT-3; see the official paper for more details.

-The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
-dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
-dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
-GPT-2/GPT-3.

## Training data
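To make the causal language modeling objective mentioned above concrete: it is plain next-token prediction, where the labels are the input token ids shifted by one position. The following is a minimal, hypothetical sketch using random tensors in place of the model; it is not Palmyra's training code.

```python
# Hypothetical sketch of the causal language modeling (CLM) objective.
# Random logits stand in for a decoder-only model's output; this is not
# Palmyra's actual training code.
import torch
import torch.nn.functional as F

vocab_size = 50257                 # GPT-2/GPT-3 BPE vocabulary size
batch_size, seq_len = 2, 8
token_ids = torch.randint(0, vocab_size, (batch_size, seq_len))

# Stand-in for the model: one logit vector over the vocabulary per position.
logits = torch.randn(batch_size, seq_len, vocab_size)

# Positions 0..n-2 predict tokens 1..n-1 (labels are the inputs shifted by one).
shift_logits = logits[:, :-1, :].reshape(-1, vocab_size)
shift_labels = token_ids[:, 1:].reshape(-1)

loss = F.cross_entropy(shift_logits, shift_labels)
print(loss.item())
```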
@@ -48,10 +44,24 @@ Palmyra-small learns an inner representation of the English language that can be
This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-small", torch_dtype=torch.float16).cuda()
+
+# the fast tokenizer currently does not work correctly
+tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-small", use_fast=False)
+
+prompt = "What is the color of a carrot?\nA:"
+
+input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
+
+generated_ids = model.generate(input_ids)
+
+tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
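Not part of the diff above, but as a possible follow-up: `generate()` accepts standard decoding arguments when longer or sampled completions are wanted. A hedged sketch, assuming the `model`, `tokenizer`, and `prompt` objects from the snippet above:

```python
# Assumes `model`, `tokenizer`, and `prompt` from the README snippet above.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

generated_ids = model.generate(
    input_ids,
    max_new_tokens=32,   # cap the number of newly generated tokens
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```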
### Limitations and Biases