Upload folder using huggingface_hub (#1)
- 513296088ff7a2a404fe129761109533f9475d1633e25da26d4f14bd41eb46fc (eb209e4c798d98e95f4aeee68236cf6a108486f0)
- README.md +85 -0
- config.json +48 -0
- configuration_stablelm_epoch.py +110 -0
- generation_config.json +6 -0
- model.safetensors +3 -0
- modeling_stablelm_epoch.py +687 -0
- smash_config.json +31 -0
- special_tokens_map.json +23 -0
- tokenizer.json +0 -0
- tokenizer_config.json +215 -0
README.md
ADDED
@@ -0,0 +1,85 @@
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: llmware/slim-extract
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU (see the sketch below). We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
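The distinction matters on CUDA devices, where kernel launches return before the GPU work has finished. A minimal sketch of the two timing styles (illustrative only, not Pruna's benchmark code; `model` and `input_ids` are assumed to be loaded as in the Setup section below):

```python
import time

import torch


def sync_latency_ms(model, input_ids, runs=10):
    # "Sync" style: force all queued GPU work to finish before reading the clock.
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        _ = model.generate(input_ids, max_new_tokens=16)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1000 / runs


def async_latency_ms(model, input_ids, runs=10):
    # "Async" style: stop the clock as soon as the output is usable on the CPU,
    # without forcing a full device synchronization first.
    start = time.perf_counter()
    for _ in range(runs):
        outputs = model.generate(input_ids, max_new_tokens=16)
        _ = outputs[0, -1].item()  # copying a value to the CPU blocks until it is ready
    return (time.perf_counter() - start) * 1000 / runs
```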
## Setup

You can run the smashed model with these steps:

0. Check that the requirements of the original repo llmware/slim-extract are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer


model = AutoModelForCausalLM.from_pretrained("PrunaAI/llmware-slim-extract-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-extract")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
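To strip the `<|endoftext|>` markers from the decoded text, you can pass `skip_special_tokens=True` to `tokenizer.decode`.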
## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model llmware/slim-extract, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
config.json
ADDED
@@ -0,0 +1,48 @@
```json
{
  "_name_or_path": "/ceph/hdd/staff/charpent/.cache/modelsxw87df0vsgqc68n2",
  "architectures": [
    "StableLMEpochForCausalLM"
  ],
  "auto_map": {
    "AutoConfig": "configuration_stablelm_epoch.StableLMEpochConfig",
    "AutoModelForCausalLM": "modeling_stablelm_epoch.StableLMEpochForCausalLM"
  },
  "bos_token_id": 0,
  "eos_token_id": 0,
  "hidden_act": "silu",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 6912,
  "max_position_embeddings": 4096,
  "model_type": "stablelm_epoch",
  "norm_eps": 1e-05,
  "num_attention_heads": 32,
  "num_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "quantization_config": {
    "_load_in_4bit": true,
    "_load_in_8bit": false,
    "bnb_4bit_compute_dtype": "bfloat16",
    "bnb_4bit_quant_storage": "uint8",
    "bnb_4bit_quant_type": "fp4",
    "bnb_4bit_use_double_quant": false,
    "llm_int8_enable_fp32_cpu_offload": false,
    "llm_int8_has_fp16_weight": false,
    "llm_int8_skip_modules": [
      "lm_head"
    ],
    "llm_int8_threshold": 6.0,
    "load_in_4bit": true,
    "load_in_8bit": false,
    "quant_method": "bitsandbytes"
  },
  "rope_pct": 0.25,
  "rope_theta": 10000,
  "rotary_scaling_factor": 1.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.40.0",
  "use_cache": true,
  "vocab_size": 50304
}
```
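The `quantization_config` block records how the checkpoint was quantized: 4-bit fp4 weights with bfloat16 compute and an unquantized `lm_head`, via bitsandbytes. As a hedged sketch, an equivalent on-the-fly quantization of the base model would use the standard `transformers` `BitsAndBytesConfig` API (variable names here are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization_config above: fp4 4-bit weights, bfloat16 compute,
# no double quantization, lm_head kept in higher precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
    llm_int8_skip_modules=["lm_head"],
)

model = AutoModelForCausalLM.from_pretrained(
    "llmware/slim-extract",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
```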
configuration_stablelm_epoch.py
ADDED
@@ -0,0 +1,110 @@
```python
# coding=utf-8
# Copyright 2023 Stability and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" StableLM Epoch model configuration"""
from transformers import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)


class StableLMEpochConfig(PretrainedConfig):
    r"""
    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        vocab_size (`int`, *optional*, defaults to 50_304):
            Vocabulary size of the StableLM model. Defines the number of different tokens that
            can be represented by the `inputs_ids` passed when calling [`StableLMEpochModel`].
        intermediate_size (`int`, *optional*, defaults to 6912):
            Dimension of the MLP representations.
        hidden_size (`int`, *optional*, defaults to 2560):
            Dimension of the decoder layers and the pooler layer.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        num_key_value_heads (`int`, *optional*, defaults to 32):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
            `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by meanpooling all the original heads within that group. For more details, check out [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string).
        rope_pct (`float`, *optional*, defaults to 0.25):
            Percentage of hidden dimensions to allocate to rotary embeddings.
        rope_theta (`float`, *optional*, defaults to 10000.0):
            The base period of the RoPE embeddings.
        max_position_embeddings (`int`, *optional*, defaults to 4096):
            The maximum sequence length that this model might ever be used with.
            Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing
            all weight matrices.
        norm_eps (`float`, *optional*, defaults to 1e-5):
            The epsilon used by the normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions
            (not used by all models). Only relevant if `config.is_decoder=True`.
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings.
    """
    model_type = "stablelm_epoch"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=50_304,
        intermediate_size=6912,
        hidden_size=2560,
        num_hidden_layers=32,
        num_attention_heads=32,
        num_key_value_heads=32,
        hidden_act="silu",
        rope_pct=0.25,
        rope_theta=10_000,
        max_position_embeddings=4096,
        initializer_range=0.02,
        norm_eps=1.0e-5,
        use_cache=True,
        bos_token_id=0,
        eos_token_id=2,
        tie_word_embeddings=False,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.intermediate_size = intermediate_size
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.rope_pct = rope_pct
        self.rope_theta = rope_theta
        self.initializer_range = initializer_range
        self.norm_eps = norm_eps
        self.use_cache = use_cache
        self.tie_word_embeddings = tie_word_embeddings
        super().__init__(
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
```
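The `num_key_value_heads` argument documented above selects between MHA, MQA, and GQA. A minimal sketch (illustrative; it assumes the file is importable from the working directory, and the non-default values are hypothetical — this checkpoint uses 32 KV heads, i.e. plain MHA):

```python
from configuration_stablelm_epoch import StableLMEpochConfig

mha_cfg = StableLMEpochConfig()                       # 32 query heads, 32 KV heads -> MHA
gqa_cfg = StableLMEpochConfig(num_key_value_heads=8)  # 4 query heads share each KV head -> GQA
mqa_cfg = StableLMEpochConfig(num_key_value_heads=1)  # all query heads share one KV head -> MQA
```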
generation_config.json
ADDED
@@ -0,0 +1,6 @@
```json
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "transformers_version": "4.40.0"
}
```
model.safetensors
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:42aacf7df9732022690a356eeaadc6cd757efb41e1f975dd8151750cb577961a
size 1943307560
```
modeling_stablelm_epoch.py
ADDED
@@ -0,0 +1,687 @@
```python
# coding=utf-8
# Copyright 2023 Stability AI, EleutherAI, and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This code is based off the following work:
# https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py
# https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py
""" PyTorch StableLM Epoch model. """
from typing import Optional, Tuple, Union
import math

import torch
import torch.utils.checkpoint
from torch import nn
from torch.nn import CrossEntropyLoss
from transformers.modeling_outputs import (
    BaseModelOutputWithPast,
    CausalLMOutputWithPast,
)
from transformers.modeling_utils import PreTrainedModel
from transformers.utils import logging
from .configuration_stablelm_epoch import StableLMEpochConfig


logger = logging.get_logger(__name__)


# Copied from transformers.models.bart.modeling_bart._make_causal_mask
def _make_causal_mask(
    input_ids_shape: torch.Size,
    dtype: torch.dtype,
    device: torch.device,
    past_key_values_length: int = 0,
):
    """Make causal mask used for bi-directional self-attention."""
    batch_size, tgt_len = input_ids_shape
    mask = torch.full((tgt_len, tgt_len), torch.finfo(torch.float16).min, device=device)
    mask_cond = torch.arange(mask.size(-1), device=device)
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)
    if past_key_values_length > 0:
        mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
    return mask[None, None, :, :].expand(batch_size, 1, tgt_len, tgt_len + past_key_values_length)


# Copied from transformers.models.bart.modeling_bart._expand_mask
def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
    """Expands attention_mask from `[batch_size, seq_len]` to `[batch_size, 1, tgt_seq_len, src_seq_len]`."""
    batch_size, src_len = mask.size()
    tgt_len = tgt_len if tgt_len is not None else src_len

    expanded_mask = mask[:, None, None, :].expand(batch_size, 1, tgt_len, src_len).to(dtype)
    inverted_mask = 1.0 - expanded_mask

    return inverted_mask.masked_fill(
        inverted_mask.to(torch.bool), torch.finfo(dtype).min
    )


class RotaryEmbedding(nn.Module):
    def __init__(
        self,
        dim: int,
        max_position_embeddings: int,
        base: int = 10_000,
        device: Optional[torch.device] = None,
    ):
        super().__init__()

        self.dim = dim
        self.max_position_embeddings = max_position_embeddings
        self.base = base
        inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2, device=device, dtype=torch.float32) / self.dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

        # Build here to make `torch.jit.trace` work.
        self._set_cos_sin_cache(
            seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype(),
        )

    def _set_cos_sin_cache(self, seq_len: int, device: torch.device, dtype: torch.dtype):
        self.max_seq_len_cached = seq_len
        t = torch.arange(self.max_seq_len_cached, device=device, dtype=torch.float32)

        # Don't do einsum, it converts fp32 to fp16 under AMP
        # freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        freqs = torch.outer(t, self.inv_freq)
        # Different from paper, but it uses a different permutation in order to obtain the same calculation
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False)

    def forward(self, x: torch.Tensor, seq_len: Optional[int] = None):
        # x: [batch_size, num_heads, seq_len, head_size]
        if seq_len > self.max_seq_len_cached:
            self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=torch.get_default_dtype())
        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )


def rotate_half(x: torch.Tensor):
    """Rotates half the hidden dims of the input."""
    x1, x2 = torch.chunk(x, 2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
    # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
    cos = cos.squeeze(1).squeeze(0)  # [seq_len, dim]
    sin = sin.squeeze(1).squeeze(0)  # [seq_len, dim]
    cos = cos[position_ids].unsqueeze(1)  # [batch_size, 1, seq_len, dim]
    sin = sin[position_ids].unsqueeze(1)  # [batch_size, 1, seq_len, dim]
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed

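# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original file): this checkpoint uses a
# partial-rotary scheme -- only the first `rotary_ndims = head_dim * rope_pct`
# channels of each head are rotated (80 * 0.25 = 20 here). The dummy shapes
# below are assumptions for demonstration:
#
#   q = k = torch.randn(1, 32, 8, 80)              # [batch, heads, seq, head_dim]
#   rot = RotaryEmbedding(dim=20, max_position_embeddings=4096)
#   cos, sin = rot(q, seq_len=8)                   # each [1, 1, 8, 20]
#   pos = torch.arange(8).unsqueeze(0)             # [1, 8]
#   q_rot, k_rot = apply_rotary_pos_emb(q[..., :20], k[..., :20], cos, sin, pos)
#   # q_rot, k_rot: [1, 32, 8, 20]; the remaining 60 channels pass through
#   # unrotated, exactly as done in Attention.forward below.
# ---------------------------------------------------------------------------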

class MLP(nn.Module):
    def __init__(self, config: StableLMEpochConfig):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.intermediate_size = config.intermediate_size
        self.gate_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
        self.up_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
        self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)
        self.act_fn = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))


def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """
    This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
    num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
    """
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)

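# Illustrative sketch (not part of the original file): with this checkpoint's
# config (32 query heads, 32 KV heads) n_rep == 1 and repeat_kv is a no-op.
# For a hypothetical GQA setup with 8 KV heads it would expand as follows:
#
#   kv = torch.randn(2, 8, 16, 80)      # [batch, kv_heads, seq, head_dim]
#   repeat_kv(kv, n_rep=4).shape        # -> torch.Size([2, 32, 16, 80])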

class Attention(nn.Module):
    def __init__(self, config: StableLMEpochConfig):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.hidden_size // self.num_heads
        self.num_key_value_heads = config.num_key_value_heads
        self.num_key_value_groups = self.num_heads // self.num_key_value_heads
        self.max_position_embeddings = config.max_position_embeddings

        if (self.head_dim * self.num_heads) != self.hidden_size:
            raise ValueError(
                f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
                f" and `num_heads`: {self.num_heads})."
            )
        self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(self.hidden_size, self.hidden_size, bias=False)

        self._init_rope()

    def _init_rope(self):
        self.rotary_ndims = int(self.head_dim * self.config.rope_pct)
        self.rotary_emb = RotaryEmbedding(
            self.rotary_ndims,
            max_position_embeddings=self.config.max_position_embeddings,
            base=self.config.rope_theta,
        )

    def forward(
        self,
        hidden_states: torch.FloatTensor,
        attention_mask: torch.FloatTensor,
        position_ids: torch.LongTensor,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: Optional[bool] = False,
        use_cache: Optional[bool] = False,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states)
        key_states = self.k_proj(hidden_states)
        value_states = self.v_proj(hidden_states)

        query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
        value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)

        query_rot = query_states[..., : self.rotary_ndims]
        query_pass = query_states[..., self.rotary_ndims :]
        key_rot = key_states[..., : self.rotary_ndims]
        key_pass = key_states[..., self.rotary_ndims :]

        kv_seq_len = key_states.shape[-2]
        if past_key_value is not None:
            kv_seq_len += past_key_value[0].shape[-2]
        cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_rot, key_rot, cos, sin, position_ids)

        # [batch_size, num_heads, seq_len, head_dim]
        query_states = torch.cat((query_states, query_pass), dim=-1)
        key_states = torch.cat((key_states, key_pass), dim=-1)

        if past_key_value is not None:
            # Reuse k, v, self_attention
            key_states = torch.cat((past_key_value[0], key_states), dim=2)
            value_states = torch.cat((past_key_value[1], value_states), dim=2)

        past_key_value = (key_states, value_states) if use_cache else None

        # Repeat k/v heads if n_kv_heads < n_heads
        key_states = repeat_kv(key_states, self.num_key_value_groups)
        value_states = repeat_kv(value_states, self.num_key_value_groups)

        attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)

        if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
            raise ValueError(
                f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
                f" {attn_weights.size()}"
            )

        if attention_mask is not None:
            if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
                )
            attn_weights = attn_weights + attention_mask

        # Upcast attention to fp32
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
        attn_output = torch.matmul(attn_weights, value_states)

        if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
            raise ValueError(
                f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
                f" {attn_output.size()}"
            )

        # Merge heads
        attn_output = attn_output.transpose(1, 2).contiguous()
        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)

        # Final linear projection
        attn_output = self.o_proj(attn_output)

        if not output_attentions:
            attn_weights = None

        return attn_output, attn_weights, past_key_value


class DecoderLayer(nn.Module):
    def __init__(self, config: StableLMEpochConfig):
        super().__init__()
        self.self_attn = Attention(config)
        self.mlp = MLP(config)
        self.input_layernorm = nn.LayerNorm(config.hidden_size, eps=config.norm_eps)
        self.post_attention_layernorm = nn.LayerNorm(config.hidden_size, eps=config.norm_eps)

    def forward(
        self,
        hidden_states: Optional[torch.FloatTensor],
        attention_mask: Optional[torch.FloatTensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: Optional[bool] = False,
        use_cache: Optional[bool] = False,
    ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]:
        residual = hidden_states

        hidden_states = self.input_layernorm(hidden_states)

        # Self Attention
        hidden_states, self_attn_weights, present_key_value = self.self_attn(
            hidden_states=hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
        )
        hidden_states = residual + hidden_states

        # Fully Connected
        residual = hidden_states
        hidden_states = self.post_attention_layernorm(hidden_states)
        hidden_states = self.mlp(hidden_states)
        hidden_states = residual + hidden_states

        outputs = (hidden_states,)

        if output_attentions:
            outputs += (self_attn_weights,)

        if use_cache:
            outputs += (present_key_value,)

        return outputs


class StableLMEpochPreTrainedModel(PreTrainedModel):
    """An abstract class to handle weights initialization and a simple interface
    for downloading and loading pretrained models.
    """

    config_class = StableLMEpochConfig
    base_model_prefix = "transformer"
    supports_gradient_checkpointing = True
    _no_split_modules = ["DecoderLayer"]
    _skip_keys_device_placement = "past_key_values"

    def _init_weights(self, module: nn.Module):
        """Initialize the weights"""
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()
        elif isinstance(module, nn.LayerNorm):
            module.bias.data.zero_()
            module.weight.data.fill_(1.0)

    def _set_gradient_checkpointing(self, module: nn.Module, value=False):
        if isinstance(module, StableLMEpochModel):
            module.gradient_checkpointing = value


class StableLMEpochModel(StableLMEpochPreTrainedModel):
    def __init__(self, config: StableLMEpochConfig):
        super().__init__(config)
        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, config.pad_token_id)
        self.layers = nn.ModuleList([DecoderLayer(config) for _ in range(config.num_hidden_layers)])
        self.norm = nn.LayerNorm(config.hidden_size, eps=config.norm_eps)

        self.gradient_checkpointing = False
        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embed_tokens

    def set_input_embeddings(self, value: nn.Module):
        self.embed_tokens = value

    # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
    def _prepare_decoder_attention_mask(
        self,
        attention_mask: torch.Tensor,
        input_shape: torch.Size,
        inputs_embeds: torch.Tensor,
        past_key_values_length: int,
    ):
        # Create causal mask
        # [batch_size, seq_len] -> [batch_size, 1, tgt_seq_len, src_seq_len]
        combined_attention_mask = None
        if input_shape[-1] > 1:
            combined_attention_mask = _make_causal_mask(
                input_shape,
                inputs_embeds.dtype,
                device=inputs_embeds.device,
                past_key_values_length=past_key_values_length,
            )

        if attention_mask is not None:
            # [batch_size, seq_len] -> [batch_size, 1, tgt_seq_len, src_seq_len]
            expanded_attn_mask = _expand_mask(
                attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]
            ).to(inputs_embeds.device)
            combined_attention_mask = (
                expanded_attn_mask
                if combined_attention_mask is None
                else expanded_attn_mask + combined_attention_mask
            )

        return combined_attention_mask

    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutputWithPast]:
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # Retrieve input_ids and inputs_embeds
        if input_ids is not None and inputs_embeds is not None:
            raise ValueError(
                "You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time"
            )
        elif input_ids is not None:
            batch_size, seq_length = input_ids.shape
        elif inputs_embeds is not None:
            batch_size, seq_length, _ = inputs_embeds.shape
        else:
            raise ValueError(
                "You have to specify either decoder_input_ids or decoder_inputs_embeds"
            )

        seq_length_with_past = seq_length
        past_key_values_length = 0

        if past_key_values is not None:
            past_key_values_length = past_key_values[0][0].shape[2]
            seq_length_with_past = seq_length_with_past + past_key_values_length

        if position_ids is None:
            device = input_ids.device if input_ids is not None else inputs_embeds.device
            position_ids = torch.arange(
                past_key_values_length,
                seq_length + past_key_values_length,
                dtype=torch.long,
                device=device,
            )
            position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
        else:
            position_ids = position_ids.view(-1, seq_length).long()

        if inputs_embeds is None:
            inputs_embeds = self.embed_tokens(input_ids)
        # Embed positions
        if attention_mask is None:
            attention_mask = torch.ones(
                (batch_size, seq_length_with_past),
                dtype=torch.bool,
                device=inputs_embeds.device,
            )
        attention_mask = self._prepare_decoder_attention_mask(
            attention_mask,
            (batch_size, seq_length),
            inputs_embeds,
            past_key_values_length,
        )

        hidden_states = inputs_embeds

        if self.gradient_checkpointing and self.training:
            if use_cache:
                logger.warning(
                    "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
                )
                use_cache = False

        # Decoder layers
        all_hidden_states = () if output_hidden_states else None
        all_self_attns = () if output_attentions else None
        next_decoder_cache = () if use_cache else None

        for idx, decoder_layer in enumerate(self.layers):
            if output_hidden_states:
                all_hidden_states += (hidden_states,)

            past_key_value = (
                past_key_values[idx] if past_key_values is not None else None
            )

            if self.gradient_checkpointing and self.training:

                def create_custom_forward(module):
                    def custom_forward(*inputs):
                        # None for past_key_value
                        return module(*inputs, past_key_value, output_attentions)

                    return custom_forward

                layer_outputs = torch.utils.checkpoint.checkpoint(
                    create_custom_forward(decoder_layer),
                    hidden_states,
                    attention_mask,
                    position_ids,
                )
            else:
                layer_outputs = decoder_layer(
                    hidden_states,
                    attention_mask=attention_mask,
                    position_ids=position_ids,
                    past_key_value=past_key_value,
                    output_attentions=output_attentions,
                    use_cache=use_cache,
                )

            hidden_states = layer_outputs[0]

            if use_cache:
                next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)

            if output_attentions:
                all_self_attns += (layer_outputs[1],)

        hidden_states = self.norm(hidden_states)

        # Add hidden states from the last decoder layer
        if output_hidden_states:
            all_hidden_states += (hidden_states,)

        next_cache = next_decoder_cache if use_cache else None
        if not return_dict:
            return tuple(
                v
                for v in [hidden_states, next_cache, all_hidden_states, all_self_attns]
                if v is not None
            )
        return BaseModelOutputWithPast(
            last_hidden_state=hidden_states,
            past_key_values=next_cache,
            hidden_states=all_hidden_states,
            attentions=all_self_attns,
        )


class StableLMEpochForCausalLM(StableLMEpochPreTrainedModel):
    _tied_weights_keys = ["lm_head.weight"]

    def __init__(self, config: StableLMEpochConfig):
        super().__init__(config)

        self.model = StableLMEpochModel(config)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    def get_output_embeddings(self):
        return self.lm_head

    def set_output_embeddings(self, new_embeddings: nn.Module):
        self.lm_head = new_embeddings

    def get_decoder(self):
        # The decoder is stored as `self.model`; the uploaded file returned
        # `self.transformer`, which does not exist on this class.
        return self.model

    def set_decoder(self, decoder):
        self.model = decoder

    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, CausalLMOutputWithPast]:
        output_attentions = (
            output_attentions
            if output_attentions is not None
            else self.config.output_attentions
        )
        output_hidden_states = (
            output_hidden_states
            if output_hidden_states is not None
            else self.config.output_hidden_states
        )
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )

        # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
        outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        hidden_states = outputs[0]
        logits = self.lm_head(hidden_states).float()

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            shift_logits = shift_logits.view(-1, self.config.vocab_size)
            shift_labels = shift_labels.view(-1)
            # Enable model parallelism
            shift_labels = shift_labels.to(shift_logits.device)
            loss = loss_fct(shift_logits, shift_labels)

        if not return_dict:
            output = (logits,) + outputs[1:]
            return (loss,) + output if loss is not None else output

        return CausalLMOutputWithPast(
            loss=loss,
            logits=logits,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

    def prepare_inputs_for_generation(
        self,
        input_ids,
        past_key_values: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.Tensor] = None,
        inputs_embeds: Optional[torch.Tensor] = None,
        **kwargs,
    ):
        # Trim decoder_input_ids if past is used
        if past_key_values and past_key_values[0] is not None:
            input_ids = input_ids[:, -1:]

        position_ids = kwargs.get("position_ids", None)
        if attention_mask is not None and position_ids is None:
            # Create position_ids on the fly for batch generation
            position_ids = attention_mask.long().cumsum(-1) - 1
            position_ids.masked_fill_(attention_mask == 0, 1)
            if past_key_values:
                position_ids = position_ids[:, -1].unsqueeze(-1)

        # If `inputs_embeds` are passed, we only want to use them in the 1st generation step
        if inputs_embeds is not None and past_key_values is None:
            model_inputs = {"inputs_embeds": inputs_embeds}
        else:
            model_inputs = {"input_ids": input_ids}

        model_inputs.update(
            {
                "attention_mask": attention_mask,
                "past_key_values": past_key_values,
                "use_cache": kwargs.get("use_cache"),
                "position_ids": position_ids,
            }
        )
        return model_inputs

    @staticmethod
    def _reorder_cache(past_key_values, beam_idx):
        reordered_past = ()
        for layer_past in past_key_values:
            reordered_past += (
                tuple(
                    past_state.index_select(0, beam_idx.to(past_state.device))
                    for past_state in layer_past
                ),
            )
        return reordered_past


StableLMEpochConfig.register_for_auto_class()
StableLMEpochForCausalLM.register_for_auto_class("AutoModelForCausalLM")
```
smash_config.json
ADDED
@@ -0,0 +1,31 @@
```json
{
  "api_key": null,
  "verify_url": "http://johnrachwan.pythonanywhere.com",
  "smash_config": {
    "pruners": "None",
    "pruning_ratio": 0.0,
    "factorizers": "None",
    "quantizers": "['llm-int8']",
    "weight_quantization_bits": 4,
    "output_deviation": 0.005,
    "compilers": "None",
    "static_batch": true,
    "static_shape": true,
    "controlnet": "None",
    "unet_dim": 4,
    "device": "cuda",
    "cache_dir": "/ceph/hdd/staff/charpent/.cache/modelsxw87df0v",
    "batch_size": 1,
    "model_name": "llmware/slim-extract",
    "task": "text_text_generation",
    "max_batch_size": 1,
    "qtype_weight": "torch.qint8",
    "qtype_activation": "torch.quint8",
    "qobserver": "<class 'torch.ao.quantization.observer.MinMaxObserver'>",
    "qscheme": "torch.per_tensor_symmetric",
    "qconfig": "x86",
    "group_size": 128,
    "damp_percent": 0.1,
    "save_load_fn": "bitsandbytes"
  }
}
```
special_tokens_map.json
ADDED
@@ -0,0 +1,23 @@
```json
{
  "bos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
tokenizer.json
ADDED
The diff for this file is too large to render; see the raw diff.
tokenizer_config.json
ADDED
@@ -0,0 +1,215 @@
```json
{
  "add_bos_token": false,
  "add_eos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<|padding|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50254": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50255": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50256": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50257": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50258": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50259": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50260": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50261": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50262": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50263": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50264": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50265": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50266": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50267": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50268": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50269": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50270": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50271": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50272": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50273": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50274": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50275": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "50276": {
      "content": " ",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "legacy": false,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "tokenizer_class": "GPTNeoXTokenizer",
  "unk_token": "<|endoftext|>"
}
```