LiyuanLucasLiu committed
Commit 4d29616
Parent(s): 8dbc2fc
initial commit
Browse files
- CODE_OF_CONDUCT.md +9 -0
- LICENSE +22 -0
- NOTICE.md +2 -0
- README.md +139 -3
- SECURITY.md +41 -0
- added_tokens.json +13 -0
- config.json +39 -0
- configuration_grinmoe.py +181 -0
- generation_config.json +12 -0
- model-00001-of-00017.safetensors +3 -0
- model-00002-of-00017.safetensors +3 -0
- model-00003-of-00017.safetensors +3 -0
- model-00004-of-00017.safetensors +3 -0
- model-00005-of-00017.safetensors +3 -0
- model-00006-of-00017.safetensors +3 -0
- model-00007-of-00017.safetensors +3 -0
- model-00008-of-00017.safetensors +3 -0
- model-00009-of-00017.safetensors +3 -0
- model-00010-of-00017.safetensors +3 -0
- model-00011-of-00017.safetensors +3 -0
- model-00012-of-00017.safetensors +3 -0
- model-00013-of-00017.safetensors +3 -0
- model-00014-of-00017.safetensors +3 -0
- model-00015-of-00017.safetensors +3 -0
- model-00016-of-00017.safetensors +3 -0
- model-00017-of-00017.safetensors +3 -0
- model.safetensors.index.json +0 -0
- modeling_grinmoe.py +1703 -0
- special_tokens_map.json +30 -0
- tokenizer.json +0 -0
- tokenizer_config.json +130 -0
CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,9 @@
# Microsoft Open Source Code of Conduct

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).

Resources:

- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
LICENSE
ADDED
@@ -0,0 +1,22 @@
Microsoft.
Copyright (c) Microsoft Corporation.

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
NOTICE.md
ADDED
@@ -0,0 +1,2 @@
NOTICES AND INFORMATION
Do Not Translate or Localize
README.md
CHANGED
@@ -1,3 +1,139 @@
<h1 align="center"> 😁 MoE</h1>
<h4 align="center">GRIN: <em>GR</em>adient-<em>IN</em>formed MoE</h4>
<p align="center">
<a href="https://huggingface.co/microsoft/GRIN-MoE">Hugging Face</a>  |   <a href="https://arxiv.org/abs/2304.08612"> Tech Report</a>  |   <a href="https://github.com/microsoft/GRIN-MoE/blob/main/LICENSE">License</a>  |   <a href="https://github.com/microsoft/GRIN-MoE">Github</a>   |   <a href="https://github.com/microsoft/GRIN-MoE/tree/main#usage">Get Started</a> 
<br>

GRIN MoE is a top-2, 16x3.8B MoE model. It achieves exceptionally strong performance across a diverse set of tasks, particularly coding and mathematics. Compared to conventional MoE training, GRIN MoE differs in two main ways:

- GRIN uses SparseMixer-v2 to estimate the gradient related to expert routing, while conventional MoE training treats expert gating as a proxy for the gradient estimate (a minimal routing sketch follows this list).

- GRIN scales MoE training with neither expert parallelism nor token dropping, while conventional MoE training employs expert parallelism and deploys token dropping.
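The sketch below illustrates the inference-time top-2 routing this model uses (16 experts, 2 active per token, per `config.json`). It is a minimal illustration with plain softmax gating and linear stand-in experts; it is not SparseMixer-v2, whose training-time gradient estimator is described in the tech report.

```python
import torch
import torch.nn.functional as F

# Minimal top-2 MoE routing sketch (illustration only, not the GRIN code).
num_experts, top_k, hidden = 16, 2, 4096
tokens = torch.randn(8, hidden)                   # 8 token embeddings
router = torch.nn.Linear(hidden, num_experts)     # gating network
experts = torch.nn.ModuleList(                    # linear stand-ins for the expert MLPs
    torch.nn.Linear(hidden, hidden) for _ in range(num_experts)
)

gate_probs = F.softmax(router(tokens), dim=-1)             # (8, 16)
weights, chosen = torch.topk(gate_probs, top_k, dim=-1)    # top-2 experts per token
weights = weights / weights.sum(dim=-1, keepdim=True)      # renormalize over the 2 picks

out = torch.zeros_like(tokens)
for t in range(tokens.size(0)):
    for slot in range(top_k):
        e = chosen[t, slot].item()
        out[t] += weights[t, slot] * experts[e](tokens[t])
```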
## Intended Uses

### Primary Use Cases

The model is intended for commercial and research use in multiple languages. It is suited for general purpose AI systems and applications which require:

1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)

Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.

### Use Case Considerations

Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.

***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***

## Usage

### Command-line Demo

The simplest way to run inference with GRIN-MoE is the demo below, which sets up the environment, downloads the model weights, and runs inference on a math question.

```bash
# This script is available at `https://github.com/microsoft/GRIN-MoE/blob/main/demo/demo.sh` and requires docker to run.
curl -s https://raw.githubusercontent.com/microsoft/GRIN-MoE/main/demo/demo.sh | bash -s
```

### Interactive Demo

Run the following command to try the model with more questions and customized inputs; it launches a Jupyter notebook at `localhost:8887`.

```bash
# This script requires docker to run.
docker run --gpus all -p 8887:8887 --rm -it nvcr.io/nvidia/pytorch:24.08-py3 /bin/bash -c 'git clone https://github.com/microsoft/GRIN-MoE.git && jupyter notebook --port 8887 --notebook-dir GRIN-MoE/demo'
```
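Beyond the dockerized demos, the checkpoint can also be loaded directly with `transformers`, since `config.json` maps the Auto classes to the bundled `configuration_grinmoe.py`/`modeling_grinmoe.py`. A minimal sketch, assuming a recent `transformers` and enough GPU memory for the bf16 weights; the chat markup is an assumption inferred from `added_tokens.json` rather than something documented here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code is required: the repo ships its own GRINMoE modeling code.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/GRIN-MoE",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/GRIN-MoE", trust_remote_code=True)

prompt = "<|user|>\nWhat is 17 * 24?<|end|>\n<|assistant|>\n"  # assumed chat markup
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```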
## Benchmarks

To understand its capabilities, we compare GRIN MoE with a set of models over a variety of benchmarks using our internal benchmark platform. Below is a high-level overview of model quality on representative benchmarks:

### Popular Benchmarks

|               | GRIN MoE (16x3.8B) | Mixtral (8x7B) | Mixtral (8x22B) | Llama3 (8B) | Llama3 (70B) | GPT3.5 | GPT4o |
|---------------|-----------|---------|---------|--------|--------|--------|-------|
| MMLU | 79.4 | 70.5 | 76.2 | 66.5 | 80.2 | 71.4 | 86.9 |
| HellaSwag | 83.7 | 70.4 | 79.0 | 71.1 | 82.6 | 78.8 | 91.7 |
| ANLI | 60.6 | 55.2 | 65.2 | 57.3 | 68.3 | 58.1 | 75.7 |
| GSM-8K | 90.4 | 64.7 | 83.8 | 77.4 | 93.5 | 78.1 | 93.8 |
| Math | 58.9 | 11.1 | 41.8 | 28.2 | 51.2 | 45.3 | 67.8 |
| MedQA | 70.4 | 62.2 | 67.9 | 60.5 | 78.5 | 63.4 | 88.9 |
| AGIEval | 48.2 | 45.2 | 54.0 | 42.0 | 56.9 | 48.4 | 37.6 |
| TriviaQA | 73.9 | 78.5 | 82.2 | 67.7 | 84.5 | 85.8 | 66.0 |
| Arc-C | 92.0 | 87.3 | 91.3 | 82.8 | 93.0 | 87.4 | 97.0 |
| Arc-E | 98.0 | 95.6 | 96.9 | 93.4 | 98.2 | 96.3 | 99.0 |
| PIQA | 89.0 | 86.0 | 85.0 | 75.7 | 85.3 | 86.6 | 92.9 |
| SociQA | 79.5 | 75.9 | 78.2 | 73.9 | 81.1 | 68.3 | 81.4 |
| BigBench-Hard | 81.4 | 69.7 | 81.8 | 51.5 | 80.2 | 68.3 | 81.2 |
| WinoGrande | 81.4 | 62.0 | 75.3 | 65.0 | 83.3 | 68.8 | 89.3 |
| OpenBookQA | 89.8 | 85.8 | 88.6 | 82.6 | 91.8 | 86.0 | 95.2 |
| BoolQ | 83.4 | 77.6 | 82.7 | 80.9 | 89.1 | 79.1 | 90.6 |
| CommonSenseQA | 81.8 | 78.1 | 82.0 | 79.0 | 84.4 | 79.6 | 88.5 |
| TruthfulQA | 74.5 | 60.1 | 67.4 | 63.2 | 81.9 | 85.8 | 85.6 |
| HumanEval | 74.4 | 37.8 | 39.6 | 60.4 | 78.7 | 62.2 | 92.1 |
| MBPP | 80.3 | 60.2 | 70.7 | 67.7 | 81.3 | 77.8 | 90.4 |
| Average | 78.6 | 66.7 | 74.5 | 67.3 | 81.2 | 73.8 | 84.8 |
### Livebench

Performance on LiveBench-2024-07-25. Models are ranked by their average score (AVG). *Baseline results are referenced from the official benchmark.

|                              | Reasoning | Coding | Mathematics | Data Analysis | Language | IF | AVG |
|------------------------------|-----------|----------|--------------|---------------|----------|----------|----------|
| Claude-3-haiku* | 29.3 | 24.5 | 25.7 | 41.5 | 30.1 | 64.0 | 35.9 |
| Mixtral-8x22B-instruct-v0.1* | 29.3 | 32.0 | 28.3 | 31.7 | 26.5 | 63.1 | 35.2 |
| GPT-3.5-turbo-0125* | 26.7 | 27.7 | 26.9 | 41.2 | 24.2 | 60.5 | 34.5 |
| **GRIN MoE** | **35.3** | **23.7** | **29.8** | **32.0** | **16.9** | **57.6** | **32.5** |
| Mistral-small-2402* | 26.0 | 21.2 | 28.2 | 31.9 | 22.1 | 63.9 | 32.2 |
| Command-r-plus* | 28.7 | 19.5 | 24.9 | 24.6 | 23.9 | 71.5 | 32.2 |
| Gemma-2-9B-it* | 17.3 | 22.5 | 24.0 | 35.1 | 27.6 | 61.6 | 31.3 |
## Training

### Model

|                     |     |
|---------------------|-----|
| Developer | Microsoft |
| Architecture | GRIN MoE has 16x3.8B parameters with **6.6B active parameters** when using 2 experts. The model is a mixture-of-experts, decoder-only Transformer using a tokenizer with a vocabulary size of 32,064. |
| Inputs | Text. It is best suited for prompts using chat format. |
| Context length | 4K tokens |
| GPUs | 512 H100-80G |
| Training time | 18 days |
| Training data | 4.0T tokens |
| Outputs | Generated text in response to the input |
| Dates | Trained between April and June 2024 |
| Status | This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models. |
| Supported languages | English |
| Release date | Sep 2024 |
| License | MIT |

### Training Datasets

Our training data includes a wide variety of sources, totaling 4 trillion tokens, and is a combination of 1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).
## Responsible AI Considerations

Like other language models, the Gradient Informed (GRIN) MoE model can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

* Quality of Service: GRIN MoE is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
* Representation of Harms & Perpetuation of Stereotypes: This model can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* Inappropriate or Offensive Content: This model may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* Limited Scope for Code: The majority of the training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:

* Allocation: The model may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
* Copyrighted content: The model might generate content that infringes on copyright protections. Developers should implement measures to detect and filter copyrighted material, and end-users should be informed about the potential for unintended copyright violations and the importance of verifying original sources to avoid legal complications.
* Election Misinformation: Developers should ensure robust verification mechanisms are in place to detect and correct false information regarding elections and should inform users of the need for critical evaluation of AI-generated election-related content to mitigate the spread of misinformation.
## License

The model is licensed under the [MIT license](./LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
SECURITY.md
ADDED
@@ -0,0 +1,41 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.9 BLOCK -->

## Security

Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, including [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).

If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.

## Reporting Security Issues

**Please do not report security vulnerabilities through public GitHub issues.**

Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).

If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).

You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).

Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:

* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue

This information will help us triage your report more quickly.

If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.

## Preferred Languages

We prefer all communications to be in English.

## Policy

Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).

<!-- END MICROSOFT SECURITY.MD BLOCK -->
added_tokens.json
ADDED
@@ -0,0 +1,13 @@
{
  "<|endoftext|>": 32000,
  "<|assistant|>": 32001,
  "<|placeholder1|>": 32002,
  "<|placeholder2|>": 32003,
  "<|placeholder3|>": 32004,
  "<|placeholder4|>": 32005,
  "<|system|>": 32006,
  "<|end|>": 32007,
  "<|placeholder5|>": 32008,
  "<|placeholder6|>": 32009,
  "<|user|>": 32010
}
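These added tokens carry the chat markup that the model card's "chat format" guidance refers to. Below is a minimal sketch of assembling a prompt from them; the exact template is an assumption inferred from the token names (mirroring the Phi-3 convention), not something this file specifies.

```python
# Hypothetical chat-prompt assembly from the special tokens above.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|system|>\n{system}<|end|>\n"
        f"<|user|>\n{user}<|end|>\n"
        f"<|assistant|>\n"
    )

print(build_prompt("You are a helpful assistant.", "Solve 2x + 3 = 11."))
# generation_config.json lists eos ids 32000/32001/32007, so decoding stops at
# <|endoftext|>, <|assistant|>, or <|end|>.
```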
config.json
ADDED
@@ -0,0 +1,39 @@
{
  "_name_or_path": "GRIN-MoE",
  "architectures": [
    "GRIN-MoE"
  ],
  "attention_bias": true,
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_grinmoe.GRINMoEConfig",
    "AutoModelForCausalLM": "modeling_grinmoe.GRINMoEForCausalLM"
  },
  "bos_token_id": 1,
  "eos_token_id": 32000,
  "hidden_act": "silu",
  "hidden_dropout": 0.0,
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "input_jitter_noise": 0.01,
  "intermediate_size": 6400,
  "lm_head_bias": true,
  "max_position_embeddings": 4096,
  "model_type": "grinmoe",
  "num_attention_heads": 32,
  "num_experts_per_tok": 2,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "num_local_experts": 16,
  "output_router_logits": false,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "router_aux_loss_coef": 0.0,
  "router_jitter_noise": 0.01,
  "sliding_window": 2047,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "vocab_size": 32064
}
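As a sanity check, the headline sizes (16x3.8B total, 6.6B active with 2 experts) can be roughly reproduced from the values above. The sketch below is a back-of-the-envelope estimate that assumes a gated (SwiGLU-style) expert MLP with three weight matrices and ignores biases, norms, and router weights; those architecture details are an assumption, not something this file states.

```python
# Rough parameter count from config.json values (illustrative estimate only).
hidden, inter, layers = 4096, 6400, 32
experts, top_k, vocab = 16, 2, 32064
heads, kv_heads = 32, 8

head_dim = hidden // heads
attn = 2 * hidden * hidden + 2 * hidden * (kv_heads * head_dim)  # q, o + k, v projections
mlp_per_expert = 3 * hidden * inter      # assumed gate/up/down matrices
embed = 2 * vocab * hidden               # input embeddings + untied lm_head

total = layers * (attn + experts * mlp_per_expert) + embed
active = layers * (attn + top_k * mlp_per_expert) + embed
print(f"total ≈ {total / 1e9:.1f}B, active ≈ {active / 1e9:.1f}B")  # ≈ 41.9B / 6.6B
```

This lands close to the advertised totals, which supports reading "16x3.8B" as 16 Phi-3-mini-sized experts sharing one attention stack.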
configuration_grinmoe.py
ADDED
@@ -0,0 +1,181 @@
# coding=utf-8
# Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" PyTorch GRINMoE model"""

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)

#from transformers.models.deprecated._archive_maps import PHIMOE_PRETRAINED_CONFIG_ARCHIVE_MAP  # noqa: F401, E402
PHIMOE_PRETRAINED_CONFIG_ARCHIVE_MAP = {
    "microsoft/GRIN-MoE": "https://huggingface.co/microsoft/GRIN-MoE/resolve/main/config.json"
}

class GRINMoEConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`GRINMoE`]. It is used to instantiate a
    PhiMoE model according to the specified arguments, defining the model architecture. Instantiating a configuration
    with the defaults will yield a similar configuration to that of the
    [microsoft/GRIN-MoE](https://huggingface.co/microsoft/GRIN-MoE).

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 32000):
            Vocabulary size of the PhiMoE model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`GRINMoE`]
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 14336):
            Dimension of the MLP representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer encoder.
        num_key_value_heads (`int`, *optional*, defaults to 8):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
            `num_key_value_heads=1` the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by meanpooling all the original heads within that group. For more details checkout [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `8`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
            The maximum sequence length that this model might ever be used with. PhiMoE's sliding window attention
            allows sequences of up to 4096*32 tokens.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-05):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        pad_token_id (`int`, *optional*):
            The id of the padding token.
        bos_token_id (`int`, *optional*, defaults to 1):
            The id of the "beginning-of-sequence" token.
        eos_token_id (`int`, *optional*, defaults to 2):
            The id of the "end-of-sequence" token.
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether the model's input and output word embeddings should be tied.
        rope_theta (`float`, *optional*, defaults to 1000000.0):
            The base period of the RoPE embeddings.
        sliding_window (`int`, *optional*):
            Sliding window attention window size. If not specified, will default to `4096`.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.
        num_experts_per_tok (`int`, *optional*, defaults to 2):
            The number of experts to route per token; can also be interpreted as the `top-k` routing
            parameter.
        num_local_experts (`int`, *optional*, defaults to 8):
            Number of experts per Sparse MLP layer.
        output_router_logits (`bool`, *optional*, defaults to `False`):
            Whether or not the router logits should be returned by the model. Enabling this will also
            allow the model to output the auxiliary loss. See [here]() for more details
        router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
            The aux loss factor for the total loss.
        router_jitter_noise (`float`, *optional*, defaults to 0.0):
            Amount of noise to add to the router.

    ```python
    >>> from transformers import GRINMoE, GRINMoEConfig

    >>> # Initializing a GRIN-MoE style configuration
    >>> configuration = GRINMoEConfig()

    >>> # Initializing a model from the GRIN-MoE style configuration
    >>> model = GRINMoE(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "grinmoe"
    keys_to_ignore_at_inference = ["past_key_values"]

    # _attn_implementation = 'eager'
    _attn_implementation = 'sdpa'
    # _attn_implementation = 'flash_attention_2'

    def __init__(
        self,
        vocab_size=32000,
        hidden_size=4096,
        intermediate_size=6400,
        num_hidden_layers=32,
        num_attention_heads=32,
        num_key_value_heads=8,
        hidden_act="silu",
        max_position_embeddings=4096 * 32,
        initializer_range=0.02,
        rms_norm_eps=1e-5,
        use_cache=True,
        pad_token_id=None,
        bos_token_id=1,
        eos_token_id=2,
        tie_word_embeddings=False,
        rope_theta=1e6,
        sliding_window=None,
        attention_dropout=0.0,
        num_experts_per_tok=2,
        num_local_experts=16,
        output_router_logits=False,
        router_aux_loss_coef=0.001,
        router_jitter_noise=0.01,
        input_jitter_noise=0.01,
        attention_bias=False,
        lm_head_bias=False,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.sliding_window = sliding_window
        self.attention_bias = attention_bias
        self.lm_head_bias = lm_head_bias
        # for backward compatibility
        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.use_cache = use_cache
        self.rope_theta = rope_theta
        self.attention_dropout = attention_dropout

        self.num_experts_per_tok = num_experts_per_tok
        self.num_local_experts = num_local_experts
        self.output_router_logits = output_router_logits
        self.router_aux_loss_coef = router_aux_loss_coef
        self.router_jitter_noise = router_jitter_noise
        self.input_jitter_noise = input_jitter_noise

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
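When working from a local clone rather than the Hub, the bundled class can be used directly. A minimal sketch, assuming `configuration_grinmoe.py` is importable from the working directory; the overrides mirror where `config.json` above departs from the class defaults:

```python
from configuration_grinmoe import GRINMoEConfig

config = GRINMoEConfig(
    vocab_size=32064,
    max_position_embeddings=4096,
    rope_theta=10000.0,
    sliding_window=2047,
    attention_bias=True,
    lm_head_bias=True,
    router_aux_loss_coef=0.0,
)
print(config.num_local_experts, config.num_experts_per_tok)  # 16 2
```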
generation_config.json
ADDED
@@ -0,0 +1,12 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": [
    32000,
    32001,
    32007
  ],
  "pad_token_id": 32000,
  "use_cache": true,
  "transformers_version": "4.40.2"
}
model-00001-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bfae60bacab5226fcfa437a7105359b61f602820bdb302986106065e19066da5
size 4992095880
model-00002-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd343bf6d9e5a3363a23f878fb244c7653f855ae8cc02680c797314ef8ea60ca
size 4991605352
model-00003-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5829c532b698d42a298648bb4f9c9285f59751915f4c22dcac19723888c0395f
size 4991605352
model-00004-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c6a8c02b7aeade710689234e062a2971f3ebbf471fbfc16338cd4e469e25bae
size 4991605352
model-00005-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7fb7769b8e802b11aac4cfc80db8ac6f8b16b57317de156f0727405ed189ae1e
size 4991605360
model-00006-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d546af749fd73b516dbb392a82be0c143c632d45bb5bc5c89870f93708ec2908
size 4991605448
model-00007-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:23280638638fd63494845afc3d8d2ffc8c7b0a2adcb4f4663d31eb8dbe075e16
size 4991605480
model-00008-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e956930902155b68659b4d704e8bd42d76f1fc2409a8c72d79e661be4ff71c8
size 4991605480
model-00009-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:802266e25572d7adeb7dbe71d6d7a44ecc0ec6c928d98a41c624d8638f511506
size 4991605480
model-00010-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86e6d54d42980de66f7d8c5cc09892836e8c116adc70858c95dd280c279584d8
size 4991605480
model-00011-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf76f6fafa486fa8334e235508a9435493624d09bf8be22003489a3b1743bdf2
size 4993558592
model-00012-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e1f8481f808d703076e5279d837496434ea67da900849d5e6e79b6b4e4d4ac4
size 4958009392
model-00013-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99e4fc8a07c2a00b0056483bb4ec27c48a4ce6b663851ab9e1667eeab8d250de
size 4991605472
model-00014-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6cb890c5984b94ba8e683a65a5ef482a2f6c8e50e9161d46c708b2968545a88
size 4991605472
model-00015-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f972476b3635f899e593e949b8566af3187c7d84ba855d991f5f0871058c37e
size 4991605472
model-00016-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70ae58f517f9d1f750d33c46fa37fc1c2d2025c4dba2d229602a7bc387ea1f5f
size 4991605472
model-00017-of-00017.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d56845fc7afcf1b98676f57123aed6bd701040ca85d23114243aeadcfd6b4691
size 3912021632
model.safetensors.index.json
ADDED
The diff for this file is too large to render.
See raw diff
modeling_grinmoe.py
ADDED
@@ -0,0 +1,1703 @@
# coding=utf-8
# Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

""" PyTorch GRINMoE model."""
import inspect
import math
import warnings
from typing import List, Optional, Tuple, Union

import torch
import torch.nn.functional as F
import torch.utils.checkpoint
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss

from transformers.activations import ACT2FN
from transformers.cache_utils import Cache, DynamicCache
from transformers.modeling_attn_mask_utils import (
    _prepare_4d_causal_attention_mask,
    _prepare_4d_causal_attention_mask_for_sdpa,
)
from transformers.modeling_outputs import (
    MoeCausalLMOutputWithPast,
    MoeModelOutputWithPast,
    SequenceClassifierOutputWithPast,
)
from transformers.modeling_utils import PreTrainedModel
from transformers.pytorch_utils import is_torch_greater_or_equal_than_1_13
from transformers.utils import (
    add_start_docstrings,
    add_start_docstrings_to_model_forward,
    is_flash_attn_2_available,
    is_flash_attn_greater_or_equal_2_10,
    logging,
    replace_return_docstrings,
)
from transformers.utils.import_utils import is_torch_fx_available
from .configuration_grinmoe import GRINMoEConfig

from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.update({'grin': 'GRINMoEForCausalLM'})

if is_flash_attn_2_available():
    from flash_attn import flash_attn_func, flash_attn_varlen_func
    from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input  # noqa

    from einops import rearrange
    from flash_attn.layers.rotary import RotaryEmbedding as FlashRotaryEmbedding

    _flash_supports_window_size = "window_size" in list(inspect.signature(flash_attn_func).parameters)

# This makes `_prepare_4d_causal_attention_mask` a leaf function in the FX graph.
# It means that the function will not be traced through and simply appear as a node in the graph.
if is_torch_fx_available():
    if not is_torch_greater_or_equal_than_1_13:
        import torch.fx

    _prepare_4d_causal_attention_mask = torch.fx.wrap(_prepare_4d_causal_attention_mask)


logger = logging.get_logger(__name__)

_CONFIG_FOR_DOC = "GRINMoEConfig"


def load_balancing_loss_func(
    gate_logits: torch.Tensor, num_experts: torch.Tensor = None, top_k=2, attention_mask: Optional[torch.Tensor] = None
) -> float:
    r"""
    Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch.

    See Switch Transformer (https://arxiv.org/abs/2101.03961) for more details. This function implements the loss
    function presented in equations (4) - (6) of the paper. It aims at penalizing cases where the routing between
    experts is too unbalanced.

    Args:
        gate_logits (Union[`torch.Tensor`, Tuple[torch.Tensor]]):
            Logits from the `gate`, should be a tuple of model.config.num_hidden_layers tensors of
            shape [batch_size X sequence_length, num_experts].
        attention_mask (`torch.Tensor`, None):
            The attention_mask used in forward function
            shape [batch_size X sequence_length] if not None.
        num_experts (`int`, *optional*):
            Number of experts

    Returns:
        The auxiliary loss.
    """
    if gate_logits is None or not isinstance(gate_logits, tuple):
        return 0

    if isinstance(gate_logits, tuple):
        compute_device = gate_logits[0].device
        concatenated_gate_logits = torch.cat([layer_gate.to(compute_device) for layer_gate in gate_logits], dim=0)

    routing_weights = torch.nn.functional.softmax(concatenated_gate_logits, dim=-1)

    _, selected_experts = torch.topk(routing_weights, top_k, dim=-1)

    expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts)

    if attention_mask is None:
        # Compute the percentage of tokens routed to each expert
        tokens_per_expert = torch.mean(expert_mask.float(), dim=0)

        # Compute the average probability of routing to these experts
        router_prob_per_expert = torch.mean(routing_weights, dim=0)
    else:
        batch_size, sequence_length = attention_mask.shape
        num_hidden_layers = concatenated_gate_logits.shape[0] // (batch_size * sequence_length)

        # Compute the mask that masks all padding tokens as 0 with the same shape of expert_mask
        expert_attention_mask = (
            attention_mask[None, :, :, None, None]
            .expand((num_hidden_layers, batch_size, sequence_length, top_k, num_experts))
            .reshape(-1, top_k, num_experts)
            .to(compute_device)
        )

        # Compute the percentage of tokens routed to each expert
        tokens_per_expert = torch.sum(expert_mask.float() * expert_attention_mask, dim=0) / torch.sum(
            expert_attention_mask, dim=0
        )

        # Compute the mask that masks all padding tokens as 0 with the same shape of tokens_per_expert
        router_per_expert_attention_mask = (
            attention_mask[None, :, :, None]
            .expand((num_hidden_layers, batch_size, sequence_length, num_experts))
            .reshape(-1, num_experts)
            .to(compute_device)
        )

        # Compute the average probability of routing to these experts
        router_prob_per_expert = torch.sum(routing_weights * router_per_expert_attention_mask, dim=0) / torch.sum(
            router_per_expert_attention_mask, dim=0
        )

    overall_loss = torch.sum(tokens_per_expert * router_prob_per_expert.unsqueeze(0))
    return overall_loss * num_experts
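
# --- Illustrative usage sketch (assumption, not part of the original file) ---
# gate_logits = tuple(torch.randn(128, 16) for _ in range(4))   # one tensor per layer
# aux = load_balancing_loss_func(gate_logits, num_experts=16, top_k=2)
# A balanced router keeps aux near top_k (2.0 here); imbalance pushes it higher.
# Note that config.json sets router_aux_loss_coef = 0.0, so this term is not
# weighted into the training loss by default.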

# Copied from Phi-3.5-MoE
def _get_unpad_data(attention_mask):
    seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
    indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
    max_seqlen_in_batch = seqlens_in_batch.max().item()
    cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
    return (
        indices,
        cu_seqlens,
        max_seqlen_in_batch,
    )


# Copied from Phi-3.5-MoE
class GRINMoERotaryEmbedding(nn.Module):
    def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
        super().__init__()

        self.dim = dim
        self.max_position_embeddings = max_position_embeddings
        self.base = base
        inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2, dtype=torch.int64).float().to(device) / self.dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

        # Build here to make `torch.jit.trace` work.
        self._set_cos_sin_cache(
            seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype()
        )

    def _set_cos_sin_cache(self, seq_len, device, dtype):
        self.max_seq_len_cached = seq_len
        t = torch.arange(self.max_seq_len_cached, device=device, dtype=torch.int64).type_as(self.inv_freq)

        freqs = torch.outer(t, self.inv_freq)
        # Different from paper, but it uses a different permutation in order to obtain the same calculation
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False)
        self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)

    def forward(self, x, seq_len=None):
        # x: [bs, num_attention_heads, seq_len, head_size]
        if seq_len > self.max_seq_len_cached:
            self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)

        return (
            self.cos_cached[:seq_len].to(dtype=x.dtype),
            self.sin_cached[:seq_len].to(dtype=x.dtype),
        )


# Copied from Phi-3.5-MoE
def rotate_half(x):
    """Rotates half the hidden dims of the input."""
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)

# Copied from Phi-3.5-MoE
def apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=1):
    """Applies Rotary Position Embedding to the query and key tensors.

    Args:
        q (`torch.Tensor`): The query tensor.
        k (`torch.Tensor`): The key tensor.
        cos (`torch.Tensor`): The cosine part of the rotary embedding.
        sin (`torch.Tensor`): The sine part of the rotary embedding.
        position_ids (`torch.Tensor`):
            The position indices of the tokens corresponding to the query and key tensors. For example, this can be
            used to pass offsetted position ids when working with a KV-cache.
        unsqueeze_dim (`int`, *optional*, defaults to 1):
            The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
            sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
            that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
            k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
            cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
            the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
    Returns:
        `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
    """
    cos = cos[position_ids].unsqueeze(unsqueeze_dim)
    sin = sin[position_ids].unsqueeze(unsqueeze_dim)
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed


# Copied from Phi-3.5-MoE
def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """
    This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
    num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
    """
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
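
# --- Illustrative shape check (assumption, not part of the original file) ---
# kv = torch.randn(1, 8, 10, 128)   # (batch, num_key_value_heads=8, seq, head_dim)
# repeat_kv(kv, 4).shape            # -> (1, 32, 10, 128): 8 KV heads serve 32 query heads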
|
253 |
+
# Copied from Phi-3.5-MoE
class GRINMoEAttention(nn.Module):
    """
    Multi-headed attention from the 'Attention Is All You Need' paper. Modified to use sliding window attention, as in
    Longformer and "Generating Long Sequences with Sparse Transformers".
    """

    def __init__(self, config: GRINMoEConfig, layer_idx: Optional[int] = None):
        super().__init__()
        self.config = config
        self.layer_idx = layer_idx
        if layer_idx is None:
            logger.warning_once(
                f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
                "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
                "when creating this class."
            )

        self.hidden_size = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.hidden_size // self.num_heads
        self.num_key_value_heads = config.num_key_value_heads
        self.num_key_value_groups = self.num_heads // self.num_key_value_heads
        self.max_position_embeddings = config.max_position_embeddings
        self.rope_theta = config.rope_theta
        self.is_causal = True
        self.attention_dropout = config.attention_dropout

        if (self.head_dim * self.num_heads) != self.hidden_size:
            raise ValueError(
                f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
                f" and `num_heads`: {self.num_heads})."
            )
        self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=self.config.attention_bias)
        self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=self.config.attention_bias)
        self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=self.config.attention_bias)
        self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=self.config.attention_bias)
        self.rotary_emb = GRINMoERotaryEmbedding(
            self.head_dim,
            max_position_embeddings=self.max_position_embeddings,
            base=self.rope_theta,
        )

    def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
        return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Cache] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
        **kwargs,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        if "padding_mask" in kwargs:
            warnings.warn(
                "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure to use `attention_mask` instead."
            )
        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states)
        key_states = self.k_proj(hidden_states)
        value_states = self.v_proj(hidden_states)

        query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
        value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)

        kv_seq_len = key_states.shape[-2]
        if past_key_value is not None:
            if self.layer_idx is None:
                raise ValueError(
                    f"The cache structure has changed since version v4.36. If you are using {self.__class__.__name__} "
                    "for auto-regressive decoding with k/v caching, please make sure to initialize the attention class "
                    "with a layer index."
                )
            kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)

        cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)

        if past_key_value is not None:
            cache_kwargs = {"sin": sin, "cos": cos}  # Specific to RoPE models
            key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)

        # repeat k/v heads if n_kv_heads < n_heads
        key_states = repeat_kv(key_states, self.num_key_value_groups)
        value_states = repeat_kv(value_states, self.num_key_value_groups)

        attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)

        if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
            raise ValueError(
                f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
                f" {attn_weights.size()}"
            )

        if attention_mask is not None:
            if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
                )

            attn_weights = attn_weights + attention_mask

        # upcast attention to fp32
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
        attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
        attn_output = torch.matmul(attn_weights, value_states)

        if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
            raise ValueError(
                f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
                f" {attn_output.size()}"
            )

        attn_output = attn_output.transpose(1, 2).contiguous()
        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)

        attn_output = self.o_proj(attn_output)

        if not output_attentions:
            attn_weights = None

        return attn_output, attn_weights, past_key_value

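# The eager path above materializes the full [bsz, num_heads, q_len, kv_seq_len]
# score matrix. A minimal standalone sketch of the same computation, with toy
# sizes and no mask, cache, or dropout (illustrative only):
#
#     >>> q, k, v = (torch.randn(1, 4, 6, 16) for _ in range(3))
#     >>> scores = q @ k.transpose(2, 3) / math.sqrt(16)     # [1, 4, 6, 6]
#     >>> out = torch.softmax(scores, dim=-1) @ v
#     >>> out.shape
#     torch.Size([1, 4, 6, 16])
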
# Copied from Phi-3.5-MoE
class GRINFlashAttention2(GRINMoEAttention):
    """
    GRIN flash attention module. This module inherits from `GRINMoEAttention`, as the weights of the module stay
    untouched. The only required change is in the forward pass, where it needs to correctly call the public API of
    flash attention and deal with padding tokens in case the input contains any of them.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.rotary_emb = FlashRotaryEmbedding(
            self.head_dim,
            base=self.rope_theta,
            scale_base=None,
            device=torch.device("cuda"),
        )

        # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
        # flash_attn<2.1 generates top-left aligned causal mask, while what is needed here is bottom-right alignment, which was made default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
        # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
        self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Cache] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
        **kwargs,
    ):
        if "padding_mask" in kwargs:
            warnings.warn(
                "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure to use `attention_mask` instead."
            )

            # overwrite attention_mask with padding_mask
            attention_mask = kwargs.pop("padding_mask")
        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states)
        key_states = self.k_proj(hidden_states)
        value_states = self.v_proj(hidden_states)

        q = rearrange(query_states, "... (h d) -> ... h d", d=self.head_dim)
        kv = torch.cat([key_states, value_states], dim=2)
        kv = rearrange(kv, "... (two hkv d) -> ... two hkv d", two=2, d=self.head_dim)

        kv_seq_len = key_states.shape[1]
        seqlen_offset = max(past_key_value.get_usable_length(kv_seq_len, self.layer_idx) if past_key_value is not None else 0, 0)

        query_states, kv = self.rotary_emb(q, kv=kv, seqlen_offset=seqlen_offset)
        key_states, value_states = kv.chunk(2, dim=2)

        query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
        value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)

        use_sliding_windows = (
            _flash_supports_window_size
            and getattr(self.config, "sliding_window", None) is not None
            and kv_seq_len > self.config.sliding_window
        )

        if not _flash_supports_window_size:
            logger.warning_once(
                "The current flash attention version does not support sliding window attention; for a more memory-efficient"
                " implementation, make sure to upgrade the flash-attn library."
            )

        if past_key_value is not None:
            cache_kwargs = {}
            key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)

        # repeat k/v heads if n_kv_heads < n_heads
        key_states = repeat_kv(key_states, self.num_key_value_groups)
        value_states = repeat_kv(value_states, self.num_key_value_groups)
        dropout_rate = 0.0 if not self.training else self.attention_dropout

        # In PEFT, usually we cast the layer norms in float32 for training stability reasons;
        # therefore the input hidden states get silently cast in float32. Hence, we need to
        # cast them back in float16 just to be sure everything works as expected.
        input_dtype = query_states.dtype
        if input_dtype == torch.float32:
            if torch.is_autocast_enabled():
                target_dtype = torch.get_autocast_gpu_dtype()
            # Handle the case where the model is quantized
            elif hasattr(self.config, "_pre_quantization_dtype"):
                target_dtype = self.config._pre_quantization_dtype
            else:
                target_dtype = self.q_proj.weight.dtype

            logger.warning_once(
                f"The input hidden states seem to be silently cast in float32; this might be related to"
                f" the fact that you have upcast embedding or layer norm layers in float32. We will cast back the input in"
                f" {target_dtype}."
            )

            query_states = query_states.to(target_dtype)
            key_states = key_states.to(target_dtype)
            value_states = value_states.to(target_dtype)

        # Reshape to the expected shape for Flash Attention
        query_states = query_states.transpose(1, 2)
        key_states = key_states.transpose(1, 2)
        value_states = value_states.transpose(1, 2)

        attn_output = self._flash_attention_forward(
            query_states,
            key_states,
            value_states,
            attention_mask,
            q_len,
            dropout=dropout_rate,
            use_sliding_windows=use_sliding_windows,
        )

        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
        attn_output = self.o_proj(attn_output)

        # flash attention does not return the attention probabilities, so there are
        # no weights to expose even when `output_attentions=True`
        attn_weights = None

        return attn_output, attn_weights, past_key_value

    def _flash_attention_forward(
        self,
        query_states,
        key_states,
        value_states,
        attention_mask,
        query_length,
        dropout=0.0,
        softmax_scale=None,
        use_sliding_windows=False,
    ):
        """
        Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token,
        first unpad the input, then compute the attention scores and pad the final attention scores.

        Args:
            query_states (`torch.Tensor`):
                Input query states to be passed to Flash Attention API
            key_states (`torch.Tensor`):
                Input key states to be passed to Flash Attention API
            value_states (`torch.Tensor`):
                Input value states to be passed to Flash Attention API
            attention_mask (`torch.Tensor`):
                The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
                position of padding tokens and 1 for the position of non-padding tokens.
            dropout (`float`):
                Attention dropout
            softmax_scale (`float`, *optional*):
                The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim)
            use_sliding_windows (`bool`, *optional*):
                Whether to activate sliding window attention.
        """
        if not self._flash_attn_uses_top_left_mask:
            causal = self.is_causal
        else:
            # TODO: Remove the `query_length != 1` check once Flash Attention for RoCm is bumped to 2.1. For details, please see the comment in LlamaFlashAttention2 __init__.
            causal = self.is_causal and query_length != 1

        # Contains at least one padding token in the sequence
        if attention_mask is not None:
            batch_size = query_states.shape[0]
            query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(
                query_states, key_states, value_states, attention_mask, query_length
            )

            cu_seqlens_q, cu_seqlens_k = cu_seq_lens
            max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens

            if not use_sliding_windows:
                attn_output_unpad = flash_attn_varlen_func(
                    query_states,
                    key_states,
                    value_states,
                    cu_seqlens_q=cu_seqlens_q,
                    cu_seqlens_k=cu_seqlens_k,
                    max_seqlen_q=max_seqlen_in_batch_q,
                    max_seqlen_k=max_seqlen_in_batch_k,
                    dropout_p=dropout,
                    softmax_scale=softmax_scale,
                    causal=causal,
                )
            else:
                attn_output_unpad = flash_attn_varlen_func(
                    query_states,
                    key_states,
                    value_states,
                    cu_seqlens_q=cu_seqlens_q,
                    cu_seqlens_k=cu_seqlens_k,
                    max_seqlen_q=max_seqlen_in_batch_q,
                    max_seqlen_k=max_seqlen_in_batch_k,
                    dropout_p=dropout,
                    softmax_scale=softmax_scale,
                    causal=causal,
                    window_size=(self.config.sliding_window, self.config.sliding_window),
                )

            attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
        else:
            if not use_sliding_windows:
                attn_output = flash_attn_func(
                    query_states,
                    key_states,
                    value_states,
                    dropout,
                    softmax_scale=softmax_scale,
                    causal=causal,
                )
            else:
                attn_output = flash_attn_func(
                    query_states,
                    key_states,
                    value_states,
                    dropout,
                    softmax_scale=softmax_scale,
                    causal=causal,
                    window_size=(self.config.sliding_window, self.config.sliding_window),
                )

        return attn_output

    def _upad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
        batch_size, kv_seq_len, num_heads, head_dim = key_layer.shape

        # On the first iteration we need to properly re-create the padding mask
        # by slicing it on the proper place
        if kv_seq_len != attention_mask.shape[-1]:
            attention_mask_num_tokens = attention_mask.shape[-1]
            attention_mask = attention_mask[:, attention_mask_num_tokens - kv_seq_len :]

        indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)

        key_layer = index_first_axis(key_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k)
        value_layer = index_first_axis(value_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k)

        if query_length == kv_seq_len:
            query_layer = index_first_axis(
                query_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k
            )
            cu_seqlens_q = cu_seqlens_k
            max_seqlen_in_batch_q = max_seqlen_in_batch_k
            indices_q = indices_k
        elif query_length == 1:
            max_seqlen_in_batch_q = 1
            cu_seqlens_q = torch.arange(
                batch_size + 1, dtype=torch.int32, device=query_layer.device
            )  # There is a memcpy here, that is very bad.
            indices_q = cu_seqlens_q[:-1]
            query_layer = query_layer.squeeze(1)
        else:
            # The -q_len: slice assumes left padding.
            attention_mask = attention_mask[:, -query_length:]
            query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)

        return (
            query_layer,
            key_layer,
            value_layer,
            indices_q,
            (cu_seqlens_q, cu_seqlens_k),
            (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
        )

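# `_upad_input` packs a padded batch into the flat layout expected by the varlen
# flash-attn kernels: all non-pad tokens are concatenated into one
# [total_tokens, heads, head_dim] tensor, with cumulative sequence boundaries so
# the kernel never spends work on pad positions. A toy sketch using the
# `_get_unpad_data` helper defined earlier in this file (illustrative only):
#
#     >>> mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])   # lengths 3 and 5
#     >>> indices, cu_seqlens, max_len = _get_unpad_data(mask)
#     >>> cu_seqlens, max_len
#     (tensor([0, 3, 8], dtype=torch.int32), 5)
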
# Copied from Phi-3.5-MoE
class GRINMoESdpaAttention(GRINMoEAttention):
    """
    GRINMoE attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
    `GRINMoEAttention`, as the weights of the module stay untouched. The only changes are on the forward pass, to
    adapt to the SDPA API.
    """

    # Adapted from GRINMoEAttention.forward
    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Cache] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        if output_attentions:
            # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
            logger.warning_once(
                "GRINMoEModel is using GRINMoESdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
                'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
            )
            return super().forward(
                hidden_states=hidden_states,
                attention_mask=attention_mask,
                position_ids=position_ids,
                past_key_value=past_key_value,
                output_attentions=output_attentions,
                use_cache=use_cache,
            )

        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states)
        key_states = self.k_proj(hidden_states)
        value_states = self.v_proj(hidden_states)

        query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
        value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)

        kv_seq_len = key_states.shape[-2]
        if past_key_value is not None:
            kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
        cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)

        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)

        if past_key_value is not None:
            cache_kwargs = {"sin": sin, "cos": cos}  # Specific to RoPE models
            key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)

        key_states = repeat_kv(key_states, self.num_key_value_groups)
        value_states = repeat_kv(value_states, self.num_key_value_groups)

        if attention_mask is not None:
            if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
                )

        # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
        # Reference: https://github.com/pytorch/pytorch/issues/112577.
        if query_states.device.type == "cuda" and attention_mask is not None:
            query_states = query_states.contiguous()
            key_states = key_states.contiguous()
            value_states = value_states.contiguous()

        attn_output = torch.nn.functional.scaled_dot_product_attention(
            query_states,
            key_states,
            value_states,
            attn_mask=attention_mask,
            dropout_p=self.attention_dropout if self.training else 0.0,
            # The q_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create a causal mask in case q_len == 1.
            is_causal=self.is_causal and attention_mask is None and q_len > 1,
        )

        attn_output = attn_output.transpose(1, 2).contiguous()
        attn_output = attn_output.view(bsz, q_len, self.hidden_size)

        attn_output = self.o_proj(attn_output)

        return attn_output, None, past_key_value


GRINMOE_ATTENTION_CLASSES = {
    "eager": GRINMoEAttention,
    "sdpa": GRINMoESdpaAttention,
    "flash_attention_2": GRINFlashAttention2,
}

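# The mapping above is resolved once per decoder layer from
# `config._attn_implementation`. Which backend is used can be chosen at load
# time, e.g. (illustrative only):
#
#     >>> model = GRINMoEForCausalLM.from_pretrained(
#     ...     "microsoft/GRIN-MoE", trust_remote_code=True, attn_implementation="sdpa"
#     ... )
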
class GRINMoEBlockSparseTop2MLP(nn.Module):
    def __init__(self, config: GRINMoEConfig):
        super().__init__()
        self.ffn_dim = config.intermediate_size
        self.hidden_dim = config.hidden_size

        self.w1 = nn.Linear(self.hidden_dim, self.ffn_dim, bias=False)
        self.w2 = nn.Linear(self.ffn_dim, self.hidden_dim, bias=False)
        self.w3 = nn.Linear(self.hidden_dim, self.ffn_dim, bias=False)

        self.act_fn = ACT2FN[config.hidden_act]

    def forward(self, hidden_states):
        current_hidden_states = self.act_fn(self.w1(hidden_states)) * self.w3(hidden_states)
        current_hidden_states = self.w2(current_hidden_states)
        return current_hidden_states

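# Each expert is a gated (SwiGLU-style) MLP: w2(act_fn(w1(x)) * w3(x)). A
# standalone sketch with toy dimensions hidden=8, intermediate=16 (illustrative
# only):
#
#     >>> w1, w3 = nn.Linear(8, 16, bias=False), nn.Linear(8, 16, bias=False)
#     >>> w2 = nn.Linear(16, 8, bias=False)
#     >>> x = torch.randn(4, 8)
#     >>> w2(nn.functional.silu(w1(x)) * w3(x)).shape
#     torch.Size([4, 8])
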
class mp(torch.autograd.Function):
    @staticmethod
    def forward(
        ctx,
        scores: torch.Tensor,
        multiplier: torch.Tensor,
        selected_experts: torch.Tensor,
        masked_gates: torch.Tensor,
        mask_for_one: torch.Tensor,
    ):
        ctx.save_for_backward(multiplier, selected_experts, masked_gates)
        return multiplier * mask_for_one

    @staticmethod
    def backward(
        ctx,
        grad_at_output: torch.Tensor,
    ):
        multiplier, selected_experts, masked_gates = ctx.saved_tensors

        grad_at_output = grad_at_output * multiplier

        grad_at_scores_expanded = masked_gates * grad_at_output.mul(-1)
        grad_at_scores_expanded.scatter_add_(
            dim=-1,
            index=selected_experts,
            src=grad_at_output,
        )

        return (
            grad_at_scores_expanded,
            None,
            None,
            None,
            None,
        )

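# The custom autograd function above routes gradients through the sampled gate
# (a SparseMixer-style estimator). For a token that selected expert e with gate
# weight p_e = softmax(masked_gates)_e, the backward pass computes
#
#     d loss / d scores_j = g * p_e * (1[j == e] - p_j),
#
# i.e. the softmax Jacobian row at the sampled index, scaled by the incoming
# gradient g; all the other forward inputs receive no gradient (None).
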
def sparsemixer(scores, top_k, jitter_eps, training):
    assert top_k == 2

    ################ first expert ################

    with torch.no_grad():
        # compute mask for sparsity
        mask_logits_threshold, max_ind = scores.max(dim=-1, keepdim=True)
        factor = scores.abs().clamp(min=mask_logits_threshold)
        mask_logits_threshold = (
            (mask_logits_threshold - scores) / factor
        ) > (2 * jitter_eps)

    # apply mask
    masked_gates = scores.masked_fill(mask_logits_threshold, float('-inf'))
    if training:
        selected_experts = (
            masked_gates - torch.empty_like(masked_gates, memory_format=torch.legacy_contiguous_format).exponential_().log()
        ).max(dim=-1)[1].unsqueeze(-1)  # gumbel sampling, more robust than the multinomial method
    else:
        selected_experts = max_ind

    # compute scores for gradients
    masked_gates = torch.softmax(masked_gates, dim=-1)
    multiplier_o = masked_gates.gather(dim=-1, index=selected_experts)

    if training:
        # compute midpoint mask
        max_scores, max_ind = masked_gates.max(dim=-1, keepdim=True)
        mask_for_one = torch.logical_or(
            selected_experts == max_ind,
            torch.rand_like(max_scores) > 0.75,  # Heun's third-order method: f(x) - f(0) = .25 f'(x) + .75 f'(x/3.)
        )
        # 1 -> 1.0 & 0 -> 1./3: lambda x: (x + 0.5) / 1.5
        mask_for_one = torch.add(0.3333, mask_for_one, alpha=0.6667).type_as(masked_gates)

        multiplier = mp.apply(
            scores,
            multiplier_o,
            selected_experts,
            masked_gates,
            mask_for_one,
        )
    else:
        multiplier = multiplier_o

    ################ second expert ################

    # mask out the first expert
    masked_scores = torch.scatter(
        scores,
        -1,
        selected_experts,
        float('-inf'),
    )
    with torch.no_grad():
        # compute mask for sparsity
        mask_logits_threshold, max_ind = masked_scores.max(dim=-1, keepdim=True)
        factor = scores.abs().clamp(min=mask_logits_threshold)
        mask_logits_threshold = (
            (mask_logits_threshold - scores) / factor
        ) > (2 * jitter_eps)

    # apply mask
    masked_gates_top2 = masked_scores.masked_fill(mask_logits_threshold, float('-inf'))
    if training:
        selected_experts_top2 = (
            masked_gates_top2 - torch.empty_like(masked_gates_top2, memory_format=torch.legacy_contiguous_format).exponential_().log()
        ).max(dim=-1)[1].unsqueeze(-1)  # gumbel sampling, more robust than the multinomial method
    else:
        selected_experts_top2 = max_ind
    # compute scores for gradients
    masked_gates_top2 = torch.softmax(masked_gates_top2, dim=-1)
    multiplier_top2_o = masked_gates_top2.gather(dim=-1, index=selected_experts_top2)

    if training:
        # compute midpoint mask
        max_scores, max_ind = masked_gates_top2.max(dim=-1, keepdim=True)
        mask_for_one_top2 = torch.logical_or(
            selected_experts_top2 == max_ind,
            torch.rand_like(max_scores) > 0.75,  # Heun's third-order method: f(x) - f(0) = .25 f'(x) + .75 f'(x/3.)
        )
        # 1 -> 1.0 & 0 -> 1./3: lambda x: (x + 0.5) / 1.5
        mask_for_one_top2 = torch.add(0.3333, mask_for_one_top2, alpha=0.6667).type_as(masked_gates_top2)

        multiplier_top2 = mp.apply(
            scores,
            multiplier_top2_o,
            selected_experts_top2,
            masked_gates_top2,
            mask_for_one_top2,
        )
    else:
        multiplier_top2 = multiplier_top2_o

    multiplier = torch.concat((multiplier, multiplier_top2), dim=-1)
    selected_experts = torch.concat((selected_experts, selected_experts_top2), dim=-1)

    return (
        multiplier,
        selected_experts,
    )

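# At inference time (`training=False`) sparsemixer reduces to deterministic
# top-1-then-top-1 selection with softmax gate weights. A toy call with 4 tokens
# and 8 experts (illustrative only):
#
#     >>> logits = torch.randn(4, 8)
#     >>> weights, experts = sparsemixer(logits, top_k=2, jitter_eps=0.01, training=False)
#     >>> weights.shape, experts.shape       # one gate weight per selected expert
#     (torch.Size([4, 2]), torch.Size([4, 2]))
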
iterations = 0


class GRINMoESparseMoeBlock(nn.Module):
    """
    This implementation is strictly equivalent to standard MoE with full capacity (no dropped tokens). It's faster
    since it formulates MoE operations in terms of block-sparse operations to accommodate imbalanced assignments of
    tokens to experts, whereas standard MoE either (1) drops tokens at the cost of reduced performance or (2) sets
    the capacity factor to the number of experts and thus wastes computation and memory on padding.
    """

    def __init__(self, config):
        super().__init__()
        self.hidden_dim = config.hidden_size
        self.ffn_dim = config.intermediate_size
        self.num_experts = config.num_local_experts
        self.top_k = config.num_experts_per_tok
        global iterations
        iterations += 1
        self.iter = iterations
        # gating
        self.gate = nn.Linear(self.hidden_dim, self.num_experts, bias=False)

        self.experts = nn.ModuleList([GRINMoEBlockSparseTop2MLP(config) for _ in range(self.num_experts)])

        # Jitter parameters
        self.router_jitter_noise = config.router_jitter_noise
        self.input_jitter_noise = config.input_jitter_noise

    def forward(self, hidden_states: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        batch_size, sequence_length, hidden_dim = hidden_states.shape
        if self.training and self.input_jitter_noise > 0:
            hidden_states *= torch.empty_like(hidden_states).uniform_(1.0 - self.input_jitter_noise, 1.0 + self.input_jitter_noise)
        hidden_states = hidden_states.view(-1, hidden_dim)
        # router_logits: (batch * sequence_length, n_experts)
        router_logits = self.gate(hidden_states)

        routing_weights, selected_experts = sparsemixer(
            router_logits,
            top_k=self.top_k,
            jitter_eps=self.router_jitter_noise,
            training=self.training,
        )

        final_hidden_states = torch.zeros(
            (batch_size * sequence_length, hidden_dim), dtype=hidden_states.dtype, device=hidden_states.device
        )

        # One hot encode the selected experts to create an expert mask;
        # this will be used to easily index which expert is going to be solicited
        expert_mask = torch.nn.functional.one_hot(selected_experts, num_classes=self.num_experts).permute(2, 1, 0)

        # Loop over all available experts in the model and perform the computation on each expert
        for expert_idx in range(self.num_experts):
            expert_layer = self.experts[expert_idx]
            idx, top_x = torch.where(expert_mask[expert_idx])

            if top_x.shape[0] == 0:
                continue

            # in torch it is faster to index using lists than torch tensors
            top_x_list = top_x.tolist()
            idx_list = idx.tolist()

            # Index the correct hidden states and compute the expert hidden state for
            # the current expert. We need to make sure to multiply the output hidden
            # states by `routing_weights` on the corresponding tokens (top-1 and top-2)
            current_state = hidden_states[None, top_x_list].reshape(-1, hidden_dim)
            current_hidden_states = expert_layer(current_state) * routing_weights[top_x_list, idx_list, None]

            # However, `index_add_` only supports torch tensors for indexing, so we'll use
            # the `top_x` tensor here.
            final_hidden_states.index_add_(0, top_x, current_hidden_states.to(hidden_states.dtype))
        final_hidden_states = final_hidden_states.reshape(batch_size, sequence_length, hidden_dim)
        return final_hidden_states, router_logits

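# Dispatch mechanics of the expert loop above, on toy values (illustrative only):
# with 4 experts and `selected_experts` of shape [num_tokens, 2], the one-hot mask
# permuted to [num_experts, top_k, num_tokens] lets each expert recover in a single
# `torch.where` which tokens it owns and whether it was the top-1 or top-2 choice:
#
#     >>> sel = torch.tensor([[0, 2], [2, 1]])                   # two tokens
#     >>> mask = torch.nn.functional.one_hot(sel, 4).permute(2, 1, 0)
#     >>> torch.where(mask[2])    # expert 2: top-2 slot of token 0, top-1 slot of token 1
#     (tensor([0, 1]), tensor([1, 0]))
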
class GRINMoEDecoderLayer(nn.Module):
    def __init__(self, config: GRINMoEConfig, layer_idx: int):
        super().__init__()
        self.hidden_size = config.hidden_size

        self.self_attn = GRINMOE_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx)

        self.block_sparse_moe = GRINMoESparseMoeBlock(config)
        self.input_layernorm = nn.LayerNorm(config.hidden_size, eps=config.rms_norm_eps, elementwise_affine=True)
        self.post_attention_layernorm = nn.LayerNorm(config.hidden_size, eps=config.rms_norm_eps, elementwise_affine=True)

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: Optional[bool] = False,
        output_router_logits: Optional[bool] = False,
        use_cache: Optional[bool] = False,
        **kwargs,
    ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
        """
        Args:
            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
            attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
                `(batch, sequence_length)` where padding elements are indicated by 0.
            past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
            output_attentions (`bool`, *optional*):
                Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                returned tensors for more detail.
            output_router_logits (`bool`, *optional*):
                Whether or not to return the logits of all the routers. They are useful for computing the router loss,
                and should not be returned during inference.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
                (see `past_key_values`).
        """
        if "padding_mask" in kwargs:
            warnings.warn(
                "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure to use `attention_mask` instead."
            )

        residual = hidden_states

        hidden_states = self.input_layernorm(hidden_states)

        # Self Attention
        hidden_states, self_attn_weights, present_key_value = self.self_attn(
            hidden_states=hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
        )
        hidden_states = residual + hidden_states

        # Fully Connected
        residual = hidden_states
        hidden_states = self.post_attention_layernorm(hidden_states)
        hidden_states, router_logits = self.block_sparse_moe(hidden_states)
        hidden_states = residual + hidden_states

        outputs = (hidden_states,)

        if output_attentions:
            outputs += (self_attn_weights,)

        if use_cache:
            outputs += (present_key_value,)

        if output_router_logits:
            outputs += (router_logits,)

        return outputs

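# Each decoder layer is therefore a standard pre-norm residual block in which the
# dense MLP of a vanilla transformer is replaced by the sparse MoE block;
# schematically (pseudocode only):
#
#     x = x + self_attn(input_layernorm(x))
#     x = x + block_sparse_moe(post_attention_layernorm(x))
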
GRINMOE_START_DOCSTRING = r"""
    This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
    library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
    etc.)

    This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
    Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
    and behavior.

    Parameters:
        config ([`GRINMoEConfig`]):
            Model configuration class with all the parameters of the model. Initializing with a config file does not
            load the weights associated with the model, only the configuration. Check out the
            [`~PreTrainedModel.from_pretrained`] method to load the model weights.
"""


@add_start_docstrings(
    "The bare GRINMoE Model outputting raw hidden-states without any specific head on top.",
    GRINMOE_START_DOCSTRING,
)
class GRINMoEPreTrainedModel(PreTrainedModel):
    config_class = GRINMoEConfig
    base_model_prefix = "model"
    supports_gradient_checkpointing = True
    _no_split_modules = ["GRINMoEDecoderLayer"]
    _skip_keys_device_placement = "past_key_values"
    _supports_flash_attn_2 = True
    _supports_sdpa = True
    _supports_cache_class = True

    def _init_weights(self, module):
        pass


GRINMOE_INPUTS_DOCSTRING = r"""
    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
            it.

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)
        attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.

            [What are attention masks?](../glossary#attention-mask)

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
            `past_key_values`).

            If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
            and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
            information on the default strategy.

            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
            config.n_positions - 1]`.

            [What are position IDs?](../glossary#position-ids)
        past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
            `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
            `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
            blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
            is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
            model's internal embedding lookup matrix.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
        output_attentions (`bool`, *optional*):
            Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
            tensors for more detail.
        output_hidden_states (`bool`, *optional*):
            Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
            more detail.
        output_router_logits (`bool`, *optional*):
            Whether or not to return the logits of all the routers. They are useful for computing the router loss, and
            should not be returned during inference.
        return_dict (`bool`, *optional*):
            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
"""


# Copied from Phi-3.5-MoE
@add_start_docstrings(
    "The bare GRINMoE Model outputting raw hidden-states without any specific head on top.",
    GRINMOE_START_DOCSTRING,
)
class GRINMoEModel(GRINMoEPreTrainedModel):
    """
    Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`GRINMoEDecoderLayer`]

    Args:
        config: GRINMoEConfig
    """

    def __init__(self, config: GRINMoEConfig):
        super().__init__(config)
        self.padding_idx = config.pad_token_id
        self.vocab_size = config.vocab_size

        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
        self.layers = nn.ModuleList(
            [GRINMoEDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
        )
        self._attn_implementation = config._attn_implementation
        self.norm = nn.LayerNorm(config.hidden_size, eps=config.rms_norm_eps, elementwise_affine=True)

        self.gradient_checkpointing = False
        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embed_tokens

    def set_input_embeddings(self, value):
        self.embed_tokens = value

    # Ignore copy
    @add_start_docstrings_to_model_forward(GRINMOE_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        output_router_logits: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, MoeModelOutputWithPast]:
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_router_logits = (
            output_router_logits if output_router_logits is not None else self.config.output_router_logits
        )
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # retrieve input_ids and inputs_embeds
        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
        elif input_ids is not None:
            batch_size, seq_length = input_ids.shape
        elif inputs_embeds is not None:
            batch_size, seq_length, _ = inputs_embeds.shape
        else:
            raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")

        past_key_values_length = 0

        if self.gradient_checkpointing and self.training:
            if use_cache:
                logger.warning_once(
                    "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
                )
                use_cache = False

        if use_cache:
            use_legacy_cache = not isinstance(past_key_values, Cache)
            if use_legacy_cache:
                past_key_values = DynamicCache.from_legacy_cache(past_key_values)
            past_key_values_length = past_key_values.get_usable_length(seq_length)

        if position_ids is None:
            device = input_ids.device if input_ids is not None else inputs_embeds.device
            position_ids = torch.arange(
                past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
            )
            position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
        else:
            position_ids = position_ids.view(-1, seq_length).long()

        if inputs_embeds is None:
            inputs_embeds = self.embed_tokens(input_ids)

        if attention_mask is not None and self._attn_implementation == "flash_attention_2" and use_cache:
            is_padding_right = attention_mask[:, -1].sum().item() != batch_size
            if is_padding_right:
                raise ValueError(
                    "You are attempting to perform batched generation with padding_side='right';"
                    " this may lead to unexpected behaviour for the Flash Attention version of GRINMoE. Make sure to"
                    " call `tokenizer.padding_side = 'left'` before tokenizing the input."
                )
        if self._attn_implementation == "flash_attention_2":
            # 2d mask is passed through the layers
            attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
        elif self._attn_implementation == "sdpa" and not output_attentions:
            # output_attentions=True can not be supported when using SDPA, and we fall back on
            # the manual implementation that requires a 4D causal mask in all cases.
            attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
                attention_mask,
                (batch_size, seq_length),
                inputs_embeds,
                past_key_values_length,
            )
        else:
            # 4d mask is passed through the layers
            attention_mask = _prepare_4d_causal_attention_mask(
                attention_mask,
                (batch_size, seq_length),
                inputs_embeds,
                past_key_values_length,
                sliding_window=self.config.sliding_window,
            )

        hidden_states = inputs_embeds

        # decoder layers
        all_hidden_states = () if output_hidden_states else None
        all_self_attns = () if output_attentions else None
        all_router_logits = () if output_router_logits else None
        next_decoder_cache = None

        for decoder_layer in self.layers:
            if output_hidden_states:
                all_hidden_states += (hidden_states,)

            if self.gradient_checkpointing and self.training:
                layer_outputs = self._gradient_checkpointing_func(
                    decoder_layer.__call__,
                    hidden_states,
                    attention_mask,
                    position_ids,
                    past_key_values,
                    output_attentions,
                    output_router_logits,
                    use_cache,
                )
            else:
                layer_outputs = decoder_layer(
                    hidden_states,
                    attention_mask=attention_mask,
                    position_ids=position_ids,
                    past_key_value=past_key_values,
                    output_attentions=output_attentions,
                    output_router_logits=output_router_logits,
                    use_cache=use_cache,
                )

            hidden_states = layer_outputs[0]

            if use_cache:
                next_decoder_cache = layer_outputs[2 if output_attentions else 1]

            if output_attentions:
                all_self_attns += (layer_outputs[1],)

            if output_router_logits:
                all_router_logits += (layer_outputs[-1],)

        hidden_states = self.norm(hidden_states)

        # add hidden states from the last decoder layer
        if output_hidden_states:
            all_hidden_states += (hidden_states,)

        next_cache = None
        if use_cache:
            next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache

        if not return_dict:
            return tuple(
                v
                for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_router_logits]
                if v is not None
            )
        return MoeModelOutputWithPast(
            last_hidden_state=hidden_states,
            past_key_values=next_cache,
            hidden_states=all_hidden_states,
            attentions=all_self_attns,
            router_logits=all_router_logits,
        )

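# The base model above returns hidden states only; the causal-LM wrapper below
# adds the tied LM head and, optionally, the router auxiliary loss. A quick
# sanity check of the base model's output container, assuming a loaded
# `model = GRINMoEForCausalLM.from_pretrained("microsoft/GRIN-MoE", trust_remote_code=True)`
# and tokenized `inputs` as in the usage example in the forward docstring below
# (illustrative only):
#
#     >>> out = model.model(inputs.input_ids, output_router_logits=True, return_dict=True)
#     >>> out.last_hidden_state.shape        # [batch, seq_len, hidden_size]
#     >>> len(out.router_logits)             # one router-logit tensor per decoder layer
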
class GRINMoEForCausalLM(GRINMoEPreTrainedModel):
    _tied_weights_keys = ["lm_head.weight"]

    def __init__(self, config):
        super().__init__(config)
        self.model = GRINMoEModel(config)
        self.vocab_size = config.vocab_size
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=self.config.lm_head_bias)
        self.router_aux_loss_coef = config.router_aux_loss_coef
        self.num_experts = config.num_local_experts
        self.num_experts_per_tok = config.num_experts_per_tok
        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    def get_output_embeddings(self):
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        self.lm_head = new_embeddings

    def set_decoder(self, decoder):
        self.model = decoder

    def get_decoder(self):
        return self.model

    @add_start_docstrings_to_model_forward(GRINMOE_INPUTS_DOCSTRING)
    @replace_return_docstrings(output_type=MoeCausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
    # Ignore copy
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        output_router_logits: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, MoeCausalLMOutputWithPast]:
        r"""
        Args:
            labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
                config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
                (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

        Returns:

        Example:

        ```python
        >>> from transformers import AutoTokenizer, GRINMoEForCausalLM

        >>> model = GRINMoEForCausalLM.from_pretrained("microsoft/GRIN-MoE")
        >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/GRIN-MoE")

        >>> prompt = "Hey, are you conscious? Can you talk to me?"
        >>> inputs = tokenizer(prompt, return_tensors="pt")

        >>> # Generate
        >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
        >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
        "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
        ```"""

        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_router_logits = (
            output_router_logits if output_router_logits is not None else self.config.output_router_logits
        )

        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            output_router_logits=output_router_logits,
            return_dict=return_dict,
        )

        hidden_states = outputs[0]
        logits = self.lm_head(hidden_states)
        logits = logits.float()

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            shift_logits = shift_logits.view(-1, self.config.vocab_size)
            shift_labels = shift_labels.view(-1)
            # Enable model parallelism
            shift_labels = shift_labels.to(shift_logits.device)
            loss = loss_fct(shift_logits, shift_labels)

        aux_loss = None
        if output_router_logits:
            aux_loss = load_balancing_loss_func(
                outputs.router_logits if return_dict else outputs[-1],
                self.num_experts,
                self.num_experts_per_tok,
                attention_mask,
            )
            if labels is not None:
                loss += self.router_aux_loss_coef * aux_loss.to(loss.device)  # make sure to reside in the same device

        if not return_dict:
            output = (logits,) + outputs[1:]
            if output_router_logits:
                output = (aux_loss,) + output
            return (loss,) + output if loss is not None else output

        return MoeCausalLMOutputWithPast(
            loss=loss,
            aux_loss=aux_loss,
            logits=logits,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
            router_logits=outputs.router_logits,
        )

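    # The auxiliary term above follows the switch-transformer load-balancing loss as
    # implemented by `load_balancing_loss_func`: with E experts, per-expert dispatch
    # fraction f_e, and mean router probability p_e, it computes roughly
    #
    #     aux_loss = E * sum_e f_e * p_e,
    #
    # which is minimized when tokens are spread uniformly across experts; it is
    # scaled by `config.router_aux_loss_coef` before being added to the LM loss.
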
def prepare_inputs_for_generation(
|
1508 |
+
self,
|
1509 |
+
input_ids,
|
1510 |
+
past_key_values=None,
|
1511 |
+
attention_mask=None,
|
1512 |
+
inputs_embeds=None,
|
1513 |
+
output_router_logits=False,
|
1514 |
+
**kwargs,
|
1515 |
+
):
|
1516 |
+
# Omit tokens covered by past_key_values
|
1517 |
+
if past_key_values is not None:
|
1518 |
+
if isinstance(past_key_values, Cache):
|
1519 |
+
cache_length = past_key_values.get_seq_length()
|
1520 |
+
past_length = past_key_values.seen_tokens
|
1521 |
+
max_cache_length = past_key_values.get_max_length()
|
1522 |
+
else:
|
1523 |
+
cache_length = past_length = past_key_values[0][0].shape[2]
|
1524 |
+
max_cache_length = None
|
1525 |
+
|
1526 |
+
# Keep only the unprocessed tokens:
|
1527 |
+
# 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
|
1528 |
+
# some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
|
1529 |
+
# input)
|
1530 |
+
if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
|
1531 |
+
input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
|
1532 |
+
# 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
|
1533 |
+
# input_ids based on the past_length.
|
1534 |
+
elif past_length < input_ids.shape[1]:
|
1535 |
+
input_ids = input_ids[:, past_length:]
|
1536 |
+
# 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
|
1537 |
+
|
1538 |
+
# If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
|
1539 |
+
if (
|
1540 |
+
max_cache_length is not None
|
1541 |
+
and attention_mask is not None
|
1542 |
+
and cache_length + input_ids.shape[1] > max_cache_length
|
1543 |
+
):
|
1544 |
+
attention_mask = attention_mask[:, -max_cache_length:]
|
1545 |
+
|
1546 |
+
position_ids = kwargs.get("position_ids", None)
|
1547 |
+
if attention_mask is not None and position_ids is None:
|
1548 |
+
# create position_ids on the fly for batch generation
|
1549 |
+
position_ids = attention_mask.long().cumsum(-1) - 1
|
1550 |
+
position_ids.masked_fill_(attention_mask == 0, 1)
|
1551 |
+
if past_key_values:
|
1552 |
+
position_ids = position_ids[:, -input_ids.shape[1] :]
|
1553 |
+
|
1554 |
+
# if `inputs_embeds` are passed, we only want to use them in the 1st generation step
|
1555 |
+
if inputs_embeds is not None and past_key_values is None:
|
1556 |
+
model_inputs = {"inputs_embeds": inputs_embeds}
|
1557 |
+
else:
|
1558 |
+
model_inputs = {"input_ids": input_ids}
|
1559 |
+
|
1560 |
+
model_inputs.update(
|
1561 |
+
{
|
1562 |
+
"position_ids": position_ids,
|
1563 |
+
"past_key_values": past_key_values,
|
1564 |
+
"use_cache": kwargs.get("use_cache"),
|
1565 |
+
"attention_mask": attention_mask,
|
1566 |
+
"output_router_logits": output_router_logits,
|
1567 |
+
}
|
1568 |
+
)
|
1569 |
+
return model_inputs
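
The `cumsum` trick above derives monotonically increasing positions for real tokens while parking padding at a dummy position, which is what makes left-padded batch generation line up. A small standalone illustration:

```python
import torch

attention_mask = torch.tensor([[0, 0, 1, 1, 1],   # left-padded row
                               [1, 1, 1, 1, 1]])  # full-length row
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)  # pad slots get a harmless dummy position
print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```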

    @staticmethod
    def _reorder_cache(past_key_values, beam_idx):
        reordered_past = ()
        for layer_past in past_key_values:
            reordered_past += (
                tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
            )
        return reordered_past
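
`_reorder_cache` exists for beam search: after each step, `beam_idx` records which old beam each surviving beam descends from, and `index_select` on the batch-of-beams dimension copies the matching key/value cache rows. A toy illustration:

```python
import torch

past_state = torch.arange(6).view(3, 1, 2)  # cache rows for 3 beams
beam_idx = torch.tensor([2, 0, 0])          # beam 2 survives; beam 0 forks twice
print(past_state.index_select(0, beam_idx).squeeze(1))
# tensor([[4, 5],
#         [0, 1],
#         [0, 1]])
```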


@add_start_docstrings(
    """
    The GRINMoE Model transformer with a sequence classification head on top (linear layer).

    [`GRINMoEForSequenceClassification`] uses the last token in order to do the classification, as other causal models
    (e.g. GPT-2) do.

    Since it does classification on the last token, it needs to know the position of the last token. If a
    `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
    no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
    padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value in
    each row of the batch).
    """,
    GRINMOE_START_DOCSTRING,
)
# Copied from Phi-3.5-MoE
class GRINMoEForSequenceClassification(GRINMoEPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = GRINMoEModel(config)
        self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    @add_start_docstrings_to_model_forward(GRINMOE_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
        r"""
        labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]
        logits = self.score(hidden_states)

        if input_ids is not None:
            batch_size = input_ids.shape[0]
        else:
            batch_size = inputs_embeds.shape[0]

        if self.config.pad_token_id is None and batch_size != 1:
            raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
        if self.config.pad_token_id is None:
            sequence_lengths = -1
        else:
            if input_ids is not None:
                # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
                sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
                sequence_lengths = sequence_lengths % input_ids.shape[-1]
                sequence_lengths = sequence_lengths.to(logits.device)
            else:
                sequence_lengths = -1

        pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]

        loss = None
        if labels is not None:
            labels = labels.to(logits.device)
            if self.config.problem_type is None:
                if self.num_labels == 1:
                    self.config.problem_type = "regression"
                elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
                    self.config.problem_type = "single_label_classification"
                else:
                    self.config.problem_type = "multi_label_classification"

            if self.config.problem_type == "regression":
                loss_fct = MSELoss()
                if self.num_labels == 1:
                    loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
                else:
                    loss = loss_fct(pooled_logits, labels)
            elif self.config.problem_type == "single_label_classification":
                loss_fct = CrossEntropyLoss()
                loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
            elif self.config.problem_type == "multi_label_classification":
                loss_fct = BCEWithLogitsLoss()
                loss = loss_fct(pooled_logits, labels)
        if not return_dict:
            output = (pooled_logits,) + transformer_outputs[1:]
            return ((loss,) + output) if loss is not None else output

        return SequenceClassifierOutputWithPast(
            loss=loss,
            logits=pooled_logits,
            past_key_values=transformer_outputs.past_key_values,
            hidden_states=transformer_outputs.hidden_states,
            attentions=transformer_outputs.attentions,
        )
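
The pooling in `GRINMoEForSequenceClassification.forward` relies on an ONNX-friendly modulo trick: `argmax` finds the first pad token, subtracting one gives the last real token, and the modulo folds the no-pad case (`argmax` returns 0, so the index becomes -1) back onto the final position. A sketch assuming right padding:

```python
import torch

pad_token_id = 0
input_ids = torch.tensor([[5, 6, 7, 0, 0],   # padded row -> last real token at index 2
                          [5, 6, 7, 8, 9]])  # unpadded row -> last token at index 4
sequence_lengths = torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1  # [2, -1]
sequence_lengths = sequence_lengths % input_ids.shape[-1]                  # [2, 4]
print(sequence_lengths)  # tensor([2, 4])
```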
special_tokens_map.json
ADDED
@@ -0,0 +1,30 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
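
Once the files are downloaded, this mapping surfaces directly on the tokenizer; note that `<|endoftext|>` doubles as both EOS and pad token. A quick check (the repo id below is assumed, not confirmed by this commit; substitute the actual model id or a local path):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/GRIN-MoE")  # assumed repo id
print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)
# <s> <|endoftext|> <|endoftext|> <unk>
```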
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,130 @@
{
  "add_bos_token": false,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": false
    },
    "32000": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32001": {
      "content": "<|assistant|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32002": {
      "content": "<|placeholder1|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32003": {
      "content": "<|placeholder2|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32004": {
      "content": "<|placeholder3|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32005": {
      "content": "<|placeholder4|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32006": {
      "content": "<|system|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32007": {
      "content": "<|end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32008": {
      "content": "<|placeholder5|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32009": {
      "content": "<|placeholder6|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    },
    "32010": {
      "content": "<|user|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": true,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "chat_template": "{% for message in messages %}{% if message['role'] == 'system' and message['content'] %}{{'<|system|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'user' %}{{'<|user|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'assistant' %}{{'<|assistant|>\n' + message['content'] + '<|end|>\n'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% else %}{{ eos_token }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "legacy": false,
  "model_max_length": 4096,
  "pad_token": "<|endoftext|>",
  "padding_side": "left",
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
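
The `chat_template` above wraps each message in `<|system|>`/`<|user|>`/`<|assistant|>` markers terminated by `<|end|>`, and appends a bare `<|assistant|>` header when a generation prompt is requested. A sketch of rendering it with the standard `apply_chat_template` API (repo id assumed, as before):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/GRIN-MoE")  # assumed repo id
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# <|system|>
# You are a helpful assistant.<|end|>
# <|user|>
# Hi!<|end|>
# <|assistant|>
```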