Commit e9af0ee · Parent: b2ce6ac
Update README.md
README.md CHANGED
@@ -11,7 +11,9 @@ license: cc-by-sa-4.0
 
 ## Model Description
 
-`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow developer survey.
+`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow developer survey.
+
+## Usage
 The model is intended to do single/multi-line code completion from a long context window of up to 16k tokens.
 Get started generating code with `StableCode-Completion-Alpha-3B` by using the following code snippet:
 
@@ -38,7 +40,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 * **Developed by**: Code.AI Team @ [Stability AI](https://stability.ai/)
 * **Model type**: `StableCode-Completion-Alpha-3B` models are auto-regressive language models based on the transformer decoder architecture.
-* **Language(s)**:
+* **Language(s)**: Code
 * **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
 * **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use.
 * **Contact**: For questions and comments about the model, please email `[email protected]`
@@ -60,7 +62,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 ### Training Dataset
 
-The first pre-training stage relies on 300B tokens sourced from the top programming languages occurring in the Stack Overflow developer survey. We then finetune it on a longer-context augmentation of `starcoder-data`.
+The first pre-training stage relies on 300B tokens sourced from the top programming languages occurring in the Stack Overflow developer survey, drawn from the `starcoder-data` dataset. We then finetune it on a longer-context augmentation of the `starcoder-data` dataset.
 
 ### Training Procedure
 
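The usage snippet the README refers to is not included in these hunks; only its final `print(tokenizer.decode(...))` line survives as diff context. Assuming it follows the standard Hugging Face `transformers` generation pattern, a minimal sketch might look like the following. The repository id `stabilityai/stablecode-completion-alpha-3b` and the sampling settings are assumptions, not taken from the diff.

```python
# Sketch of a completion call with Hugging Face transformers.
# Assumed repo id and sampling settings; not part of the diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "stabilityai/stablecode-completion-alpha-3b"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")

# A short code prefix; the model continues it (single/multi-line completion).
prompt = "import torch\nimport torch.nn as nn\n\n"
inputs = tokenizer(prompt, return_tensors="pt")
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
# Matches the context line visible in the diff hunks.
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```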