harmdevries committed
Commit
bdeb6cc
1 Parent(s): fa6c997

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -200,7 +200,7 @@ model-index:
 
 # Model Summary
 
-The SantaCoder models are a series of 1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
+The SantaCoder models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
 The main model uses multi-query attention, was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the Fill-in-the-Middle objective.
 In addition there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
 
@@ -219,9 +219,9 @@ In addition there are several models that were trained on datasets with differen
 |`fertility`| MQA | AR + FIM | Tokenizer fertility |
 |`comments`| MQA | AR + FIM | Comment-to-code ratio |
 |`dedup-alt`| MQA | AR + FIM | Stronger near-deduplication |
-|`dedup-alt-comments`| MQA | AR + FIM | Stronger near-deduplication and comment-to-code ratio |
+|`final`| MQA | AR + FIM | Stronger near-deduplication and comment-to-code ratio |
 
-The `dedup-alt-comments` model is the best performing model and was trained twice as long as the others. This checkpoint is the default model and available on the `main` branch. All other checkpoints are on separate branches with according names.
+The `final` model is the best performing model and was trained twice as long as the others. This checkpoint is the default model and available on the `main` branch. All other checkpoints are on separate branches with corresponding names.
 
 # Use
 
@@ -251,7 +251,7 @@ print(tokenizer.decode(outputs[0]))
 ```
 
 ### Fill-in-the-middle
-Fill-in-the-mid uses special tokens to identify the prefix/middle/suffic part of the input and output:
+Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
 
 ```python
 input_text = "<fim-prefix>def print_hello_world():\n    <fim-suffix>\n    print('Hello world!')<fim-middle>"
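The fill-in-the-middle prompt shown in the diff can be assembled programmatically from a prefix and suffix. A minimal sketch of that string format — the helper name `build_fim_prompt` is illustrative and not part of the model card; the `<fim-*>` token spelling is the one shown in the README's own example:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Wrap code context in SantaCoder-style fill-in-the-middle tokens.

    The model sees the prefix and suffix of the code, and generates the
    missing middle after the <fim-middle> sentinel.
    """
    return f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>"


# Reproduce the README's example prompt: the body of the function is missing.
prompt = build_fim_prompt(
    "def print_hello_world():\n    ",
    "\n    print('Hello world!')",
)
print(prompt)
```

The resulting string would then be tokenized and passed to the model as in the `# Use` section; the completion the model emits after `<fim-middle>` is the infilled code.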