---
license: apache-2.0
datasets:
- oscar-corpus/OSCAR-2109
language:
- en
- nl
pipeline_tag: text-generation
library_name: transformers
---

# B-GPT_en_nl_sequential

The B-GPT models are bilingual GPT-2-style models. During the first half of training, this model saw only English data; during the second half, it saw only Dutch data. By the end of training, 50% of the training data seen by the model is English and 50% is Dutch. The tokenizer was trained on the same proportions of English and Dutch data.

## Model details:

All models are trained with a [CLS] (same as [BOS]) token prepended and a [SEP] (same as [EOS]) token separating sequences. For best results, make sure [CLS] is prepended to your input sequence (see the sample usage in the Use This Model section below).

Details for this model specifically:

* Architecture: gpt2
* Parameters: 124,770,816
* Maximum sequence length: 512 tokens
* Training text data (raw): [XXXX]
* Training tokens: 12B
* Vocabulary size: 50,000
* Compute cost: ~9 NVIDIA A6000 GPU hours
* CO2 Emission: 1.17 kg

Training datasets (percentages prior to deduplication):
* 100.00000%: [OSCAR 2021/09](https://huggingface.co./datasets/oscar-corpus/OSCAR-2109)

Checkpoints are taken at training steps: 0, 10000, 20000, 30000, 40000, 50000, 64000, 64010, 64020, 64030, 64040, 64050, 64060, 64070, 64080, 64090, 64100, 64110, 64120, 64130, 64140, 64150, 64160, 64170, 64180, 64190, 64200, 64300, 64400, 64500, 64600, 64700, 64800, 64900, 65000, 66000, 67000, 68000, 69000, 70000, 80000, 90000, 100000, 110000, 120000, and 128000.
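
If each checkpoint is published as a git revision of this repository (an assumption; check the repository's branch/revision list), a specific training step can be loaded by passing `revision` to `from_pretrained`. A minimal sketch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: intermediate checkpoints are stored as revisions named after the training step.
checkpoint_step = "64000"  # hypothetical revision name; verify against the repository

tokenizer = AutoTokenizer.from_pretrained("B-GPT_en_nl_sequential")
model = AutoModelForCausalLM.from_pretrained(
    "B-GPT_en_nl_sequential",
    revision=checkpoint_step,  # load the intermediate checkpoint instead of the final weights
)
```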

## Use This Model

Load the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the causal language model (needed for text generation)
tokenizer = AutoTokenizer.from_pretrained("B-GPT_en_nl_sequential")
model = AutoModelForCausalLM.from_pretrained("B-GPT_en_nl_sequential")
```

Text generation:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="B-GPT_en_nl_sequential")

pipe("I am a")
```
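
As noted under Model details, [CLS] should be prepended to the input. A minimal sketch of doing this manually with `generate`, reusing the `tokenizer` and `model` loaded above and assuming the tokenizer exposes the token as `cls_token` and does not add it automatically:

```python
import torch

# Prepend the [CLS] token before generating, per the note in Model details.
prompt = tokenizer.cls_token + "I am a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```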

## Citation

If you use this model, please cite:

```

```