sudy-super committed (verified)
Commit 2fee5b2 · 1 Parent(s): 2331a48

Update README.md

Files changed (1): README.md (+15 -3)
README.md CHANGED
@@ -22,11 +22,22 @@ size_categories:
 - 10M<n<100M
 license: apache-2.0
 ---
-# Description
+# JetCopper-10B
+
+## Description
+
+JetCopper-10B was created by extracting a portion of the data after cleaning, filtering, and deduplicating the following datasets.
+
+* The Japanese subset of [C4](https://huggingface.co/datasets/allenai/c4)
+* The Japanese subset of [CC-100](https://data.statmt.org/cc-100)
+* The Japanese subset of [OSCAR-2301](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301)
+* The Japanese subset of [HPLT Datasets v1.2](https://hplt-project.org/datasets/v1.2)
+* [wiki40b-ja](https://huggingface.co/datasets/range3/wiki40b-ja)
+
 This dataset was used to pre-train [Contrail-200m-64k](https://huggingface.co/sudy-super/Contrail-200m-64k) when we participated in [LOCAL AI HACKATHON #000](https://imminent-land-e64.notion.site/000-2024-04-01-8b9b0ce5c2454002ac8ecdc6311e3a49).
 
+## The number of tokens (Using tokenizer of [calm2-chat](https://huggingface.co/cyberagent/calm2-7b-chat))
 
-# The number of tokens (Using tokenizer of [calm2-chat](https://huggingface.co/cyberagent/calm2-7b-chat))
 | Language | The number of tokens |
 | --- | --- |
 | Japanese | 4.7b |
@@ -34,5 +45,6 @@ This dataset was used to pre-train [Contrail-200m-64k](https://huggingface.co/su
 | Code | 0.9b |
 
 
-# NOTE
+## NOTE
+
 This dataset has not passed sentence end boundary determination or Perplexity Filtering, so there is room for improvement in quality.
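
The Description section above lists the source corpora but does not document the cleaning, filtering, and deduplication steps themselves. Purely as an illustration of the kind of exact-match deduplication such a pipeline commonly includes (not the method actually used for JetCopper-10B), a hash-based pass over normalized text might look like this:

```python
# Illustrative sketch only: exact deduplication by hashing normalized text.
# This is NOT the documented JetCopper-10B pipeline; the card does not spell
# out its actual cleaning/filtering/deduplication steps.
import hashlib
import unicodedata


def normalize(text: str) -> str:
    # NFKC-normalize and drop all whitespace so trivially different copies collide.
    return "".join(unicodedata.normalize("NFKC", text).split())


def deduplicate(texts):
    seen = set()
    for text in texts:
        digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield text


docs = ["今日は良い天気です。", "今日は　良い天気です。", "別の文書です。"]
print(list(deduplicate(docs)))  # the first two collapse into one entry
```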
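
The token counts in the table above were produced with the calm2-chat tokenizer. A minimal sketch for reproducing a count over a sample of the corpus is shown below; the repository id `sudy-super/JetCopper-10B`, the `train` split, and the `text` column are assumptions, so adjust them to the actual dataset schema:

```python
# Minimal sketch: count calm2-chat tokens over a streamed sample of the corpus.
# Assumptions not confirmed by the card: repo id "sudy-super/JetCopper-10B",
# a "train" split, and a "text" column.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b-chat")
stream = load_dataset("sudy-super/JetCopper-10B", split="train", streaming=True)

total_tokens = 0
for i, example in enumerate(stream):
    # Count plain corpus tokens, without special tokens.
    total_tokens += len(tokenizer.encode(example["text"], add_special_tokens=False))
    if i >= 9_999:  # sample cap; remove to count the full corpus
        break

print(f"Tokens in the first 10,000 documents: {total_tokens:,}")
```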
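
The NOTE above points out that perplexity filtering has not been applied. As a hedged sketch of what such a pass could look like (this is not part of the published pipeline; the scoring model and threshold are placeholders), each document can be scored with a small Japanese causal LM and high-perplexity outliers dropped:

```python
# Illustrative sketch of perplexity filtering; not part of the JetCopper-10B pipeline.
# The scoring model and the threshold below are placeholders, not values from the card.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "rinna/japanese-gpt2-small"  # placeholder Japanese scoring model
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()


def perplexity(text: str, max_length: int = 512) -> float:
    # Exponentiated mean token-level cross-entropy of the document.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())


docs = ["自然な日本語の文章です。", "ぬ ね の は ひ ふ へ ほ ま み む"]
threshold = 1_000.0  # placeholder cutoff; would be tuned on held-out data
kept = [d for d in docs if perplexity(d) < threshold]
print(kept)
```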