holylovenia committed on
Commit
b9c955b
1 Parent(s): a12a146

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +19 -19
README.md CHANGED
tags:
- self-supervised-pretraining
---

This corpus is an attempt to recreate the dataset used for training XLM-R. It comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). It was constructed using the URLs and paragraph indices provided by the CC-Net repository, processing the January-December 2018 Common Crawl snapshots. Each file consists of documents separated by double newlines, with paragraphs within the same document separated by single newlines. The data is generated using the open-source CC-Net repository. No claims of intellectual property are made on the work of preparing the corpus.
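
As a quick illustration of that layout, here is a minimal sketch of splitting one such file into documents and paragraphs (the file name `ind.txt` is a hypothetical example, not taken from this card):

```
# Split a CC-100-style text file into documents and paragraphs.
# Documents are separated by blank lines (double newlines);
# paragraphs within a document are separated by single newlines.
with open("ind.txt", encoding="utf-8") as f:
    raw = f.read()

documents = [doc for doc in raw.split("\n\n") if doc.strip()]
for doc in documents[:3]:
    paragraphs = doc.split("\n")
    print(len(paragraphs), "paragraph(s); first:", paragraphs[0][:80])
```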

## Languages

ind, jav, sun, mya, mya_zaw, lao, khm, tgl, vie, tha, zlm

## Supported Tasks

Self-Supervised Pretraining

## Dataset Usage

### Using `datasets` library

```
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub; the loading script
# requires trust_remote_code=True.
dset = load_dataset("SEACrowd/cc100", trust_remote_code=True)
```
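
As a quick sanity check after loading, one might inspect the splits and a sample record; a minimal sketch continuing the snippet above (the `train` split name is an assumption, since this card does not list the splits):

```
# Continuing from the snippet above: dset is a DatasetDict keyed by split.
print(dset)
print(dset["train"][0])  # assumes a "train" split exists
```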

### Using `seacrowd` library

```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("cc100", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("cc100"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```
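
Rather than hard-coding `<config_name>`, one can pick a config from the list returned by `sc.available_config_names`; a minimal sketch (the choice of the first name is arbitrary, and the naming pattern of the configs is not specified here):

```
import seacrowd as sc

# List the configs and load the first one; any name from the list works.
config_names = sc.available_config_names("cc100")
print(config_names)
dset = sc.load_dataset_by_config_name(config_name=config_names[0])
```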

More details on how to use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

## Dataset Homepage
 
 