bourdoiscatie committed · Commit 5e4b271 · 1 Parent(s): 0a127ad

Update README.md

Files changed (1): README.md (+18 -8)
README.md CHANGED
@@ -4,19 +4,29 @@ language:
 license:
 - unknown
 size_categories:
-- 10K<n<100K
+- 100K<n<1M
 task_categories:
 - token-classification
 tags:
 - ner
+- DFP
+- french prompts
+annotations_creators:
+- found
+language_creators:
+- found
+multilinguality:
+- monolingual
+source_datasets:
+- wikiann
 ---
 
 # wikiann_fr_prompt_ner
 ## Summary
 
-**wikiann_fr_prompt_ner** is a subset of the [**Dataset of French Prompts (DFP)**]().
-It contains **X** rows that can be used for a named entity recognition task.
-The original data (without prompts) comes from the dataset [wikiann](https://huggingface.co/datasets/tner/wikiann) by Pan et al., where only the French part has been kept.
+**wikiann_fr_prompt_ner** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
+It contains **840,000** rows that can be used for a named entity recognition task.
+The original data (without prompts) comes from the dataset [wikiann](https://huggingface.co/datasets/tner/wikiann) by Pan et al., where only the French part has been kept.
 A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
 
 
@@ -57,11 +67,11 @@ wikiann['train']['tags'] = list(map(lambda x: x.replace("[","").replace("]","").
 ```
 
 
-
 # Splits
-- train with X samples
-- dev with Y samples
-- test with Z samples
+- `train` with 420,000 samples
+- `valid` with 210,000 samples
+- `test` with 210,000 samples
+
 
 
 # How to use?
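
As a convenience for readers of this commit, here is a minimal loading sketch based on the updated card. The repo id `CATIE-AQ/wikiann_fr_prompt_ner` is an assumption inferred from the DFP link added above, and the exact column names are not confirmed by this diff; the card's own `# How to use?` section remains the authoritative reference.

```python
from datasets import load_dataset

# Assumed repo id, inferred from the CATIE-AQ/DFP link in the card;
# adjust if the dataset lives under a different namespace.
dataset = load_dataset("CATIE-AQ/wikiann_fr_prompt_ner")

# The updated card lists three splits:
#   train (420,000), valid (210,000), test (210,000)
for name, split in dataset.items():
    print(name, split.num_rows)

# Inspect one prompted example; the card says the rows follow the
# xP3 format, so input/target-style columns are expected.
print(dataset["train"][0])
```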