nbel committed
Commit 11474b1 · verified · 1 Parent(s): 7cc6da0

Update README.md

Files changed (1):
  1. README.md +106 -2

README.md CHANGED
@@ -1,6 +1,110 @@
  ---
- license: creativeml-openrail-m
  language:
  - es
  pretty_name: 'EsCoLA: Spanish Corpus of Linguistic Acceptability'
- ---
  ---
+ license: cc-by-nc-sa-4.0
  language:
  - es
  pretty_name: 'EsCoLA: Spanish Corpus of Linguistic Acceptability'
+ ---

Introduction

The Spanish Corpus of Linguistic Acceptability (EsCoLA) includes 11,174 sentences taken from the linguistic literature, each with a binary acceptability annotation made by the original authors themselves. The work is inspired by CoLA: https://nyu-mll.github.io/CoLA/#

Paper

Núria Bel, Marta Punsola, and Valle Ruiz-Fernández. 2024. EsCoLA: Spanish Corpus of Linguistic Acceptability. Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy.

Download

The corpus has a CC-BY 4.0 license. Download the EsCoLA InDomain train and dev datasets, plus the human annotations, from https://github.com/nuriabel/LUTEST/. For the EsCoLA OutDomain dataset and the InDomain test data, please contact [email protected].

Data format

The EsCoLA dataset is split into two subsets: an in-domain subset (InDomain) with 10,567 sentences and an out-of-domain subset (OutDomain) with 607 sentences. The in-domain subset has been split five times into train/dev/test sections:

train: 8,454 sentences
dev: 1,053 sentences
test: 1,060 sentences

The out-of-domain subset is split into dev/test sections. The test sets are not made public.

For the in-domain subset, each line in the .tsv files consists of 11 tab-separated columns (a loading sketch follows the two column lists):

Column 1: a unique ID
Column 2: the source of the sentence
Column 3: the acceptability judgment label from the source (0 = unacceptable, 1 = acceptable)
Column 4: the source's annotation (* for unacceptable sentences)
Columns 5, 6 and 7: the three human annotations
Column 8: the median of the human annotations
Column 9: the sentence
Column 10: the category of the linguistic phenomenon the sentence is an example of
Column 11: the split to which the sentence belongs

For the out-of-domain subset, each line in the .tsv file consists of 6 tab-separated columns:

Column 1: a unique ID
Column 2: the source of the sentence
Column 3: the acceptability judgment label from the source (0 = unacceptable, 1 = acceptable)
Column 4: the source's annotation (* for unacceptable sentences)
Column 5: the sentence
Column 6: the category of the linguistic phenomenon the sentence is an example of
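
The column layouts above translate directly into a loader. Below is a minimal sketch, assuming pandas is available and the .tsv files have been downloaded locally; the file name escola_indomain_train.tsv is a placeholder rather than the repository's actual file name, and the assumption that the files carry no header row may need adjusting.

```python
# Minimal sketch for loading EsCoLA .tsv files with the column layouts described above.
# The file name below is a placeholder; point it at the file you actually downloaded.
import csv
import pandas as pd

INDOMAIN_COLUMNS = [
    "ID", "Source", "Label", "Source_annotation",
    "Annotator_1", "Annotator_2", "Annotator_3",
    "Human_annotation_median", "Sentence", "Category", "Split",
]
OUTDOMAIN_COLUMNS = ["ID", "Source", "Label", "Source_annotation", "Sentence", "Category"]

def load_escola(path: str, columns: list[str]) -> pd.DataFrame:
    """Read one EsCoLA file as tab-separated values (assumes no header row)."""
    return pd.read_csv(
        path,
        sep="\t",
        names=columns,
        header=None,             # drop this if the files turn out to include a header row
        quoting=csv.QUOTE_NONE,  # sentences may contain quote characters
        keep_default_na=False,   # keep empty Source_annotation cells as empty strings
    )

train = load_escola("escola_indomain_train.tsv", INDOMAIN_COLUMNS)  # placeholder path
print(train["Label"].value_counts())  # acceptable (1) vs. unacceptable (0) counts
print(train["Split"].value_counts())  # Column 11: the split each sentence belongs to
```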

Corpus Sample

In-domain:

| ID | Source | Label | Source_annotation | Annotator_1 | Annotator_2 | Annotator_3 | Human_annotation_median | Sentence | Category | Split |
|---|---|---|---|---|---|---|---|---|---|---|
| EsCoLA_5681 | GDE35 | 1 |  | 1 | 0 | 1 | 1 | ¿Opinaron si debían hacerlo? | 7 | train |
| EsCoLA_8872 | GDE51 | 1 |  | 1 | 1 | 1 | 1 | ¿Quién ha llamado? | 7 | train |
| EsCoLA_5661 | GDE35 | 1 |  | 1 | 0 | 0 | 0 | No se sabe donde ir. | 7 | train |
| EsCoLA_7328 | GDE42 | 0 | * | 0 | 1 | 1 | 1 | Sólo tenía una peseta, y aquel tipo me pedía doscientos. | 14 | train |

Out-of-domain:

| ID | Source | Label | Source_annotation | Sentence | Category |
|---|---|---|---|---|---|
| OD_1 | ng34 | 1 |  | El camino bordea el río. | 1 |
| OD_2 | ng34 | 1 |  | Las aves vuelan. | 1 |
| OD_3 | ng34 | 1 |  | Pepe tiene dinero. | 1 |
| OD_4 | ng34 | 1 |  | Tengo hambre. | 1 |
| OD_5 | ng34 | 0 | * | Dudo tu solución. | 1 |
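
As a concrete illustration of how Columns 3-8 relate, the sketch below re-derives the human-annotation median (Column 8) for the in-domain sample rows above and compares it with the source label (Column 3); the tuples simply transcribe the sample table.

```python
# Re-derive Column 8 (median of the three human annotations) for the in-domain
# sample rows above and compare it with the source label (Column 3).
from statistics import median

sample_rows = [
    # (ID, source_label, annotator_1, annotator_2, annotator_3, reported_median)
    ("EsCoLA_5681", 1, 1, 0, 1, 1),
    ("EsCoLA_8872", 1, 1, 1, 1, 1),
    ("EsCoLA_5661", 1, 1, 0, 0, 0),
    ("EsCoLA_7328", 0, 0, 1, 1, 1),
]

for sent_id, label, a1, a2, a3, reported in sample_rows:
    recomputed = median([a1, a2, a3])
    assert recomputed == reported  # matches the reported Human_annotation_median
    print(f"{sent_id}: human median {recomputed}, source label {label}, "
          f"{'agree' if recomputed == label else 'disagree'}")
```

Note that the human median can disagree with the source label, as in EsCoLA_5661 and EsCoLA_7328 above, so the two values are kept as separate columns.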

Processing

During the gathering and processing of the data, some sentences from the source documents may have been omitted or altered. We discarded examples that mark dubious acceptability with "?" or other signs, but examples that include acceptability alternations were kept by creating two versions: the acceptable and the unacceptable sentence. Finally, examples that were not full sentences, that is, that contain no main verb, were manually edited to add a neutral verb and turn them into sentences, while keeping the acceptability value.

Sources

InDomain: Demonte and Bosque (1999)
OutDomain: RAE (2009), Palencia and Aragonés (2007), Díaz and Yagüe (2019)

Annotation

The dataset has been manually annotated with 14 categories of linguistic phenomena:

1. Simple
2. Predicative
3. Adjuncts
4. Argument types
5. Argument alternation
6. Binding pronouns
7. Wh-phenomena
8. Complement clauses
9. Modal, negation, periphrasis and auxiliaries
10. Infinitive embedded VPs and referential phenomena
11. Complex NPs and APs
12. S-Syntax
13. Determiners, quantifiers, comparative and superlative constructions
14. Spanish phenomena

The Spanish phenomena (category 14) have been further classified into 6 subcategories:

14.1. Agreement in nominal constructions
14.2. Subjunctive mode and tense
14.3. Spurious preposition for completive clauses ('dequeísmo')
14.4. Subject ellipsis
14.5. Pronominal cliticization
14.6. Ser/estar copula selection
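
For convenience only, the numeric Category column can be decoded with a mapping built from the list above; this dictionary is not shipped with the dataset, and whether the column also encodes the 14.x subcategories is not shown in the samples, so treat it as an illustrative sketch.

```python
# Illustrative mapping from the numeric Category column to the phenomenon names
# listed above (top-level categories 1-14 only).
CATEGORY_NAMES = {
    1: "Simple",
    2: "Predicative",
    3: "Adjuncts",
    4: "Argument types",
    5: "Argument alternation",
    6: "Binding pronouns",
    7: "Wh-phenomena",
    8: "Complement clauses",
    9: "Modal, negation, periphrasis and auxiliaries",
    10: "Infinitive embedded VPs and referential phenomena",
    11: "Complex NPs and APs",
    12: "S-Syntax",
    13: "Determiners, quantifiers, comparative and superlative constructions",
    14: "Spanish phenomena",
}

def category_name(value: int) -> str:
    """Return the phenomenon name for a top-level Category value."""
    return CATEGORY_NAMES.get(value, f"unknown category {value}")

print(category_name(7))   # "Wh-phenomena", the category of the in-domain sample sentences above
print(category_name(14))  # "Spanish phenomena", further divided into 14.1-14.6
```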

Citation

If you use the dataset, please cite the following papers:

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2018. Neural Network Acceptability Judgments. arXiv preprint arXiv:1805.12471.

Núria Bel, Marta Punsola, and Valle Ruiz-Fernández. 2024. EsCoLA: Spanish Corpus of Linguistic Acceptability. Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy.

Disclaimer

The dataset has been built by copying examples from published works that are protected by copyright. According to Spanish law, we have respected the copyright because the number of elements taken represents less than 10% of the whole work, and the number of items copied is justified by the aims of research.