ArneBinder committed · Commit 277dc70 · 1 Parent(s): fd681a0

Upload 3 files

Files changed (3):
  1. README.md +224 -0
  2. abstrct.py +38 -0
  3. requirements.txt +1 -0
README.md ADDED
# PIE Dataset Card for "abstrct"

This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the AbstRCT dataset ([paper](https://ebooks.iospress.nl/publication/55129) and [data repository](https://gitlab.com/tomaye/abstrct)). Since the AbstRCT dataset is published in the [BRAT standoff format](https://brat.nlplab.org/standoff.html), this dataset builder is based on the [PyTorch-IE brat dataset loading script](https://huggingface.co/datasets/pie/brat).

Therefore, the `abstrct` dataset as described here follows the data structure from the [PIE brat dataset card](https://huggingface.co/datasets/pie/brat).

### Dataset Summary

A novel corpus of healthcare texts (i.e., RCT abstracts on various diseases) from the MEDLINE database, which are annotated with argumentative components (i.e., `MajorClaim`, `Claim`, and `Premise`) and relations (i.e., `Support`, `Attack`, and `Partial-attack`), in order to support clinicians' daily tasks in information finding and evidence-based reasoning for decision making.
### Supported Tasks and Leaderboards

- **Tasks**: Argumentation Mining, Component Identification, Boundary Detection, Relation Identification, Link Prediction
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

The language in the dataset is English (in the medical/healthcare domain).
### Dataset Variants

The `abstrct` dataset comes in a single version (`default`) with `BratDocumentWithMergedSpans` as document type. Note that this is in contrast to the base `brat` dataset, where the document type for the `default` variant is `BratDocument`. The reason is that the AbstRCT dataset has been published with single-fragment spans only. Since no fragments need to be merged, the document type `BratDocumentWithMergedSpans` is easier to handle for most task modules.
### Data Schema

See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema).

### Usage

```python
from pie_datasets import load_dataset, builders

# load default version
datasets = load_dataset("pie/abstrct")
doc = datasets["neoplasm_train"][0]
assert isinstance(doc, builders.brat.BratDocumentWithMergedSpans)
```
### Document Converters

The dataset provides document converters for the following target document types:

- `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
  - `LabeledSpans`, converted from `BratDocumentWithMergedSpans`'s `spans`
    - labels: `MajorClaim`, `Claim`, `Premise`
  - `BinaryRelations`, converted from `BratDocumentWithMergedSpans`'s `relations`
    - labels: `Support`, `Partial-Attack`, `Attack`

See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type definitions.
### Data Splits

| Disease-based Split                                         | `neoplasm`              | `glaucoma`           | `mixed`              |
| ----------------------------------------------------------- | ----------------------: | -------------------: | -------------------: |
| No. of documents <br/>- `_train`<br/>- `_dev`<br/>- `_test` | <br/>350<br/>50<br/>100 | <br/> <br/> <br/>100 | <br/> <br/> <br/>100 |

**Important Note**:

- `mixed_test` contains 20 abstracts on each of the following diseases: glaucoma, neoplasm, diabetes, hypertension, hepatitis.
- 31 out of 40 abstracts in `mixed_test` overlap with abstracts in `neoplasm_test` and `glaucoma_test`.
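The split sizes and the overlap note can be cross-checked with a little arithmetic; the resulting number of unique abstracts matches the 669 abstracts on which the label counts in the next section are reported:

```python
# Document counts per split, as given in the table above
split_sizes = {
    "neoplasm_train": 350,
    "neoplasm_dev": 50,
    "neoplasm_test": 100,
    "glaucoma_test": 100,
    "mixed_test": 100,
}

total = sum(split_sizes.values())  # 700 documents across all splits

# 31 abstracts in mixed_test also appear in neoplasm_test or glaucoma_test,
# so the number of unique abstracts is:
unique_abstracts = total - 31
print(total, unique_abstracts)  # 700 669
```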
### Label Descriptions

In this section, we describe the labels according to [Mayer et al. (2020)](https://ebooks.iospress.nl/publication/55129), as well as our label counts on 669 abstracts.

Unfortunately, the numbers we report do not correspond to those reported by Mayer et al. in their paper (see Table 1, p. 2109). Morio et al. ([2022](https://aclanthology.org/2022.tacl-1.37.pdf); p. 642, Table 1), who utilized this corpus for their AM tasks, also reported different numbers, claiming that there were double annotation errors in the original statistics (see [reference](https://github.com/hitachi-nlp/graph_parser/blob/main/examples/multitask_am/README.md#qas)).
#### Components

| Components   | Count | Percentage |
| ------------ | ----: | ---------: |
| `MajorClaim` |   129 |        3 % |
| `Claim`      |  1282 |     30.2 % |
| `Premise`    |  2842 |     66.8 % |

- `MajorClaim`s are more general, concluding claims, which are supported by more specific claims
- `Claim` is a concluding statement made by the author about the outcome of the study. Claims only point to other claims.
- `Premise` (a.k.a. evidence) is an observation or measurement in the study, which supports or attacks another argument component, usually a `Claim`. Premises are observed facts, and therefore credible without further justification, as they are the ground truth the argumentation is based on.

(Mayer et al. 2020, p. 2110)
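As a quick sanity check, the percentage column can be recomputed from the counts. Note that `Claim` comes out at 30.1 % under standard rounding, slightly off from the 30.2 % above, so the table's percentages appear to be rounded differently:

```python
from collections import Counter

# component counts from the table above
component_counts = Counter({"MajorClaim": 129, "Claim": 1282, "Premise": 2842})
total = sum(component_counts.values())  # 4253 components in total

percentages = {
    label: round(100 * count / total, 1) for label, count in component_counts.items()
}
print(percentages)  # {'MajorClaim': 3.0, 'Claim': 30.1, 'Premise': 66.8}
```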
#### Relations

| Relations                | Count | Percentage |
| ------------------------ | ----: | ---------: |
| support: `Support`       |  2289 |       87 % |
| attack: `Partial-Attack` |   275 |     10.4 % |
| attack: `Attack`         |    69 |      2.6 % |

- `Support`: All statements or observations that justify the proposition of the target component
- `Partial-Attack`: The source component is not in full contradiction, but weakens the target component by constraining its proposition. It usually occurs between two claims
- `Attack`: A component attacks another one if it is
  - i) contradicting the proposition of the target component, or
  - ii) undercutting its implicit assumption of significance constraints
- `Premise` can only be connected to either a `Claim` or another `Premise`
- `Claim`s can only point to other `Claim`s
- A component might have more than one **outgoing** and/or **incoming relation**. In rare cases, a component has no relation to another component at all.

(Mayer et al. 2020, p. 2110)
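The relation percentages can be recomputed the same way (`Support` comes to 86.9 %, consistent with the rounded 87 % in the table), and the linking constraints can be sketched as a small validity check. Treating `MajorClaim` as a kind of claim in the constraints is our assumption; the card only states that claims point to claims:

```python
from collections import Counter

# relation counts from the table above
relation_counts = Counter({"Support": 2289, "Partial-Attack": 275, "Attack": 69})
total = sum(relation_counts.values())  # 2633 relations in total

percentages = {
    label: round(100 * count / total, 1) for label, count in relation_counts.items()
}
print(percentages)  # {'Support': 86.9, 'Partial-Attack': 10.4, 'Attack': 2.6}

# Linking constraints: a Premise may point to a Claim or another Premise;
# claims may only point to other claims. MajorClaim is treated as a claim
# here (an assumption, see above).
VALID_TARGETS = {
    "Premise": {"Premise", "Claim", "MajorClaim"},
    "Claim": {"Claim", "MajorClaim"},
    "MajorClaim": {"Claim", "MajorClaim"},
}

def is_valid_link(source_label: str, target_label: str) -> bool:
    return target_label in VALID_TARGETS.get(source_label, set())

print(is_valid_link("Premise", "Claim"))  # True
print(is_valid_link("Claim", "Premise"))  # False
```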
## Dataset Creation

### Curation Rationale

"\[D\]espite its natural employment in healthcare applications, only few approaches have applied AM methods to this kind of text, and their contribution is limited to the detection of argument components, disregarding the more complex phase of predicting the relations among them. In addition, no huge annotated dataset for AM is available for the healthcare domain (p. 2108)...to support clinicians in decision making or in (semi)-automatically filling evidence tables for systematic reviews in evidence-based medicine. (p. 2114)"
118
+
119
+ ### Source Data
120
+
121
+ [MEDLINE database](https://www.nlm.nih.gov/medline/medline_overview.html)
122
+
123
+ #### Initial Data Collection and Normalization
124
+
125
+ Extended from the previous dataset in [Mayer et al. 2018](https://webusers.i3s.unice.fr/~riveill/IADB/publications/2018-COMMA.pdf), 500 medical abstract from randomized controlled trials (RCTs) were retrieved directly from [PubMed](https://www.ncbi.nlm.nih.gov/pubmed/) by searching for titles or abstracts containing the disease name.
126
+
127
+ (See the definition of RCT in the authors' [guideline](https://gitlab.com/tomaye/abstrct/-/blob/master/AbstRCT_corpus/AnnotationGuidelines.pdf) (Section 1.2) and [US National Library of Medicine](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6235704/))
128
+
129
+ #### Who are the source language producers?
130
+
131
+ \[More Information Needed\]
### Annotations

#### Annotation process

"An expert in the medical domain (a pharmacist) validated the annotation guidelines before starting the annotation process." (p. 2110)

"Annotation was started after a training phase, where amongst others the component boundaries were topic of discussion. Gold labels were set after a reconciliation phase, during which the annotators tried to reach an agreement." The number of annotators varies between the two annotation phases (component and relation annotation):

On the annotation of argument components, "IAA among the three annotators has been calculated on 30 abstracts, resulting in a Fleiss’ kappa of 0.72 for argumentative components and 0.68 for the more fine-grained distinction between claims and evidence." (p. 2109)

On the annotation of argumentative relations, "IAA has been calculated on 30 abstracts annotated in parallel by three annotators, resulting in a Fleiss’ kappa of 0.62. The annotation of the remaining abstracts was carried out by one of the above mentioned annotators." (p. 2110)

See the [Annotation Guideline](https://gitlab.com/tomaye/abstrct/-/blob/master/AbstRCT_corpus/AnnotationGuidelines.pdf?ref_type=heads) for more information on definitions and annotated samples.
#### Who are the annotators?

Two annotators with a background in computational linguistics. No information is given on the third annotator.

### Personal and Sensitive Information

\[More Information Needed\]
## Considerations for Using the Data

### Social Impact of Dataset

"These \[*intelligent*\] systems apply to clinical trials, clinical guidelines, and electronic health records, and their solutions range from the automated detection of PICO elements in health records to evidence-based reasoning for decision making. These applications highlight the need of clinicians to be supplied with frameworks able to extract, from the huge quantity of data available for the different diseases and treatments, the exact information they necessitate and to present this information in a structured way, easy to be (possibly semi-automatically) analyzed...Given its aptness to automatically detect in text those argumentative structures that are at the basis of evidence-based reasoning applications, AM represents a potential valuable contribution in the healthcare domain." (p. 2108)

"We expect that our work will have a large impact for clinicians as it is a crucial step towards AI supported clinical deliberation at a large scale." (p. 2114)
### Discussion of Biases

\[More Information Needed\]

### Other Known Limitations

\[More Information Needed\]

## Additional Information

### Dataset Curators

\[More Information Needed\]
### Licensing Information

- **License**: the AbstRCT dataset is released under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
- **Funding**: This work is partly funded by the French government labelled PIA program under its IDEX UCA JEDI project (ANR-15-IDEX-0001). This work has been supported by the French government, through the 3IA Côte d’Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.
### Citation Information

```
@inproceedings{mayer2020ecai,
  author    = {Tobias Mayer and
               Elena Cabrio and
               Serena Villata},
  title     = {Transformer-Based Argument Mining for Healthcare Applications},
  booktitle = {{ECAI} 2020 - 24th European Conference on Artificial Intelligence},
  series    = {Frontiers in Artificial Intelligence and Applications},
  volume    = {325},
  pages     = {2108--2115},
  publisher = {{IOS} Press},
  year      = {2020},
}
```

### Contributions

Thanks to [@ArneBinder](https://github.com/ArneBinder) and [@idalr](https://github.com/idalr) for adding this dataset.
abstrct.py ADDED
from pytorch_ie.documents import TextDocumentWithLabeledSpansAndBinaryRelations

from pie_datasets.builders import BratBuilder, BratConfig
from pie_datasets.builders.brat import BratDocumentWithMergedSpans

URL = "https://gitlab.com/tomaye/abstrct/-/archive/master/abstrct-master.zip"
SPLIT_PATHS = {
    "neoplasm_train": "abstrct-master/AbstRCT_corpus/data/train/neoplasm_train",
    "neoplasm_dev": "abstrct-master/AbstRCT_corpus/data/dev/neoplasm_dev",
    "neoplasm_test": "abstrct-master/AbstRCT_corpus/data/test/neoplasm_test",
    "glaucoma_test": "abstrct-master/AbstRCT_corpus/data/test/glaucoma_test",
    "mixed_test": "abstrct-master/AbstRCT_corpus/data/test/mixed_test",
}


class AbstRCT(BratBuilder):
    BASE_DATASET_PATH = "DFKI-SLT/brat"
    BASE_DATASET_REVISION = "bb8c37d84ddf2da1e691d226c55fef48fd8149b5"

    BUILDER_CONFIGS = [
        BratConfig(name=BratBuilder.DEFAULT_CONFIG_NAME, merge_fragmented_spans=True),
    ]
    DOCUMENT_TYPES = {
        BratBuilder.DEFAULT_CONFIG_NAME: BratDocumentWithMergedSpans,
    }

    # we need to add None to the list of dataset variants to support the default dataset variant
    BASE_BUILDER_KWARGS_DICT = {
        dataset_variant: {"url": URL, "split_paths": SPLIT_PATHS}
        for dataset_variant in ["default", None]
    }

    DOCUMENT_CONVERTERS = {
        TextDocumentWithLabeledSpansAndBinaryRelations: {
            "spans": "labeled_spans",
            "relations": "binary_relations",
        },
    }
requirements.txt ADDED

pie-datasets>=0.4.0,<0.9.0