**Transforming Question-Answer Pairs to Full Declarative Answer Form**
Given the question "Which drug did you take?" and the answer "Doliprane", the aim of this model is to derive the full declarative answer "I took Doliprane".
We fine-tune T5 (Raffel et al., 2019), a pre-trained encoder-decoder model, on two datasets of (question, incomplete answer, full answer) triples, one for wh- questions and one for yes-no (YN) questions. For wh-questions, we use 3,300 entries of the dataset of (question, answer, declarative answer sentence) triples gathered by Demszky et al. (2018) using Amazon Mechanical Turk workers. For YN questions, we use the SAMSum corpus (Gliwa et al., 2019), which contains short dialogs in chit-chat format. We created 1,100 (question, answer, full answer) triples by automatically extracting YN (question, answer) pairs from this corpus and manually associating them with the corresponding declarative answers. The data was split into train and test sets (9:1), and the fine-tuned model achieved a 0.90 ROUGE-L score on the test set.
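The exact preprocessing used for fine-tuning is not documented here, but a minimal sketch of how a (question, answer, full answer) triple could be serialized into a source/target pair for T5 is shown below; the field names and string format are assumptions for illustration.

```python
# Hypothetical serialization of one training triple into a T5 source/target pair.
def to_example(question: str, answer: str, full_answer: str) -> dict:
    return {
        "input_text": f"q: {question} a: {answer}",  # source sequence fed to the encoder
        "target_text": full_answer,                  # declarative answer the decoder should produce
    }

example = to_example("Which drug did you take?", "Doliprane", "I took Doliprane")
print(example)
```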
In the paper "Exploring the Influence of Dialog Input Format for Unsupervised Clinical Questionnaire Filling", the model was applied to information-seeking dialogs to produce a declarative transformation of the whole dialog.
widget:
- text: "q: Which drug did you take? a: Doliprane"
- text: "q: do you watch a lot of comedy ? a: yes it will helpful the mind relaxation"
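
A minimal inference sketch with the Transformers library is given below. The model ID "Farnazgh/QA2D" is assumed from this repository's name, and the generation settings are illustrative; the input follows the "q: <question> a: <answer>" format shown in the widget examples above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Farnazgh/QA2D"  # assumed repository id; adjust if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Serialize the question-answer pair in the expected prompt format.
text = "q: Which drug did you take? a: Doliprane"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Expected output is the declarative form, e.g. "I took Doliprane".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```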