|
---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 16078856
    num_examples: 16000
  - name: test
    num_bytes: 1591516
    num_examples: 1600
  download_size: 10736830
  dataset_size: 17670372
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
|
|
|
# Assessing DIScourse COherence in Italian TEXts (DISCOTEX) |
|
|
|
Original task website: https://sites.google.com/view/discotex/
|
|
|
Task presented at EVALITA 2023.
|
|
|
The original task is about modelling discourse coherence for Italian texts. |
|
|
|
We focus only on the first sub-task, **Last Sentence Classification**: given a short paragraph and an individual (target) sentence, the model is asked to classify whether the target sentence follows the paragraph or not.
|
|
|
To assess the capability of a Language Model to solve this kind of task, we reframed it as **Multiple-Choice QA**.
|
|
|
Given a short paragraph, the question asks the model which of four target sentences is the correct continuation. The possible answers are the letters associated with each target, plus a fifth option indicating that none of the targets is the correct continuation.
|
|
|
## Distractor Generation
|
|
|
For each sample with label 1, we use its target sentence as the gold answer and draw three random targets (from other samples) as distractors. Conversely, for each sample with label 0, we use its target sentence plus three random targets (from other samples) as distractors, and the gold answer is the sentence "nessuna delle precedenti" ("none of the above").
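
For illustration, here is a minimal Python sketch of this construction; the helper and the field names `target` and `label` (referring to the original binary-labelled data) are assumptions, not the actual generation script:

```python
import random

NONE_OF_THE_ABOVE = "nessuna delle precedenti"

def build_choices(sample, all_targets, rng=random):
    """Turn one binary-labelled sample into a 5-way multiple-choice item.

    Assumed fields: `text` (the paragraph), `target` (the candidate last
    sentence) and `label` (1 = the target follows the paragraph).
    `all_targets` is the pool of target sentences from other samples.
    """
    # Three random targets from other samples act as distractors.
    pool = [t for t in all_targets if t != sample["target"]]
    choices = rng.sample(pool, 3) + [sample["target"]]
    rng.shuffle(choices)
    # The fifth option is always "none of the above".
    choices.append(NONE_OF_THE_ABOVE)

    if sample["label"] == 1:
        label = choices.index(sample["target"])  # the target is gold
    else:
        label = 4  # the target is a distractor; the fifth option is gold

    return {"text": sample["text"], "choices": choices, "label": label}
```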
|
|
|
## Example |
|
|
|
Here you can see the structure of a single sample in this dataset.
|
|
|
|
|
```json |
|
{ |
|
"text": string, # text of the short paragraph |
|
"choices": list, # list of possible answers, with the correct one plus 4 distractors |
|
"label": int, # index of the correct anser in the choices |
|
} |
|
``` |
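
A sample can then be inspected as follows; this is a minimal usage sketch, and the repository id `your-org/discotex-mcqa` is a placeholder, not the actual one:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual one for this card.
dataset = load_dataset("your-org/discotex-mcqa")

sample = dataset["train"][0]
print(sample["text"])     # short paragraph
print(sample["choices"])  # five candidate continuations
print(sample["label"])    # index of the correct answer in `choices`
```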
|
|
|
|
|
## Statistics |
|
|
|
Training: 16,000 samples

Test: 1,600 samples
|
|
|
## Proposed Prompts |
|
Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer, we choose the option whose prompt yields the lowest perplexity.

Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.
|
|
|
Description of the task: |
|
```txt |
|
Ti verranno poste delle domande nelle quali è presente un paragrafo, e come possibili risposte varie frasi che possono essere o meno la continuazione del paragrafo.\nIndica la frase che rappresenta la continuazione più probabile del paragrafo, oppure \"nessuna delle precedenti\" se nessuna delle continuazioni è corretta.\n\n |
|
``` |

In English: *"You will be asked questions that contain a paragraph and, as possible answers, several sentences that may or may not be its continuation. Indicate the sentence that is the most likely continuation of the paragraph, or "nessuna delle precedenti" ("none of the above") if none of the continuations is correct."*
|
|
|
Prompt: |
|
```txt |
|
Paragrafo: \"{{text}}\"\nDomanda: Quali delle seguenti frasi è la continuazione più probabile del precedente paragrafo?\nA. \"{{choices[0]}}\"\nB. \"{{choices[1]}}\"\nC. \"{{choices[2]}}\"\nD. \"{{choices[3]}}\"\nE. {{choices[4]}}\nRisposta: |
|
``` |
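
As an illustration of this protocol, here is a minimal sketch of perplexity-based answer selection using `transformers`; the model name is only an example, and the 2-shot examples used for the reported results are omitted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # example model, not prescriptive
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

LETTERS = ["A", "B", "C", "D", "E"]

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (lower = more likely)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

def answer(description: str, prompt: str) -> str:
    """Append each candidate letter and pick the lowest-perplexity one."""
    scores = [perplexity(description + prompt + letter) for letter in LETTERS]
    return LETTERS[scores.index(min(scores))]
```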
|
|
|
## Results |
|
|
|
| DISCOTEX | Accuracy (2-shot) |
| :-----: | :--: |
| Gemma-2B | 19.18 |
| QWEN2-1.5B | 35.18 |
| Mistral-7B | 56.43 |
| ZEFIRO | 53.68 |
| Llama-3-8B | 58.56 |
| Llama-3-8B-IT | 66.12 |
| ANITA | 66.37 |
|
|
|
## Acknowledgments
|
|
|
We would like to thank the authors of this resource for publicly releasing such an intriguing benchmark. |
|
|
|
Additionally, we extend our gratitude to the students of the [MNLP-2024 course](https://naviglinlp.blogspot.com/), whose first homework explored various interesting prompting strategies. |
|
|
|
The original dataset is freely available for download [here](https://github.com/davidecolla/DisCoTex/tree/master/data).
|
|
|
## License |
|
The original data are released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) license.