---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: text
      dtype: string
    - name: choices
      sequence: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 16206856
      num_examples: 16000
    - name: test
      num_bytes: 1604316
      num_examples: 1600
  download_size: 10835222
  dataset_size: 17811172
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

# Assessing DIScourse COherence in Italian TEXts (DISCOTEX)

Original Paper: https://sites.google.com/view/discotex/

Task presented at EVALITA-2023

The original task is about modelling discourse coherence in Italian texts.

We focus only on the first sub-task, Last Sentence Classification: given a short paragraph and an individual sentence (the target), the model is asked to classify whether the target follows the paragraph or not.

To assess the capability of a Language Model on this kind of task, we reframed it as Multiple-Choice QA.

Given a short paragraph, the question asks the model which of four target sentences is the correct continuation; the answer options are the letters associated with each target, plus a fifth option indicating that none of the targets is the correct continuation.

## Distractors Generation

For each sample with label 1, we take its target sentence as the gold answer and pick three targets at random from other samples as distractors. Conversely, for each sample with label 0, its target sentence and three targets picked at random from other samples all act as distractors, and the gold answer is the sentence "nessuna delle precedenti" ("none of the above").
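
Below is a minimal sketch of this construction, assuming the original binary samples expose a paragraph, a target sentence, and a 0/1 label (all function and field names here are illustrative, not the original preprocessing code):

```python
import random

NONE_OPTION = "nessuna delle precedenti"

def build_mcqa_sample(sample, other_targets, rng):
    """Reframe a binary (paragraph, target, label) sample as a 5-choice MCQA sample.

    `other_targets` is a pool of target sentences taken from other samples.
    """
    distractors = rng.sample([t for t in other_targets if t != sample["target"]], 3)
    # The first four choices are candidate continuations; the fifth is always
    # the "none of the above" option (as in the prompt template below).
    candidates = [sample["target"]] + distractors
    rng.shuffle(candidates)
    choices = candidates + [NONE_OPTION]
    if sample["label"] == 1:
        # The original target really follows the paragraph: it is the gold answer.
        label = choices.index(sample["target"])
    else:
        # The original target does not follow the paragraph: it acts as a fourth
        # distractor and the gold answer is "nessuna delle precedenti".
        label = len(choices) - 1
    return {"text": sample["paragraph"], "choices": choices, "label": label}

# Example usage with toy data:
# rng = random.Random(42)
# mcqa = build_mcqa_sample({"paragraph": "...", "target": "...", "label": 0}, pool_of_targets, rng)
```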

## Example

Here you can see the structure of a single sample in the present dataset.

```
{
  "text": string,   # text of the short paragraph
  "choices": list,  # list of the five possible answers: the correct one plus four distractors
  "label": int,     # index of the correct answer in `choices`
}
```
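
For instance, a sample can be inspected with the Hugging Face `datasets` library; the repository ID below is a placeholder for this dataset's Hub ID:

```python
from datasets import load_dataset

# Placeholder ID: replace with the actual Hub ID of this dataset repository.
ds = load_dataset("<org>/discotex")

sample = ds["train"][0]
print(sample["text"])     # short paragraph
print(sample["choices"])  # list of five candidate answers
print(sample["label"])    # index of the correct answer in `choices`
```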

## Statistics

Training: 16000

Test: 1600

## Proposed Prompts

Here we describe the prompt given to the model, over which we compute the perplexity score; as the model's answer we choose the option whose prompt has the lowest perplexity. Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.

Description of the task:

```
Ti verranno poste delle domande, nelle quali è presente un paragrafo, e come possibili risposte varie frasi che possono essere o meno il continuo.\nIndica la frase che rappresenta la continuazione del paragrafo oppure 'nessuna delle precedenti', se nessuna delle continuazioni è corretta.
```

Prompt:

```
Paragrafo: '{{text}}'\nDomanda: Quali delle seguenti frasi presenta una continuazione del precedente paragrafo?\nA. '{{choices[0]}}'\nB. '{{choices[1]}}'\nC. '{{choices[2]}}'\nD. '{{choices[3]}}'\nE. {{choices[4]}}\nRisposta:
```
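
A minimal sketch of this perplexity-based selection, assuming a causal LM from `transformers`; here each answer letter is appended to the filled prompt and the lowest-perplexity completion wins. The model choice and helper names are illustrative, and this is not the exact evaluation code behind the results below:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto")
model.eval()

LETTERS = ["A", "B", "C", "D", "E"]

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (exp of the mean token negative log-likelihood)."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def predict(description: str, text: str, choices: list[str]) -> int:
    """Return the index of the answer whose completed prompt has the lowest perplexity."""
    option_lines = [f"{letter}. '{choice}'" for letter, choice in zip(LETTERS[:4], choices[:4])]
    option_lines.append(f"E. {choices[4]}")
    prompt = (
        f"{description}\n"
        f"Paragrafo: '{text}'\n"
        "Domanda: Quali delle seguenti frasi presenta una continuazione del precedente paragrafo?\n"
        + "\n".join(option_lines)
        + "\nRisposta:"
    )
    # Score one completed prompt per answer letter and pick the lowest perplexity.
    scores = [perplexity(f"{prompt} {letter}") for letter in LETTERS]
    return scores.index(min(scores))
```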

## Some Results

| Model         | DisCoTex Accuracy (2-shot) |
|---------------|----------------------------|
| Gemma-2B      | 19.18                      |
| QWEN2-1.5B    | 35.18                      |
| Mistral-7B    | 56.43                      |
| ZEFIRO        | 53.68                      |
| Llama-3-8B    | 58.56                      |
| Llama-3-8B-IT | 66.12                      |
| ANITA         | 66.37                      |

## Acknowledgments

We want to thank the authors of this resource for publicly releasing such an interesting benchmark.

Further, we want to thank the students of the MNLP-2024 course, who tried different interesting prompting strategies in their first homework.

The data can be freely downloaded from this link.

## License

The original data are released under the CC-BY-NA 4.0 license.