---
dataset_info:
  features:
  - name: w1
    dtype: string
  - name: w2
    dtype: string
  - name: w3
    dtype: string
  - name: w4
    dtype: string
  - name: w5
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 6426
    num_examples: 62
  - name: test
    num_bytes: 57662
    num_examples: 553
  download_size: 44398
  dataset_size: 64088
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# ghigliottinAI MCQA

References:
- https://ghigliottin-ai.github.io/
- https://nlp4fun.github.io/

Starting from two different EVALITA tasks, nlp4fun (EVALITA 2018) and ghigliottin-AI (EVALITA 2020), we collected approximately 600 different games, extracted both from the TV show "L'Eredità" and from its board game version.

"La Ghigliottina" is a complex game, to be solved, it needs a very large comprehension of the italian cultural knowledge. It consists in: given five different, uncorrelated words, the solution is a word that is a shared concept between them.
The original game itself is not well-posed, the solution is not unique, and list all the possible solution is not a affordable. We decided to reframe the problem as a Multi-choice QA, where four possible words are listed and between them all but one are incorrect answers to the game.

## Distractor Generation

For each game, the three distractors were chosen from among all possible Italian words. Each distractor was selected to be aligned with 3 out of the 5 hints and distant from the other two (computing cosine similarity over FastText static embeddings).
Moreover, the distractors were constrained to have length at most len(solution) + 1.

With this setting, we created three words that are not plausible solutions to the game, making the task relatively simple for humans to solve, but much less so for Language Models.
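
Below is a minimal sketch of this selection procedure, assuming Italian FastText vectors loaded via gensim; the `near`/`far` similarity thresholds are illustrative assumptions, not the authors' actual values.

```python
# Sketch of the distractor filter described above; thresholds are
# illustrative assumptions, not the authors' actual values.
import numpy as np
from gensim.models.fasttext import load_facebook_vectors

vectors = load_facebook_vectors("cc.it.300.bin")  # pretrained Italian FastText

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_distractor_candidate(word: str, hints: list[str], solution: str,
                            near: float = 0.4, far: float = 0.2) -> bool:
    """Keep words aligned with 3 of the 5 hints, distant from the other 2,
    and no longer than len(solution) + 1."""
    if len(word) > len(solution) + 1:
        return False
    sims = sorted(cosine(vectors[word], vectors[h]) for h in hints)
    # sims is ascending: sims[2:] are the 3 closest hints, sims[:2] the 2 farthest
    return sims[2] >= near and sims[1] <= far
```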

## Example

Here you can see the structure of a single sample in this dataset.


```json
{
  "w1": string, # text of the first hint
  "w2": string, # text of the second hint
  "w3": string, # text of the third hint
  "w4": string, # text of the fourth hint
  "w5": string, # text of the fifth hint
  "choices": list, # list of possible words, with the correct one plus 3 distractors
  "label": int, # index of the correct answer in the choices
}
```
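
Samples can be loaded with the 🤗 Datasets library, as in the sketch below; the repository id is a placeholder for this dataset's actual id on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id: replace with this dataset's actual id on the Hub.
ds = load_dataset("org/ghigliottinai-mcqa")

sample = ds["train"][0]
hints = [sample[f"w{i}"] for i in range(1, 6)]   # the five hint words
gold = sample["choices"][sample["label"]]        # the correct answer word
print(hints, sample["choices"], gold)
```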

## Statistics 

Training: 62

Test: 553

## Proposed Prompts
Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer, we choose the candidate whose prompt has the lowest perplexity.
Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.

Description of the task: 
```txt
Ti viene chiesto di risolvere il gioco della ghigliottina.\nIl gioco della ghigliottina consiste nel trovare un concetto che lega cinque parole date. Tale concetto è esprimibile tramite una singola parola.\n\n
```

### MCQA style

Prompt:

```txt
Date le parole: {{w1}}, {{w2}}, {{w3}}, {{w4}}, {{w5}}\nDomanda: Quale tra i seguenti concetti è quello che lega le parole date?\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nRisposta:
```
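
For illustration, the template above can be instantiated from a sample with plain Python string formatting (a sketch; the original evaluation may use a different templating engine):

```python
def mcqa_prompt(s: dict) -> str:
    """Fill the MCQA template with a sample's hints and choices."""
    options = "\n".join(f"{l}. {c}" for l, c in zip("ABCD", s["choices"]))
    return (
        f"Date le parole: {s['w1']}, {s['w2']}, {s['w3']}, {s['w4']}, {s['w5']}\n"
        "Domanda: Quale tra i seguenti concetti è quello che lega le parole date?\n"
        f"{options}\nRisposta:"
    )
```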

### Cloze style

In this case, the gold answer is not the corresponding letter but the word itself.

```txt
Date le parole: {{w1}}, {{w2}}, {{w3}}, {{w4}}, {{w5}}\nDomanda: Quale tra i seguenti concetti è quello che lega le parole date?\n{{choices[0]}}\n{{choices[1]}}\n{{choices[2]}}\n{{choices[3]}}\nRisposta:
```
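
Below is a minimal sketch of the perplexity-based selection described above, assuming a Hugging Face causal LM; the model name is a placeholder, and the 5-shot examples used in the reported results are omitted for brevity.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder: swap in one of the evaluated models
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def perplexity(text: str) -> float:
    """Mean per-token perplexity of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return torch.exp(loss).item()

def predict(sample: dict, description: str, build_prompt) -> int:
    """Append each choice to the prompt; return the index with lowest perplexity."""
    prompt = description + build_prompt(sample)
    scores = [perplexity(prompt + " " + c) for c in sample["choices"]]
    return min(range(len(scores)), key=scores.__getitem__)
```

For the MCQA style, the corresponding letter (e.g. "A") would be appended instead of the word itself.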

## Results

Here we report some results from the two prompting strategies.

| GhigliottinAI-MCQA | Accuracy (5-shot) |
| :-----: | :--: |
| Gemma-2B | 23.86 |
| QWEN2-1.5B | 39.24 |
| Mistral-7B | 42.49 |
| ZEFIRO | 40.86 |
| Llama-3-8B | 46.65 |
| Llama-3-8B-IT | 47.38 |
| ANITA | 41.95 |

| GhigliottinAI-CLOZE | Accuracy_norm (5-shot) |
| :-----: | :--: |
| Gemma-2B | 35.08 |
| QWEN2-1.5B | 33.81 |
| Mistral-7B | 39.60 |
| ZEFIRO | 41.22 |
| Llama-3-8B | 43.39 |
| Llama-3-8B-IT | 48.46 |
| ANITA | 48.64 |

## Acknowledgments

We would like to thank the authors of this resource for publicly releasing such an intriguing benchmark.

Further, we want to thank the students of the [MNLP-2024 course](https://naviglinlp.blogspot.com/), who in their first homework tried different interesting prompting strategies, reframing strategies, and distractor generation approaches.

The original dataset is freely available for download: [link_1](https://github.com/ghigliottin-AI/ghigliottin-AI.github.io), [link_2](https://github.com/nlp4fun/nlp4fun.github.io).

## License

No license found on original data.