Dataset fields (name: type, observed length or value range):

- table_id_paper: string, length 15 to 15
- caption: string, length 14 to 1.88k
- row_header_level: int32, 1 to 9
- row_headers: large_string, length 15 to 1.75k
- column_header_level: int32, 1 to 6
- column_headers: large_string, length 7 to 1.01k
- contents: large_string, length 18 to 2.36k
- metrics_loc: string, 2 distinct values
- metrics_type: large_string, length 5 to 532
- target_entity: large_string, length 2 to 330
- table_html_clean: large_string, length 274 to 7.88k
- table_name: string, 9 distinct values
- table_id: string, 9 distinct values
- paper_id: string, length 8 to 8
- page_no: int32, 1 to 13
- dir: string, 8 distinct values
- description: large_string, length 103 to 3.8k
- class_sentence: string, length 3 to 120
- sentences: large_string, length 110 to 3.92k
- header_mention: string, length 12 to 1.8k
- valid: int32, 0 to 1

Each record below lists these 21 fields in this order, one value per line.
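A minimal loading sketch follows; it assumes the records are published as a Hugging Face `datasets` dataset, and the repository path used here is a placeholder assumption, not the dataset's actual identifier.

```python
# Minimal loading sketch. The dataset path is a placeholder assumption;
# substitute the actual Hub repository or local path for this dataset.
from datasets import load_dataset

ds = load_dataset("org/table-description-dataset", split="train")  # hypothetical path

ex = ds[0]
for field in ("table_id_paper", "caption", "metrics_loc", "target_entity", "valid"):
    print(field, "->", ex[field])
```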
D18-1052table_1
Performance on SQuAD dev set with the PIQA constraint (top), and without the constraint (bottom). See Section 4 for the description of the terms.
4
[['Constraint', 'PI', 'Model', 'TF-IDF'], ['Constraint', 'PI', 'Model', 'LSTM'], ['Constraint', 'PI', 'Model', 'LSTM+SA'], ['Constraint', 'PI', 'Model', 'LSTM+ELMo'], ['Constraint', 'PI', 'Model', 'LSTM+SA+ELMo'], ['Constraint', 'None', 'Model', 'Rajpurkar et al. (2016)'], ['Constraint', 'None', 'Model', 'Yu et al. (2018)']]
1
[['F1 (%)'], ['EM (%)']]
[['15.0', '3.9'], ['57.2', '46.8'], ['59.8', '49.0'], ['60.9', '50.9'], ['62.7', '52.7'], ['51.0', '40.0'], ['89.3', '82.5']]
column
['F1 (%)', 'EM (%)']
['LSTM+SA+ELMo']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 (%)</th> <th>EM (%)</th> </tr> </thead> <tbody> <tr> <td>Constraint || PI || Model || TF-IDF</td> <td>15.0</td> <td>3.9</td> </tr> <tr> <td>Constraint || PI || Model || LSTM</td> <td>57.2</td> <td>46.8</td> </tr> <tr> <td>Constraint || PI || Model || LSTM+SA</td> <td>59.8</td> <td>49.0</td> </tr> <tr> <td>Constraint || PI || Model || LSTM+ELMo</td> <td>60.9</td> <td>50.9</td> </tr> <tr> <td>Constraint || PI || Model || LSTM+SA+ELMo</td> <td>62.7</td> <td>52.7</td> </tr> <tr> <td>Constraint || None || Model || Rajpurkar et al. (2016)</td> <td>51.0</td> <td>40.0</td> </tr> <tr> <td>Constraint || None || Model || Yu et al. (2018)</td> <td>89.3</td> <td>82.5</td> </tr> </tbody></table>
Table 1
table_1
D18-1052
4
emnlp2018
Results. Table 1 shows the results for the PIQA baselines (top) and the unconstrained state of the art (bottom). First, the TF-IDF model performs poorly, which signifies the limitations of traditional document retrieval models for the task. Second, we note that the addition of self-attention makes a significant impact on results, improving F1 by 2.6%. Next, we see that adding ELMo gives 3.7% and 2.9% improvement on F1 for LSTM and LSTM+SA models, respectively. Lastly, the best PIQA baseline model is 11.7% higher than the first (unconstrained) baseline model (Rajpurkar et al., 2016) and 26.6% lower than the state of the art (Yu et al., 2018). This gives us a reasonable starting point of the new task and a significant gap to close for future work.
[2, 1, 1, 1, 1, 1, 2]
['Results.', 'Table 1 shows the results for the PIQA baselines (top) and the unconstrained state of the art (bottom).', 'First, the TF-IDF model performs poorly, which signifies the limitations of traditional document retrieval models for the task.', 'Second, we note that the addition of self-attention makes a significant impact on results, improving F1 by 2.6%.', 'Next, we see that adding ELMo gives 3.7% and 2.9% improvement on F1 for LSTM and LSTM+SA models, respectively.', 'Lastly, the best PIQA baseline model is 11.7% higher than the first (unconstrained) baseline model (Rajpurkar et al., 2016) and 26.6% lower than the state of the art (Yu et al., 2018).', 'This gives us a reasonable starting point of the new task and a significant gap to close for future work.']
[None, ['PI', 'None'], ['TF-IDF'], ['LSTM', 'LSTM+SA', 'F1 (%)'], ['LSTM', 'LSTM+SA', 'LSTM+ELMo', 'LSTM+SA+ELMo', 'F1 (%)'], ['LSTM+SA+ELMo', 'Rajpurkar et al. (2016)', 'Yu et al. (2018)', 'F1 (%)'], None]
1
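The list-valued fields (row_headers, column_headers, contents) appear to be stored as stringified Python lists. Under that assumption, the sketch below rebuilds a record's flat table as a DataFrame, mirroring the `||`-joined header paths used in table_html_clean; the record id comes from the preview above and the dataset path is the same placeholder as in the loading sketch.

```python
# Sketch: rebuild a record's table as a pandas DataFrame. Assumes the
# list-valued fields are stringified Python lists, as the preview suggests.
import ast

import pandas as pd
from datasets import load_dataset

ds = load_dataset("org/table-description-dataset", split="train")  # hypothetical path
ex = next(r for r in ds if r["table_id_paper"] == "D18-1052table_1")  # record shown above

row_index = [" || ".join(path) for path in ast.literal_eval(ex["row_headers"])]
col_index = [" || ".join(path) for path in ast.literal_eval(ex["column_headers"])]
cells = ast.literal_eval(ex["contents"])

df = pd.DataFrame(cells, index=row_index, columns=col_index)
print(df)
# Yields rows such as "Constraint || PI || Model || LSTM+SA+ELMo"
# with columns "F1 (%)" and "EM (%)", matching table_html_clean.
```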
D18-1057table_1
Word similarity and analogy results (ρ× 100 and analogy accuracy). We denote context overlap enhanced method with “+ CO”. 300-dimensional embeddings are used. The datasets used include WS353 (Finkelstein et al., 2001), SL999 (Hill et al., 2016), SCWS (Huang et al., 2012), RW (Luong et al., 2013), MEN (Bruni et al., 2014), MT771 (Halawi et al., 2012), and Mikolov’s analogy dataset (Mikolov et al., 2013a).
2
[['Method', 'GloVe'], ['Method', 'GloVe + CO'], ['Method', 'SGNS'], ['Method', 'Swivel'], ['Method', 'Swivel + CO']]
2
[['WS353', '-'], ['SL999', '-'], ['SCWS', '-'], ['RW', '-'], ['MEN', '-'], ['MT771', '-'], ['Analogy', 'Sem'], ['Analogy', 'Syn']]
[['66.8', '35.0', '59.3', '44.1', '74.7', '69.9', '76.0', '75.3'], ['69.7', '38.0', '63.8', '45.1', '77.6', '71.3', '78.6', '75.0'], ['71.1', '40.7', '67.1', '52.8', '78.1', '70.4', '67.2', '77.3'], ['73.1', '39.9', '66.4', '53.4', '79.1', '71.7', '78.6', '78.0'], ['74.0', '41.2', '66.3', '53.6', '79.8', '72.5', '79.4', '78.1']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['GloVe + CO', 'Swivel + CO']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WS353 || -</th> <th>SL999 || -</th> <th>SCWS || -</th> <th>RW || -</th> <th>MEN || -</th> <th>MT771 || -</th> <th>Analogy || Sem</th> <th>Analogy || Syn</th> </tr> </thead> <tbody> <tr> <td>Method || GloVe</td> <td>66.8</td> <td>35.0</td> <td>59.3</td> <td>44.1</td> <td>74.7</td> <td>69.9</td> <td>76.0</td> <td>75.3</td> </tr> <tr> <td>Method || GloVe + CO</td> <td>69.7</td> <td>38.0</td> <td>63.8</td> <td>45.1</td> <td>77.6</td> <td>71.3</td> <td>78.6</td> <td>75.0</td> </tr> <tr> <td>Method || SGNS</td> <td>71.1</td> <td>40.7</td> <td>67.1</td> <td>52.8</td> <td>78.1</td> <td>70.4</td> <td>67.2</td> <td>77.3</td> </tr> <tr> <td>Method || Swivel</td> <td>73.1</td> <td>39.9</td> <td>66.4</td> <td>53.4</td> <td>79.1</td> <td>71.7</td> <td>78.6</td> <td>78.0</td> </tr> <tr> <td>Method || Swivel + CO</td> <td>74.0</td> <td>41.2</td> <td>66.3</td> <td>53.6</td> <td>79.8</td> <td>72.5</td> <td>79.4</td> <td>78.1</td> </tr> </tbody></table>
Table 1
table_1
D18-1057
4
emnlp2018
4.2 Intrinsic Evalution. Table 1 shows the evaluation results of word similarity tasks and word analogy tasks. Word similarity is measured as the Spearman’s rank correlation ρ between human-judged similarity and cosine distance of word vectors. In word analogy task, the questions are answered over the whole vocabulary through 3CosMul (Levy and Goldberg, 2014a). In addition to GloVe and Swivel, the evaluations of SGNS are also reported for reference. We train SGNS with the word2vec tool, using symmetric context window of five words to the left and five words to the right, and 5 negative samples. As can be seen from the table, the context overlap information enhanced word embeddings perform better in most word similarity tasks and get higher analogy accuracy in semantic aspect at the cost of syntactic score. The improved semantics performance, to a certain extent, reflects second order co-occurrence relations are more semantic.
[2, 1, 2, 2, 1, 2, 1, 2]
['4.2 Intrinsic Evalution.', 'Table 1 shows the evaluation results of word similarity tasks and word analogy tasks.', 'Word similarity is measured as the Spearman’s rank correlation ρ between human-judged similarity and cosine distance of word vectors.', 'In word analogy task, the questions are answered over the whole vocabulary through 3CosMul (Levy and Goldberg, 2014a).', 'In addition to GloVe and Swivel, the evaluations of SGNS are also reported for reference.', 'We train SGNS with the word2vec tool, using symmetric context window of five words to the left and five words to the right, and 5 negative samples.', 'As can be seen from the table, the context overlap information enhanced word embeddings perform better in most word similarity tasks and get higher analogy accuracy in semantic aspect at the cost of syntactic score.', 'The improved semantics performance, to a certain extent, reflects second order co-occurrence relations are more semantic.']
[None, None, None, None, ['GloVe', 'Swivel', 'SGNS'], ['SGNS'], ['Swivel + CO', 'WS353', 'SL999', 'SCWS', 'RW', 'MEN', 'MT771', 'Analogy'], None]
1
D18-1060table_5
The breakdown of performance on the VUA sequence labeling test set by POS tags. We show data statistics (count, % metaphor) on the training set. We only show POS tags whose % metaphor > 10.
2
[['POS', 'VERB'], ['POS', 'NOUN'], ['POS', 'ADP'], ['POS', 'ADJ'], ['POS', 'PART']]
1
[['#'], ['% metaphor'], ['P'], ['R'], ['F1.']]
[['20K', '18.1', '68.1', '71.9', '69.9'], ['20K', '13.6', '59.9', '60.8', '60.4'], ['13K', '28.0', '86.8', '89.0', '87.9'], ['9K', '11.5', '56.1', '60.6', '58.3'], ['3K', '10.1', '57.1', '59.1', '58.1']]
column
['#', '% metaphor', 'P', 'R', 'F1.']
['POS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#</th> <th>% metaphor</th> <th>P</th> <th>R</th> <th>F1.</th> </tr> </thead> <tbody> <tr> <td>POS || VERB</td> <td>20K</td> <td>18.1</td> <td>68.1</td> <td>71.9</td> <td>69.9</td> </tr> <tr> <td>POS || NOUN</td> <td>20K</td> <td>13.6</td> <td>59.9</td> <td>60.8</td> <td>60.4</td> </tr> <tr> <td>POS || ADP</td> <td>13K</td> <td>28.0</td> <td>86.8</td> <td>89.0</td> <td>87.9</td> </tr> <tr> <td>POS || ADJ</td> <td>9K</td> <td>11.5</td> <td>56.1</td> <td>60.6</td> <td>58.3</td> </tr> <tr> <td>POS || PART</td> <td>3K</td> <td>10.1</td> <td>57.1</td> <td>59.1</td> <td>58.1</td> </tr> </tbody></table>
Table 5
table_5
D18-1060
3
emnlp2018
Table 5 reports the breakdown of performance by POS tags. Not surprisingly, tags with more data are easier to classify. Adposition is the easiest to identify as metaphorical and is also the most frequently metaphorical class (28%). On the other hand, particles are challenging to identify, since they are often associated with multi-word expressions, such as “put down the disturbances”.
[1, 2, 1, 2]
['Table 5 reports the breakdown of performance by POS tags.', 'Not surprisingly, tags with more data are easier to classify.', 'Adposition is the easiest to identify as metaphorical and is also the most frequently metaphorical class (28%).', 'On the other hand, particles are challenging to identify, since they are often associated with multi-word expressions, such as “put down the disturbances”.']
[['POS'], None, ['ADP'], None]
1
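The description field is also provided pre-split in sentences, with two parallel per-sentence annotations: class_sentence (an integer label per sentence) and header_mention (the table headers each sentence refers to, or None). Assuming these three fields are parallel stringified lists, a sketch for walking them together:

```python
# Sketch: pair each description sentence with its label and the table headers
# it mentions. Assumes sentences, class_sentence, and header_mention are
# parallel stringified lists (header_mention entries may be None).
import ast

from datasets import load_dataset

ds = load_dataset("org/table-description-dataset", split="train")  # hypothetical path
ex = next(r for r in ds if r["table_id_paper"] == "D18-1060table_5")  # record shown above

sentences = ast.literal_eval(ex["sentences"])
labels = ast.literal_eval(ex["class_sentence"])
mentions = ast.literal_eval(ex["header_mention"])

for label, sentence, headers in zip(labels, sentences, mentions):
    print(f"[{label}] {sentence}")
    if headers:
        print("    mentions:", ", ".join(headers))
```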
D18-1060table_6
Model performances for the verb classification task. Our models achieve strong performance on all datasets. The CLS model performs better than the SEQ model when only one word per sentence is annotated by human (TroFi and MOH-X). When all words in the sentence are accurately annotated (VUA), the SEQ model outperforms the CLS model.
2
[['Model', 'Lexical Baseline'], ['Model', 'Klebanov (2016)'], ['Model', 'Rei (2017)'], ['Model', 'Koper (2017)'], ['Model', 'Wu (2018) ensemble'], ['Model', 'CLS'], ['Model', 'SEQ']]
2
[['MOH-X (10 fold)', 'P'], ['MOH-X (10 fold)', 'R'], ['MOH-X (10 fold)', 'F1'], ['MOH-X (10 fold)', 'Acc.'], ['TroFi (10 fold)', 'P'], ['TroFi (10 fold)', 'R'], ['TroFi (10 fold)', 'F1'], ['TroFi (10 fold)', 'Acc.'], ['VUA - Test', 'P'], ['VUA - Test', 'R'], ['VUA - Test', 'F1'], ['VUA - Test', 'Acc.'], ['VUA - Test', 'MaF1']]
[['39.1', '26.7', '31.3', '43.6', '72.4', '55.7', '62.9', '71.4', '67.9', '40.7', '50.9', '76.4', '48.9'], ['-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '60.0'], ['73.6', '76.1', '74.2', '74.8', '-', '-', '-', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '-', '', '75.0', '-', '-', '-', '62.0', '-', '-'], ['-', '-', '-', '-', '-', '-', '-', '-', '60.0', '76.3', '67.2', '-', '-'], ['75.3', '84.3', '79.1', '78.5', '68.7', '74.6', '72.0', '73.7', '53.4', '65.6', '58.9', '69.1', '53.4'], ['79.1', '73.5', '75.6', '77.2', '70.7', '71.6', '71.1', '74.6', '68.2', '71.3', '69.7', '81.4', '66.4']]
column
['P', 'R', 'F1', 'Acc.', 'P', 'R', 'F1', 'Acc.', 'P', 'R', 'F1', 'Acc.', 'MaF1']
['CLS', 'SEQ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MOH-X (10 fold) || P</th> <th>MOH-X (10 fold) || R</th> <th>MOH-X (10 fold) || F1</th> <th>MOH-X (10 fold) || Acc.</th> <th>TroFi (10 fold) || P</th> <th>TroFi (10 fold) || R</th> <th>TroFi (10 fold) || F1</th> <th>TroFi (10 fold) || Acc.</th> <th>VUA - Test || P</th> <th>VUA - Test || R</th> <th>VUA - Test || F1</th> <th>VUA - Test || Acc.</th> <th>VUA - Test || MaF1</th> </tr> </thead> <tbody> <tr> <td>Model || Lexical Baseline</td> <td>39.1</td> <td>26.7</td> <td>31.3</td> <td>43.6</td> <td>72.4</td> <td>55.7</td> <td>62.9</td> <td>71.4</td> <td>67.9</td> <td>40.7</td> <td>50.9</td> <td>76.4</td> <td>48.9</td> </tr> <tr> <td>Model || Klebanov (2016)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>60.0</td> </tr> <tr> <td>Model || Rei (2017)</td> <td>73.6</td> <td>76.1</td> <td>74.2</td> <td>74.8</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Koper (2017)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td></td> <td>75.0</td> <td>-</td> <td>-</td> <td>-</td> <td>62.0</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Wu (2018) ensemble</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>60.0</td> <td>76.3</td> <td>67.2</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || CLS</td> <td>75.3</td> <td>84.3</td> <td>79.1</td> <td>78.5</td> <td>68.7</td> <td>74.6</td> <td>72.0</td> <td>73.7</td> <td>53.4</td> <td>65.6</td> <td>58.9</td> <td>69.1</td> <td>53.4</td> </tr> <tr> <td>Model || SEQ</td> <td>79.1</td> <td>73.5</td> <td>75.6</td> <td>77.2</td> <td>70.7</td> <td>71.6</td> <td>71.1</td> <td>74.6</td> <td>68.2</td> <td>71.3</td> <td>69.7</td> <td>81.4</td> <td>66.4</td> </tr> </tbody></table>
Table 6
table_6
D18-1060
4
emnlp2018
Verb Classification Results. Table 6 shows performance on the verb classification task for three datasets (MOH-X , TroFi and VUA). Our models achieve strong performance on all datasets, outperforming existing models on the MOH-X and VUA datasets. On the MOH-X dataset, the CLS model outperforms the SEQ model, likely due to the simpler overall sentence structure and the fact that the target verbs are the only words annotated for metaphoricity. For the VUA dataset, where we have annotations for all words in a sentence, the SEQ model significantly outperforms the CLS model. This result shows that predicting metaphor labels of context words helps to predict the target verb. We hypothesize that Koper et al. (2017) outperforms our models on the TroFi dataset for a similar reason:. their work uses concreteness labels, which highly correlate to metaphor labels of neighboring words in the sentence. Also, their best model uses the verb lemma as a feature, which itself provides a strong clue in the dataset of 50 verbs (see lexical baseline).
[2, 1, 1, 1, 1, 2, 1, 2, 2]
['Verb Classification Results.', 'Table 6 shows performance on the verb classification task for three datasets (MOH-X , TroFi and VUA).', 'Our models achieve strong performance on all datasets, outperforming existing models on the MOH-X and VUA datasets.', 'On the MOH-X dataset, the CLS model outperforms the SEQ model, likely due to the simpler overall sentence structure and the fact that the target verbs are the only words annotated for metaphoricity.', 'For the VUA dataset, where we have annotations for all words in a sentence, the SEQ model significantly outperforms the CLS model.', 'This result shows that predicting metaphor labels of context words helps to predict the target verb.', 'We hypothesize that Koper et al. (2017) outperforms our models on the TroFi dataset for a similar reason:.', 'their work uses concreteness labels, which highly correlate to metaphor labels of neighboring words in the sentence.', 'Also, their best model uses the verb lemma as a feature, which itself provides a strong clue in the dataset of 50 verbs (see lexical baseline).']
[None, ['MOH-X (10 fold)', 'TroFi (10 fold)', 'VUA - Test'], ['CLS', 'SEQ', 'Lexical Baseline', 'Klebanov (2016)', 'Rei (2017)', 'Koper (2017)', 'Wu (2018) ensemble', 'MOH-X (10 fold)', 'TroFi (10 fold)', 'VUA - Test'], ['MOH-X (10 fold)', 'CLS', 'SEQ'], ['VUA - Test', 'CLS', 'SEQ'], None, ['Koper (2017)', 'TroFi (10 fold)', 'CLS', 'SEQ'], ['Koper (2017)'], ['Koper (2017)']]
1
D18-1062table_2
Experimental results on Chinese-English dataset. The results of baseline models are cited from Zhang et al. (2017).
4
[['Model', 'MonoGiza w/o emb.', '#seeds', '0'], ['Model', 'MonoGiza w/ emb.', '#seeds', '0'], ['Model', 'TM', '#seeds', '50'], ['Model', 'IA', '#seeds', '100'], ['Model', 'Zhang et al. (2017)', '#seeds', '0'], ['Model', 'Ours', '#seeds', '0']]
1
[['Accuracy (%)']]
[['0.05'], ['0.09'], ['0.29'], ['21.79'], ['43.31'], ['51.37']]
column
['Accuracy (%)']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || MonoGiza w/o emb. || #seeds || 0</td> <td>0.05</td> </tr> <tr> <td>Model || MonoGiza w/ emb. || #seeds || 0</td> <td>0.09</td> </tr> <tr> <td>Model || TM || #seeds || 50</td> <td>0.29</td> </tr> <tr> <td>Model || IA || #seeds || 100</td> <td>21.79</td> </tr> <tr> <td>Model || Zhang et al. (2017) || #seeds || 0</td> <td>43.31</td> </tr> <tr> <td>Model || Ours || #seeds || 0</td> <td>51.37</td> </tr> </tbody></table>
Table 2
table_2
D18-1062
4
emnlp2018
Table 2 summarizes the performance of baseline models and our approach. The results of baseline models are cited from Zhang et al. (2017). As we can see from the table, our model could achieve superior performance compared with other baseline models.
[1, 2, 1]
['Table 2 summarizes the performance of baseline models and our approach.', 'The results of baseline models are cited from Zhang et al. (2017).', 'As we can see from the table, our model could achieve superior performance compared with other baseline models.']
[['MonoGiza w/ emb.', 'TM', 'IA', 'Zhang et al. (2017)', 'Ours'], ['MonoGiza w/ emb.', 'TM', 'IA', 'Zhang et al. (2017)'], ['Ours', 'MonoGiza w/ emb.', 'TM', 'IA', 'Zhang et al. (2017)']]
1
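metrics_loc appears to indicate whether the metric names in metrics_type index the table's columns or its rows, and target_entity names the header entries of primary interest; in the D18-1062table_2 record above, for instance, 'Accuracy (%)' is the single column metric and 'Ours' is the target entity. Under that reading (inferred from the preview, not from official documentation), a hedged sketch for pulling out a target entity's metric values:

```python
# Sketch: extract metric values for a record's target entities. Assumes
# metrics_loc == 'column' means metrics_type names the columns (one entry per
# column) and target_entity matches entries of the row-header paths. These
# semantics are inferred from the preview above.
import ast

from datasets import load_dataset

ds = load_dataset("org/table-description-dataset", split="train")  # hypothetical path
ex = next(r for r in ds if r["table_id_paper"] == "D18-1062table_2")  # record shown above

assert ex["metrics_loc"] == "column"
metrics = ast.literal_eval(ex["metrics_type"])   # per-column metric names
targets = ast.literal_eval(ex["target_entity"])  # e.g. ['Ours']
rows = ast.literal_eval(ex["row_headers"])
cells = ast.literal_eval(ex["contents"])

for path, values in zip(rows, cells):
    if any(t in path for t in targets):
        for metric, value in zip(metrics, values):
            print(" || ".join(path), "|", metric, "=", value)
# For D18-1062table_2 this prints:
# Model || Ours || #seeds || 0 | Accuracy (%) = 51.37
```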
D18-1067table_3
Performance of sentiment classifiers on OPT.
8
[['Model', 'LSTM', 'Train', 'SST', 'Dev', 'SST', 'Test', 'OPT'], ['Model', 'BiLSTM', 'Train', 'SST', 'Dev', 'SST', 'Test', 'OPT'], ['Model', 'CNN', 'Train', 'SST', 'Dev', 'SST', 'Test', 'OPT'], ['Model', 'CNN', 'Train', 'TSA', 'Dev', 'TSA', 'Test', 'OPT'], ['Model', 'RNN(char)', 'Train', 'TSA', 'Dev', 'OPT', 'Test', 'OPT'], ['Model', 'GRUStack', 'Train', 'OPT', 'Dev', 'OPT', 'Test', 'OPT']]
1
[['Acc%']]
[['63.20'], ['63.60'], ['59.60'], ['67.60'], ['55.20'], ['80.19']]
column
['Acc%']
['SST', 'TSA', 'OPT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc%</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM || Train || SST || Dev || SST || Test || OPT</td> <td>63.20</td> </tr> <tr> <td>Model || BiLSTM || Train || SST || Dev || SST || Test || OPT</td> <td>63.60</td> </tr> <tr> <td>Model || CNN || Train || SST || Dev || SST || Test || OPT</td> <td>59.60</td> </tr> <tr> <td>Model || CNN || Train || TSA || Dev || TSA || Test || OPT</td> <td>67.60</td> </tr> <tr> <td>Model || RNN(char) || Train || TSA || Dev || OPT || Test || OPT</td> <td>55.20</td> </tr> <tr> <td>Model || GRUStack || Train || OPT || Dev || OPT || Test || OPT</td> <td>80.19</td> </tr> </tbody></table>
Table 3
table_3
D18-1067
3
emnlp2018
Table 3 shows the performance of several deep learning models trained on either SST or TSA datasets and evaluated on the OPT dataset. Note that the Dev set was used for model selection. As can be seen from the table, the models trained on the sentiment datasets perform poorly on the optimism/pessimism dataset. For example, there is a drop in performance from 80.19% to 67.60% when training on TSA (with an even larger decrease when we train on SST). The SST/TSA sentiment classifiers are trained to predict the sentiment as negative, neutral, or positive. To calculate the accuracy in Table 3, an optimistic tweet predicted as positive by the sentiment classifier counts as a correct prediction, whereas an optimistic tweet predicted as either neutral or negative by the sentiment classifier counts as an incorrect prediction (similarly for pessimistic tweets). This analysis is done at tweet level for the threshold of 0.
[1, 2, 1, 1, 2, 2, 2]
['Table 3 shows the performance of several deep learning models trained on either SST or TSA datasets and evaluated on the OPT dataset.', 'Note that the Dev set was used for model selection.', 'As can be seen from the table, the models trained on the sentiment datasets perform poorly on the optimism/pessimism dataset.', 'For example, there is a drop in performance from 80.19% to 67.60% when training on TSA (with an even larger decrease when we train on SST).', 'The SST/TSA sentiment classifiers are trained to predict the sentiment as negative, neutral, or positive.', 'To calculate the accuracy in Table 3, an optimistic tweet predicted as positive by the sentiment classifier counts as a correct prediction, whereas an optimistic tweet predicted as either neutral or negative by the sentiment classifier counts as an incorrect prediction (similarly for pessimistic tweets).', 'This analysis is done at tweet level for the threshold of 0.']
[['LSTM', 'BiLSTM', 'CNN', 'RNN(char)', 'SST', 'TSA'], ['Dev'], ['LSTM', 'BiLSTM', 'CNN', 'RNN(char)'], ['GRUStack', 'CNN', 'Acc%', 'TSA', 'SST'], ['SST', 'TSA'], None, None]
1
D18-1071table_1
The results of human annotations (C = Consistency, L = Logic, E = Emotion).
2
[['Method', 'S2S'], ['Method', 'S2S-AW'], ['Method', 'E-SCBA']]
2
[['Overall', 'C'], ['Overall', 'L'], ['Overall', 'E'], ['Happy', 'C'], ['Happy', 'L'], ['Happy', 'E'], ['Like', 'C'], ['Like', 'L'], ['Like', 'E'], ['Surprise', 'C'], ['Surprise', 'L'], ['Surprise', 'E'], ['Sad', 'C'], ['Sad', 'L'], ['Sad', 'E'], ['Fear', 'C'], ['Fear', 'L'], ['Fear', 'E'], ['Angry', 'C'], ['Angry', 'L'], ['Angry', 'E'], ['Disgust', 'C'], ['Disgust', 'L'], ['Disgust', 'E']]
[['1.301', '0.776', '0.197', '1.368', '0.924', '0.285', '1.341', '0.757', '0.217', '1.186', '0.723', '0.076', '1.393', '0.928', '0.237', '1.245', '0.782', '0.215', '1.205', '0.535', '0.113', '1.368', '0.680', '0.236'], ['1.348', '1.063', '0.231', '1.437', '1.097', '0.237', '1.418', '1.125', '0.276', '1.213', '0.916', '0.105', '1.423', '1.196', '0.293', '1.260', '1.105', '0.272', '1.198', '0.860', '0.182', '1.488', '1.145', '0.253'], ['1.375', '1.123', '0.476', '1.476', '1.286', '0.615', '1.437', '1.173', '0.545', '1.197', '0.902', '0.245', '1.497', '1.268', '0.525', '1.268', '1.124', '0.453', '1.110', '0.822', '0.347', '1.637', '1.289', '0.603']]
column
['C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E']
['E-SCBA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Overall || C</th> <th>Overall || L</th> <th>Overall || E</th> <th>Happy || C</th> <th>Happy || L</th> <th>Happy || E</th> <th>Like || C</th> <th>Like || L</th> <th>Like || E</th> <th>Surprise || C</th> <th>Surprise || L</th> <th>Surprise || E</th> <th>Sad || C</th> <th>Sad || L</th> <th>Sad || E</th> <th>Fear || C</th> <th>Fear || L</th> <th>Fear || E</th> <th>Angry || C</th> <th>Angry || L</th> <th>Angry || E</th> <th>Disgust || C</th> <th>Disgust || L</th> <th>Disgust || E</th> </tr> </thead> <tbody> <tr> <td>Method || S2S</td> <td>1.301</td> <td>0.776</td> <td>0.197</td> <td>1.368</td> <td>0.924</td> <td>0.285</td> <td>1.341</td> <td>0.757</td> <td>0.217</td> <td>1.186</td> <td>0.723</td> <td>0.076</td> <td>1.393</td> <td>0.928</td> <td>0.237</td> <td>1.245</td> <td>0.782</td> <td>0.215</td> <td>1.205</td> <td>0.535</td> <td>0.113</td> <td>1.368</td> <td>0.680</td> <td>0.236</td> </tr> <tr> <td>Method || S2S-AW</td> <td>1.348</td> <td>1.063</td> <td>0.231</td> <td>1.437</td> <td>1.097</td> <td>0.237</td> <td>1.418</td> <td>1.125</td> <td>0.276</td> <td>1.213</td> <td>0.916</td> <td>0.105</td> <td>1.423</td> <td>1.196</td> <td>0.293</td> <td>1.260</td> <td>1.105</td> <td>0.272</td> <td>1.198</td> <td>0.860</td> <td>0.182</td> <td>1.488</td> <td>1.145</td> <td>0.253</td> </tr> <tr> <td>Method || E-SCBA</td> <td>1.375</td> <td>1.123</td> <td>0.476</td> <td>1.476</td> <td>1.286</td> <td>0.615</td> <td>1.437</td> <td>1.173</td> <td>0.545</td> <td>1.197</td> <td>0.902</td> <td>0.245</td> <td>1.497</td> <td>1.268</td> <td>0.525</td> <td>1.268</td> <td>1.124</td> <td>0.453</td> <td>1.110</td> <td>0.822</td> <td>0.347</td> <td>1.637</td> <td>1.289</td> <td>0.603</td> </tr> </tbody></table>
Table 1
table_1
D18-1071
4
emnlp2018
Table 1 depicts the human annotations (t-test: p < 0.05 for C and L, p < 0.01 for E). Overall, E-SCBA outperforms S2S-AW on all three metrics, where the compound information plays a positive role in the comprehensive promotion. However, in Surprise and Angry, the grades of Consistency and Logic are not satisfactory, since the data for them are much less than others (Surprise (1.2%) and Angry (0.7%)). Besides, the score of Emotion in Surprise has a big difference from others. We think the reason is that the characteristic of Surprise overlaps with other categories that have much more data, such as Happy, which interferes with the learning efficiency of the approach in Surprise. Meanwhile, it is harder for annotators to determine which one is the right emotion.
[1, 1, 1, 1, 2, 2]
['Table 1 depicts the human annotations (t-test: p < 0.05 for C and L, p < 0.01 for E).', 'Overall, E-SCBA outperforms S2S-AW on all three metrics, where the compound information plays a positive role in the comprehensive promotion.', 'However, in Surprise and Angry, the grades of Consistency and Logic are not satisfactory, since the data for them are much less than others (Surprise (1.2%) and Angry (0.7%)).', 'Besides, the score of Emotion in Surprise has a big difference from others.', 'We think the reason is that the characteristic of Surprise overlaps with other categories that have much more data, such as Happy, which interferes with the learning efficiency of the approach in Surprise.', 'Meanwhile, it is harder for annotators to determine which one is the right emotion.']
[None, ['Overall', 'E-SCBA', 'S2S-AW', 'C', 'L', 'E'], ['Surprise', 'Angry', 'E-SCBA', 'S2S', 'S2S-AW', 'C', 'L'], ['Surprise', 'E', 'E-SCBA', 'S2S', 'S2S-AW'], ['Surprise'], ['E']]
1
D18-1074table_3
Results of classification evaluations.
5
[['Perez2017 lin', 'Features', 'Ngram+', 'Method', 'Co'], ['Perez2017 lin', 'Features', 'Ngram+', 'Method', 'Co+Ct'], ['Perez2017 lin', 'Features', 'Ngram+', 'Method', 'All'], ['Perez2017 vec', 'Features', 'Vec-con', 'Method', 'Co'], ['Perez2017 vec', 'Features', 'Vec-con', 'Method', 'Co+Ct'], ['Perez2017 vec', 'Features', 'Vec-con', 'Method', 'All'], ['Xiao2016', 'Features', 'Vec', 'Method', 'Co'], ['Xiao2016', 'Features', 'Vec', 'Method', 'Co+Ct'], ['Xiao2016', 'Features', 'Vec', 'Method', 'All'], ['Proposed model', 'Features', 'Vec', 'Method', 'Co+T'], ['Proposed model', 'Features', 'Vec', 'Method', 'Co+Ct+T'], ['Proposed model', 'Features', 'Vec', 'Method', 'All+T']]
1
[['Precision'], ['Recall'], ['F1']]
[['0.62', '0.62', '0.62'], ['0.60', '0.61', '0.61'], ['0.61', '0.61', '0.62'], ['0.60', '0.58', '0.59'], ['0.61', '0.59', '0.60'], ['0.61', '0.57', '0.58'], ['0.65', '0.63', '0.64'], ['0.68', '0.64', '0.65'], ['0.67', '0.67', '0.67'], ['0.65', '0.64', '0.65'], ['0.70', '0.66', '0.68'], ['0.74', '0.67', '0.70']]
column
['Precision', 'Recall', 'F1']
['Proposed model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Perez2017 lin || Features || Ngram+ || Method || Co</td> <td>0.62</td> <td>0.62</td> <td>0.62</td> </tr> <tr> <td>Perez2017 lin || Features || Ngram+ || Method || Co+Ct</td> <td>0.60</td> <td>0.61</td> <td>0.61</td> </tr> <tr> <td>Perez2017 lin || Features || Ngram+ || Method || All</td> <td>0.61</td> <td>0.61</td> <td>0.62</td> </tr> <tr> <td>Perez2017 vec || Features || Vec-con || Method || Co</td> <td>0.60</td> <td>0.58</td> <td>0.59</td> </tr> <tr> <td>Perez2017 vec || Features || Vec-con || Method || Co+Ct</td> <td>0.61</td> <td>0.59</td> <td>0.60</td> </tr> <tr> <td>Perez2017 vec || Features || Vec-con || Method || All</td> <td>0.61</td> <td>0.57</td> <td>0.58</td> </tr> <tr> <td>Xiao2016 || Features || Vec || Method || Co</td> <td>0.65</td> <td>0.63</td> <td>0.64</td> </tr> <tr> <td>Xiao2016 || Features || Vec || Method || Co+Ct</td> <td>0.68</td> <td>0.64</td> <td>0.65</td> </tr> <tr> <td>Xiao2016 || Features || Vec || Method || All</td> <td>0.67</td> <td>0.67</td> <td>0.67</td> </tr> <tr> <td>Proposed model || Features || Vec || Method || Co+T</td> <td>0.65</td> <td>0.64</td> <td>0.65</td> </tr> <tr> <td>Proposed model || Features || Vec || Method || Co+Ct+T</td> <td>0.70</td> <td>0.66</td> <td>0.68</td> </tr> <tr> <td>Proposed model || Features || Vec || Method || All+T</td> <td>0.74</td> <td>0.67</td> <td>0.70</td> </tr> </tbody></table>
Table 3
table_3
D18-1074
4
emnlp2018
The results of our experiments are summarized in the Table 3. Findings indicate that our proposed approach leads to a small performance boost after using the topic embeddings. Thus, our simple feature augmentation approach has the potential to make classifiers more robust. In addition, the contextual information (“Ct”) is quite useful to identify the patients’ current intentions, and the sequential information through time stages has strong indications of human intentions. Significance Analysis. We conducted significance analysis to compare Xiao2016 and our proposed method. Because Xiao2016 only used content and context inputs, in this analysis, we train our method with the same inputs (Co+Ct). We followed the method of bootstrap samples (Berg-Kirkpatrick et al., 2012) to create 50 pairs of training and test datasets with replacement, where we keep the sizes the same in the Table 2. We keep the same experimental steps and use the parameters that achieved the best performances in the Table 3 to train the models.
[1, 1, 2, 2, 2, 2, 2, 0, 1]
['The results of our experiments are summarized in the Table 3.', 'Findings indicate that our proposed approach leads to a small performance boost after using the topic embeddings.', 'Thus, our simple feature augmentation approach has the potential to make classifiers more robust.', 'In addition, the contextual information (“Ct”) is quite useful to identify the patients’ current intentions, and the sequential information through time stages has strong indications of human intentions.', 'Significance Analysis.', 'We conducted significance analysis to compare Xiao2016 and our proposed method.', 'Because Xiao2016 only used content and context inputs, in this analysis, we train our method with the same inputs (Co+Ct).', 'We followed the method of bootstrap samples (Berg-Kirkpatrick et al., 2012) to create 50 pairs of training and test datasets with replacement, where we keep the sizes the same in the Table 2.', 'We keep the same experimental steps and use the parameters that achieved the best performances in the Table 3 to train the models.']
[None, ['Proposed model'], ['Proposed model'], ['Co+Ct', 'Co+Ct+T'], None, ['Proposed model'], ['Proposed model'], None, ['Proposed model']]
1
D18-1075table_3
Human evaluation results of the AEM model and the Seq2Seq model.
2
[['Models', 'Seq2Seq'], ['Models', 'AEM'], ['Models', 'Seq2Seq+Attention'], ['Models', 'AEM+Attention']]
1
[['Fluency'], ['Coherence'], ['G-Score']]
[['6.97', '3.51', '4.95'], ['8.11', '4.18', '5.82'], ['5.11', '3.30', '4.10'], ['7.92', '4.97', '6.27']]
column
['Fluency', 'Coherence', 'G-Score']
['AEM', 'AEM+Attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Coherence</th> <th>G-Score</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq</td> <td>6.97</td> <td>3.51</td> <td>4.95</td> </tr> <tr> <td>Models || AEM</td> <td>8.11</td> <td>4.18</td> <td>5.82</td> </tr> <tr> <td>Models || Seq2Seq+Attention</td> <td>5.11</td> <td>3.30</td> <td>4.10</td> </tr> <tr> <td>Models || AEM+Attention</td> <td>7.92</td> <td>4.97</td> <td>6.27</td> </tr> </tbody></table>
Table 3
table_3
D18-1075
4
emnlp2018
Table 3 shows the results of human evaluation. The inter-annotator agreement is satisfactory considering the difficulty of human evaluation. The Pearson’s correlation coefficient is 0.69 on coherence and 0.57 on fluency, with p < 0.0001. First, it is clear that the AEM model outperforms the Seq2Seq model with a large margin, which proves the effectiveness of the AEM model on. Second, it is interesting to note that with the attention mechanism, the coherence is decreased slightly in the Seq2Seq model but increased significantly in the AEM model. It suggests that the utterance-level dependency greatly benefits the learning of word-level dependency. Therefore, it is expected that the AEM+Attention model achieves the best G-score.
[1, 2, 2, 1, 1, 2, 2]
['Table 3 shows the results of human evaluation.', 'The inter-annotator agreement is satisfactory considering the difficulty of human evaluation.', 'The Pearson’s correlation coefficient is 0.69 on coherence and 0.57 on fluency, with p < 0.0001.', 'First, it is clear that the AEM model outperforms the Seq2Seq model with a large margin, which proves the effectiveness of the AEM model on.', 'Second, it is interesting to note that with the attention mechanism, the coherence is decreased slightly in the Seq2Seq model but increased significantly in the AEM model.', 'It suggests that the utterance-level dependency greatly benefits the learning of word-level dependency.', 'Therefore, it is expected that the AEM+Attention model achieves the best G-score.']
[None, None, None, ['AEM', 'Seq2Seq'], ['Seq2Seq+Attention', 'AEM+Attention', 'Coherence'], None, ['AEM+Attention', 'G-Score']]
1
D18-1078table_2
Accuracy results over the test set ASR transcripts, for w2v and skip-thought (ST).
2
[['Method', 'all-yes'], ['Method', 'w2v title-speech'], ['Method', 'w2v arg-speech'], ['Method', 'w2v title-sentence'], ['Method', 'w2v arg-sentence'], ['Method', 'ST arg-sentence']]
1
[['Accuracy (%)']]
[['39.8'], ['49.8'], ['57.6'], ['55.8'], ['64.6'], ['60.2']]
column
['Accuracy (%)']
['w2v title-speech', 'w2v arg-speech', 'w2v title-sentence', 'w2v arg-sentence', 'ST arg-sentence']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Method || all-yes</td> <td>39.8</td> </tr> <tr> <td>Method || w2v title-speech</td> <td>49.8</td> </tr> <tr> <td>Method || w2v arg-speech</td> <td>57.6</td> </tr> <tr> <td>Method || w2v title-sentence</td> <td>55.8</td> </tr> <tr> <td>Method || w2v arg-sentence</td> <td>64.6</td> </tr> <tr> <td>Method || ST arg-sentence</td> <td>60.2</td> </tr> </tbody></table>
Table 2
table_2
D18-1078
5
emnlp2018
3.2 Results. Table 2 shows the accuracy of all w2v configurations. Representing an argument using its more verbose several-sentences-long content outperforms using its short single-sentence title. On the speech side, considering each sentence separately is preferable to using the entire speech. We compared the results of the best w2v-based configuration (arg-sentence), to the performance of the skip-thought auto-encoder. In this setting, encoding individual speech sentences and an argument, the accuracy of skip-thought was 60.2%. The highest scoring method, w2v arg-sentence, reaches, then, a rather modest accuracy of 64.6%. One weakness of this method, revealed through analysis of its false positive predictions, is its tendency to prefer longer sentences. It is nevertheless substantially superior to the trivial all-yes baseline, as well as its all-no counterpart.
[2, 1, 1, 2, 1, 1, 1, 2, 1]
['3.2 Results.', 'Table 2 shows the accuracy of all w2v configurations.', 'Representing an argument using its more verbose several-sentences-long content outperforms using its short single-sentence title.', 'On the speech side, considering each sentence separately is preferable to using the entire speech.', 'We compared the results of the best w2v-based configuration (arg-sentence), to the performance of the skip-thought auto-encoder.', 'In this setting, encoding individual speech sentences and an argument, the accuracy of skip-thought was 60.2%.', 'The highest scoring method, w2v arg-sentence, reaches, then, a rather modest accuracy of 64.6%.', 'One weakness of this method, revealed through analysis of its false positive predictions, is its tendency to prefer longer sentences.', 'It is nevertheless substantially superior to the trivial all-yes baseline, as well as its all-no counterpart.']
[None, ['w2v title-speech', 'w2v arg-speech', 'w2v title-sentence', 'w2v arg-sentence'], ['w2v arg-sentence', 'w2v title-sentence', 'Accuracy (%)'], None, ['w2v arg-sentence', 'ST arg-sentence'], ['ST arg-sentence', 'Accuracy (%)'], ['w2v arg-sentence', 'Accuracy (%)'], ['w2v arg-sentence'], ['w2v arg-sentence', 'all-yes']]
1
D18-1084table_1
Results of our model compared with prior published results. Note that Liang et al. (2017) also trains a model on additional data, but here we only compare models trained on Visual Genome. Also note that our models employ greedy search, whereas other models employ beam search.
1
[['Krause et al. (Template)'], ['Krause et al. (Flat w/o object detector)'], ['Krause et al. (Flat)'], ['Krause et al. (Hierarchical)'], ['Liang et al. (w/o discriminator)'], ['Liang et al.'], ['Ours (XE training w/o rep. penalty)'], ['Ours (XE training w/ rep. penalty)'], ['Ours (SCST training w/o rep. penalty)'], ['Ours (SCST training w/ rep. penalty)']]
1
[['METEOR'], ['CIDEr'], ['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4']]
[['14.31', '12.15', '37.47', '21.02', '12.30', '7.38'], ['12.82', '11.06', '34.04', '19.95', '12.20', '7.71'], ['13.54', '11.14', '37.30', '21.70', '13.07', '8.07'], ['15.95', '13.52', '41.90', '24.11', '14.23', '8.69'], ['16.57', '15.07', '41.86', '24.33', '14.56', '8.99'], ['17.12', '16.87', '41.99', '24.86', '14.89', '9.03'], ['13.66', '12.89', '32.78', '19.00', '11.40', '6.89'], ['15.17', '22.68', '35.68', '22.40', '14.04', '8.70'], ['13.63', '13.77', '29.67', '16.45', '9.74', '5.88'], ['17.86', '30.63', '43.54', '27.44', '17.33', '10.58']]
column
['METEOR', 'CIDEr', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4']
['Ours (XE training w/o rep. penalty)', 'Ours (XE training w/ rep. penalty)', 'Ours (SCST training w/o rep. penalty)', 'Ours (SCST training w/ rep. penalty)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>METEOR</th> <th>CIDEr</th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>Krause et al. (Template)</td> <td>14.31</td> <td>12.15</td> <td>37.47</td> <td>21.02</td> <td>12.30</td> <td>7.38</td> </tr> <tr> <td>Krause et al. (Flat w/o object detector)</td> <td>12.82</td> <td>11.06</td> <td>34.04</td> <td>19.95</td> <td>12.20</td> <td>7.71</td> </tr> <tr> <td>Krause et al. (Flat)</td> <td>13.54</td> <td>11.14</td> <td>37.30</td> <td>21.70</td> <td>13.07</td> <td>8.07</td> </tr> <tr> <td>Krause et al. (Hierarchical)</td> <td>15.95</td> <td>13.52</td> <td>41.90</td> <td>24.11</td> <td>14.23</td> <td>8.69</td> </tr> <tr> <td>Liang et al. (w/o discriminator)</td> <td>16.57</td> <td>15.07</td> <td>41.86</td> <td>24.33</td> <td>14.56</td> <td>8.99</td> </tr> <tr> <td>Liang et al.</td> <td>17.12</td> <td>16.87</td> <td>41.99</td> <td>24.86</td> <td>14.89</td> <td>9.03</td> </tr> <tr> <td>Ours (XE training w/o rep. penalty)</td> <td>13.66</td> <td>12.89</td> <td>32.78</td> <td>19.00</td> <td>11.40</td> <td>6.89</td> </tr> <tr> <td>Ours (XE training w/ rep. penalty)</td> <td>15.17</td> <td>22.68</td> <td>35.68</td> <td>22.40</td> <td>14.04</td> <td>8.70</td> </tr> <tr> <td>Ours (SCST training w/o rep. penalty)</td> <td>13.63</td> <td>13.77</td> <td>29.67</td> <td>16.45</td> <td>9.74</td> <td>5.88</td> </tr> <tr> <td>Ours (SCST training w/ rep. penalty)</td> <td>17.86</td> <td>30.63</td> <td>43.54</td> <td>27.44</td> <td>17.33</td> <td>10.58</td> </tr> </tbody></table>
Table 1
table_1
D18-1084
4
emnlp2018
Results. Table 1 shows the main experimental results. Our baseline cross-entropy captioning model gets similar scores to the original flat model. When the repetition penalty is applied to a model trained with cross-entropy, we see a large improvement on CIDEr and a minor improvement on other metrics. When combining the repetition penalty with SCST, we see a dramatic improvement across all metrics, and particularly on CIDEr. Interestingly, SCST only works when its baseline reward model is strong;. for this reason the combination of the repetition penalty and SCST is particularly effective.
[2, 1, 1, 1, 1, 2, 2]
['Results.', 'Table 1 shows the main experimental results.', 'Our baseline cross-entropy captioning model gets similar scores to the original flat model.', 'When the repetition penalty is applied to a model trained with cross-entropy, we see a large improvement on CIDEr and a minor improvement on other metrics.', 'When combining the repetition penalty with SCST, we see a dramatic improvement across all metrics, and particularly on CIDEr.', 'Interestingly, SCST only works when its baseline reward model is strong;.', 'for this reason the combination of the repetition penalty and SCST is particularly effective.']
[None, None, ['Ours (XE training w/o rep. penalty)', 'Krause et al. (Flat)'], ['Ours (XE training w/ rep. penalty)', 'Ours (SCST training w/ rep. penalty)', 'CIDEr', 'METEOR', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4'], ['Ours (XE training w/ rep. penalty)', 'CIDEr', 'METEOR', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4'], ['Ours (SCST training w/ rep. penalty)'], ['Ours (SCST training w/ rep. penalty)']]
1
D18-1085table_1
Correlation results with the manual metrics of Pyramid, Responsiveness, and Readability using the correlation metrics of Pearson r, Spearman ρ, and Kendall τ. The best correlations are specified in bold, and the underlined scores show the top correlations in the TAC AESOP 2011.
2
[['Metric', 'C S IIITH3'], ['Metric', 'DemokritosGR1'], ['Metric', 'Catolicasc1'], ['Metric', 'ROUGE-1'], ['Metric', 'ROUGE-2'], ['Metric', 'ROUGE-SU4'], ['Metric', 'ROUGE-WE-1'], ['Metric', 'ROUGE-WE-2'], ['Metric', 'ROUGE-WE-SU4'], ['Metric', 'ROUGE-G-1'], ['Metric', 'ROUGE-G-2'], ['Metric', 'ROUGE-G-SU4']]
2
[['Pyramid', 'Pearson'], ['Pyramid', 'Spearman'], ['Pyramid', 'Kendall'], ['Responsiveness', 'Pearson'], ['Responsiveness', 'Spearman'], ['Responsiveness', 'Kendall'], ['Readability', 'Pearson'], ['Readability', 'Spearman'], ['Readability', 'Kendall']]
[['0.965', '0.903', '0.758', '0.933', '0.781', '0.596', '0.731', '0.358', '0.242'], ['0.974', '0.897', '0.747', '0.947', '0.845', '0.675', '0.794', '0.497', '0.359'], ['0.967', '0.902', '0.735', '0.950', '0.837', '0.666', '0.819', '0.494', '0.366'], ['0.966', '0.909', '0.747', '0.935', '0.818', '0.633', '0.790', '0.391', '0.285'], ['0.961', '0.894', '0.745', '0.942', '0.790', '0.610', '0.752', '0.398', '0.293'], ['0.981', '0.894', '0.737', '0.955', '0.790', '0.602', '0.784', '0.395', '0.293'], ['0.949', '0.914', '0.753', '0.916', '0.819', '0.631', '0.785', '0.431', '0.322'], ['0.977', '0.898', '0.744', '0.953', '0.797', '0.615', '0.782', '0.414', '0.304'], ['0.978', '0.881', '0.720', '0.954', '0.787', '0.597', '0.793', '0.407', '0.302'], ['0.971', '0.915', '0.758', '0.944', '0.825', '0.638', '0.791', '0.434', '0.330'], ['0.983', '0.926', '0.774', '0.956', '0.869', '0.713', '0.790', '0.516', '0.385'], ['0.979', '0.898', '0.741', '0.957', '0.814', '0.616', '0.823', '0.445', '0.334']]
column
['Pearson', 'Spearman', 'Kendall', 'Pearson', 'Spearman', 'Kendall', 'Pearson', 'Spearman', 'Kendall']
['ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pyramid || Pearson</th> <th>Pyramid || Spearman</th> <th>Pyramid || Kendall</th> <th>Responsiveness || Pearson</th> <th>Responsiveness || Spearman</th> <th>Responsiveness || Kendall</th> <th>Readability || Pearson</th> <th>Readability || Spearman</th> <th>Readability || Kendall</th> </tr> </thead> <tbody> <tr> <td>Metric || C S IIITH3</td> <td>0.965</td> <td>0.903</td> <td>0.758</td> <td>0.933</td> <td>0.781</td> <td>0.596</td> <td>0.731</td> <td>0.358</td> <td>0.242</td> </tr> <tr> <td>Metric || DemokritosGR1</td> <td>0.974</td> <td>0.897</td> <td>0.747</td> <td>0.947</td> <td>0.845</td> <td>0.675</td> <td>0.794</td> <td>0.497</td> <td>0.359</td> </tr> <tr> <td>Metric || Catolicasc1</td> <td>0.967</td> <td>0.902</td> <td>0.735</td> <td>0.950</td> <td>0.837</td> <td>0.666</td> <td>0.819</td> <td>0.494</td> <td>0.366</td> </tr> <tr> <td>Metric || ROUGE-1</td> <td>0.966</td> <td>0.909</td> <td>0.747</td> <td>0.935</td> <td>0.818</td> <td>0.633</td> <td>0.790</td> <td>0.391</td> <td>0.285</td> </tr> <tr> <td>Metric || ROUGE-2</td> <td>0.961</td> <td>0.894</td> <td>0.745</td> <td>0.942</td> <td>0.790</td> <td>0.610</td> <td>0.752</td> <td>0.398</td> <td>0.293</td> </tr> <tr> <td>Metric || ROUGE-SU4</td> <td>0.981</td> <td>0.894</td> <td>0.737</td> <td>0.955</td> <td>0.790</td> <td>0.602</td> <td>0.784</td> <td>0.395</td> <td>0.293</td> </tr> <tr> <td>Metric || ROUGE-WE-1</td> <td>0.949</td> <td>0.914</td> <td>0.753</td> <td>0.916</td> <td>0.819</td> <td>0.631</td> <td>0.785</td> <td>0.431</td> <td>0.322</td> </tr> <tr> <td>Metric || ROUGE-WE-2</td> <td>0.977</td> <td>0.898</td> <td>0.744</td> <td>0.953</td> <td>0.797</td> <td>0.615</td> <td>0.782</td> <td>0.414</td> <td>0.304</td> </tr> <tr> <td>Metric || ROUGE-WE-SU4</td> <td>0.978</td> <td>0.881</td> <td>0.720</td> <td>0.954</td> <td>0.787</td> <td>0.597</td> <td>0.793</td> <td>0.407</td> <td>0.302</td> </tr> <tr> <td>Metric || ROUGE-G-1</td> <td>0.971</td> <td>0.915</td> <td>0.758</td> <td>0.944</td> <td>0.825</td> <td>0.638</td> <td>0.791</td> <td>0.434</td> <td>0.330</td> </tr> <tr> <td>Metric || ROUGE-G-2</td> <td>0.983</td> <td>0.926</td> <td>0.774</td> <td>0.956</td> <td>0.869</td> <td>0.713</td> <td>0.790</td> <td>0.516</td> <td>0.385</td> </tr> <tr> <td>Metric || ROUGE-G-SU4</td> <td>0.979</td> <td>0.898</td> <td>0.741</td> <td>0.957</td> <td>0.814</td> <td>0.616</td> <td>0.823</td> <td>0.445</td> <td>0.334</td> </tr> </tbody></table>
Table 1
table_1
D18-1085
5
emnlp2018
We evaluate ROUGE-G, against the top metrics (C S IIITH3, DemokritosGR1, Catolicasc1) among the 23 metrics participated in TAC AESOP 2011, ROUGE, and the most recent related work (ROUGE-WE) (Table 1). Overall results support our proposal to consider semantics besides surface with ROUGE. We analyze the correlation results reported in Table 1 in the following. ROUGE-G-2 achieves the best correlation with Pyramid, regarding all correlation metrics. Moreover, every ROUGE-G variant outperforms its corresponding ROUGE and ROUGE-WE variants, regardless of the correlation metric used. However, the only exception is ROUGE-SU4, which correlates slightly better with Pyramid when measuring with Pearson correlation. One possible reason is that Pyramid measures content similarity between peer and model summaries, while the variants of ROUGE-G favor semantics behind the content for measuring similarities. Since some of the semantics attached to the skipped words are lost in the construction of skip-bigrams, ROUGE-SU4 shows a better correlation comparing to ROUGE-G-SU4. For Responsiveness, ROUGE-G-SU4 achieves the best correlation when measuring with Pearson. We also observe that ROUGE-G-2 obtains the best correlation with Responsiveness while measuring with the Spearman and Kendall rank correlations. The reason is that semantic interpretation of bigrams is easier, and that of contiguous bigrams is much more precise. We also see that every variant of ROUGE-G outperforms its corresponding ROUGE and ROUGE-WE variants. The readability score is based on grammaticality, structure, and coherence. Although our main goal is not to improve the readability, ROUGE-G-SU4 and ROUGE-G-2 are observed to correlate very well with this metric when measured with the Pearson and Spearman/Kendall rank correlations, respectively. Besides, every variant of ROUGE-G represents the best correlation results comparing to its corresponding variants of ROUGE and ROUGE-WE for all correlation metrics.
[1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 2, 1, 2, 1, 1]
['We evaluate ROUGE-G, against the top metrics (C S IIITH3, DemokritosGR1, Catolicasc1) among the 23 metrics participated in TAC AESOP 2011, ROUGE, and the most recent related work (ROUGE-WE) (Table 1).', 'Overall results support our proposal to consider semantics besides surface with ROUGE.', 'We analyze the correlation results reported in Table 1 in the following.', 'ROUGE-G-2 achieves the best correlation with Pyramid, regarding all correlation metrics.', 'Moreover, every ROUGE-G variant outperforms its corresponding ROUGE and ROUGE-WE variants, regardless of the correlation metric used.', 'However, the only exception is ROUGE-SU4, which correlates slightly better with Pyramid when measuring with Pearson correlation.', 'One possible reason is that Pyramid measures content similarity between peer and model summaries, while the variants of ROUGE-G favor semantics behind the content for measuring similarities.', 'Since some of the semantics attached to the skipped words are lost in the construction of skip-bigrams, ROUGE-SU4 shows a better correlation comparing to ROUGE-G-SU4.', 'For Responsiveness, ROUGE-G-SU4 achieves the best correlation when measuring with Pearson.', 'We also observe that ROUGE-G-2 obtains the best correlation with Responsiveness while measuring with the Spearman and Kendall rank correlations.', 'The reason is that semantic interpretation of bigrams is easier, and that of contiguous bigrams is much more precise.', 'We also see that every variant of ROUGE-G outperforms its corresponding ROUGE and ROUGE-WE variants.', 'The readability score is based on grammaticality, structure, and coherence.', 'Although our main goal is not to improve the readability, ROUGE-G-SU4 and ROUGE-G-2 are observed to correlate very well with this metric when measured with the Pearson and Spearman/Kendall rank correlations, respectively.', 'Besides, every variant of ROUGE-G represents the best correlation results comparing to its corresponding variants of ROUGE and ROUGE-WE for all correlation metrics.']
[['ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'C S IIITH3', 'DemokritosGR1', 'Catolicasc1', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4'], ['ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4'], None, ['ROUGE-G-2', 'Pyramid', 'Pearson', 'Spearman', 'Kendall'], ['ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4'], ['ROUGE-SU4', 'ROUGE-G-SU4', 'Pyramid', 'Pearson'], ['Pyramid', 'ROUGE-G-SU4'], ['ROUGE-SU4', 'ROUGE-G-SU4'], ['ROUGE-G-SU4', 'Responsiveness', 'Pearson'], ['ROUGE-G-2', 'Responsiveness', 'Spearman', 'Kendall'], None, ['Responsiveness', 'ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4'], ['Readability'], ['Readability', 'ROUGE-G-2', 'ROUGE-G-SU4', 'Pearson', 'Spearman', 'Kendall'], ['Readability', 'ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4']]
1
D18-1086table_1
Results for AMR-to-text
2
[['Model', 'Our model (unguided NLG)'], ['Model', 'NeuralAMR (Konstas et al. 2017)'], ['Model', 'TSP (Song et al. 2016)'], ['Model', 'TreeToStr (Flanigan et al. 2016)']]
1
[['BLEU']]
[['21.1'], ['22.0'], ['22.4'], ['23.0']]
column
['BLEU']
['Our model (unguided NLG)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || Our model (unguided NLG)</td> <td>21.1</td> </tr> <tr> <td>Model || NeuralAMR (Konstas et al. 2017)</td> <td>22.0</td> </tr> <tr> <td>Model || TSP (Song et al. 2016)</td> <td>22.4</td> </tr> <tr> <td>Model || TreeToStr (Flanigan et al. 2016)</td> <td>23.0</td> </tr> </tbody></table>
Table 1
table_1
D18-1086
3
emnlp2018
AMR-to-Text baseline comparison. We compare our baseline model (described in §3.2) against previous works in AMR-to-text using the data from the recent SemEval-2016 Task 8 (May, 2016, LDC2015E86). Table 1 reports BLEU scores comparing our model against previous works. Here, we see that our model achieves a BLEU score comparable with the state-of-the-art, and thus we argue that it is sufficient to be used in our subsequent experiments with guidance.
[2, 2, 1, 1]
['AMR-to-Text baseline comparison.', 'We compare our baseline model (described in §3.2) against previous works in AMR-to-text using the data from the recent SemEval-2016 Task 8 (May, 2016, LDC2015E86).', 'Table 1 reports BLEU scores comparing our model against previous works.', 'Here, we see that our model achieves a BLEU score comparable with the state-of-the-art, and thus we argue that it is sufficient to be used in our subsequent experiments with guidance.']
[None, ['NeuralAMR (Konstas et al. 2017)', 'TSP (Song et al. 2016)', 'TreeToStr (Flanigan et al. 2016)'], ['Our model (unguided NLG)', 'NeuralAMR (Konstas et al. 2017)', 'TSP (Song et al. 2016)', 'TreeToStr (Flanigan et al. 2016)', 'BLEU'], ['Our model (unguided NLG)', 'NeuralAMR (Konstas et al. 2017)', 'TSP (Song et al. 2016)', 'TreeToStr (Flanigan et al. 2016)', 'BLEU']]
1
D18-1086table_2
BLEU and ROUGE results for guided and unguided models using test dataset.
2
[['Model', 'Guided NLG (Oracle)'], ['Model', 'Guided NLG'], ['Model', 'Unguided NLG']]
2
[['-', 'BLEU'], ['F1 ROUGE', 'R-1'], ['F1 ROUGE', 'R-2'], ['F1 ROUGE', 'R-L']]
[['61.3', '79.4', '63.7', '76.4'], ['45.8', '70.7', '49.5', '64.9'], ['29.6', '68.6', '39.6', '61.3']]
column
['BLEU', 'R-1', 'R-2', 'R-L']
['Guided NLG (Oracle)', 'Guided NLG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU || -</th> <th>F1 ROUGE || R-1</th> <th>F1 ROUGE || R-2</th> <th>F1 ROUGE || R-L</th> </tr> </thead> <tbody> <tr> <td>Model || Guided NLG (Oracle)</td> <td>61.3</td> <td>79.4</td> <td>63.7</td> <td>76.4</td> </tr> <tr> <td>Model || Guided NLG</td> <td>45.8</td> <td>70.7</td> <td>49.5</td> <td>64.9</td> </tr> <tr> <td>Model || Unguided NLG</td> <td>29.6</td> <td>68.6</td> <td>39.6</td> <td>61.3</td> </tr> </tbody></table>
Table 2
table_2
D18-1086
4
emnlp2018
Guided NLG for AMR-to-Text. In this experiment we apply our guided NLG mechanism described in §3.3 to our baseline seq2seq model. To isolate the effects of guidance we skip the actual summarization process and proceed to directly generating the summary text from the gold standard summary AMR graphs from the Proxy Report section. To determine the hyper-parameters, we perform a grid search using the dev dataset, where we found the best combination of ψ, θ and k are 0.95, 2.5 and 15 respectively. We have two different settings for this experiment: the oracle and non-oracle settings. In the oracle setting, we directly use the gold standard summary text as the guidance for our model. The intuition is that in this setting, our model knows precisely which words should appear in the summary text, thus providing an upper bound for the performance of our guided NLG approach. In the non-oracle setting, we use the mechanism described in §3.3. We also compare them against the baseline (unguided) model from §3.2. Table 2 reports performance for all models. The difference between the guided and the unguided model is 16.2 points in BLEU and 9.9 points in ROUGE-2, while there is room for improvement as evidenced by the difference between the oracle and non-oracle result.
[2, 2, 2, 0, 2, 2, 2, 2, 2, 1, 1]
['Guided NLG for AMR-to-Text.', 'In this experiment we apply our guided NLG mechanism described in §3.3 to our baseline seq2seq model.', 'To isolate the effects of guidance we skip the actual summarization process and proceed to directly generating the summary text from the gold standard summary AMR graphs from the Proxy Report section.', 'To determine the hyper-parameters, we perform a grid search using the dev dataset, where we found the best combination of ψ, θ and k are 0.95, 2.5 and 15 respectively.', 'We have two different settings for this experiment: the oracle and non-oracle settings.', 'In the oracle setting, we directly use the gold standard summary text as the guidance for our model.', 'The intuition is that in this setting, our model knows precisely which words should appear in the summary text, thus providing an upper bound for the performance of our guided NLG approach.', 'In the non-oracle setting, we use the mechanism described in §3.3.', 'We also compare them against the baseline (unguided) model from §3.2.', 'Table 2 reports performance for all models.', 'The difference between the guided and the unguided model is 16.2 points in BLEU and 9.9 points in ROUGE-2, while there is room for improvement as evidenced by the difference between the oracle and non-oracle result.']
[None, ['Guided NLG'], None, None, ['Guided NLG (Oracle)', 'Guided NLG'], ['Guided NLG (Oracle)'], ['Guided NLG (Oracle)'], ['Guided NLG'], ['Guided NLG'], ['Guided NLG (Oracle)', 'Guided NLG', 'Unguided NLG'], ['Guided NLG', 'Unguided NLG', 'BLEU', 'R-2', 'Guided NLG (Oracle)']]
1
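The table_html_clean field in each record above stores the table as plain HTML. Below is a minimal sketch of loading such a field into a DataFrame, assuming pandas plus an HTML parser such as lxml is installed; the html string is an abridged copy of the record above, kept to one data row for brevity.

```python
from io import StringIO

import pandas as pd

# Abridged table_html_clean value from the record above (one data row only).
html = """<table border='1' class='dataframe'>
<thead><tr><th></th><th>BLEU || -</th><th>F1 ROUGE || R-1</th>
<th>F1 ROUGE || R-2</th><th>F1 ROUGE || R-L</th></tr></thead>
<tbody><tr><td>Model || Guided NLG (Oracle)</td>
<td>61.3</td><td>79.4</td><td>63.7</td><td>76.4</td></tr></tbody></table>"""

# read_html returns one DataFrame per <table> element; StringIO keeps newer
# pandas versions from treating the string as a path or URL.
df = pd.read_html(StringIO(html))[0]
print(df)
```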
D18-1110table_3
Evaluation results on ATIS where Accori and Accpara denote the accuracy on the original and paraphrased development set of ATIS, respectively.
2
[['Feature', 'Word Order'], ['Feature', 'Dep'], ['Feature', 'Cons'], ['Feature', 'Dep + Cons'], ['Feature', 'Word Order + Dep'], ['Feature', 'Word Order + Cons'], ['Feature', 'Word Order + Dep + Cons']]
1
[['Accori'], ['Accpara'], ['Diff.']]
[['84.8', '78.7', '-6.1'], ['83.5', '80.1', '-3.4'], ['82.9', '77.3', '-5.6'], ['84.0', '80.7', '-3.3'], ['85.2', '82.3', '-2.9'], ['84.9', '79.9', '-5.0'], ['86.0', '83.5', '-2.5']]
column
['Accori', 'Accpara', 'Diff.']
['Word Order', 'Dep', 'Cons', 'Dep + Cons', 'Word Order + Dep', 'Word Order + Cons', 'Word Order + Dep + Cons']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accori</th> <th>Accpara</th> <th>Diff.</th> </tr> </thead> <tbody> <tr> <td>Feature || Word Order</td> <td>84.8</td> <td>78.7</td> <td>-6.1</td> </tr> <tr> <td>Feature || Dep</td> <td>83.5</td> <td>80.1</td> <td>-3.4</td> </tr> <tr> <td>Feature || Cons</td> <td>82.9</td> <td>77.3</td> <td>-5.6</td> </tr> <tr> <td>Feature || Dep + Cons</td> <td>84.0</td> <td>80.7</td> <td>-3.3</td> </tr> <tr> <td>Feature || Word Order + Dep</td> <td>85.2</td> <td>82.3</td> <td>-2.9</td> </tr> <tr> <td>Feature || Word Order + Cons</td> <td>84.9</td> <td>79.9</td> <td>-5.0</td> </tr> <tr> <td>Feature || Word Order + Dep + Cons</td> <td>86.0</td> <td>83.5</td> <td>-2.5</td> </tr> </tbody></table>
Table 3
table_3
D18-1110
5
emnlp2018
Table 3 shows the results of our model on the second type of adversarial examples, i.e., the paraphrased ATIS development set. We also report the result of our model on the original ATIS development set. We can see that (1) no matter which feature our model uses, the performance degrades at least 2.5% on the paraphrased dataset;. (2) the model that only uses word order features achieves the worst robustness to the paraphrased queries while the dependency feature seems more robust than other two features. (3) simultaneously utilizing three syntactic features could greatly enhance the robustness of our model. These results again demonstrate that our model could benefit from incorporating more aspects of syntactic information.
[1, 1, 1, 1, 1, 2]
['Table 3 shows the results of our model on the second type of adversarial examples, i.e., the paraphrased ATIS development set.', 'We also report the result of our model on the original ATIS development set.', 'We can see that (1) no matter which feature our model uses, the performance degrades at least 2.5% on the paraphrased dataset;.', '(2) the model that only uses word order features achieves the worst robustness to the paraphrased queries while the dependency feature seems more robust than other two features.', '(3) simultaneously utilizing three syntactic features could greatly enhance the robustness of our model.', 'These results again demonstrate that our model could benefit from incorporating more aspects of syntactic information.']
[['Accpara'], ['Accori'], ['Word Order + Dep + Cons', 'Accpara', 'Diff.'], ['Word Order'], ['Word Order + Dep + Cons'], None]
1
D18-1111table_2
Results compared to baselines. YN17 result is taken from Yin and Neubig (2017). ASN result is taken from Rabinovich et al. (2017)
1
[['SEQ2SEQ'], ['YN17'], ['ASN'], ['ASN + SUPATT'], ['RECODE']]
2
[['HS', 'Acc'], ['HS', 'BLEU'], ['Django', 'Acc'], ['Django', 'BLEU']]
[['0.0', '55.0', '13.9', '67.3'], ['16.2', '75.8', '71.6', '84.5'], ['18.2', '77.6', '-', '-'], ['22.7', '79.2', '-', '-'], ['19.6', '78.4', '72.8', '84.7']]
column
['Acc', 'BLEU', 'Acc', 'BLEU']
['RECODE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HS || Acc</th> <th>HS || BLEU</th> <th>Django || Acc</th> <th>Django || BLEU</th> </tr> </thead> <tbody> <tr> <td>SEQ2SEQ</td> <td>0.0</td> <td>55.0</td> <td>13.9</td> <td>67.3</td> </tr> <tr> <td>YN17</td> <td>16.2</td> <td>75.8</td> <td>71.6</td> <td>84.5</td> </tr> <tr> <td>ASN</td> <td>18.2</td> <td>77.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>ASN + SUPATT</td> <td>22.7</td> <td>79.2</td> <td>-</td> <td>-</td> </tr> <tr> <td>RECODE</td> <td>19.6</td> <td>78.4</td> <td>72.8</td> <td>84.7</td> </tr> </tbody></table>
Table 2
table_2
D18-1111
4
emnlp2018
5.1 Results. Table 2 shows that RECODE outperforms the baselines in both BLEU and accuracy, providing evidence for the effectiveness of incorporating retrieval methods into tree-based approaches. We ran statistical significance tests for RECODE and YN17, using bootstrap resampling with N = 10,000. For the BLEU scores of both datasets, p < 0.001. For the exact match accuracy, p < 0.001 for Django dataset, but for Hearthstone, p > 0.3, showing that the retrieval-based model is on par with YN17. It is worth noting, though, that HS consists of long and complex code, and that generating exact matches is very difficult, making exact match accuracy a less reliable metric. We also compare RECODE with Rabinovich et al. (2017)’s Abstract Syntax Networks with supervision (ASN+SUPATT) which is the state-of-the-art system for HS. RECODE exceeds ASN without extra supervision though ASN+SUPATT has a slightly better result. However, ASN+SUPATT is trained with supervised attention extracted through heuristic exact word matches while our attention is unsupervised.
[2, 1, 2, 2, 2, 2, 1, 1, 2]
['5.1 Results.', 'Table 2 shows that RECODE outperforms the baselines in both BLEU and accuracy, providing evidence for the effectiveness of incorporating retrieval methods into tree-based approaches.', 'We ran statistical significance tests for RECODE and YN17, using bootstrap resampling with N = 10,000.', 'For the BLEU scores of both datasets, p < 0.001.', 'For the exact match accuracy, p < 0.001 for Django dataset, but for Hearthstone, p > 0.3, showing that the retrieval-based model is on par with YN17.', 'It is worth noting, though, that HS consists of long and complex code, and that generating exact matches is very difficult, making exact match accuracy a less reliable metric.', 'We also compare RECODE with Rabinovich et al. (2017)’s Abstract Syntax Networks with supervision (ASN+SUPATT) which is the state-of-the-art system for HS.', 'RECODE exceeds ASN without extra supervision though ASN+SUPATT has a slightly better result.', 'However, ASN+SUPATT is trained with supervised attention extracted through heuristic exact word matches while our attention is unsupervised.']
[None, ['RECODE', 'SEQ2SEQ', 'YN17', 'ASN', 'ASN + SUPATT', 'Acc', 'BLEU'], ['RECODE', 'YN17'], ['BLEU', 'HS', 'Django'], ['Django', 'HS', 'YN17'], ['HS'], ['RECODE', 'ASN + SUPATT', 'HS'], ['RECODE', 'ASN', 'ASN + SUPATT'], ['ASN + SUPATT', 'RECODE']]
1
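The description above reports significance via bootstrap resampling with N = 10,000. The sketch below illustrates a generic paired bootstrap test over per-example scores; it is not the authors' script, and the score arrays (recode_scores, yn17_scores) are hypothetical placeholders.

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Approximate one-sided p-value that system A beats system B.

    scores_a, scores_b: parallel per-example metric values (e.g. sentence-level
    BLEU or 0/1 exact match) for the two systems on the same test set.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    n = len(a)
    observed = a.mean() - b.mean()
    not_better = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)      # resample examples with replacement
        if a[idx].mean() - b[idx].mean() <= 0:
            not_better += 1                   # resamples where A does not beat B
    return observed, not_better / n_resamples

# Hypothetical usage with per-example scores for two systems:
# delta, p = paired_bootstrap(recode_scores, yn17_scores)
```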
D18-1112table_1
Results on the WikiSQL (above) and Stackoverflow (below).
1
[['Template'], ['Seq2Seq'], ['Seq2Seq + Copy'], ['Tree2Seq'], ['Graph2Seq-PGE'], ['Graph2Seq-NGE'], ['(Iyer et al. 2016)'], ['Graph2Seq-PGE'], ['Graph2Seq-NGE']]
1
[['BLEU-4'], ['Grammar.'], ['Correct.']]
[['15.71', '1.50', '-'], ['20.91', '2.54', '62.1%'], ['24.12', '2.65', '64.5%'], ['26.67', '2.70', '66.8%'], ['38.97', '3.81', '79.2%'], ['34.28', '3.26', '75.3%'], ['18.4', '3.16', '64.2%'], ['23.3', '3.23', '70.2%'], ['21.9', '2.97', '65.1%']]
column
['BLEU-4', 'Grammar.', 'Correct.']
['Graph2Seq-PGE', 'Graph2Seq-NGE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-4</th> <th>Grammar.</th> <th>Correct.</th> </tr> </thead> <tbody> <tr> <td>Template</td> <td>15.71</td> <td>1.50</td> <td>-</td> </tr> <tr> <td>Seq2Seq</td> <td>20.91</td> <td>2.54</td> <td>62.1%</td> </tr> <tr> <td>Seq2Seq + Copy</td> <td>24.12</td> <td>2.65</td> <td>64.5%</td> </tr> <tr> <td>Tree2Seq</td> <td>26.67</td> <td>2.70</td> <td>66.8%</td> </tr> <tr> <td>Graph2Seq-PGE</td> <td>38.97</td> <td>3.81</td> <td>79.2%</td> </tr> <tr> <td>Graph2Seq-NGE</td> <td>34.28</td> <td>3.26</td> <td>75.3%</td> </tr> <tr> <td>(Iyer et al. 2016)</td> <td>18.4</td> <td>3.16</td> <td>64.2%</td> </tr> <tr> <td>Graph2Seq-PGE</td> <td>23.3</td> <td>3.23</td> <td>70.2%</td> </tr> <tr> <td>Graph2Seq-NGE</td> <td>21.9</td> <td>2.97</td> <td>65.1%</td> </tr> </tbody></table>
Table 1
table_1
D18-1112
4
emnlp2018
Results and Discussion. Table 1 summarizes the results of our models and baselines. Although the template-based method achieves decent BLEU scores, its grammaticality score is substantially worse than other baselines. We can see that on both two datasets, our Graph2Seq models perform significantly better than the Seq2Seq and Tree2Seq baselines. One possible reason is that in our graph encoder, the node embedding retains the information of neighbor nodes within K hops. However, in the tree encoder, the node embedding only aggregates the information of descendants while losing the knowledge of ancestors. The pooling-based graph embedding is found to be more useful than the node-based graph embedding because Graph2Seq-NGE adds a nonexistent node into the graph, which introduces the noisy information in calculating the embeddings of other nodes. We also conducted an experiment that treats the SQL query graph as an undirected graph and found the performance degrades. By manually analyzing the cases in which the Graph2Seq model performs better than Seq2Seq, we find the Graph2Seq model is better at interpreting two classes of queries:.
[2, 1, 1, 1, 2, 2, 2, 1, 1]
['Results and Discussion.', 'Table 1 summarizes the results of our models and baselines.', 'Although the template-based method achieves decent BLEU scores, its grammaticality score is substantially worse than other baselines.', 'We can see that on both two datasets, our Graph2Seq models perform significantly better than the Seq2Seq and Tree2Seq baselines.', 'One possible reason is that in our graph encoder, the node embedding retains the information of neighbor nodes within K hops.', 'However, in the tree encoder, the node embedding only aggregates the information of descendants while losing the knowledge of ancestors.', 'The pooling-based graph embedding is found to be more useful than the node-based graph embedding because Graph2Seq-NGE adds a nonexistent node into the graph, which introduces the noisy information in calculating the embeddings of other nodes.', 'We also conducted an experiment that treats the SQL query graph as an undirected graph and found the performance degrades.', 'By manually analyzing the cases in which the Graph2Seq model performs better than Seq2Seq, we find the Graph2Seq model is better at interpreting two classes of queries:.']
[None, ['Template', 'Seq2Seq', 'Seq2Seq + Copy', 'Tree2Seq', 'Graph2Seq-PGE', 'Graph2Seq-NGE', '(Iyer et al. 2016)'], ['Template', 'BLEU-4', 'Grammar.', 'Seq2Seq', 'Seq2Seq + Copy', 'Tree2Seq'], ['Graph2Seq-PGE', 'Graph2Seq-NGE', 'Seq2Seq', 'Tree2Seq'], ['Graph2Seq-PGE', 'Graph2Seq-NGE'], None, ['Graph2Seq-NGE'], ['Graph2Seq-NGE'], ['Graph2Seq-PGE', 'Graph2Seq-NGE', 'Seq2Seq']]
1
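The BLEU-4 column in the record above is the standard corpus-level metric over 1- to 4-grams. One possible way to compute it with NLTK is sketched below; the tokenised sentences are placeholders rather than data from the paper.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Placeholder reference/hypothesis pairs (tokenised); each hypothesis may have
# several references, hence the nested list.
references = [[["show", "all", "flights", "from", "boston", "to", "denver"]]]
hypotheses = [["list", "all", "flights", "from", "boston", "to", "denver"]]

# BLEU-4 uses uniform weights over 1- to 4-gram precisions; smoothing avoids
# zero scores on short segments.
score = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25),
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {100 * score:.2f}")
```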
D18-1114table_6
Breakdown by property of binary classification F1 on SPR1. All new results outperforming prior work (CRF) in bold.
1
[['micro f1'], ['macro f1']]
1
[['CRF'], ['SPR1'], ['MT:SPR1'], ['SPR1+2']]
[['81.7', '82.2', '83.3', '83.3'], ['65.9', '69.3', '71.1', '70.4']]
row
['micro f1', 'macro f1']
['SPR1', 'MT:SPR1', 'SPR1+2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CRF</th> <th>SPR1</th> <th>MT:SPR1</th> <th>SPR1+2</th> </tr> </thead> <tbody> <tr> <td>micro f1</td> <td>81.7</td> <td>82.2</td> <td>83.3</td> <td>83.3</td> </tr> <tr> <td>macro f1</td> <td>65.9</td> <td>69.3</td> <td>71.1</td> <td>70.4</td> </tr> </tbody></table>
Table 6
table_6
D18-1114
10
emnlp2018
Table 6 shows binary classification F1 on SPR1. All new results outperforming prior work (CRF) in bold.
[1, 1]
['Table 6 shows binary classification F1 on SPR1.', 'All new results outperforming prior work (CRF) in bold.']
[None, ['SPR1', 'MT:SPR1', 'SPR1+2', 'CRF']]
1
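The record above distinguishes micro- from macro-averaged F1. A small sketch of that distinction using scikit-learn follows; the label arrays are placeholders, not SPR1 annotations.

```python
from sklearn.metrics import f1_score

# Placeholder gold labels and predictions over three classes.
y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 1, 2, 0, 1, 0, 1]

# Micro-F1 pools all decisions before computing precision/recall;
# macro-F1 averages the per-class F1 scores, weighting rare classes equally.
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```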
D18-1124table_1
Main results in terms of F1 score (%). w/s: # of words decoded per second, number with † is retrieved from the original paper.
2
[['Models', 'Finkel and Manning (2009)'], ['Models', 'Lu and Roth (2015)'], ['Models', 'Muis and Lu (2017)'], ['Models', 'Katiyar and Cardie (2018)'], ['Models', 'Ju et al. (2018)'], ['Models', 'Ours'], ['Models', '- char-level LSTM'], ['Models', '- pre-trained embeddings'], ['Models', '- dropout layer']]
1
[['ACE04'], ['ACE05'], ['GENIA'], ['w/s']]
[['-', '-', '70.3', '38'], ['62.8', '62.5', '70.3', '454'], ['64.5', '63.1', '70.8', '263'], ['72.7', '70.5', '73.6', '-'], ['-', '72.2', '74.7', '-'], ['73.3', '73.0', '73.9', '1445'], ['72.3', '71.9', '72.1', '1546'], ['71.3', '71.5', '72.0', '1452'], ['71.7', '72.0', '72.7', '1440']]
column
['F1', 'F1', 'F1', 'F1']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ACE04</th> <th>ACE05</th> <th>GENIA</th> <th>w/s</th> </tr> </thead> <tbody> <tr> <td>Models || Finkel and Manning (2009)</td> <td>-</td> <td>-</td> <td>70.3</td> <td>38</td> </tr> <tr> <td>Models || Lu and Roth (2015)</td> <td>62.8</td> <td>62.5</td> <td>70.3</td> <td>454</td> </tr> <tr> <td>Models || Muis and Lu (2017)</td> <td>64.5</td> <td>63.1</td> <td>70.8</td> <td>263</td> </tr> <tr> <td>Models || Katiyar and Cardie (2018)</td> <td>72.7</td> <td>70.5</td> <td>73.6</td> <td>-</td> </tr> <tr> <td>Models || Ju et al. (2018)</td> <td>-</td> <td>72.2</td> <td>74.7</td> <td>-</td> </tr> <tr> <td>Models || Ours</td> <td>73.3</td> <td>73.0</td> <td>73.9</td> <td>1445</td> </tr> <tr> <td>Models || - char-level LSTM</td> <td>72.3</td> <td>71.9</td> <td>72.1</td> <td>1546</td> </tr> <tr> <td>Models || - pre-trained embeddings</td> <td>71.3</td> <td>71.5</td> <td>72.0</td> <td>1452</td> </tr> <tr> <td>Models || - dropout layer</td> <td>71.7</td> <td>72.0</td> <td>72.7</td> <td>1440</td> </tr> </tbody></table>
Table 1
table_1
D18-1124
4
emnlp2018
The main results are reported in Table 1. Our neural transition-based model achieves the best results in ACE datasets and comparable results in GENIA dataset in terms of F1 measure. We hypothesize that the performance gain of our model compared with other methods is largely due to improved performance on the portions of nested mentions in our datasets. To verify this, we design an experiment to evaluate how well a system can recognize nested mentions. Handling Nested Mentions. The idea is that we split the test data into two portions: sentences with and without nested mentions. The results of GENIA are listed in Table 2. We can observe that the margin of improvement is more significant in the portion of nested mentions, revealing our model’s effectiveness in handling nested mentions. This observation helps explain why our model achieves greater improvement in ACE than in GENIA in Table 1 since the former has much more nested structures than the latter. Moreover, Ju et al. (2018) performs better when it comes to non-nested mentions possibly due to the CRF they used, which globally normalizes each stacked layer. Decoding Speed. Note that Lu and Roth (2015) and Muis and Lu (2017) also feature linear-time complexity, but with a greater constant factor. To compare the decoding speed, we re-implemented their model with the same platform (PyTorch) and run them on the same machine (CPU: Intel i5 2.7GHz). Our model turns out to be around 3-5 times faster than theirs, showing its scalability. Ablation Study. To evaluate the contribution of neural components including pre-trained embeddings, the character-level LSTM and dropout layers, we test the performances of ablated models. The results are listed in Table 1. From the performance gap, we can conclude that these components contribute significantly to the effectiveness of our model in all three datasets.
[1, 1, 2, 0, 2, 0, 0, 0, 1, 0, 2, 2, 2, 1, 2, 2, 1, 1]
['The main results are reported in Table 1.', 'Our neural transition-based model achieves the best results in ACE datasets and comparable results in GENIA dataset in terms of F1 measure.', 'We hypothesize that the performance gain of our model compared with other methods is largely due to improved performance on the portions of nested mentions in our datasets.', 'To verify this, we design an experiment to evaluate how well a system can recognize nested mentions.', 'Handling Nested Mentions.', 'The idea is that we split the test data into two portions: sentences with and without nested mentions.', 'The results of GENIA are listed in Table 2.', 'We can observe that the margin of improvement is more significant in the portion of nested mentions, revealing our model’s effectiveness in handling nested mentions.', 'This observation helps explain why our model achieves greater improvement in ACE than in GENIA in Table 1 since the former has much more nested structures than the latter.', 'Moreover, Ju et al. (2018) performs better when it comes to non-nested mentions possibly due to the CRF they used, which globally normalizes each stacked layer.', 'Decoding Speed.', 'Note that Lu and Roth (2015) and Muis and Lu (2017) also feature linear-time complexity, but with a greater constant factor.', 'To compare the decoding speed, we re-implemented their model with the same platform (PyTorch) and run them on the same machine (CPU: Intel i5 2.7GHz).', 'Our model turns out to be around 3-5 times faster than theirs, showing its scalability.', 'Ablation Study.', 'To evaluate the contribution of neural components including pre-trained embeddings, the character-level LSTM and dropout layers, we test the performances of ablated models.', 'The results are listed in Table 1.', 'From the performance gap, we can conclude that these components contribute significantly to the effectiveness of our model in all three datasets.']
[None, ['Ours', 'ACE04', 'ACE05', 'GENIA'], ['Ours'], None, None, None, None, None, ['Ours', 'ACE04', 'ACE05', 'GENIA'], None, None, ['Lu and Roth (2015)', 'Muis and Lu (2017)'], ['Lu and Roth (2015)', 'Muis and Lu (2017)'], ['Ours', 'Lu and Roth (2015)', 'Muis and Lu (2017)', 'w/s'], None, ['Ours', '- char-level LSTM', '- pre-trained embeddings', '- dropout layer'], None, ['Ours', '- char-level LSTM', '- pre-trained embeddings', '- dropout layer', 'ACE04', 'ACE05', 'GENIA']]
1
D18-1126table_1
Results on the WikilinksNED dev and test sets. Our model including features achieves state-ofthe-art performance on the test set, compared to both the reported numbers from Eshel et al. (2017) as well as their released software. Incorporating character CNNs surprisingly leads to lower performance compared to these simple features.
2
[['Model', 'Eshel et al. (2017)'], ['Model', 'Eshel system release'], ['Model', 'GRU+ATTN'], ['Model', 'GRU+ATTN+FEATS'], ['Model', 'GRU'], ['Model', 'GRU+ATTN'], ['Model', 'GRU+ATTN+FEATS'], ['Model', 'GRU+ATTN+CNN']]
1
[['Accuracy on Test (%)'], ['Accuracy on Dev (%)']]
[['73.0', ''], ['72.2', ''], ['74.5', ''], ['75.8', ''], ['', '73.4'], ['', '74.4'], ['', '74.9'], ['', '73.8']]
column
['Accuracy on Test (%)', 'Accuracy on Dev (%)']
['GRU+ATTN', 'GRU+ATTN+FEATS', 'GRU+ATTN+CNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy on Test (%)</th> <th>Accuracy on Dev (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Eshel et al. (2017)</td> <td>73.0</td> <td></td> </tr> <tr> <td>Model || Eshel system release</td> <td>72.2</td> <td></td> </tr> <tr> <td>Model || GRU+ATTN</td> <td>74.5</td> <td></td> </tr> <tr> <td>Model || GRU+ATTN+FEATS</td> <td>75.8</td> <td></td> </tr> <tr> <td>Model || GRU</td> <td></td> <td>73.4</td> </tr> <tr> <td>Model || GRU+ATTN</td> <td></td> <td>74.4</td> </tr> <tr> <td>Model || GRU+ATTN+FEATS</td> <td></td> <td>74.9</td> </tr> <tr> <td>Model || GRU+ATTN+CNN</td> <td></td> <td>73.8</td> </tr> </tbody></table>
Table 1
table_1
D18-1126
3
emnlp2018
Results. The model set forth in this section is the basis for the remaining models in this paper;. we call it the GRU model as that is the only context encoding mechanism it uses. As shown in Table 1, this GRU model gets a score of 73.4 on the WikilinksNED development set. Results. In Table 1, we see that our model with attention (GRU+ATTN) outperforms our basic GRU model by around 1% absolute. It also outperforms the roughly similar model of Eshel et al. (2017) on the test set:. this gain is due to a combination of factors including the improved training procedure and some small modeling changes. However, our attention scheme is not without its shortcomings, as we now discuss. Table 1 shows the impact of incorporating character CNNs (GRU+ATTN+CNN). Surprisingly, these have a mild negative impact on performance. One possible explanation of this is that it causes the model to split its attention between semantically important and lexically similar context terms. Table 1 shows the results of stacking these features on top of our model with attention (GRU+ATTN+FEATS). We see our highest development set performance and correspondingly high test performance from this model. This indicates that character-level information is useful for disambiguation, but character CNNs as we incorporated them are not able to distill it as effectively as sparse features can. Our model augmented with these sparse features achieves state-of-the-art results on the test set.
[2, 2, 2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 1, 1, 2, 1]
['Results.', 'The model set forth in this section is the basis for the remaining models in this paper;.', 'we call it the GRU model as that is the only context encoding mechanism it uses.', 'As shown in Table 1, this GRU model gets a score of 73.4 on the WikilinksNED development set.', 'Results.', 'In Table 1, we see that our model with attention (GRU+ATTN) outperforms our basic GRU model by around 1% absolute.', 'It also outperforms the roughly similar model of Eshel et al. (2017) on the test set:.', 'this gain is due to a combination of factors including the improved training procedure and some small modeling changes.', 'However, our attention scheme is not without its shortcomings, as we now discuss.', 'Table 1 shows the impact of incorporating character CNNs (GRU+ATTN+CNN).', 'Surprisingly, these have a mild negative impact on performance.', 'One possible explanation of this is that it causes the model to split its attention between semantically important and lexically similar context terms.', 'Table 1 shows the results of stacking these features on top of our model with attention (GRU+ATTN+FEATS).', 'We see our highest development set performance and correspondingly high test performance from this model.', 'This indicates that character-level information is useful for disambiguation, but character CNNs as we incorporated them are not able to distill it as effectively as sparse features can.', 'Our model augmented with these sparse features achieves state-of-the-art results on the test set.']
[None, ['GRU'], ['GRU'], ['GRU', 'Accuracy on Dev (%)'], None, ['GRU', 'GRU+ATTN', 'Accuracy on Dev (%)'], ['GRU+ATTN', 'Eshel et al. (2017)', 'Accuracy on Test (%)'], None, ['GRU+ATTN'], ['GRU+ATTN+CNN'], ['GRU+ATTN+CNN', 'Accuracy on Dev (%)'], ['GRU+ATTN+CNN'], ['GRU+ATTN+FEATS'], ['GRU+ATTN+FEATS', 'Accuracy on Dev (%)', 'Accuracy on Test (%)'], ['GRU+ATTN+FEATS', 'GRU+ATTN+CNN'], ['GRU+ATTN+FEATS', 'GRU+ATTN+CNN', 'Accuracy on Test (%)']]
1
D18-1130table_2
Validation & Test results on all datasets. AttSum* are our models, including variants with features and multi-task loss. Others indicate previous best published results. All improvements over AttSum are statistically significant (α = 0.05) according to the McNemar test with continuity correction (Dietterich, 1998).
2
[['LAMBADA', 'GA Reader (Chu et al. 2017)'], ['LAMBADA', 'MAGE (48) (Dhingra et al. 2017)'], ['LAMBADA', 'MAGE (64) (Dhingra et al. 2017)'], ['LAMBADA', 'GA + C-GRU (Dhingra et al. 2018)'], ['LAMBADA', 'AttSum'], ['LAMBADA', 'AttSum + L1'], ['LAMBADA', 'AttSum + L2'], ['LAMBADA', 'AttSum-Feat'], ['LAMBADA', 'AttSum-Feat + L1'], ['LAMBADA', 'AttSum-Feat + L2'], ['CBT-NE', 'GA Reader (Dhingra et al. 2016)'], ['CBT-NE', 'EpiReader (Trischler et al. 2016)'], ['CBT-NE', 'DIM Reader (Liu et al. 2017)'], ['CBT-NE', 'AoA (Cui et al. 2016)'], ['CBT-NE', 'AoA + Reranker (Cui et al. 2016)'], ['CBT-NE', 'AttSum'], ['CBT-NE', 'AttSum + L1'], ['CBT-NE', 'AttSum + L2'], ['CBT-NE', 'AttSum-Feat'], ['CBT-NE', 'AttSum-Feat + L1'], ['CBT-NE', 'AttSum-Feat + L2']]
1
[['Val'], ['Test']]
[['-', '49.00'], ['51.10', '51.60'], ['52.10', '51.10'], ['-', '55.69'], ['56.03', '55.60'], ['58.35', '56.86'], ['58.08', '57.29'], ['59.62', '59.05'], ['60.22', '59.23'], ['60.13', '58.47'], ['78.50', '74.90'], ['75.30', '69.70'], ['77.10', '72.20'], ['77.80', '72.0'], ['79.60', '74.0'], ['74.35', '69.96'], ['76.20', '72.16'], ['76.80', '72.60'], ['77.80', '72.36'], ['78.40', '74.36'], ['79.40', '72.40']]
column
['accuracy', 'accuracy']
['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Val</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>LAMBADA || GA Reader (Chu et al. 2017)</td> <td>-</td> <td>49.00</td> </tr> <tr> <td>LAMBADA || MAGE (48) (Dhingra et al. 2017)</td> <td>51.10</td> <td>51.60</td> </tr> <tr> <td>LAMBADA || MAGE (64) (Dhingra et al. 2017)</td> <td>52.10</td> <td>51.10</td> </tr> <tr> <td>LAMBADA || GA + C-GRU (Dhingra et al. 2018)</td> <td>-</td> <td>55.69</td> </tr> <tr> <td>LAMBADA || AttSum</td> <td>56.03</td> <td>55.60</td> </tr> <tr> <td>LAMBADA || AttSum + L1</td> <td>58.35</td> <td>56.86</td> </tr> <tr> <td>LAMBADA || AttSum + L2</td> <td>58.08</td> <td>57.29</td> </tr> <tr> <td>LAMBADA || AttSum-Feat</td> <td>59.62</td> <td>59.05</td> </tr> <tr> <td>LAMBADA || AttSum-Feat + L1</td> <td>60.22</td> <td>59.23</td> </tr> <tr> <td>LAMBADA || AttSum-Feat + L2</td> <td>60.13</td> <td>58.47</td> </tr> <tr> <td>CBT-NE || GA Reader (Dhingra et al. 2016)</td> <td>78.50</td> <td>74.90</td> </tr> <tr> <td>CBT-NE || EpiReader (Trischler et al. 2016)</td> <td>75.30</td> <td>69.70</td> </tr> <tr> <td>CBT-NE || DIM Reader (Liu et al. 2017)</td> <td>77.10</td> <td>72.20</td> </tr> <tr> <td>CBT-NE || AoA (Cui et al. 2016)</td> <td>77.80</td> <td>72.0</td> </tr> <tr> <td>CBT-NE || AoA + Reranker (Cui et al. 2016)</td> <td>79.60</td> <td>74.0</td> </tr> <tr> <td>CBT-NE || AttSum</td> <td>74.35</td> <td>69.96</td> </tr> <tr> <td>CBT-NE || AttSum + L1</td> <td>76.20</td> <td>72.16</td> </tr> <tr> <td>CBT-NE || AttSum + L2</td> <td>76.80</td> <td>72.60</td> </tr> <tr> <td>CBT-NE || AttSum-Feat</td> <td>77.80</td> <td>72.36</td> </tr> <tr> <td>CBT-NE || AttSum-Feat + L1</td> <td>78.40</td> <td>74.36</td> </tr> <tr> <td>CBT-NE || AttSum-Feat + L2</td> <td>79.40</td> <td>72.40</td> </tr> </tbody></table>
Table 2
table_2
D18-1130
4
emnlp2018
Results and Discussion. Table 2 shows the full results of our best models on the LAMBADA and CBT-NE datasets, and compares them to recent, best-performing results in the literature. For both tasks the inclusion of either entity features or multi-task objectives leads to large statistically significant increases in validation and test score, according to the McNemar test (α = 0.05) with continuity correction (Dietterich, 1998). Without features, AttSum + L2 achieves the best test results, whereas with features AttSum-Feat + L1 performs best on CBT-NE. The results on LAMBADA indicate that entity tracking is a very important overlooked aspect of the task. Interestingly, with features included, AttSum-Feat + L2 appears to hurt test performance on LAMBADA and leaves CBT-NE performance essentially unchanged, amounting to a negative result for L2. On the other hand, the effect of AttSum-Feat + L1 is pronounced on CBT-NE, and while our simple models do not increase the state-of-the-art test performance on CBT-NE, they outperform “attention-over-attention” in addition to reranking (Cui et al. 2016), and are outperformed only by architectures supporting “multiple-hop” inference over the document (Dhingra et al. 2016). Our best model on CBT-NE test set, AttSum-Feat + L1, is very close to the current state-of-the-art result. On the validation sets for both LAMBADA and CBT-NE, the improvements from adding features to AttSum + Li are statistically significant (for full results refer to our supplementary material). On LAMBADA, the L1 multi-tasked model is a 3.5-point increase on the state of the art.
[2, 1, 2, 1, 1, 1, 1, 1, 1, 1]
['Results and Discussion.', 'Table 2 shows the full results of our best models on the LAMBADA and CBT-NE datasets, and compares them to recent, best-performing results in the literature.', 'For both tasks the inclusion of either entity features or multi-task objectives leads to large statistically significant increases in validation and test score, according to the McNemar test (α = 0.05) with continuity correction (Dietterich, 1998).', 'Without features, AttSum + L2 achieves the best test results, whereas with features AttSum-Feat + L1 performs best on CBT-NE.', 'The results on LAMBADA indicate that entity tracking is a very important overlooked aspect of the task.', 'Interestingly, with features included, AttSum-Feat + L2 appears to hurt test performance on LAMBADA and leaves CBT-NE performance essentially unchanged, amounting to a negative result for L2.', 'On the other hand, the effect of AttSum-Feat + L1 is pronounced on CBT-NE, and while our simple models do not increase the state-of-the-art test performance on CBT-NE, they outperform “attention-over-attention” in addition to reranking (Cui et al. 2016), and are outperformed only by architectures supporting “multiple-hop” inference over the document (Dhingra et al. 2016).', 'Our best model on CBT-NE test set, AttSum-Feat + L1, is very close to the current state-of-the-art result.', 'On the validation sets for both LAMBADA and CBT-NE, the improvements from adding features to AttSum + Li are statistically significant (for full results refer to our supplementary material).', 'On LAMBADA, the L1 multi-tasked model is a 3.5-point increase on the state of the art.']
[None, ['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2', 'LAMBADA', 'CBT-NE', 'GA Reader (Chu et al. 2017)', 'MAGE (48) (Dhingra et al. 2017)', 'MAGE (64) (Dhingra et al. 2017)', 'GA + C-GRU (Dhingra et al. 2018)', 'EpiReader (Trischler et al. 2016)', 'DIM Reader (Liu et al. 2017)', 'AoA (Cui et al. 2016)', 'AoA + Reranker (Cui et al. 2016)'], None, ['AttSum + L2', 'Test', 'AttSum-Feat + L1', 'CBT-NE'], ['LAMBADA'], ['AttSum-Feat + L2', 'Test', 'LAMBADA', 'CBT-NE'], ['AttSum-Feat + L1', 'CBT-NE', 'AttSum', 'Test', 'GA Reader (Dhingra et al. 2016)', 'AoA + Reranker (Cui et al. 2016)'], ['AttSum-Feat + L1', 'CBT-NE', 'Test', 'GA Reader (Dhingra et al. 2016)'], ['AttSum-Feat + L1', 'AttSum-Feat + L2', 'Val', 'LAMBADA', 'CBT-NE'], ['AttSum + L1', 'AttSum-Feat + L1', 'GA Reader (Chu et al. 2017)', 'MAGE (48) (Dhingra et al. 2017)', 'MAGE (64) (Dhingra et al. 2017)', 'GA + C-GRU (Dhingra et al. 2018)']]
1
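The caption and description above cite the McNemar test with continuity correction. The sketch below implements the standard corrected chi-square statistic, assuming scipy is available; the disagreement counts b and c are placeholders.

```python
from scipy.stats import chi2

def mcnemar_corrected(b, c):
    """McNemar test with continuity correction.

    b: examples system 1 got right and system 2 got wrong.
    c: examples system 1 got wrong and system 2 got right.
    """
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p_value = chi2.sf(stat, df=1)   # survival function of chi-square, 1 dof
    return stat, p_value

# Placeholder counts of disagreements between two systems:
print(mcnemar_corrected(b=120, c=85))
```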
D18-1130table_3
Ablation results on validation sets, see text for definitions of the numeric columns and models.
2
[['LAMBADA', 'AttSum'], ['LAMBADA', 'AttSum + L1'], ['LAMBADA', 'AttSum + L2'], ['LAMBADA', 'AttSum-Feat'], ['LAMBADA', 'AttSum-Feat + L1'], ['LAMBADA', 'AttSum-Feat + L2'], ['CBT-NE', 'AttSum'], ['CBT-NE', 'AttSum + L1'], ['CBT-NE', 'AttSum + L2'], ['CBT-NE', 'AttSum-Feat'], ['CBT-NE', 'AttSum-Feat + L1'], ['CBT-NE', 'AttSum-Feat + L2']]
1
[['All'], ['Entity'], ['Speaker'], ['Quote']]
[['56.03', '75.17', '74.81', '73.31'], ['58.35', '78.51', '78.38', '79.42'], ['58.08', '78.17', '77.96', '76.76'], ['59.62', '79.40', '80.34', '79.68'], ['60.22', '82.00', '82.98', '81.67'], ['60.14', '82.06', '83.06', '82.60'], ['74.35', '76.28', '75.08', '74.96'], ['76.20', '78.03', '76.98', '77.33'], ['76.80', '77.45', '76.27', '76.48'], ['77.80', '80.58', '79.84', '79.61'], ['78.40', '80.44', '79.68', '79.78'], ['79.40', '82.41', '81.51', '81.39']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>All</th> <th>Entity</th> <th>Speaker</th> <th>Quote</th> </tr> </thead> <tbody> <tr> <td>LAMBADA || AttSum</td> <td>56.03</td> <td>75.17</td> <td>74.81</td> <td>73.31</td> </tr> <tr> <td>LAMBADA || AttSum + L1</td> <td>58.35</td> <td>78.51</td> <td>78.38</td> <td>79.42</td> </tr> <tr> <td>LAMBADA || AttSum + L2</td> <td>58.08</td> <td>78.17</td> <td>77.96</td> <td>76.76</td> </tr> <tr> <td>LAMBADA || AttSum-Feat</td> <td>59.62</td> <td>79.40</td> <td>80.34</td> <td>79.68</td> </tr> <tr> <td>LAMBADA || AttSum-Feat + L1</td> <td>60.22</td> <td>82.00</td> <td>82.98</td> <td>81.67</td> </tr> <tr> <td>LAMBADA || AttSum-Feat + L2</td> <td>60.14</td> <td>82.06</td> <td>83.06</td> <td>82.60</td> </tr> <tr> <td>CBT-NE || AttSum</td> <td>74.35</td> <td>76.28</td> <td>75.08</td> <td>74.96</td> </tr> <tr> <td>CBT-NE || AttSum + L1</td> <td>76.20</td> <td>78.03</td> <td>76.98</td> <td>77.33</td> </tr> <tr> <td>CBT-NE || AttSum + L2</td> <td>76.80</td> <td>77.45</td> <td>76.27</td> <td>76.48</td> </tr> <tr> <td>CBT-NE || AttSum-Feat</td> <td>77.80</td> <td>80.58</td> <td>79.84</td> <td>79.61</td> </tr> <tr> <td>CBT-NE || AttSum-Feat + L1</td> <td>78.40</td> <td>80.44</td> <td>79.68</td> <td>79.78</td> </tr> <tr> <td>CBT-NE || AttSum-Feat + L2</td> <td>79.40</td> <td>82.41</td> <td>81.51</td> <td>81.39</td> </tr> </tbody></table>
Table 3
table_3
D18-1130
5
emnlp2018
Table 3 considers the performance of the different models based on a segmentation of the data. Here we consider examples where:. (1) Entity if the answer is a named entity;. (2) Speaker if the answer is a named entity and the speaker of quote;. (3) Quote if the answer is found within a quoted speech. Note that Speaker and Quote categories, while mutually exclusive, are subsets of the overall Entity category. We see that both the additional features and multi-task objectives independently result in a clear improvement in all categories, but that the gains are particularly pronounced for named entities and specifically for Speaker and Quote examples. Here we see sizable increases in performance, particularly in the Speaker category. We see larger increases in the more dialog heavy LAMBADA task.
[1, 2, 2, 2, 2, 2, 1, 1, 1]
['Table 3 considers the performance of the different models based on a segmentation of the data.', 'Here we consider examples where:.', '(1) Entity if the answer is a named entity;.', '(2) Speaker if the answer is a named entity and the speaker of quote;.', '(3) Quote if the answer is found within a quoted speech.', 'Note that Speaker and Quote categories, while mutually exclusive, are subsets of the overall Entity category.', 'We see that both the additional features and multi-task objectives independently result in a clear improvement in all categories, but that the gains are particularly pronounced for named entities and specifically for Speaker and Quote examples.', 'Here we see sizable increases in performance, particularly in the Speaker category.', 'We see larger increases in the more dialog heavy LAMBADA task.']
[['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2', 'All', 'Entity', 'Speaker', 'Quote', 'LAMBADA', 'CBT-NE'], None, ['Entity'], ['Speaker'], ['Quote'], ['Entity', 'Speaker', 'Quote'], ['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2', 'All', 'Speaker', 'Quote'], ['Speaker'], ['LAMBADA', 'Speaker']]
1
D18-1138table_2
Results of human evaluation.
2
[['Model', 'CAE'], ['Model', 'MAE'], ['Model', 'SMAE']]
1
[['Sentiment'], ['Content'], ['Fluency']]
[['6.55', '4.46', '5.98'], ['6.64', '4.43', '5.36'], ['6.57', '5.98', '6.69']]
column
['Sentiment', 'Content', 'Fluency']
['SMAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentiment</th> <th>Content</th> <th>Fluency</th> </tr> </thead> <tbody> <tr> <td>Model || CAE</td> <td>6.55</td> <td>4.46</td> <td>5.98</td> </tr> <tr> <td>Model || MAE</td> <td>6.64</td> <td>4.43</td> <td>5.36</td> </tr> <tr> <td>Model || SMAE</td> <td>6.57</td> <td>5.98</td> <td>6.69</td> </tr> </tbody></table>
Table 2
table_2
D18-1138
4
emnlp2018
Table 2 shows the evaluation results. Our model has an obvious advantage over the baseline systems in content preservation, and also performs well in other aspects.
[1, 1]
['Table 2 shows the evaluation results.', 'Our model has an obvious advantage over the baseline systems in content preservation, and also performs well in other aspects.']
[None, ['SMAE', 'CAE', 'MAE', 'Content']]
1
D18-1139table_1
Results on the GermEval data, aspect + sentiment task. Micro-averaged F1-score for both aspect category and aspect polarity classification as computed by the GermEval evaluation script. In the bottom part of the table, we report results from (Wojatzki et al., 2017).
1
[['Pipeline LSTM + word2vec'], ['End-to-end LSTM + word2vec'], ['Pipeline CNN + word2vec'], ['End-to-end CNN + word2vec'], ['Pipeline LSTM + glove'], ['End-to-end LSTM + glove'], ['Pipeline CNN + glove'], ['End-to-end CNN + glove'], ['Pipeline LSTM + fasttext'], ['End-to-end LSTM + fasttext'], ['Pipeline CNN + fasttext'], ['End-to-end CNN + fasttext'], ['majority class baseline'], ['GermEval baseline'], ['GermEval best submission']]
1
[['development set'], ['synchronic test set'], ['diachronic test set']]
[['.350', '.297', '.342'], ['.378', '.315', '.383'], ['.350', '.298', '.343'], ['.400', '.319', '.388'], ['.350', '.297', '.342'], ['.378', '.315', '.384'], ['.350', '.298', '.342'], ['.415', '.315', '.390'], ['.350', '.297', '.342'], ['.378', '.315', '.384'], ['.342', '.295', '.342'], ['.511', '.423', '.465'], ['-', '.315', '.384'], ['-', '.322', '.389'], ['-', '.354', '.401']]
column
['f1-score', 'f1-score', 'f1-score']
['Pipeline LSTM + word2vec', 'End-to-end LSTM + word2vec', 'Pipeline CNN + word2vec', 'End-to-end CNN + word2vec', 'Pipeline LSTM + glove', 'End-to-end LSTM + glove', 'Pipeline CNN + glove', 'End-to-end CNN + glove', 'Pipeline LSTM + fasttext', 'End-to-end LSTM + fasttext', 'Pipeline CNN + fasttext', 'End-to-end CNN + fasttext']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>development set</th> <th>synchronic test set</th> <th>diachronic test set</th> </tr> </thead> <tbody> <tr> <td>Pipeline LSTM + word2vec</td> <td>.350</td> <td>.297</td> <td>.342</td> </tr> <tr> <td>End-to-end LSTM + word2vec</td> <td>.378</td> <td>.315</td> <td>.383</td> </tr> <tr> <td>Pipeline CNN + word2vec</td> <td>.350</td> <td>.298</td> <td>.343</td> </tr> <tr> <td>End-to-end CNN + word2vec</td> <td>.400</td> <td>.319</td> <td>.388</td> </tr> <tr> <td>Pipeline LSTM + glove</td> <td>.350</td> <td>.297</td> <td>.342</td> </tr> <tr> <td>End-to-end LSTM + glove</td> <td>.378</td> <td>.315</td> <td>.384</td> </tr> <tr> <td>Pipeline CNN + glove</td> <td>.350</td> <td>.298</td> <td>.342</td> </tr> <tr> <td>End-to-end CNN + glove</td> <td>.415</td> <td>.315</td> <td>.390</td> </tr> <tr> <td>Pipeline LSTM + fasttext</td> <td>.350</td> <td>.297</td> <td>.342</td> </tr> <tr> <td>End-to-end LSTM + fasttext</td> <td>.378</td> <td>.315</td> <td>.384</td> </tr> <tr> <td>Pipeline CNN + fasttext</td> <td>.342</td> <td>.295</td> <td>.342</td> </tr> <tr> <td>End-to-end CNN + fasttext</td> <td>.511</td> <td>.423</td> <td>.465</td> </tr> <tr> <td>majority class baseline</td> <td>-</td> <td>.315</td> <td>.384</td> </tr> <tr> <td>GermEval baseline</td> <td>-</td> <td>.322</td> <td>.389</td> </tr> <tr> <td>GermEval best submission</td> <td>-</td> <td>.354</td> <td>.401</td> </tr> </tbody></table>
Table 1
table_1
D18-1139
5
emnlp2018
Aspect polarity. Table 1 shows the results of our experiments, as well as the results of our strong baselines. Note that the majority class baseline already provides good results. This is due to highly unbalanced data;. the aspect category “Allgemein” (“general”), e.g., constitutes 61.5% of the cases. This imbalance makes the task even more challenging. Over all architectures, we observe a comparable or better performance when using fasttext embeddings instead of word2vec or glove. This backs our hypothesis that subword features are important for processing the morphologically rich German language. Leaving everything else unchanged, we can furthermore see an increase in performance for all settings, when switching from the pipeline to an end-to-end approach. The best performance (marked in bold) is achieved by a combination of CNN and FastText embeddings, which outperforms the highly adapted winning system of the shared task.
[2, 1, 1, 2, 2, 2, 1, 2, 1, 1]
['Aspect polarity.', 'Table 1 shows the results of our experiments, as well as the results of our strong baselines.', 'Note that the majority class baseline already provides good results.', 'This is due to highly unbalanced data;.', 'the aspect category “Allgemein” (“general”), e.g., constitutes 61.5% of the cases.', 'This imbalance makes the task even more challenging.', 'Over all architectures, we observe a comparable or better performance when using fasttext embeddings instead of word2vec or glove.', 'This backs our hypothesis that subword features are important for processing the morphologically rich German language.', 'Leaving everything else unchanged, we can furthermore see an increase in performance for all settings, when switching from the pipeline to an end-to-end approach.', 'The best performance (marked in bold) is achieved by a combination of CNN and FastText embeddings, which outperforms the highly adapted winning system of the shared task.']
[None, ['Pipeline LSTM + word2vec', 'End-to-end LSTM + word2vec', 'Pipeline CNN + word2vec', 'End-to-end CNN + word2vec', 'Pipeline LSTM + glove', 'End-to-end LSTM + glove', 'Pipeline CNN + glove', 'End-to-end CNN + glove', 'Pipeline LSTM + fasttext', 'End-to-end LSTM + fasttext', 'Pipeline CNN + fasttext', 'End-to-end CNN + fasttext', 'majority class baseline', 'GermEval baseline', 'GermEval best submission'], ['majority class baseline'], None, None, None, ['Pipeline LSTM + fasttext', 'End-to-end LSTM + fasttext', 'Pipeline CNN + fasttext', 'End-to-end CNN + fasttext', 'Pipeline LSTM + word2vec', 'End-to-end LSTM + word2vec', 'Pipeline CNN + word2vec', 'End-to-end CNN + word2vec', 'Pipeline LSTM + glove', 'End-to-end LSTM + glove', 'Pipeline CNN + glove', 'End-to-end CNN + glove'], None, ['Pipeline LSTM + word2vec', 'End-to-end LSTM + word2vec', 'Pipeline CNN + word2vec', 'End-to-end CNN + word2vec', 'Pipeline LSTM + glove', 'End-to-end LSTM + glove', 'Pipeline CNN + glove', 'End-to-end CNN + glove', 'Pipeline LSTM + fasttext', 'End-to-end LSTM + fasttext', 'Pipeline CNN + fasttext', 'End-to-end CNN + fasttext'], ['End-to-end CNN + fasttext', 'GermEval baseline']]
1
D18-1139table_2
Micro-averaged F1-score for the prediction of aspect categories only (i.e. without taking polarity into account at all) as computed by the GermEval evaluation script. The results in the bottom part of the table are taken from (Wojatzki et al., 2017).
1
[['End-to-end LSTM + word2vec'], ['End-to-end CNN + word2vec'], ['End-to-end LSTM + glove'], ['End-to-end CNN + glove'], ['End-to-end LSTM + fasttext'], ['End-to-end CNN + fasttext'], ['majority class baseline'], ['GermEval baseline'], ['GermEval best submission']]
1
[['development set'], ['synchronic test set'], ['diachronic test set']]
[['.517', '.442', '.455'], ['.521', '.436', '.470'], ['.517', '.442', '.456'], ['.537', '.457', '.480'], ['.517', '.442', '.456'], ['.623', '.523', '.557'], ['-', '.442', '.456'], ['-', '.481', '.495'], ['-', '.482', '.460']]
column
['f1-score', 'f1-score', 'f1-score']
['End-to-end LSTM + word2vec', 'End-to-end CNN + word2vec', 'End-to-end LSTM + glove', 'End-to-end CNN + glove', 'End-to-end LSTM + fasttext', 'End-to-end CNN + fasttext']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>development set</th> <th>synchronic test set</th> <th>diachronic test set</th> </tr> </thead> <tbody> <tr> <td>End-to-end LSTM + word2vec</td> <td>.517</td> <td>.442</td> <td>.455</td> </tr> <tr> <td>End-to-end CNN + word2vec</td> <td>.521</td> <td>.436</td> <td>.470</td> </tr> <tr> <td>End-to-end LSTM + glove</td> <td>.517</td> <td>.442</td> <td>.456</td> </tr> <tr> <td>End-to-end CNN + glove</td> <td>.537</td> <td>.457</td> <td>.480</td> </tr> <tr> <td>End-to-end LSTM + fasttext</td> <td>.517</td> <td>.442</td> <td>.456</td> </tr> <tr> <td>End-to-end CNN + fasttext</td> <td>.623</td> <td>.523</td> <td>.557</td> </tr> <tr> <td>majority class baseline</td> <td>-</td> <td>.442</td> <td>.456</td> </tr> <tr> <td>GermEval baseline</td> <td>-</td> <td>.481</td> <td>.495</td> </tr> <tr> <td>GermEval best submission</td> <td>-</td> <td>.482</td> <td>.460</td> </tr> </tbody></table>
Table 2
table_2
D18-1139
5
emnlp2018
Aspect category only. Even though our architectures are designed for the task of joint prediction of aspect category and polarity, we can also evaluate them on the detection of aspect categories only. Table 2 shows the results for this task. First of all, we can see that the SVM-based GermEval baseline model has very decent performance as it is practically on par with the best submission for the synchronic and even outperforms the best submission on the diachronic test set. It is therefore well-suited to serve as input to the pipeline LSTM model we compare with in our main task. Comparing our architectures, we see again that fasttext embeddings always lead to equal or better performance. And even though we do not directly optimize our models for this task only, our best model (CNN+fasttext) outperforms all baselines, as well as the GermEval winning system.
[2, 2, 1, 1, 2, 1, 1]
['Aspect category only.', 'Even though our architectures are designed for the task of joint prediction of aspect category and polarity, we can also evaluate them on the detection of aspect categories only.', 'Table 2 shows the results for this task.', 'First of all, we can see that the SVM-based GermEval baseline model has very decent performance as it is practically on par with the best submission for the synchronic and even outperforms the best submission on the diachronic test set.', 'It is therefore well-suited to serve as input to the pipeline LSTM model we compare with in our main task.', 'Comparing our architectures, we see again that fasttext embeddings always lead to equal or better performance.', 'And even though we do not directly optimize our models for this task only, our best model (CNN+fasttext) outperforms all baselines, as well as the GermEval winning system.']
[None, None, None, ['GermEval baseline', 'GermEval best submission', 'synchronic test set', 'diachronic test set'], ['GermEval baseline'], ['End-to-end LSTM + fasttext', 'End-to-end CNN + fasttext'], ['End-to-end CNN + fasttext', 'majority class baseline', 'GermEval baseline', 'GermEval best submission']]
1
D18-1146table_2
Evaluation results of HSLDAs in comparison with sommeliers’ performance.
1
[['HSLDA1 Monovarietal'], ['HSLDA2 Blend'], ['HSLDA3 Balanced'], ['Sommeliers']]
2
[['F1 Scores', 'Training Set'], ['F1 Scores', 'Testing Set']]
[['71.1', '68.4'], ['62.5', '59.1'], ['59.8', '56.4'], ['NA', '62.1']]
column
['F1 Scores', 'F1 Scores']
['HSLDA1 Monovarietal', 'HSLDA2 Blend', 'HSLDA3 Balanced']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 Scores || Training Set</th> <th>F1 Scores || Testing Set</th> </tr> </thead> <tbody> <tr> <td>HSLDA1 Monovarietal</td> <td>71.1</td> <td>68.4</td> </tr> <tr> <td>HSLDA2 Blend</td> <td>62.5</td> <td>59.1</td> </tr> <tr> <td>HSLDA3 Balanced</td> <td>59.8</td> <td>56.4</td> </tr> <tr> <td>Sommeliers</td> <td>NA</td> <td>62.1</td> </tr> </tbody></table>
Table 2
table_2
D18-1146
5
emnlp2018
Table 2 shows the average F1 scores of different models versus sommeliers’ performance. Likewise, the sommeliers’ performance measures represent a conservative(ly higher) estimate since scores lower than 60% in section 1 were removed. We find the HSLDA model, especially of monovarietals, outperforms sommeliers by 6.3%, as measured by F1.
[1, 1, 1]
['Table 2 shows the average F1 scores of different models versus sommeliers’ performance.', 'Likewise, the sommeliers’ performance measures represent a conservative(ly higher) estimate since scores lower than 60% in section 1 were removed.', 'We find the HSLDA model, especially of monovarietals, outperforms sommeliers by 6.3%, as measured by F1.']
[['HSLDA1 Monovarietal', 'HSLDA2 Blend', 'HSLDA3 Balanced', 'Sommeliers', 'F1 Scores'], ['Sommeliers'], ['HSLDA1 Monovarietal', 'Sommeliers', 'Testing Set']]
1
D18-1147table_3
Emotion detection results using 10-fold cross validation. The numbers are percentages.
3
[['Method', 'Joy', 'ConvLexLSTM'], ['Method', 'Joy', 'ConvLSTM'], ['Method', 'Joy', 'CNN'], ['Method', 'Joy', 'LSTM'], ['Method', 'Joy', 'Seven-Lexicon'], ['Method', '-', 'C-ConvLSTM'], ['Method', '-', 'SWAT'], ['Method', '-', 'EmoSVM'], ['Method', 'Sad', 'ConvLexLSTM'], ['Method', 'Sad', 'ConvLSTM'], ['Method', 'Sad', 'CNN'], ['Method', 'Sad', 'LSTM'], ['Method', 'Sad', 'Seven-Lexicon'], ['Method', '-', 'C-ConvLSTM'], ['Method', '-', 'SWAT'], ['Method', '-', 'EmoSVM']]
2
[['B-DS', 'Pr'], ['B-DS', 'Re'], ['B-DS', 'F1'], ['L-DS', 'Pr'], ['L-DS', 'Re'], ['L-DS', 'F1']]
[['92.3', '94.3', '93.2', '90.4', '89.3', '89.8'], ['86.6', '88.4', '87.4', '87.0', '83.0', '85.0'], ['85.0', '84.0', '84.5', '82.2', '82.8', '82.5'], ['86.0', '86.6', '86.3', '85.0', '83.0', '84.0'], ['63.4', '87.3', '73.45', '60.0', '85.1', '70.37'], ['86.2', '87.0', '86.6', '85.0', '82.0', '83.47'], ['66.0', '68.0', '67.0', '65.5', '66.7', '66.0'], ['81.0', '82.0', '81.5', '82.0', '80.0', '81.0'], ['93.7', '91.1', '92.3', '88.0', '90.9', '89.4'], ['89.0', '87.8', '88.4', '81.0', '87.5', '84.0'], ['83.2', '83.6', '83.4', '81.7', '80.5', '81.0'], ['87.4', '85.8', '86.6', '83.2', '83.6', '83.4'], ['61.0', '84.9', '70.99', '61.0', '83.3', '70.42'], ['85.0', '83.6', '84.3', '83.7', '82.1', '82.9'], ['65.0', '66.0', '65.5', '64.0', '65.0', '64.5'], ['80.5', '81.7', '81.0', '79.0', '78.0', '78.5']]
column
['Pr', 'Re', 'F1', 'Pr', 'Re', 'F1']
['ConvLexLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B-DS || Pr</th> <th>B-DS || Re</th> <th>B-DS || F1</th> <th>L-DS || Pr</th> <th>L-DS || Re</th> <th>L-DS || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Joy || ConvLexLSTM</td> <td>92.3</td> <td>94.3</td> <td>93.2</td> <td>90.4</td> <td>89.3</td> <td>89.8</td> </tr> <tr> <td>Method || Joy || ConvLSTM</td> <td>86.6</td> <td>88.4</td> <td>87.4</td> <td>87.0</td> <td>83.0</td> <td>85.0</td> </tr> <tr> <td>Method || Joy || CNN</td> <td>85.0</td> <td>84.0</td> <td>84.5</td> <td>82.2</td> <td>82.8</td> <td>82.5</td> </tr> <tr> <td>Method || Joy || LSTM</td> <td>86.0</td> <td>86.6</td> <td>86.3</td> <td>85.0</td> <td>83.0</td> <td>84.0</td> </tr> <tr> <td>Method || Joy || Seven-Lexicon</td> <td>63.4</td> <td>87.3</td> <td>73.45</td> <td>60.0</td> <td>85.1</td> <td>70.37</td> </tr> <tr> <td>Method || - || C-ConvLSTM</td> <td>86.2</td> <td>87.0</td> <td>86.6</td> <td>85.0</td> <td>82.0</td> <td>83.47</td> </tr> <tr> <td>Method || - || SWAT</td> <td>66.0</td> <td>68.0</td> <td>67.0</td> <td>65.5</td> <td>66.7</td> <td>66.0</td> </tr> <tr> <td>Method || - || EmoSVM</td> <td>81.0</td> <td>82.0</td> <td>81.5</td> <td>82.0</td> <td>80.0</td> <td>81.0</td> </tr> <tr> <td>Method || Sad || ConvLexLSTM</td> <td>93.7</td> <td>91.1</td> <td>92.3</td> <td>88.0</td> <td>90.9</td> <td>89.4</td> </tr> <tr> <td>Method || Sad || ConvLSTM</td> <td>89.0</td> <td>87.8</td> <td>88.4</td> <td>81.0</td> <td>87.5</td> <td>84.0</td> </tr> <tr> <td>Method || Sad || CNN</td> <td>83.2</td> <td>83.6</td> <td>83.4</td> <td>81.7</td> <td>80.5</td> <td>81.0</td> </tr> <tr> <td>Method || Sad || LSTM</td> <td>87.4</td> <td>85.8</td> <td>86.6</td> <td>83.2</td> <td>83.6</td> <td>83.4</td> </tr> <tr> <td>Method || Sad || Seven-Lexicon</td> <td>61.0</td> <td>84.9</td> <td>70.99</td> <td>61.0</td> <td>83.3</td> <td>70.42</td> </tr> <tr> <td>Method || - || C-ConvLSTM</td> <td>85.0</td> <td>83.6</td> <td>84.3</td> <td>83.7</td> <td>82.1</td> <td>82.9</td> </tr> <tr> <td>Method || - || SWAT</td> <td>65.0</td> <td>66.0</td> <td>65.5</td> <td>64.0</td> <td>65.0</td> <td>64.5</td> </tr> <tr> <td>Method || - || EmoSVM</td> <td>80.5</td> <td>81.7</td> <td>81.0</td> <td>79.0</td> <td>78.0</td> <td>78.5</td> </tr> </tbody></table>
Table 3
table_3
D18-1147
4
emnlp2018
Table 3 shows the results of this comparison. As can be seen from the table, ConvLexLSTM achieves the best results consistently throughout all experiments in terms of all compared measures. This ablation experiment confirms our intuition that all components are contributing to the final emotion detection. For example, removing the seven lexicon features from ConvLexLSTM, which yields ConvLSTM, results in a drop in F1-score by 5.8% on joy in B-DS, and by 3.9% on sadness in B-DS. Still, ConvLSTM is the second performing model in terms of F1-score. These results show that our model can be successfully applied in a health domain even in the absence of health lexicons, which are often expensive to obtain. Not surprisingly, the SVM with the seven-lexicon based features (denoted as Seven-Lexicon) performs the worst among the compared models, suggesting that capturing the semantic information from text via deep neural networks improves emotion detection. Second, we compare ConvLexLSTM with three baselines: C-ConvLSTM (i.e., a character-level CNN-LSTM) (Kim et al., 2016), SWAT (Katz et al., 2007) (i.e., an emotion detection model from SemEval-2007), and EmoSVM (i.e., an SVM with a set of handcrafted features: unigrams, bigrams, POS tags, the word-emotions association lexicon by Mohammad (2012), the WordNet-Affect lexicon by Strapparava et al. (2004), and the output of the Stanford sentiment tool by Socher et al. (2013)). Table 3 shows the results of this comparison as well. As can be seen, ConvLexLSTM outperforms all three baselines on both datasets, and more importantly, the character-level CNN-LSTM by Kim et al. (2016) (i.e., the C-ConvLSTM model). This result confirms our belief that applying word embedding vectors, which are trained directly on data from OHCs yields improvement in performance over character-level models. It is worth mentioning that all deep neural networks, ConvLexLSTM, ConvLSTM, CNN, LSTM, and C-ConvLSTM, that capture high-level semantic features perform better than the traditional models on emotion detection. The lexicon-based features act as a complement (for the high-level semantic features) by looking into exact words in the text to generate appropriate features in ConvLexLSTM for emotion detection. With a paired T-test, the improvements of ConvLexLSTM over the compared models for F1-score are statistically significant for p-values < 0.05.
[1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 2]
['Table 3 shows the results of this comparison.', 'As can be seen from the table, ConvLexLSTM achieves the best results consistently throughout all experiments in terms of all compared measures.', 'This ablation experiment confirms our intuition that all components are contributing to the final emotion detection.', 'For example, removing the seven lexicon features from ConvLexLSTM, which yields ConvLSTM, results in a drop in F1-score by 5.8% on joy in B-DS, and by 3.9% on sadness in B-DS.', 'Still, ConvLSTM is the second performing model in terms of F1-score.', 'These results show that our model can be successfully applied in a health domain even in the absence of health lexicons, which are often expensive to obtain.', 'Not surprisingly, the SVM with the seven-lexicon based features (denoted as Seven-Lexicon) performs the worst among the compared models, suggesting that capturing the semantic information from text via deep neural networks improves emotion detection.', 'Second, we compare ConvLexLSTM with three baselines: C-ConvLSTM (i.e., a character-level CNN-LSTM) (Kim et al., 2016), SWAT (Katz et al., 2007) (i.e., an emotion detection model from SemEval-2007), and EmoSVM (i.e., an SVM with a set of handcrafted features: unigrams, bigrams, POS tags, the word-emotions association lexicon by Mohammad (2012), the WordNet-Affect lexicon by Strapparava et al. (2004), and the output of the Stanford sentiment tool by Socher et al. (2013)).', 'Table 3 shows the results of this comparison as well.', 'As can be seen, ConvLexLSTM outperforms all three baselines on both datasets, and more importantly, the character-level CNN-LSTM by Kim et al. (2016) (i.e., the C-ConvLSTM model).', 'This result confirms our belief that applying word embedding vectors, which are trained directly on data from OHCs yields improvement in performance over character-level models.', 'It is worth mentioning that all deep neural networks, ConvLexLSTM, ConvLSTM, CNN, LSTM, and C-ConvLSTM, that capture high-level semantic features perform better than the traditional models on emotion detection.', 'The lexicon-based features act as a complement (for the high-level semantic features) by looking into exact words in the text to generate appropriate features in ConvLexLSTM for emotion detection.', 'With a paired T-test, the improvements of ConvLexLSTM over the compared models for F1-score are statistically significant for p-values < 0.05.']
[None, ['ConvLexLSTM', 'B-DS', 'L-DS', 'Pr', 'Re', 'F1'], None, ['ConvLexLSTM', 'ConvLSTM', 'B-DS', 'F1', 'Joy', 'Sad'], ['ConvLSTM', 'F1'], ['ConvLexLSTM'], ['Seven-Lexicon'], ['ConvLexLSTM', 'C-ConvLSTM', 'SWAT', 'EmoSVM'], None, ['C-ConvLSTM', 'SWAT', 'EmoSVM', 'B-DS', 'L-DS'], ['Seven-Lexicon'], ['ConvLexLSTM', 'ConvLSTM', 'CNN', 'LSTM', 'C-ConvLSTM', 'SWAT', 'EmoSVM'], ['ConvLexLSTM'], ['ConvLexLSTM', 'F1']]
1
D18-1148table_4
Prediction results (Pearson r, using unigrams + topics) using full 10% data vs. users with 30+ tweets. The number of tweets used in each task is listed to highlight the fact that the “User to County” tasks use less tweets than the “all” tasks.
1
[['User to County'], ['Nuser−tweets'], ['Tweet to County (all)'], ['County (all)'], ['Nall−tweets']]
1
[['Income'], ['Educat.'], ['Life Satis.'], ['Heart Disease']]
[['.82', '.88', '.47', '.75'], ['1.350B', '1.350B', '1.356B', '1.360B'], ['.72', '.81', '.36', '.71'], ['.73', '.82', '.31', '.72'], ['1.621B', '1.621B', '1.628B', '1.634B']]
column
['r', 'r', 'r', 'r']
['User to County', 'Tweet to County (all)', 'County (all)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Income</th> <th>Educat.</th> <th>Life Satis.</th> <th>Heart Disease</th> </tr> </thead> <tbody> <tr> <td>User to County</td> <td>.82</td> <td>.88</td> <td>.47</td> <td>.75</td> </tr> <tr> <td>Nuser−tweets</td> <td>1.350B</td> <td>1.350B</td> <td>1.356B</td> <td>1.360B</td> </tr> <tr> <td>Tweet to County (all)</td> <td>.72</td> <td>.81</td> <td>.36</td> <td>.71</td> </tr> <tr> <td>County (all)</td> <td>.73</td> <td>.82</td> <td>.31</td> <td>.72</td> </tr> <tr> <td>Nall−tweets</td> <td>1.621B</td> <td>1.621B</td> <td>1.628B</td> <td>1.634B</td> </tr> </tbody></table>
Table 4
table_4
D18-1148
4
emnlp2018
In Table 4 we remove the 30+ tweet requirement from the “Tweet to County” and “County” methods and compare against the “User to County” method (with the 30+ tweet requirement). Again we see the “User to County” method outperforms all others in spite of the fact that the “User to County” approach uses less data than both “all” approaches, which contains 108 million more tweets.
[1, 1]
['In Table 4 we remove the 30+ tweet requirement from the “Tweet to County” and “County” methods and compare against the “User to County” method (with the 30+ tweet requirement).', 'Again we see the “User to County” method outperforms all others in spite of the fact that the “User to County” approach uses less data than both “all” approaches, which contains 108 million more tweets.']
[['User to County', 'Tweet to County (all)', 'County (all)'], ['User to County', 'Tweet to County (all)', 'County (all)']]
1
D18-1148table_5
1% sample prediction results (Pearson r) using topics + unigrams. ∗ same counties as the 10% prediction task.
1
[['Tweet to County'], ['County'], ['User to County'], ['Nuser−tweets'], ['County (all)'], ['Nall−tweets'], ['Ncounties']]
1
[['Income'], ['Income'], ['Educat.'], ['Educat.'], ['Life Satis.'], ['Life Satis.'], ['Heart Disease'], ['Heart Disease']]
[['.71', '.62', '.77', '.71', '.35', '.32', '.64', '.63'], ['.70', '.60', '.76', '.67', '.32', '.28', '.62', '.62'], ['.76', '.70', '.79', '.74', '.39', '.28', '.66', '.66'], ['127M', '130M', '127M', '130M', '127M', '130M', '127M', '131M'], ['.75', '.67', '.83', '.77', '.37', '.34', '.68', '.66'], ['191M', '195M', '191M', '195M', '191M', '197M', '191M', '198M'], ['949', '1750', '949', '1750', '954', '1952', '960', '2041']]
column
['r', 'r', 'r', 'r', 'r', 'r', 'r', 'r']
['Nuser−tweets']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Income</th> <th>Income</th> <th>Educat.</th> <th>Educat.</th> <th>Life Satis.</th> <th>Life Satis.</th> <th>Heart Disease</th> <th>Heart Disease</th> </tr> </thead> <tbody> <tr> <td>Tweet to County</td> <td>.71</td> <td>.62</td> <td>.77</td> <td>.71</td> <td>.35</td> <td>.32</td> <td>.64</td> <td>.63</td> </tr> <tr> <td>County</td> <td>.70</td> <td>.60</td> <td>.76</td> <td>.67</td> <td>.32</td> <td>.28</td> <td>.62</td> <td>.62</td> </tr> <tr> <td>User to County</td> <td>.76</td> <td>.70</td> <td>.79</td> <td>.74</td> <td>.39</td> <td>.28</td> <td>.66</td> <td>.66</td> </tr> <tr> <td>Nuser−tweets</td> <td>127M</td> <td>130M</td> <td>127M</td> <td>130M</td> <td>127M</td> <td>130M</td> <td>127M</td> <td>131M</td> </tr> <tr> <td>County (all)</td> <td>.75</td> <td>.67</td> <td>.83</td> <td>.77</td> <td>.37</td> <td>.34</td> <td>.68</td> <td>.66</td> </tr> <tr> <td>Nall−tweets</td> <td>191M</td> <td>195M</td> <td>191M</td> <td>195M</td> <td>191M</td> <td>197M</td> <td>191M</td> <td>198M</td> </tr> <tr> <td>Ncounties</td> <td>949</td> <td>1750</td> <td>949</td> <td>1750</td> <td>954</td> <td>1952</td> <td>960</td> <td>2041</td> </tr> </tbody></table>
Table 5
table_5
D18-1148
5
emnlp2018
1% data. In Table 5 we repeat the above experiment on a 1% Twitter sample. Here we see that the “User to County” method outperforms both the “Tweet to County” and “County” methods (with all three tasks using the same number of tweets). When we compare the “User to County” and “County (all)” methods we see the “User to county” outperforming on two out of four tasks (Income and Life Satisfaction). Again, we note that the “User to County” is using less data than the “County (all)”. While, across the board, the performance increase is not as substantial as in the 10% results, we see comparable performance between “User to County” and “County (all)” methods despite the difference in the number of tweets.
[2, 1, 1, 1, 1, 1]
['1% data.', 'In Table 5 we repeat the above experiment on a 1% Twitter sample.', 'Here we see that the “User to County” method outperforms both the “Tweet to County” and “County” methods (with all three tasks using the same number of tweets).', 'When we compare the “User to County” and “County (all)” methods we see the “User to county” outperforming on two out of four tasks (Income and Life Satisfaction).', 'Again, we note that the “User to County” is using less data than the “County (all)”.', 'While, across the board, the performance increase is not as substantial as in the 10% results, we see comparable performance between “User to County” and “County (all)” methods despite the difference in the number of tweets.']
[None, None, ['User to County', 'Tweet to County', 'County'], ['User to County', 'County (all)', 'Income', 'Life Satis.'], ['User to County', 'County (all)'], ['User to County', 'County (all)']]
1
D18-1152table_3
Language modeling perplexity on PTB test set (lower is better). LSTM numbers are taken from Lei et al. (2017b). ℓ denotes the number of layers. Bold font indicates best performance.
6
[['Model', 'LSTM', 'l', '2', '# Params.', '24M'], ['Model', 'LSTM', 'l', '3', '# Params.', '24M'], ['Model', 'RRNN(B)', 'l', '2', '# Params.', '10M'], ['Model', 'RRNN(B)m+', 'l', '2', '# Params.', '10M'], ['Model', 'RRNN(C)', 'l', '2', '# Params.', '10M'], ['Model', 'RRNN(F)', 'l', '2', '# Params.', '10M'], ['Model', 'RRNN(B)', 'l', '3', '# Params.', '24M'], ['Model', 'RRNN(B)m+', 'l', '3', '# Params.', '24M'], ['Model', 'RRNN(C)', 'l', '3', '# Params.', '24M'], ['Model', 'RRNN(F)', 'l', '3', '# Params.', '24M']]
1
[['Dev.'], ['Test']]
[['73.3', '71.4'], ['78.8', '76.2'], ['73.1', '69.2'], ['75.1', '71.7'], ['72.5', '69.5'], ['69.5', '66.3'], ['68.7', '65.2'], ['70.8', '66.9'], ['70.0', '67.0'], ['66.0', '63.1']]
column
['perplexity', 'perplexity']
['RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'RRNN(F)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev.</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM || l || 2 || # Params. || 24M</td> <td>73.3</td> <td>71.4</td> </tr> <tr> <td>Model || LSTM || l || 3 || # Params. || 24M</td> <td>78.8</td> <td>76.2</td> </tr> <tr> <td>Model || RRNN(B) || l || 2 || # Params. || 10M</td> <td>73.1</td> <td>69.2</td> </tr> <tr> <td>Model || RRNN(B)m+ || l || 2 || # Params. || 10M</td> <td>75.1</td> <td>71.7</td> </tr> <tr> <td>Model || RRNN(C) || l || 2 || # Params. || 10M</td> <td>72.5</td> <td>69.5</td> </tr> <tr> <td>Model || RRNN(F) || l || 2 || # Params. || 10M</td> <td>69.5</td> <td>66.3</td> </tr> <tr> <td>Model || RRNN(B) || l || 3 || # Params. || 24M</td> <td>68.7</td> <td>65.2</td> </tr> <tr> <td>Model || RRNN(B)m+ || l || 3 || # Params. || 24M</td> <td>70.8</td> <td>66.9</td> </tr> <tr> <td>Model || RRNN(C) || l || 3 || # Params. || 24M</td> <td>70.0</td> <td>67.0</td> </tr> <tr> <td>Model || RRNN(F) || l || 3 || # Params. || 24M</td> <td>66.0</td> <td>63.1</td> </tr> </tbody></table>
Table 3
table_3
D18-1152
8
emnlp2018
Results. Following Collins et al. (2017) and Melis et al. (2018), we compare models controlling for parameter budget. Table 3 summarizes language modeling perplexities on PTB test set. The middle block compares all models with two layers and 10M trainable parameters. RRNN(B) and RRNN(C) achieve roughly the same performance; interpolating both unigram and bigram features, RRNN(F) outperforms others by more than 2.9 test perplexity. For the three-layer and 24M setting (the bottom block), we observe similar trends, except that RRNN(C) slightly underperforms RRNN(B). Here RRNN(F) outperforms others by more than 2.1 perplexity. Using a max-plus semiring, RRNN(B)m+ underperforms RRNN(B) under both settings. Possible reasons could be the suboptimal design choice for computing input representations in the former (§5.2). Finally, most compared models outperform the LSTM baselines, whose numbers are taken from Lei et al. (2017b).
[2, 2, 1, 1, 1, 1, 1, 1, 2, 1]
['Results.', 'Following Collins et al. (2017) and Melis et al. (2018), we compare models controlling for parameter budget.', 'Table 3 summarizes language modeling perplexities on PTB test set.', 'The middle block compares all models with two layers and 10M trainable parameters.', 'RRNN(B) and RRNN(C) achieve roughly the same performance; interpolating both unigram and bigram features, RRNN(F) outperforms others by more than 2.9 test perplexity.', 'For the three-layer and 24M setting (the bottom block), we observe similar trends, except that RRNN(C) slightly underperforms RRNN(B).', 'Here RRNN(F) outperforms others by more than 2.1 perplexity.', 'Using a max-plus semiring, RRNN(B)m+ underperforms RRNN(B) under both settings.', 'Possible reasons could be the suboptimal design choice for computing input representations in the former (§5.2).', 'Finally, most compared models outperform the LSTM baselines, whose numbers are taken from Lei et al. (2017b).']
[None, ['# Params.'], None, ['RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'RRNN(F)', 'l', '2', '# Params.', '10M'], ['RRNN(B)', 'RRNN(C)', 'RRNN(F)', 'RRNN(B)m+', 'Test', 'l', '2', '# Params.', '10M'], ['RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'RRNN(F)', 'l', '3', '# Params.', '24M'], ['RRNN(F)', 'RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'l', '3', '# Params.', '24M'], ['RRNN(B)m+', 'RRNN(B)', 'l', '2', '3', '# Params.', '10M', '24M'], None, ['RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'RRNN(F)', 'LSTM']]
1
D18-1156table_1
Overall performance comparing to the state-of-the-art methods with golden-standard entities.
2
[['Method', 'Cross-Event'], ['Method', 'JointBeam'], ['Method', 'DMCNN'], ['Method', 'PSL'], ['Method', 'JRNN'], ['Method', 'dbRNN'], ['Method', 'JMEE']]
2
[['Trigger Identification (%)', 'P'], ['Trigger Identification (%)', 'R'], ['Trigger Identification (%)', 'F1'], ['Trigger Classification (%)', 'P'], ['Trigger Classification (%)', 'R'], ['Trigger Classification (%)', 'F1'], ['Argument Identification (%)', 'P'], ['Argument Identification (%)', 'R'], ['Argument Identification (%)', 'F1'], ['Argument Role (%)', 'P'], ['Argument Role (%)', 'R'], ['Argument Role (%)', 'F1']]
[['N/A', 'N/A', 'N/A', '68.7', '68.9', '68.8', '50.9', '49.7', '50.3', '45.1', '44.1', '44.6'], ['76.9', '65.0', '70.4', '73.7', '62.3', '67.5', '69.8', '47.9', '56.8', '64.7', '44.4', '52.7'], ['80.4', '67.7', '73.5', '75.6', '63.6', '69.1', '68.8', '51.9', '59.1', '62.2', '46.9', '53.5'], ['N/A', 'N/A', 'N/A', '75.3', '64.4', '69.4', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A'], ['68.5', '75.7', '71.9', '66.0', '73.0', '69.3', '61.4', '64.2', '62.8', '54.2', '56.7', '55.4'], ['N/A', 'N/A', 'N/A', '74.1', '69.8', '71.9', '71.3', '64.5', '67.7', '66.2', '52.8', '58.7'], ['80.2', '72.1', '75.9', '76.3', '71.3', '73.7', '71.4', '65.6', '68.4', '66.8', '54.9', '60.3']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['JMEE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger Identification (%) || P</th> <th>Trigger Identification (%) || R</th> <th>Trigger Identification (%) || F1</th> <th>Trigger Classification (%) || P</th> <th>Trigger Classification (%) || R</th> <th>Trigger Classification (%) || F1</th> <th>Argument Identification (%) || P</th> <th>Argument Identification (%) || R</th> <th>Argument Identification (%) || F1</th> <th>Argument Role (%) || P</th> <th>Argument Role (%) || R</th> <th>Argument Role (%) || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Cross-Event</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>68.7</td> <td>68.9</td> <td>68.8</td> <td>50.9</td> <td>49.7</td> <td>50.3</td> <td>45.1</td> <td>44.1</td> <td>44.6</td> </tr> <tr> <td>Method || JointBeam</td> <td>76.9</td> <td>65.0</td> <td>70.4</td> <td>73.7</td> <td>62.3</td> <td>67.5</td> <td>69.8</td> <td>47.9</td> <td>56.8</td> <td>64.7</td> <td>44.4</td> <td>52.7</td> </tr> <tr> <td>Method || DMCNN</td> <td>80.4</td> <td>67.7</td> <td>73.5</td> <td>75.6</td> <td>63.6</td> <td>69.1</td> <td>68.8</td> <td>51.9</td> <td>59.1</td> <td>62.2</td> <td>46.9</td> <td>53.5</td> </tr> <tr> <td>Method || PSL</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>75.3</td> <td>64.4</td> <td>69.4</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Method || JRNN</td> <td>68.5</td> <td>75.7</td> <td>71.9</td> <td>66.0</td> <td>73.0</td> <td>69.3</td> <td>61.4</td> <td>64.2</td> <td>62.8</td> <td>54.2</td> <td>56.7</td> <td>55.4</td> </tr> <tr> <td>Method || dbRNN</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>74.1</td> <td>69.8</td> <td>71.9</td> <td>71.3</td> <td>64.5</td> <td>67.7</td> <td>66.2</td> <td>52.8</td> <td>58.7</td> </tr> <tr> <td>Method || JMEE</td> <td>80.2</td> <td>72.1</td> <td>75.9</td> <td>76.3</td> <td>71.3</td> <td>73.7</td> <td>71.4</td> <td>65.6</td> <td>68.4</td> <td>66.8</td> <td>54.9</td> <td>60.3</td> </tr> </tbody></table>
Table 1
table_1
D18-1156
7
emnlp2018
Table 1 shows the overall performance comparing to the above state-of-the-art methods with golden-standard entities. From the table, we can see that our JMEE framework achieves the best F1 scores for both trigger classification and argumentrelated subtasks among all the compared methods. There is a significant gain with the trigger classification and argument role labeling performances, which is 2% higher over the best-reported models. These results demonstrate the effectivenesses of our method to incorporate with the graph convolution and syntactic shortcut arcs.
[1, 1, 1, 2]
['Table 1 shows the overall performance comparing to the above state-of-the-art methods with golden-standard entities.', 'From the table, we can see that our JMEE framework achieves the best F1 scores for both trigger classification and argumentrelated subtasks among all the compared methods.', 'There is a significant gain with the trigger classification and argument role labeling performances, which is 2% higher over the best-reported models.', 'These results demonstrate the effectivenesses of our method to incorporate with the graph convolution and syntactic shortcut arcs.']
[['JMEE', 'Cross-Event', 'JointBeam', 'DMCNN', 'PSL', 'JRNN', 'dbRNN'], ['JMEE', 'F1', 'Trigger Classification (%)', 'Argument Identification (%)', 'Argument Role (%)', 'Cross-Event', 'JointBeam', 'DMCNN', 'PSL', 'JRNN', 'dbRNN'], ['JMEE', 'Trigger Classification (%)', 'Argument Role (%)', 'dbRNN'], ['JMEE']]
1
D18-1158table_2
Performance of different ED systems. 1/1 means one sentence that only has one event and 1/N means that one sentence has multiple events.
2
[['Method', 'LSTM+Softmax'], ['Method', 'LSTM+CRF'], ['Method', 'LSTM+TLSTM'], ['Method', 'LSTM+HTLSTM'], ['Method', 'LSTM+HTLSTM+Bias']]
1
[['1/1'], ['1/N'], ['all']]
[['74.7', '44.6', '66.8'], ['75.1', '49.5', '68.5'], ['76.8', '51.2', '70.2'], ['77.9', '57.3', '72.4'], ['78.4', '59.5', '73.3']]
column
['accuracy', 'accuracy', 'accuracy']
['LSTM+HTLSTM+Bias']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>1/1</th> <th>1/N</th> <th>all</th> </tr> </thead> <tbody> <tr> <td>Method || LSTM+Softmax</td> <td>74.7</td> <td>44.6</td> <td>66.8</td> </tr> <tr> <td>Method || LSTM+CRF</td> <td>75.1</td> <td>49.5</td> <td>68.5</td> </tr> <tr> <td>Method || LSTM+TLSTM</td> <td>76.8</td> <td>51.2</td> <td>70.2</td> </tr> <tr> <td>Method || LSTM+HTLSTM</td> <td>77.9</td> <td>57.3</td> <td>72.4</td> </tr> <tr> <td>Method || LSTM+HTLSTM+Bias</td> <td>78.4</td> <td>59.5</td> <td>73.3</td> </tr> </tbody></table>
Table 2
table_2
D18-1158
7
emnlp2018
Table 2 shows the results. And we have the following observations:. 1) Compared with LSTM+Softmax, LSTM-based collective ED methods (LSTM+CRF, LSTM+TLSTM, LSTM+HTLSTM, LSTM+HTLSTM+Bias) achieves a better performance. Surprisingly, the LSTM+HTLSTM+Bias yields a 14.9% improvement on the sentence contains multiple events over the LSTM+Softmax. It proves neural tagging schema is effective for ED task especially for the sentences contain multiple events. 2) The LSTM+TLSTM achieve better performances than LSTM+CRF. And the LSTM+HTLSTM achieve better performances than LSTM+TLSTM. The results prove the effectiveness of the TLSTM layer and HTLSTM layer. 3) Compared with LSTM+HTLSTM, the LSTM+HTLSTM+Bias gains a 0.9% improvement on all sentence. It demonstrates the effectiveness of our proposed bias objective function.
[1, 2, 1, 1, 2, 1, 1, 2, 1, 2]
['Table 2 shows the results.', 'And we have the following observations:.', '1) Compared with LSTM+Softmax, LSTM-based collective ED methods (LSTM+CRF, LSTM+TLSTM, LSTM+HTLSTM, LSTM+HTLSTM+Bias) achieves a better performance.', 'Surprisingly, the LSTM+HTLSTM+Bias yields a 14.9% improvement on the sentence contains multiple events over the LSTM+Softmax.', 'It proves neural tagging schema is effective for ED task especially for the sentences contain multiple events.', '2) The LSTM+TLSTM achieve better performances than LSTM+CRF.', 'And the LSTM+HTLSTM achieve better performances than LSTM+TLSTM.', 'The results prove the effectiveness of the TLSTM layer and HTLSTM layer.', '3) Compared with LSTM+HTLSTM, the LSTM+HTLSTM+Bias gains a 0.9% improvement on all sentence.', 'It demonstrates the effectiveness of our proposed bias objective function.']
[None, None, ['LSTM+Softmax', 'LSTM+CRF', 'LSTM+TLSTM', 'LSTM+HTLSTM', 'LSTM+HTLSTM+Bias'], ['LSTM+HTLSTM+Bias', 'LSTM+Softmax', '1/N'], ['LSTM+HTLSTM+Bias'], ['LSTM+TLSTM', 'LSTM+CRF'], ['LSTM+HTLSTM', 'LSTM+TLSTM'], ['LSTM+HTLSTM', 'LSTM+TLSTM'], ['LSTM+HTLSTM', 'LSTM+HTLSTM+Bias', 'all'], ['LSTM+HTLSTM']]
1
D18-1159table_5
Experimental results involving analyzing PPs as valency patterns.
1
[['Baseline'], ['PP MTL'], ['PP MTL + Joint Decoding'], ['Core + PP MTL'], ['Core + PP MTL + Joint Decoding'], ['Core + Func. + PP MTL'], ['Core + Func. + PP MTL + Joint Decoding']]
1
[['UAS'], ['LAS'], ['Core P'], ['Core R'], ['Core F'], ['Func. P'], ['Func. R'], ['Func. F'], ['PP P'], ['PP R'], ['PP F']]
[['87.59', '83.64', '80.87', '81.31', '81.08', '91.99', '92.43', '92.20', '77.29', '77.99', '77.62'], ['87.67', '83.70', '80.61', '81.23', '80.91', '92.03', '92.50', '92.26', '78.30', '78.38', '78.32'], ['87.68', '83.69', '79.93', '81.50', '80.69', '91.92', '92.51', '92.21', '80.59', '77.68', '79.04'], ['87.70', '83.77', '81.62', '81.81', '81.71', '91.93', '92.52', '92.22', '77.93', '78.25', '78.08'], ['87.80', '83.91', '84.18', '81.97', '83.05', '91.68', '92.65', '92.16', '79.71', '78.03', '78.83'], ['87.67', '83.75', '81.35', '81.68', '81.50', '92.18', '92.61', '92.39', '77.99', '78.22', '78.08'], ['87.81', '83.94', '83.88', '81.97', '82.90', '92.78', '92.63', '92.70', '79.54', '78.11', '78.78']]
column
['UAS', 'LAS', 'Core P', 'Core R', 'Core F', 'Func. P', 'Func. R', 'Func. F', 'PP P', 'PP R', 'PP F']
['PP MTL + Joint Decoding', 'Core + PP MTL + Joint Decoding', 'Core + Func. + PP MTL + Joint Decoding']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> <th>Core P</th> <th>Core R</th> <th>Core F</th> <th>Func. P</th> <th>Func. R</th> <th>Func. F</th> <th>PP P</th> <th>PP R</th> <th>PP F</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>87.59</td> <td>83.64</td> <td>80.87</td> <td>81.31</td> <td>81.08</td> <td>91.99</td> <td>92.43</td> <td>92.20</td> <td>77.29</td> <td>77.99</td> <td>77.62</td> </tr> <tr> <td>PP MTL</td> <td>87.67</td> <td>83.70</td> <td>80.61</td> <td>81.23</td> <td>80.91</td> <td>92.03</td> <td>92.50</td> <td>92.26</td> <td>78.30</td> <td>78.38</td> <td>78.32</td> </tr> <tr> <td>PP MTL + Joint Decoding</td> <td>87.68</td> <td>83.69</td> <td>79.93</td> <td>81.50</td> <td>80.69</td> <td>91.92</td> <td>92.51</td> <td>92.21</td> <td>80.59</td> <td>77.68</td> <td>79.04</td> </tr> <tr> <td>Core + PP MTL</td> <td>87.70</td> <td>83.77</td> <td>81.62</td> <td>81.81</td> <td>81.71</td> <td>91.93</td> <td>92.52</td> <td>92.22</td> <td>77.93</td> <td>78.25</td> <td>78.08</td> </tr> <tr> <td>Core + PP MTL + Joint Decoding</td> <td>87.80</td> <td>83.91</td> <td>84.18</td> <td>81.97</td> <td>83.05</td> <td>91.68</td> <td>92.65</td> <td>92.16</td> <td>79.71</td> <td>78.03</td> <td>78.83</td> </tr> <tr> <td>Core + Func. + PP MTL</td> <td>87.67</td> <td>83.75</td> <td>81.35</td> <td>81.68</td> <td>81.50</td> <td>92.18</td> <td>92.61</td> <td>92.39</td> <td>77.99</td> <td>78.22</td> <td>78.08</td> </tr> <tr> <td>Core + Func. + PP MTL + Joint Decoding</td> <td>87.81</td> <td>83.94</td> <td>83.88</td> <td>81.97</td> <td>82.90</td> <td>92.78</td> <td>92.63</td> <td>92.70</td> <td>79.54</td> <td>78.11</td> <td>78.78</td> </tr> </tbody></table>
Table 5
table_5
D18-1159
8
emnlp2018
Table 5 presents the results for different combinations of valency relation subsets. We find that PP-attachment decisions are generally harder to make, compared with core and functional relations. Including them during training distracts other parsing objectives (compare Core + PP with only analyzing Core in §6). However, they do permit improvements on precision for PP attachment by 3.30, especially with our proposed joint decoding. This demonstrates the usage of our algorithm outside the traditional notions of valency, it can be a general method for training parsers to focus on specific subsets of syntactic relations.
[1, 1, 2, 1, 2]
['Table 5 presents the results for different combinations of valency relation subsets.', 'We find that PP-attachment decisions are generally harder to make, compared with core and functional relations.', 'Including them during training distracts other parsing objectives (compare Core + PP with only analyzing Core in §6).', 'However, they do permit improvements on precision for PP attachment by 3.30, especially with our proposed joint decoding.', 'This demonstrates the usage of our algorithm outside the traditional notions of valency, it can be a general method for training parsers to focus on specific subsets of syntactic relations.']
[['UAS', 'LAS', 'Core P', 'Core R', 'Core F', 'Func. P', 'Func. R', 'Func. F', 'PP P', 'PP R', 'PP F'], ['PP P', 'PP R', 'PP F', 'Core P', 'Core R', 'Core F', 'Func. P', 'Func. R', 'Func. F'], None, ['PP P', 'Baseline', 'PP MTL + Joint Decoding', 'Core + PP MTL + Joint Decoding', 'Core + Func. + PP MTL + Joint Decoding'], None]
1
D18-1167table_5
Human accuracy on test set based on different sources. As expected, humans get the best performance when given both videos and subtitles.
2
[['VQA source', 'Question'], ['VQA source', 'Video and Question'], ['VQA source', 'Subtitle and Question'], ['VQA source', 'Video Subtitle and Question']]
1
[['Human accuracy on test.']]
[['31.84'], ['61.73'], ['72.88'], ['89.41']]
column
['Human accuracy on test.']
['Video and Question', 'Subtitle and Question', 'Video Subtitle and Question']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Human accuracy on test.</th> </tr> </thead> <tbody> <tr> <td>VQA source || Question</td> <td>31.84</td> </tr> <tr> <td>VQA source || Video and Question</td> <td>61.73</td> </tr> <tr> <td>VQA source || Subtitle and Question</td> <td>72.88</td> </tr> <tr> <td>VQA source || Video Subtitle and Question</td> <td>89.41</td> </tr> </tbody></table>
Table 5
table_5
D18-1167
5
emnlp2018
Human Evaluation on Usefulness of Video and Subtitle in Dataset:. To gain a better understanding of the roles of videos and subtitles in our dataset, we perform a human study, asking different groups of workers to complete the QA task in settings while observing different sources (subsets) of information:. • Question only. • Video and Question. • Subtitle and Question. • Video, Subtitle, and Question. We made sure the workers that have written the questions did not participate in this study and that workers see only one of the above settings for answering each question. Human accuracy on our test set under these 4 settings are reported in Table 5. As expected, compared to human accuracy based only on question-answer pairs (Q), adding videos (V+Q), or subtitles (S+Q) significantly improves human performance. Adding both videos and subtitles (V+S+Q) brings the accuracy to 89.41%. This indicates that in order to answer the questions correctly, both visual and textual understanding are essential. We also observe that workers obtain 31.84% accuracy given question-answer pairs only, which is higher than random guessing (20%). We ascribe this to people’s prior knowledge about the shows. Note, timestamp annotations are not provided in these experiments.
[2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 2, 1, 2, 2]
['Human Evaluation on Usefulness of Video and Subtitle in Dataset:.', 'To gain a better understanding of the roles of videos and subtitles in our dataset, we perform a human study, asking different groups of workers to complete the QA task in settings while observing different sources (subsets) of information:.', '• Question only.', '• Video and Question.', '• Subtitle and Question.', '• Video, Subtitle, and Question.', 'We made sure the workers that have written the questions did not participate in this study and that workers see only one of the above settings for answering each question.', 'Human accuracy on our test set under these 4 settings are reported in Table 5.', 'As expected, compared to human accuracy based only on question-answer pairs (Q), adding videos (V+Q), or subtitles (S+Q) significantly improves human performance.', 'Adding both videos and subtitles (V+S+Q) brings the accuracy to 89.41%.', 'This indicates that in order to answer the questions correctly, both visual and textual understanding are essential.', 'We also observe that workers obtain 31.84% accuracy given question-answer pairs only, which is higher than random guessing (20%).', 'We ascribe this to people’s prior knowledge about the shows.', 'Note, timestamp annotations are not provided in these experiments.']
[None, None, ['Question'], ['Video and Question'], ['Subtitle and Question'], ['Video Subtitle and Question'], None, None, ['Question', 'Video and Question', 'Subtitle and Question', 'Human accuracy on test.'], ['Video Subtitle and Question', 'Human accuracy on test.'], ['Video Subtitle and Question'], ['Question', 'Human accuracy on test.'], None, None]
1
D18-1168table_6
Comparison of different model performance on TEMPO HL on the test set. “MLLC Global” indicates our model with global context and “MLLC B/A” indicated MLLC with before/after context.
1
[['Frequeny Prior'], ['MCN'], ['TALL + TEF'], ['MLLC - Global'], ['MLLC - B/A'], ['MLLC (Ours)'], ['MLLC (Ours) Context Sup. Test']]
3
[['TEMPO - Human Language (HL)', 'DiDeMo', 'R@1'], ['TEMPO - Human Language (HL)', 'DiDeMo', 'mIoU'], ['TEMPO - Human Language (HL)', 'Before', 'R@1'], ['TEMPO - Human Language (HL)', 'Before', 'mIoU'], ['TEMPO - Human Language (HL)', 'After', 'R@1'], ['TEMPO - Human Language (HL)', 'After', 'mIoU'], ['TEMPO - Human Language (HL)', 'Then', 'R@1'], ['TEMPO - Human Language (HL)', 'Then', 'mIoU'], ['TEMPO - Human Language (HL)', 'While', 'R@1'], ['TEMPO - Human Language (HL)', 'While', 'mIoU'], ['TEMPO - Human Language (HL)', 'Average', 'R@1'], ['TEMPO - Human Language (HL)', 'Average', ' R@5'], ['TEMPO - Human Language (HL)', 'Average', 'mIoU']]
[['19.43', '25.44', '29.31', '51.92', '0.00', '0.00', '0.00', '7.84', '4.74', '12.27', '10.69', '37.56', '19.50'], ['26.07', '39.92', '26.79', '51.40', '14.93', '34.28', '18.55', '47.92', '10.70', '35.47', '19.4', '70.88', '41.80'], ['21.79', '33.55', '25.91', '49.26', '14.43', '32.62', '2.52', '31.13', '8.1', '28.14', '14.55', '60.69', '34.94'], ['27.01', '41.72', '27.42', '52.22', '14.10', '34.33', '18.40', '49.17', '10.86', '35.36', '19.56', '71.23', '42.56'], ['26.47', '40.39', '31.95', '55.89', '14.93', '34.78', '17.36', '47.52', '11.32', '35.52', '20.40', '70.97', '42.82'], ['27.38', '42.45', '32.33', '56.91', '14.43', '37.33', '19.58', '50.39', '10.39', '35.95', '20.82', '71.68', '44.57'], ['27.39', '42.25', '52.58', '80.37', '36.48', '75.79', '36.05', '70.51', '10.39', '35.87', '32.58', '79.86', '60.96']]
column
['R@1', 'mIoU', 'R@1', 'mIoU', 'R@1', 'mIoU', 'R@1', 'mIoU', 'R@1', 'mIoU', 'R@1', ' R@5', 'mIoU']
['MLLC (Ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TEMPO - Human Language (HL) || DiDeMo || R@1</th> <th>TEMPO - Human Language (HL) || DiDeMo || mIoU</th> <th>TEMPO - Human Language (HL) || Before || R@1</th> <th>TEMPO - Human Language (HL) || Before || mIoU</th> <th>TEMPO - Human Language (HL) || After || R@1</th> <th>TEMPO - Human Language (HL) || After || mIoU</th> <th>TEMPO - Human Language (HL) || Then || R@1</th> <th>TEMPO - Human Language (HL) || Then || mIoU</th> <th>TEMPO - Human Language (HL) || While || R@1</th> <th>TEMPO - Human Language (HL) || While || mIoU</th> <th>TEMPO - Human Language (HL) || Average || R@1</th> <th>TEMPO - Human Language (HL) || Average || R@5</th> <th>TEMPO - Human Language (HL) || Average || mIoU</th> </tr> </thead> <tbody> <tr> <td>Frequeny Prior</td> <td>19.43</td> <td>25.44</td> <td>29.31</td> <td>51.92</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> <td>7.84</td> <td>4.74</td> <td>12.27</td> <td>10.69</td> <td>37.56</td> <td>19.50</td> </tr> <tr> <td>MCN</td> <td>26.07</td> <td>39.92</td> <td>26.79</td> <td>51.40</td> <td>14.93</td> <td>34.28</td> <td>18.55</td> <td>47.92</td> <td>10.70</td> <td>35.47</td> <td>19.4</td> <td>70.88</td> <td>41.80</td> </tr> <tr> <td>TALL + TEF</td> <td>21.79</td> <td>33.55</td> <td>25.91</td> <td>49.26</td> <td>14.43</td> <td>32.62</td> <td>2.52</td> <td>31.13</td> <td>8.1</td> <td>28.14</td> <td>14.55</td> <td>60.69</td> <td>34.94</td> </tr> <tr> <td>MLLC - Global</td> <td>27.01</td> <td>41.72</td> <td>27.42</td> <td>52.22</td> <td>14.10</td> <td>34.33</td> <td>18.40</td> <td>49.17</td> <td>10.86</td> <td>35.36</td> <td>19.56</td> <td>71.23</td> <td>42.56</td> </tr> <tr> <td>MLLC - B/A</td> <td>26.47</td> <td>40.39</td> <td>31.95</td> <td>55.89</td> <td>14.93</td> <td>34.78</td> <td>17.36</td> <td>47.52</td> <td>11.32</td> <td>35.52</td> <td>20.40</td> <td>70.97</td> <td>42.82</td> </tr> <tr> <td>MLLC (Ours)</td> <td>27.38</td> <td>42.45</td> <td>32.33</td> <td>56.91</td> <td>14.43</td> <td>37.33</td> <td>19.58</td> <td>50.39</td> <td>10.39</td> <td>35.95</td> <td>20.82</td> <td>71.68</td> <td>44.57</td> </tr> <tr> <td>MLLC (Ours) Context Sup. Test</td> <td>27.39</td> <td>42.25</td> <td>52.58</td> <td>80.37</td> <td>36.48</td> <td>75.79</td> <td>36.05</td> <td>70.51</td> <td>10.39</td> <td>35.87</td> <td>32.58</td> <td>79.86</td> <td>60.96</td> </tr> </tbody></table>
Table 6
table_6
D18-1168
9
emnlp2018
Results: TEMPO - HL. Table 6 compares performance on TEMPO - HL. We compare our best-performing model from training on the TEMPO-TL (strongly supervised MLLC and conTEF) to prior work (MCN and TALL) and to MLLC with global and before/after context. Performance on TEMPO-HL is considerably lower than TEMPO-TL suggesting that TEMPO-HL is harder than TEMPO-TL. On TEMPO - HL, we observe similar trends as on TEMPO-TL. When considering all sentence types, MLLC has the best performance across all metrics. In particular, our model has the strongest performance for all sentence types considering the mIoU metric. In addition to performing better on temporal words, our model also performs better on the original DiDeMo dataset. As was seen in TEMPO-TL, including before/after context performs better than our model trained with global context for both “before” and “after” words. The final row of Table 6 shows an upper bound in which the ground truth context is used at test time instead of the latent context. We note that results improve for “before”, “after”, and “then”, suggesting that learning to better localize context will improve results for these sentence types.
[2, 1, 2, 1, 1, 1, 1, 1, 0, 1, 1]
['Results: TEMPO - HL.', 'Table 6 compares performance on TEMPO - HL.', 'We compare our best-performing model from training on the TEMPO-TL (strongly supervised MLLC and conTEF) to prior work (MCN and TALL) and to MLLC with global and before/after context.', 'Performance on TEMPO-HL is considerably lower than TEMPO-TL suggesting that TEMPO-HL is harder than TEMPO-TL.', 'On TEMPO - HL, we observe similar trends as on TEMPO-TL.', 'When considering all sentence types, MLLC has the best performance across all metrics.', 'In particular, our model has the strongest performance for all sentence types considering the mIoU metric.', 'In addition to performing better on temporal words, our model also performs better on the original DiDeMo dataset.', 'As was seen in TEMPO-TL, including before/after context performs better than our model trained with global context for both “before” and “after” words.', 'The final row of Table 6 shows an upper bound in which the ground truth context is used at test time instead of the latent context.', 'We note that results improve for “before”, “after”, and “then”, suggesting that learning to better localize context will improve results for these sentence types.']
[None, None, ['MLLC (Ours)', 'MCN', 'TALL + TEF', 'MLLC - Global', 'MLLC - B/A'], ['TEMPO - Human Language (HL)'], ['TEMPO - Human Language (HL)'], ['MLLC - Global', 'MLLC - B/A', 'MLLC (Ours)'], ['MLLC (Ours)', 'mIoU'], ['MLLC (Ours)', 'DiDeMo'], None, ['MLLC (Ours) Context Sup. Test'], ['MLLC (Ours) Context Sup. Test', 'Before', 'After', 'Then']]
1
D18-1173table_2
Results of domain specific Named Entity Recognition. P, R, F1 respectively denotes precision, recall and F1 score
2
[['Model', 'Word2Vec'], ['Model', 'GloVe'], ['Model', 'N2V'], ['Model', 'SUM'], ['Model', 'DAREP'], ['Model', 'CRE'], ['Model', 'Mem2Vec']]
3
[['Task', 'AnatEM', 'P'], ['Task', 'AnatEM', 'R'], ['Task', 'AnatEM', 'F1'], ['Task', 'BioNLP', 'P'], ['Task', 'BioNLP', 'R'], ['Task', 'BioNLP', 'F1'], ['Task', 'NCBI', 'P'], ['Task', 'NCBI', 'R'], ['Task', 'NCBI', 'F1']]
[['76.12', '69.80', '72.82', '73.13', '54.79', '62.64', '75.22', '75.37', '74.39'], ['75.83', '67.04', '71.14', '72.58', '53.35', '61.50', '75.76', '72.33', '74.01'], ['76.81', '66.8', '71.46', '73.91', '54.21', '62.54', '72.45', '74.37', '73.30'], ['77.06', '69.01', '72.81', '74.36', '58.58', '62.25', '74.89', '74.02', '74.45'], ['79.03', '67.95', '73.07', '77.18', '54.19', '63.67', '78.76', '75.60', '77.15'], ['80.04', '67.90', '73.47', '76.74', '56.98', '65.40', '78.98', '76.63', '77.79'], ['81.23', '67.90', '73.96', '76.70', '57.81', '65.92', '79.56', '76.63', '78.06']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['Mem2Vec']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Task || AnatEM || P</th> <th>Task || AnatEM || R</th> <th>Task || AnatEM || F1</th> <th>Task || BioNLP || P</th> <th>Task || BioNLP || R</th> <th>Task || BioNLP || F1</th> <th>Task || NCBI || P</th> <th>Task || NCBI || R</th> <th>Task || NCBI || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Word2Vec</td> <td>76.12</td> <td>69.80</td> <td>72.82</td> <td>73.13</td> <td>54.79</td> <td>62.64</td> <td>75.22</td> <td>75.37</td> <td>74.39</td> </tr> <tr> <td>Model || GloVe</td> <td>75.83</td> <td>67.04</td> <td>71.14</td> <td>72.58</td> <td>53.35</td> <td>61.50</td> <td>75.76</td> <td>72.33</td> <td>74.01</td> </tr> <tr> <td>Model || N2V</td> <td>76.81</td> <td>66.8</td> <td>71.46</td> <td>73.91</td> <td>54.21</td> <td>62.54</td> <td>72.45</td> <td>74.37</td> <td>73.30</td> </tr> <tr> <td>Model || SUM</td> <td>77.06</td> <td>69.01</td> <td>72.81</td> <td>74.36</td> <td>58.58</td> <td>62.25</td> <td>74.89</td> <td>74.02</td> <td>74.45</td> </tr> <tr> <td>Model || DAREP</td> <td>79.03</td> <td>67.95</td> <td>73.07</td> <td>77.18</td> <td>54.19</td> <td>63.67</td> <td>78.76</td> <td>75.60</td> <td>77.15</td> </tr> <tr> <td>Model || CRE</td> <td>80.04</td> <td>67.90</td> <td>73.47</td> <td>76.74</td> <td>56.98</td> <td>65.40</td> <td>78.98</td> <td>76.63</td> <td>77.79</td> </tr> <tr> <td>Model || Mem2Vec</td> <td>81.23</td> <td>67.90</td> <td>73.96</td> <td>76.70</td> <td>57.81</td> <td>65.92</td> <td>79.56</td> <td>76.63</td> <td>78.06</td> </tr> </tbody></table>
Table 2
table_2
D18-1173
8
emnlp2018
Named Entity Recognition. Table 2 shows the results of domain specific named entity recognition. Used for pre-training embeddings, Mem2Vec achieves higher F1-score than all the baselines. It first surpasses CRE and DAREP that only bring slight improvements over Word2Vec. CRE and DAREP are both methods which relies on words with cooccurrence patterns in source and target domain as the pivots for cross-domain transfer. This indicates the advantage of Mem2Vec over the traditional word frequency based methods in fast mapping cases where word cooccurrence pattern is not clear.
[2, 1, 1, 1, 2, 2]
['Named Entity Recognition.', 'Table 2 shows the results of domain specific named entity recognition.', 'Used for pre-training embeddings, Mem2Vec achieves higher F1-score than all the baselines.', 'It first surpasses CRE and DAREP that only bring slight improvements over Word2Vec.', 'CRE and DAREP are both methods which relies on words with cooccurrence patterns in source and target domain as the pivots for cross-domain transfer.', 'This indicates the advantage of Mem2Vec over the traditional word frequency based methods in fast mapping cases where word cooccurrence pattern is not clear.']
[None, None, ['Mem2Vec', 'F1', 'Word2Vec', 'GloVe', 'N2V', 'SUM', 'DAREP', 'CRE'], ['DAREP', 'CRE', 'Word2Vec'], ['DAREP', 'CRE'], ['Mem2Vec']]
1
D18-1176table_2
Sentiment classification accuracy results on the binary SST task. For DCG we compare against their best single sentence model (Looks et al., 2017). *=multiple different embedding sets (see Section 4). Number of parameters included in parenthesis. Results averaged over ten runs with different random seeds.
2
[['Model', 'Const. Tree LSTM (Tai et al. 2015)'], ['Model', 'DMN (Kumar et al. 2016)'], ['Model', 'DCG (Looks et al. 2017)'], ['Model', 'NSE (Munkhdalai and Yu 2017)'], ['Model', 'GloVe BiLSTM-Max (4.1M)'], ['Model', 'FastText BiLSTM-Max (4.1M)'], ['Model', 'Naive baseline (5.4M)'], ['Model', 'Unweighted DME (4.1M)'], ['Model', 'DME (4.1M)'], ['Model', 'CDME (4.1M)'], ['Model', 'CDME*-Softmax (4.6M)'], ['Model', 'CDME*-Sigmoid (4.6M)']]
1
[['SST']]
[['88.0'], ['88.6'], ['89.4'], ['89.7'], ['88.0±.1'], ['86.7±.3'], ['88.5±.4'], ['89.0±.2'], ['88.7±.6'], ['89.2±.4'], ['89.3±.5'], ['89.8±.4']]
column
['accuracy']
['DME (4.1M)', 'CDME (4.1M)', 'CDME*-Softmax (4.6M)', 'CDME*-Sigmoid (4.6M)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST</th> </tr> </thead> <tbody> <tr> <td>Model || Const. Tree LSTM (Tai et al. 2015)</td> <td>88.0</td> </tr> <tr> <td>Model || DMN (Kumar et al. 2016)</td> <td>88.6</td> </tr> <tr> <td>Model || DCG (Looks et al. 2017)</td> <td>89.4</td> </tr> <tr> <td>Model || NSE (Munkhdalai and Yu 2017)</td> <td>89.7</td> </tr> <tr> <td>Model || GloVe BiLSTM-Max (4.1M)</td> <td>88.0±.1</td> </tr> <tr> <td>Model || FastText BiLSTM-Max (4.1M)</td> <td>86.7±.3</td> </tr> <tr> <td>Model || Naive baseline (5.4M)</td> <td>88.5±.4</td> </tr> <tr> <td>Model || Unweighted DME (4.1M)</td> <td>89.0±.2</td> </tr> <tr> <td>Model || DME (4.1M)</td> <td>88.7±.6</td> </tr> <tr> <td>Model || CDME (4.1M)</td> <td>89.2±.4</td> </tr> <tr> <td>Model || CDME*-Softmax (4.6M)</td> <td>89.3±.5</td> </tr> <tr> <td>Model || CDME*-Sigmoid (4.6M)</td> <td>89.8±.4</td> </tr> </tbody></table>
Table 2
table_2
D18-1176
5
emnlp2018
5.2 Results. Table 2 shows a similar pattern as we observed with NLI:. the naive baseline outperforms the single-embedding encoders;. the DME methods outperform the naive baseline, with the contextualized version appearing to work best. Finally, we experiment with replacing φ in Eq. 1 and 2 with a sigmoid gate instead of a softmax, and observe improved performance on this task, outperforming the comparable models listed in the table. These results further strengthen the point that having multiple different embeddings helps, and that we can learn to combine those different embeddings efficiently, in interpretable ways.
[2, 1, 1, 1, 1, 2]
['5.2 Results.', 'Table 2 shows a similar pattern as we observed with NLI:.', 'the naive baseline outperforms the single-embedding encoders;.', 'the DME methods outperform the naive baseline, with the contextualized version appearing to work best.', 'Finally, we experiment with replacing φ in Eq. 1 and 2 with a sigmoid gate instead of a softmax, and observe improved performance on this task, outperforming the comparable models listed in the table.', 'These results further strengthen the point that having multiple different embeddings helps, and that we can learn to combine those different embeddings efficiently, in interpretable ways.']
[None, ['Naive baseline (5.4M)'], ['Naive baseline (5.4M)', 'GloVe BiLSTM-Max (4.1M)', 'FastText BiLSTM-Max (4.1M)'], ['DME (4.1M)', 'Naive baseline (5.4M)'], ['CDME*-Sigmoid (4.6M)', 'CDME*-Softmax (4.6M)', 'Const. Tree LSTM (Tai et al. 2015)', 'DMN (Kumar et al. 2016)', 'DCG (Looks et al. 2017)', 'NSE (Munkhdalai and Yu 2017)', 'GloVe BiLSTM-Max (4.1M)', 'FastText BiLSTM-Max (4.1M)', 'Naive baseline (5.4M)', 'Unweighted DME (4.1M)', 'DME (4.1M)', 'CDME (4.1M)'], None]
1
D18-1176table_3
Image and caption retrieval results (R@1 and R@10) on Flickr30k dataset, compared to VSE++ baseline (Faghri et al., 2017). VSE++ numbers in the table are with ResNet features and random cropping, but no fine-tuning. Number of parameters included in parenthesis; averaged over five runs with std omitted for brevity.
2
[['Model | R@:', 'VSE++'], ['Model | R@:', 'FastText (15M)'], ['Model | R@:', 'ImageNet (29M)'], ['Model | R@:', 'Naive (32M)'], ['Model | R@:', 'Unweighted DME (15M)'], ['Model | R@:', 'DME (15M)'], ['Model | R@:', 'CDME (15M)']]
2
[['Image', '1'], ['Image', '10'], ['Caption', '1'], ['Caption', '10']]
[['32.3', '72.1', '43.7', '82.1'], ['35.6', '74.7', '47.1', '82.7'], ['25.6', '63.1', '36.6', '72.2'], ['34.4', '73.9', '46.4', '82.2'], ['35.9', '75.0', '48.9', '83.7'], ['36.5', '75.5', '49.7', '83.6'], ['36.5', '75.6', '49.0', '83.8']]
column
['R@1', 'R@10', 'R@1', 'R@10']
['DME (15M)', 'CDME (15M)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Image || 1</th> <th>Image || 10</th> <th>Caption || 1</th> <th>Caption || 10</th> </tr> </thead> <tbody> <tr> <td>Model | R@: || VSE++</td> <td>32.3</td> <td>72.1</td> <td>43.7</td> <td>82.1</td> </tr> <tr> <td>Model | R@: || FastText (15M)</td> <td>35.6</td> <td>74.7</td> <td>47.1</td> <td>82.7</td> </tr> <tr> <td>Model | R@: || ImageNet (29M)</td> <td>25.6</td> <td>63.1</td> <td>36.6</td> <td>72.2</td> </tr> <tr> <td>Model | R@: || Naive (32M)</td> <td>34.4</td> <td>73.9</td> <td>46.4</td> <td>82.2</td> </tr> <tr> <td>Model | R@: || Unweighted DME (15M)</td> <td>35.9</td> <td>75.0</td> <td>48.9</td> <td>83.7</td> </tr> <tr> <td>Model | R@: || DME (15M)</td> <td>36.5</td> <td>75.5</td> <td>49.7</td> <td>83.6</td> </tr> <tr> <td>Model | R@: || CDME (15M)</td> <td>36.5</td> <td>75.6</td> <td>49.0</td> <td>83.8</td> </tr> </tbody></table>
Table 3
table_3
D18-1176
5
emnlp2018
6.2 Results. Table 3 shows the results, comparing against VSE++. First, note that the ImageNet-only embeddings don’t work as well as the FastText ones, which is most likely due to poorer coverage. We observe that DME outperforms naive and FastText-only, and outperforms VSE++ by a large margin. These findings confirm the intuition that knowing what things look like (i.e., having a word-level visual representation) improves performance in visual retrieval tasks (i.e., where we need to find relevant images for phrases or sentences) — something that sounds obvious but has not really been explored before, to our knowledge. This showcases DME’s usefulness for fusing embeddings in multi-modal tasks.
[2, 1, 1, 1, 2, 2]
['6.2 Results.', 'Table 3 shows the results, comparing against VSE++.', 'First, note that the ImageNet-only embeddings don’t work as well as the FastText ones, which is most likely due to poorer coverage.', 'We observe that DME outperforms naive and FastText-only, and outperforms VSE++ by a large margin.', 'These findings confirm the intuition that knowing what things look like (i.e., having a word-level visual representation) improves performance in visual retrieval tasks (i.e., where we need to find relevant images for phrases or sentences) — something that sounds obvious but has not really been explored before, to our knowledge.', 'This showcases DME’s usefulness for fusing embeddings in multi-modal tasks.']
[None, ['VSE++'], ['FastText (15M)', 'ImageNet (29M)'], ['DME (15M)', 'FastText (15M)', 'Naive (32M)'], None, ['DME (15M)']]
1
D18-1177table_2
Results on word similarity task. Reported are the Spearman’s rank order correlation between model prediction and human judgment (higher is better and bolds highlight the best methods). See text for details.
2
[['Models', 'CNN'], ['Models', 'VAE'], ['Models', 'SGNS'], ['Models', 'CNN⊕SGNS'], ['Models', 'VAE⊕SGNS'], ['Models', 'V-SGNS'], ['Models', 'IV-SGNS(LINEAR)'], ['Models', 'IV-SGNS(NONLINEAR)'], ['Models', 'PIXIE+'], ['Models', 'PIXIE⊕']]
3
[['Semantic/taxonomic similarity', 'SEMSIM', '100%'], ['Semantic/taxonomic similarity', 'SEMSIM', '98%'], ['Semantic/taxonomic similarity', 'SimLex', '100%'], ['Semantic/taxonomic similarity', 'SimLex', '39%'], ['Semantic/taxonomic similarity', 'SIM', '100%'], ['Semantic/taxonomic similarity', 'SIM', '44%'], ['Semantic/taxonomic similarity', 'EN-RG', '100%'], ['Semantic/taxonomic similarity', 'EN-RG', '72%'], ['Semantic/taxonomic similarity', 'EN-MC', '100%'], ['Semantic/taxonomic similarity', 'EN-MC', '73%'], ['General relatedness', 'MEN', '100%'], ['General relatedness', 'MEN', '54%'], ['General relatedness', 'REL', '100%'], ['General relatedness', 'REL', '53%'], ['General relatedness', 'MTurk', '100%'], ['General relatedness', 'MTurk', '26%'], ['Visual similarity', 'VISSIM', '100%'], ['Visual similarity', 'VISSIM', '98%'], ['REL+SIM', 'WORDSIM', '100%'], ['REL+SIM', 'WORDSIM', '39%']]
[['-', '0.49', '-', '0.41', '-', '0.49', '-', '0.54', '-', '0.46', '-', '0.54', '-', '0.20', '-', '0.18', '-', '0.53', '-', '0.28'], ['-', '0.65', '-', '0.43', '-', '0.51', '-', '0.56', '-', '0.55', '-', '0.62', '-', '0.22', '-', '0.40', '-', '0.62', '-', '0.37'], ['0.50', '0.50', '0.33', '0.35', '0.66', '0.66', '0.60', '0.55', '0.60', '0.52', '0.65', '0.67', '0.56', '0.51', '0.65', '0.63', '0.38', '0.38', '061', '0.60'], ['-', '0.67', '-', '0.48', '-', '0.65', '-', '0.60', '-', '0.55', '-', '0.74', '-', '0.44', '-', '0.51', '-', '0.63', '-', '0.56'], ['-', '0.70', '-', '0.51', '-', '0.67', '-', '0.61', '-', '0.60', '-', '0.76', '-', '0.45', '-', '0.55', '-', '0.63', '-', '0.56'], ['0.58', '0.58', '0.29', '0.30', '0.66', '0.71', '0.73', '0.73', '0.69', '0.69', '0.64', '0.65', '0.51', '0.52', '0.60', '0.65', '0.42', '0.42', '0.59', '0.64'], ['0.49', '0.50', '0.31', '0.33', '0.55', '0.61', '0.58', '0.56', '0.59', '0.65', '0.60', '0.62', '0.41', '0.38', '0.57', '0.71', '0.36', '0.37', '0.46', '0.51'], ['0.44', '0.44', '0.30', '0.32', '0.53', '0.59', '0.54', '0.53', '0.59', '0.63', '0.57', '0.59', '0.40', '0.37', '0.56', '0.71', '0.32', '0.33', '0.44', '0.48'], ['0.63', '0.63', '0.35', '0.48', '0.63', '0.72', '0.65', '0.60', '0.62', '0.62', '0.64', '0.73', '0.46', '0.56', '0.55', '0.55', '0.54', '0.54', '0.50', '0.59'], ['0.71', '0.71', '0.39', '0.53', '0.68', '0.71', '0.73', '0.73', '0.69', '0.71', '0.68', '0.76', '0.52', '0.59', '0.60', '0.59', '0.60', '0.61', '0.58', '0.65']]
column
['Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'General relatedness', 'General relatedness', 'General relatedness', 'General relatedness', 'General relatedness', 'General relatedness', 'Visual similarity', 'Visual similarity', 'REL+SIM', 'REL+SIM']
['PIXIE+', 'PIXIE⊕']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Semantic/taxonomic similarity || SEMSIM || 100%</th> <th>Semantic/taxonomic similarity || SEMSIM || 98%</th> <th>Semantic/taxonomic similarity || SimLex || 100%</th> <th>Semantic/taxonomic similarity || SimLex || 39%</th> <th>Semantic/taxonomic similarity || SIM || 100%</th> <th>Semantic/taxonomic similarity || SIM || 44%</th> <th>Semantic/taxonomic similarity || EN-RG || 100%</th> <th>Semantic/taxonomic similarity || EN-RG || 72%</th> <th>Semantic/taxonomic similarity || EN-MC || 100%</th> <th>Semantic/taxonomic similarity || EN-MC || 73%</th> <th>General relatedness || MEN || 100%</th> <th>General relatedness || MEN || 54%</th> <th>General relatedness || REL || 100%</th> <th>General relatedness || REL || 53%</th> <th>General relatedness || MTurk || 100%</th> <th>General relatedness || MTurk || 26%</th> <th>Visual similarity || VISSIM || 100%</th> <th>Visual similarity || VISSIM || 98%</th> <th>REL+SIM || WORDSIM || 100%</th> <th>REL+SIM || WORDSIM || 39%</th> </tr> </thead> <tbody> <tr> <td>Models || CNN</td> <td>-</td> <td>0.49</td> <td>-</td> <td>0.41</td> <td>-</td> <td>0.49</td> <td>-</td> <td>0.54</td> <td>-</td> <td>0.46</td> <td>-</td> <td>0.54</td> <td>-</td> <td>0.20</td> <td>-</td> <td>0.18</td> <td>-</td> <td>0.53</td> <td>-</td> <td>0.28</td> </tr> <tr> <td>Models || VAE</td> <td>-</td> <td>0.65</td> <td>-</td> <td>0.43</td> <td>-</td> <td>0.51</td> <td>-</td> <td>0.56</td> <td>-</td> <td>0.55</td> <td>-</td> <td>0.62</td> <td>-</td> <td>0.22</td> <td>-</td> <td>0.40</td> <td>-</td> <td>0.62</td> <td>-</td> <td>0.37</td> </tr> <tr> <td>Models || SGNS</td> <td>0.50</td> <td>0.50</td> <td>0.33</td> <td>0.35</td> <td>0.66</td> <td>0.66</td> <td>0.60</td> <td>0.55</td> <td>0.60</td> <td>0.52</td> <td>0.65</td> <td>0.67</td> <td>0.56</td> <td>0.51</td> <td>0.65</td> <td>0.63</td> <td>0.38</td> <td>0.38</td> <td>061</td> <td>0.60</td> </tr> <tr> <td>Models || CNN⊕SGNS</td> <td>-</td> <td>0.67</td> <td>-</td> <td>0.48</td> <td>-</td> <td>0.65</td> <td>-</td> <td>0.60</td> <td>-</td> <td>0.55</td> <td>-</td> <td>0.74</td> <td>-</td> <td>0.44</td> <td>-</td> <td>0.51</td> <td>-</td> <td>0.63</td> <td>-</td> <td>0.56</td> </tr> <tr> <td>Models || VAE⊕SGNS</td> <td>-</td> <td>0.70</td> <td>-</td> <td>0.51</td> <td>-</td> <td>0.67</td> <td>-</td> <td>0.61</td> <td>-</td> <td>0.60</td> <td>-</td> <td>0.76</td> <td>-</td> <td>0.45</td> <td>-</td> <td>0.55</td> <td>-</td> <td>0.63</td> <td>-</td> <td>0.56</td> </tr> <tr> <td>Models || V-SGNS</td> <td>0.58</td> <td>0.58</td> <td>0.29</td> <td>0.30</td> <td>0.66</td> <td>0.71</td> <td>0.73</td> <td>0.73</td> <td>0.69</td> <td>0.69</td> <td>0.64</td> <td>0.65</td> <td>0.51</td> <td>0.52</td> <td>0.60</td> <td>0.65</td> <td>0.42</td> <td>0.42</td> <td>0.59</td> <td>0.64</td> </tr> <tr> <td>Models || IV-SGNS(LINEAR)</td> <td>0.49</td> <td>0.50</td> <td>0.31</td> <td>0.33</td> <td>0.55</td> <td>0.61</td> <td>0.58</td> <td>0.56</td> <td>0.59</td> <td>0.65</td> <td>0.60</td> <td>0.62</td> <td>0.41</td> <td>0.38</td> <td>0.57</td> <td>0.71</td> <td>0.36</td> <td>0.37</td> <td>0.46</td> <td>0.51</td> </tr> <tr> <td>Models || IV-SGNS(NONLINEAR)</td> <td>0.44</td> <td>0.44</td> <td>0.30</td> <td>0.32</td> <td>0.53</td> <td>0.59</td> <td>0.54</td> <td>0.53</td> <td>0.59</td> <td>0.63</td> <td>0.57</td> <td>0.59</td> <td>0.40</td> <td>0.37</td> <td>0.56</td> <td>0.71</td> <td>0.32</td> <td>0.33</td> <td>0.44</td> <td>0.48</td> </tr> <tr> <td>Models || PIXIE+</td> 
<td>0.63</td> <td>0.63</td> <td>0.35</td> <td>0.48</td> <td>0.63</td> <td>0.72</td> <td>0.65</td> <td>0.60</td> <td>0.62</td> <td>0.62</td> <td>0.64</td> <td>0.73</td> <td>0.46</td> <td>0.56</td> <td>0.55</td> <td>0.55</td> <td>0.54</td> <td>0.54</td> <td>0.50</td> <td>0.59</td> </tr> <tr> <td>Models || PIXIE⊕</td> <td>0.71</td> <td>0.71</td> <td>0.39</td> <td>0.53</td> <td>0.68</td> <td>0.71</td> <td>0.73</td> <td>0.73</td> <td>0.69</td> <td>0.71</td> <td>0.68</td> <td>0.76</td> <td>0.52</td> <td>0.59</td> <td>0.60</td> <td>0.59</td> <td>0.60</td> <td>0.61</td> <td>0.58</td> <td>0.65</td> </tr> </tbody></table>
Table 2
table_2
D18-1177
6
emnlp2018
4.2.1 Main results. The results across different datasets are shown in Table 2. We perform evaluations under two settings: by considering (i) word similarity between visual words only and (ii) between all words (column 100% in Table 2). For the models CNN, VAE and their concatenation with SGNS embeddings, the latter setting is not applicable. The two last rows correspond to the multimodal embeddings inferred from our model. In particular, PIXIE+ (resp. PIXIE⊕) represents the multimodal embeddings built using Eq. (10) (resp. Eq. (11)). Overall, we note that PIXIE⊕ offers the best performance in almost all situations. This provides strong empirical support for the proposed model. Below, we discuss the above results in more depth to better understand them and characterize the circumstances in which our model performs better. How relevant is our formulation?. Except PIXIE and V-SGNS, most of the multimodal competing methods rely on independently pre-computed linguistic embeddings. As Table 2 shows, PIXIE and V-SGNS are often the best performing multimodal models, which provides empirical evidence that accounting for perceptual information while learning word embeddings from text is beneficial. Moreover, the superior performance of PIXIE⊕ over V-SGNS suggests that our model does a better job at combining perception and language to learn word representations. Joint learning is beneficial. PIXIE⊕ outperforms VAE⊕SGNS in almost all cases, which demonstrates the importance of joint learning. Where does our approach perform better?. On datasets that focus on semantic/taxonomic similarity, our approach dominates all other methods. On datasets focusing on general relatedness, our approach obtains mixed results. While dominating other approaches on MEN, it tends to perform worst than SGNS on MTurk and REL (under the 100% setting). One possible explanation is that general relatedness tends to focus more on “extrapolating” from one word to another word (such as SWAN is related to LAKE), while our approach better models more concrete relationships (such as SWAN is related to GOOSE). The low performance of CNN and VAE confirms this hypothesis. On the VISSIM dataset focusing on visual similarity, both CNN⊕SGNS and VAE⊕SGNS perform the best, strongly suggesting that visual and linguistic data are complementary. Our approach comes very close to these two methods. Note that our learning objective is to jointly explain visual features and word-context co-occurrences.
[2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 1, 1, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2, 2]
['4.2.1 Main results.', 'The results across different datasets are shown in Table 2.', 'We perform evaluations under two settings: by considering (i) word similarity between visual words only and (ii) between all words (column 100% in Table 2).', 'For the models CNN, VAE and their concatenation with SGNS embeddings, the latter setting is not applicable.', 'The two last rows correspond to the multimodal embeddings inferred from our model.', 'In particular, PIXIE+ (resp. PIXIE⊕) represents the multimodal embeddings built using Eq. (10) (resp. Eq. (11)).', 'Overall, we note that PIXIE⊕ offers the best performance in almost all situations.', 'This provides strong empirical support for the proposed model.', 'Below, we discuss the above results in more depth to better understand them and characterize the circumstances in which our model performs better.', 'How relevant is our formulation?.', 'Except PIXIE and V-SGNS, most of the multimodal competing methods rely on independently pre-computed linguistic embeddings.', 'As Table 2 shows, PIXIE and V-SGNS are often the best performing multimodal models, which provides empirical evidence that accounting for perceptual information while learning word embeddings from text is beneficial.', 'Moreover, the superior performance of PIXIE⊕ over V-SGNS suggests that our model does a better job at combining perception and language to learn word representations.', 'Joint learning is beneficial.', 'PIXIE⊕ outperforms VAE⊕SGNS in almost all cases, which demonstrates the importance of joint learning.', 'Where does our approach perform better?.', 'On datasets that focus on semantic/taxonomic similarity, our approach dominates all other methods.', 'On datasets focusing on general relatedness, our approach obtains mixed results.', 'While dominating other approaches on MEN, it tends to perform worst than SGNS on MTurk and REL (under the 100% setting).', 'One possible explanation is that general relatedness tends to focus more on “extrapolating” from one word to another word (such as SWAN is related to LAKE), while our approach better models more concrete relationships (such as SWAN is related to GOOSE).', 'The low performance of CNN and VAE confirms this hypothesis.', 'On the VISSIM dataset focusing on visual similarity, both CNN⊕SGNS and VAE⊕SGNS perform the best, strongly suggesting that visual and linguistic data are complementary.', 'Our approach comes very close to these two methods.', 'Note that our learning objective is to jointly explain visual features and word-context co-occurrences.']
[None, ['SEMSIM', 'SimLex', 'SIM', 'EN-RG', 'EN-MC', 'MEN', 'REL', 'MTurk', 'VISSIM', 'WORDSIM'], ['98%', '39%', '44%', '72%', '73%', '54%', '53%', '26%', '100%'], ['CNN', 'VAE', 'CNN⊕SGNS', 'VAE⊕SGNS'], ['PIXIE+', 'PIXIE⊕'], ['PIXIE+', 'PIXIE⊕'], ['PIXIE⊕'], ['PIXIE⊕'], None, None, ['CNN⊕SGNS', 'VAE⊕SGNS', 'IV-SGNS(LINEAR)', 'IV-SGNS(NONLINEAR)'], ['PIXIE+', 'PIXIE⊕', 'V-SGNS'], ['PIXIE⊕', 'V-SGNS'], None, ['PIXIE⊕', 'VAE⊕SGNS', 'SEMSIM', 'SimLex', 'SIM', 'EN-RG', 'EN-MC', 'MEN', 'REL', 'VISSIM', 'WORDSIM'], None, ['Semantic/taxonomic similarity', 'PIXIE+', 'PIXIE⊕'], ['General relatedness', 'PIXIE+', 'PIXIE⊕'], ['MEN', 'PIXIE+', 'PIXIE⊕', 'REL', 'MTurk', 'SGNS'], ['General relatedness'], ['CNN', 'VAE', 'General relatedness'], ['Visual similarity', 'VISSIM', 'CNN⊕SGNS', 'VAE⊕SGNS'], ['PIXIE+', 'PIXIE⊕', 'CNN⊕SGNS', 'VAE⊕SGNS'], None]
1
D18-1177table_6
Results for image (I) ↔ sentence (S) retrieval.
2
[['Models', 'SGNS'], ['Models', 'V-SGNS'], ['Models', 'IV-SGNS (LINEAR)'], ['Models', 'PIXIE+'], ['Models', 'PIXIE⊕']]
2
[['I → S', 'K=1'], ['I → S', 'K=5'], ['I → S', 'K=10'], ['S → I', 'K=1'], ['S → I', 'K=5'], ['S → I', 'K=10']]
[['23.1', '49.0', '61.6', '16.6', '41.0', '53.8'], ['21.9', '51.7', '64.2', '16.2', '42.0', '54.8'], ['22.7', '50.5', '61.7', '17.1', '42.6', '55.4'], ['24.2', '52.5', '65.4', '17.5', '43.8', '56.2'], ['25.7', '55.7', '67.7', '18.4', '44.9', '56.9']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['PIXIE+', 'PIXIE⊕']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>I → S || K=1</th> <th>I → S || K=5</th> <th>I → S || K=10</th> <th>S → I || K=1</th> <th>S → I || K=5</th> <th>S → I || K=10</th> </tr> </thead> <tbody> <tr> <td>Models || SGNS</td> <td>23.1</td> <td>49.0</td> <td>61.6</td> <td>16.6</td> <td>41.0</td> <td>53.8</td> </tr> <tr> <td>Models || V-SGNS</td> <td>21.9</td> <td>51.7</td> <td>64.2</td> <td>16.2</td> <td>42.0</td> <td>54.8</td> </tr> <tr> <td>Models || IV-SGNS (LINEAR)</td> <td>22.7</td> <td>50.5</td> <td>61.7</td> <td>17.1</td> <td>42.6</td> <td>55.4</td> </tr> <tr> <td>Models || PIXIE+</td> <td>24.2</td> <td>52.5</td> <td>65.4</td> <td>17.5</td> <td>43.8</td> <td>56.2</td> </tr> <tr> <td>Models || PIXIE⊕</td> <td>25.7</td> <td>55.7</td> <td>67.7</td> <td>18.4</td> <td>44.9</td> <td>56.9</td> </tr> </tbody></table>
Table 6
table_6
D18-1177
8
emnlp2018
Results. Table 6 summarizes the results. The evaluation metrics are accuracies at top-K (K=1, 5, or 10) retrieved sentences or images. Our model consistently outperforms SGNS and other competing multimodal methods, which provides additional support for the benefits of our approach.
[2, 1, 1, 1]
['Results.', 'Table 6 summarizes the results.', 'The evaluation metrics are accuracies at top-K (K=1, 5, or 10) retrieved sentences or images.', 'Our model consistently outperforms SGNS and other competing multimodal methods, which provides additional support for the benefits of our approach.']
[None, None, ['K=1', 'K=5', 'K=10'], ['PIXIE+', 'PIXIE⊕', 'SGNS', 'V-SGNS', 'IV-SGNS (LINEAR)']]
1
D18-1182table_3
Performance (%Correct, %Wrong, %Abstained) of the different odd-man-out solvers on the
4
[['Embedding Map', 'ELMo clusters (K = 5)', 'Training Tokens', '1B+2B'], ['Embedding Map', 'w2v.googlenews', 'Training Tokens', '100B'], ['Embedding Map', 'glove.commoncrawl2', 'Training Tokens', '840B'], ['Embedding Map', 'glove.commoncrawl1', 'Training Tokens', '42B'], ['Embedding Map', 'glove.wikipedia', 'Training Tokens', '6B'], ['Embedding Map', 'Neelakantan', 'Training Tokens', '1B'], ['Embedding Map', 'w2v.freebase', 'Training Tokens', '100B'], ['Embedding Map', 'WordNet', 'Training Tokens', '-']]
2
[['AnomiaCommon', 'C'], ['AnomiaCommon', 'W'], ['AnomiaCommon', 'A'], ['AnomiaProper', 'C'], ['AnomiaProper', 'W'], ['AnomiaProper', 'A'], ['Crowdsourced', 'C'], ['Crowdsourced', 'W'], ['Crowdsourced', 'A']]
[['76.7', '13.9', '9.4', '42.6', '17.8', '39.6', '55.5', '18.8', '25.6'], ['61.9', '25.2', '12.9', '40.1', '14.9', '45.0', '46.3', '28.8', '24.9'], ['60.9', '23.8', '15.4', '32.2', '14.4', '53.5', '47.1', '28.4', '24.6'], ['57.4', '29.2', '13.4', '30.7', '17.8', '51.5', '40.1', '36.3', '23.7'], ['54.5', '24.3', '21.3', '29.2', '10.9', '59.9', '42.7', '29.0', '28.4'], ['35.2', '25.7', '39.1', '18.3', '14.4', '67.3', '32.6', '27.3', '40.2'], ['22.3', '28.7', '49.0', '34.2', '14.9', '51.0', '9.9', '11.3', '78.3'], ['40.6', '13.4', '46.0', '0.5', '0.0', '99.5', '22.0', '15.1', '63.0']]
column
['C', 'W', 'A', 'C', 'W', 'A', 'C', 'W', 'A']
['ELMo clusters (K = 5)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AnomiaCommon || C</th> <th>AnomiaCommon || W</th> <th>AnomiaCommon || A</th> <th>AnomiaProper || C</th> <th>AnomiaProper || W</th> <th>AnomiaProper || A</th> <th>Crowdsourced || C</th> <th>Crowdsourced || W</th> <th>Crowdsourced || A</th> </tr> </thead> <tbody> <tr> <td>Embedding Map || ELMo clusters (K = 5) || Training Tokens || 1B+2B</td> <td>76.7</td> <td>13.9</td> <td>9.4</td> <td>42.6</td> <td>17.8</td> <td>39.6</td> <td>55.5</td> <td>18.8</td> <td>25.6</td> </tr> <tr> <td>Embedding Map || w2v.googlenews || Training Tokens || 100B</td> <td>61.9</td> <td>25.2</td> <td>12.9</td> <td>40.1</td> <td>14.9</td> <td>45.0</td> <td>46.3</td> <td>28.8</td> <td>24.9</td> </tr> <tr> <td>Embedding Map || glove.commoncrawl2 || Training Tokens || 840B</td> <td>60.9</td> <td>23.8</td> <td>15.4</td> <td>32.2</td> <td>14.4</td> <td>53.5</td> <td>47.1</td> <td>28.4</td> <td>24.6</td> </tr> <tr> <td>Embedding Map || glove.commoncrawl1 || Training Tokens || 42B</td> <td>57.4</td> <td>29.2</td> <td>13.4</td> <td>30.7</td> <td>17.8</td> <td>51.5</td> <td>40.1</td> <td>36.3</td> <td>23.7</td> </tr> <tr> <td>Embedding Map || glove.wikipedia || Training Tokens || 6B</td> <td>54.5</td> <td>24.3</td> <td>21.3</td> <td>29.2</td> <td>10.9</td> <td>59.9</td> <td>42.7</td> <td>29.0</td> <td>28.4</td> </tr> <tr> <td>Embedding Map || Neelakantan || Training Tokens || 1B</td> <td>35.2</td> <td>25.7</td> <td>39.1</td> <td>18.3</td> <td>14.4</td> <td>67.3</td> <td>32.6</td> <td>27.3</td> <td>40.2</td> </tr> <tr> <td>Embedding Map || w2v.freebase || Training Tokens || 100B</td> <td>22.3</td> <td>28.7</td> <td>49.0</td> <td>34.2</td> <td>14.9</td> <td>51.0</td> <td>9.9</td> <td>11.3</td> <td>78.3</td> </tr> <tr> <td>Embedding Map || WordNet || Training Tokens || -</td> <td>40.6</td> <td>13.4</td> <td>46.0</td> <td>0.5</td> <td>0.0</td> <td>99.5</td> <td>22.0</td> <td>15.1</td> <td>63.0</td> </tr> </tbody></table>
Table 3
table_3
D18-1182
6
emnlp2018
5.2 Word Embeddings Solvers. Table 3 shows the results of embedding-based solvers on the Anomia and crowdsourced datasets, using several different pre-trained embedding maps. We find the best performance on ANOMIACOMMON and ANOMIAPROPER using the word2vec vectors trained on 100 billion tokens from the Google News corpus. Based on this tuning, we fixed the value of K to 5, and repeated the experiments from Section 5 (see the first row in Table 3). ELMo sense vectors clearly outperform all previous baselines on all Odd-Man-Out datasets. This improved performance can be attributed both to ELMo’s better ability to capture context, as well as to the finer sense representation, as opposed to the single representation per word in most of the other baselines.
[2, 1, 1, 1, 1, 2]
['5.2 Word Embeddings Solvers.', 'Table 3 shows the results of embedding-based solvers on the Anomia and crowdsourced datasets, using several different pre-trained embedding maps.', 'We find the best performance on ANOMIACOMMON and ANOMIAPROPER using the word2vec vectors trained on 100 billion tokens from the Google News corpus.', 'Based on this tuning, we fixed the value of K to 5, and repeated the experiments from Section 5 (see the first row in Table 3).', 'ELMo sense vectors clearly outperform all previous baselines on all Odd-Man-Out datasets.', 'This improved performance can be attributed both to ELMo’s better ability to capture context, as well as to the finer sense representation, as opposed to the single representation per word in most of the other baselines.']
[None, ['w2v.googlenews', 'glove.commoncrawl2', 'glove.commoncrawl1', 'glove.wikipedia', 'Neelakantan', 'w2v.freebase', 'WordNet'], ['w2v.googlenews', '100B', 'AnomiaCommon', 'AnomiaProper'], ['ELMo clusters (K = 5)'], ['ELMo clusters (K = 5)', 'w2v.googlenews', 'glove.commoncrawl2', 'glove.commoncrawl1', 'glove.wikipedia', 'Neelakantan', 'w2v.freebase', 'WordNet'], ['ELMo clusters (K = 5)']]
1
D18-1183table_3
Experimental results on simile sentence classification. SC: simile sentence classification; CE: component extraction; LM: language modeling.
2
[['Model', 'Baseline1'], ['Model', 'Baseline2'], ['Model', 'Singletask (SC)'], ['Model', 'Multitask (SC+CE)'], ['Model', 'Multitask (SC+LM)'], ['Model', 'Multitask (SC+CE+LM)']]
2
[['Simile Classification', 'P'], ['Simile Classification', 'R'], ['Simile Classification', 'F1']]
[['0.6523', '0.4752', '0.5498'], ['0.7661', '0.7832', '0.7745'], ['0.7751', '0.8895', '0.8284'], ['0.8056', '0.8886', '0.8450'], ['0.8021', '0.9105', '0.8525'], ['0.8084', '0.9220', '0.8615']]
column
['P', 'R', 'F1']
['Singletask (SC)', 'Multitask (SC+CE)', 'Multitask (SC+LM)', 'Multitask (SC+CE+LM)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Simile Classification || P</th> <th>Simile Classification || R</th> <th>Simile Classification || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline1</td> <td>0.6523</td> <td>0.4752</td> <td>0.5498</td> </tr> <tr> <td>Model || Baseline2</td> <td>0.7661</td> <td>0.7832</td> <td>0.7745</td> </tr> <tr> <td>Model || Singletask (SC)</td> <td>0.7751</td> <td>0.8895</td> <td>0.8284</td> </tr> <tr> <td>Model || Multitask (SC+CE)</td> <td>0.8056</td> <td>0.8886</td> <td>0.8450</td> </tr> <tr> <td>Model || Multitask (SC+LM)</td> <td>0.8021</td> <td>0.9105</td> <td>0.8525</td> </tr> <tr> <td>Model || Multitask (SC+CE+LM)</td> <td>0.8084</td> <td>0.9220</td> <td>0.8615</td> </tr> </tbody></table>
Table 3
table_3
D18-1183
7
emnlp2018
5.2.2 Results. Table 3 shows the performance of the systems. The results are reported with the precision (P), recall (R), and their harmonic mean F1 score (F1). The two feature based methods perform differently. Baseline1 performs poorly. The reason may be that the classification depends on the performance of component extraction, while even our best component extractor performs far from perfect, which brings error propagation. In addition, classifying with component related features only ignores much context, which further decreases the performance. Baseline2 considers context windows and outperforms baseline1 largely. This confirms our intuition that context information implies the semantic of simile expression. Furthermore, we have other observations:. (1) neural network based approaches largely outperform feature-based classifiers;. (2) multitask learning approaches outperform every single task approach and other baselines. Both the component extraction and the language modeling task contribute for simile sentence classification. Component extraction improves the precision and language modeling improves both the precision and the recall. Combining them together can achieve the best performance. The improvement of F1 score can reach to 3.3% compared with the best single task model.
[2, 1, 1, 1, 1, 2, 2, 1, 2, 1, 1, 1, 2, 2, 1, 1]
['5.2.2 Results.', 'Table 3 shows the performance of the systems.', 'The results are reported with the precision (P), recall (R), and their harmonic mean F1 score (F1).', 'The two feature based methods perform differently.', 'Baseline1 performs poorly.', 'The reason may be that the classification depends on the performance of component extraction, while even our best component extractor performs far from perfect, which brings error propagation.', 'In addition, classifying with component related features only ignores much context, which further decreases the performance.', 'Baseline2 considers context windows and outperforms baseline1 largely.', 'This confirms our intuition that context information implies the semantic of simile expression.', 'Furthermore, we have other observations:.', '(1) neural network based approaches largely outperform feature-based classifiers;.', '(2) multitask learning approaches outperform every single task approach and other baselines.', 'Both the component extraction and the language modeling task contribute for simile sentence classification.', 'Component extraction improves the precision and language modeling improves both the precision and the recall.', 'Combining them together can achieve the best performance.', 'The improvement of F1 score can reach to 3.3% compared with the best single task model.']
[None, None, ['P', 'R', 'F1'], ['Baseline1', 'Baseline2'], ['Baseline1'], ['Baseline1'], ['Baseline1'], ['Baseline2', 'Baseline1'], None, None, ['Baseline1', 'Baseline2', 'Singletask (SC)', 'Multitask (SC+CE)', 'Multitask (SC+LM)', 'Multitask (SC+CE+LM)'], ['Multitask (SC+CE)', 'Multitask (SC+LM)', 'Singletask (SC)', 'Baseline1', 'Baseline2'], ['Multitask (SC+CE)', 'Multitask (SC+LM)'], ['Multitask (SC+CE)', 'Multitask (SC+LM)', 'P', 'R'], ['Multitask (SC+CE+LM)'], ['Multitask (SC+CE+LM)', 'Singletask (SC)', 'F1']]
1
D18-1183table_4
Experimental results on component extraction. Experiments on dataset of simile sentences assume that the sentence classifier is perfect. CE: component extraction; SC: simile sentence classification; LM: language modeling.
2
[['Model', 'Rule based'], ['Model', 'CRF'], ['Model', 'Singletask (CE)'], ['Model', 'RandomForest → CRF'], ['Model', 'SingleSC → SingleCE'], ['Model', 'Multitask (CE+SC)'], ['Model', 'Multitask (CE+LM)'], ['Model', 'Multitask (CE+SC+LM)'], ['Model', 'Optimized pipeline']]
2
[['Gold simile sentences', 'P'], ['Gold simile sentences', 'R'], ['Gold simile sentences', 'F1'], ['Whole test set', 'P'], ['Whole test set', 'R'], ['Whole test set', 'F1']]
[['0.4094', '0.1805', '0.2505', '-', '-', '-'], ['0.5619', '0.5907', '0.5760', '0.3157', '0.3698', '0.3406'], ['0.7297', '0.7854', '0.7564', '0.5580', '0.6489', '0.5998'], ['-', '-', '-', '0.4591', '0.4980', '0.4778'], ['-', '-', '-', '0.5720', '0.7074', '0.6325'], ['-', '-', '-', '0.5409', '0.6400', '0.5861'], ['0.7530', '0.7876', '0.7699', '0.5741', '0.7015', '0.6306'], ['-', '-', '-', '0.5599', '0.6989', '0.6211'], ['-', '-', '-', '0.6160', '0.7361', '0.6707']]
column
['P', 'R', 'F1', 'P', 'R', 'F1']
['Singletask (CE)', 'RandomForest → CRF', 'SingleSC → SingleCE', 'Multitask (CE+SC)', 'Multitask (CE+LM)', 'Multitask (CE+SC+LM)', 'Optimized pipeline']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Gold simile sentences || P</th> <th>Gold simile sentences || R</th> <th>Gold simile sentences || F1</th> <th>Whole test set || P</th> <th>Whole test set || R</th> <th>Whole test set || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Rule based</td> <td>0.4094</td> <td>0.1805</td> <td>0.2505</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || CRF</td> <td>0.5619</td> <td>0.5907</td> <td>0.5760</td> <td>0.3157</td> <td>0.3698</td> <td>0.3406</td> </tr> <tr> <td>Model || Singletask (CE)</td> <td>0.7297</td> <td>0.7854</td> <td>0.7564</td> <td>0.5580</td> <td>0.6489</td> <td>0.5998</td> </tr> <tr> <td>Model || RandomForest → CRF</td> <td>-</td> <td>-</td> <td>-</td> <td>0.4591</td> <td>0.4980</td> <td>0.4778</td> </tr> <tr> <td>Model || SingleSC → SingleCE</td> <td>-</td> <td>-</td> <td>-</td> <td>0.5720</td> <td>0.7074</td> <td>0.6325</td> </tr> <tr> <td>Model || Multitask (CE+SC)</td> <td>-</td> <td>-</td> <td>-</td> <td>0.5409</td> <td>0.6400</td> <td>0.5861</td> </tr> <tr> <td>Model || Multitask (CE+LM)</td> <td>0.7530</td> <td>0.7876</td> <td>0.7699</td> <td>0.5741</td> <td>0.7015</td> <td>0.6306</td> </tr> <tr> <td>Model || Multitask (CE+SC+LM)</td> <td>-</td> <td>-</td> <td>-</td> <td>0.5599</td> <td>0.6989</td> <td>0.6211</td> </tr> <tr> <td>Model || Optimized pipeline</td> <td>-</td> <td>-</td> <td>-</td> <td>0.6160</td> <td>0.7361</td> <td>0.6707</td> </tr> </tbody></table>
Table 4
table_4
D18-1183
8
emnlp2018
Table 4 shows the results of various systems and settings on two test sets. The first dataset consists of all manually labeled simile sentences in the test set and the second dataset is the whole test set. We want to compare how component extraction systems work when they know whether a sentence contains a simile or not. We report and discuss the results from the following aspects. The effect of simile sentence classification. First, we can compare the results in the middle column and the rightmost column in Table 4. It is clear that the component extraction systems work much better when they know whether a sentence contains a simile or not. Second, we can see that both pipelines (the feature-based and the neural network based) achieve a better performance compared with extracting components directly using either the CRF model or the neural single task model. Third, Multitask(CE+SC) doesn’t bring significant improvements compared with the single task neural model. These observations indicate that simile sentence classification is suitable to be a pre-processing for simile component classification. It is necessary to further study how to use high level predictions (sentence classification) to learn better representations for consistently improving local predictions (simile component extraction). Rule based, feature-based and neural models. We can see that even on gold simile sentences, the rule based method doesn’t work well. The poor performance of the rule based approach is due to the following reasons. First, the rule-based method is difficult to deal with complex sentence structures. It often fails when there are multiple subordinate clauses. Second, the comparator “像” in Chinese has multiple syntactic roles, sometimes is used as a verb, sometimes is used as a preposition. Third, the accuracy of Chinese dependency parser still has room to be improved. The CRF method performs significantly better, because it considers more contextual signals. Our neural single task model achieves large improvements on both datasets. This verifies the effectiveness of the end-to-end approach. Neural models can see a long range of context and learn features automatically. The word embeddings learned on external resources implicitly have semantic domain information, which is not only useful for generalization but also important for figurative language processing. The effect of language modeling. Surprisingly, using language modeling as an auxiliary task is very useful, especially when dealing with noisy sentences. It gains a 1.3% F1 improvement on the gold simile sentences due to the improvement on the precision and a 3% F1 improvement on the whole test set due to a large improvement on the recall. Generally, language modeling may help learn better task specific representations, especially when data size is limited (Rei, 2017). Another reason may be that language modeling aims to make local predictions, the same as simile component extraction. As shown in Table 4, the optimized pipeline performs better than the strongest multitask learning setting. However, in all settings, the precision scores are lower compared with the recall scores. This indicates that compared with identifying surface patterns, distinguishing metaphorical from literal meanings is much harder and more external knowledge should be incorporated.
[1, 1, 2, 0, 2, 1, 2, 1, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 2, 2, 1, 1, 2]
['Table 4 shows the results of various systems and settings on two test sets.', 'The first dataset consists of all manually labeled simile sentences in the test set and the second dataset is the whole test set.', 'We want to compare how component extraction systems work when they know whether a sentence contains a simile or not.', 'We report and discuss the results from the following aspects.', 'The effect of simile sentence classification.', 'First, we can compare the results in the middle column and the rightmost column in Table 4.', 'It is clear that the component extraction systems work much better when they know whether a sentence contains a simile or not.', 'Second, we can see that both pipelines (the feature-based and the neural network based) achieve a better performance compared with extracting components directly using either the CRF model or the neural single task model.', 'Third, Multitask(CE+SC) doesn’t bring significant improvements compared with the single task neural model.', 'These observations indicate that simile sentence classification is suitable to be a pre-processing for simile component classification.', 'It is necessary to further study how to use high level predictions (sentence classification) to learn better representations for consistently improving local predictions (simile component extraction).', 'Rule based, feature-based and neural models.', 'We can see that even on gold simile sentences, the rule based method doesn’t work well.', 'The poor performance of the rule based approach is due to the following reasons.', 'First, the rule-based method is difficult to deal with complex sentence structures.', 'It often fails when there are multiple subordinate clauses.', 'Second, the comparator “像” in Chinese has multiple syntactic roles, sometimes is used as a verb, sometimes is used as a preposition.', 'Third, the accuracy of Chinese dependency parser still has room to be improved.', 'The CRF method performs significantly better, because it considers more contextual signals.', 'Our neural single task model achieves large improvements on both datasets.', 'This verifies the effectiveness of the end-to-end approach.', 'Neural models can see a long range of context and learn features automatically.', 'The word embeddings learned on external resources implicitly have semantic domain information, which is not only useful for generalization but also important for figurative language processing.', 'The effect of language modeling.', 'Surprisingly, using language modeling as an auxiliary task is very useful, especially when dealing with noisy sentences.', 'It gains a 1.3% F1 improvement on the gold simile sentences due to the improvement on the precision and a 3% F1 improvement on the whole test set due to a large improvement on the recall.', 'Generally, language modeling may help learn better task specific representations, especially when data size is limited (Rei, 2017).', 'Another reason may be that language modeling aims to make local predictions, the same as simile component extraction.', 'As shown in Table 4, the optimized pipeline performs better than the strongest multitask learning setting.', 'However, in all settings, the precision scores are lower compared with the recall scores.', 'This indicates that compared with identifying surface patterns, distinguishing metaphorical from literal meanings is much harder and more external knowledge should be incorporated.']
[['Rule based', 'CRF', 'Singletask (CE)', 'RandomForest → CRF', 'SingleSC → SingleCE', 'Multitask (CE+SC)', 'Multitask (CE+LM)', 'Multitask (CE+SC+LM)', 'Optimized pipeline', 'Gold simile sentences', 'Whole test set'], ['Gold simile sentences', 'Whole test set'], ['Rule based', 'CRF', 'Singletask (CE)', 'RandomForest → CRF', 'SingleSC → SingleCE', 'Multitask (CE+SC)', 'Multitask (CE+LM)', 'Multitask (CE+SC+LM)', 'Optimized pipeline'], None, None, ['Gold simile sentences', 'Whole test set'], ['Gold simile sentences', 'Whole test set'], ['CRF', 'Singletask (CE)', 'RandomForest → CRF', 'SingleSC → SingleCE'], ['Singletask (CE)', 'Multitask (CE+SC)', 'Multitask (CE+LM)', 'Multitask (CE+SC+LM)'], None, None, None, ['Gold simile sentences', 'Rule based'], ['Rule based'], ['Rule based'], ['Rule based'], ['Rule based'], ['Rule based'], ['CRF'], ['Singletask (CE)', 'Gold simile sentences', 'Whole test set'], None, None, None, None, ['Multitask (CE+LM)'], ['Multitask (CE+LM)', 'Singletask (CE)', 'F1', 'P', 'R', 'Gold simile sentences', 'Whole test set'], ['Multitask (CE+LM)'], ['Multitask (CE+LM)'], ['Optimized pipeline', 'Multitask (CE+LM)'], ['Singletask (CE)', 'RandomForest → CRF', 'SingleSC → SingleCE', 'Multitask (CE+SC)', 'Multitask (CE+LM)', 'Multitask (CE+SC+LM)', 'Optimized pipeline', 'P', 'R'], None]
1
D18-1184table_1
Results of our models (top) and previously proposed systems (bottom) on the TREC-QA test set.
2
[['Models', 'Word-level Attention'], ['Models', 'Simple Span Alignment'], ['Models', 'Simple Span Alignment + External Parser'], ['Models', 'Structured Alignment (Shared Parameters)'], ['Models', 'Structured Alignment (Separated Parameters)'], ['Models', 'QA-LSTM (Tan et al. 2016b)'], ['Models', 'Attentive Pooling Network (Santos et al. 2016)'], ['Models', 'Pairwise Word Interaction (He and Lin 2016)'], ['Models', 'Lexical Decomposition and Composition (Wang et al. 2016)'], ['Models', 'Noise-Contrastive Estimation (Rao et al. 2016)'], ['Models', 'BiMPM (Wang et al. 2017b)']]
1
[['MAP'], ['MRR']]
[['0.764', '0.842'], ['0.772', '0.851'], ['0.780', '0.846'], ['0.780', '0.860'], ['0.786', '0.860'], ['0.730', '0.824'], ['0.753', '0.851'], ['0.777', '0.836'], ['0.771', '0.845'], ['0.801', '0.877'], ['0.802', '0.875']]
column
['MAP', 'MRR']
['Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>Models || Word-level Attention</td> <td>0.764</td> <td>0.842</td> </tr> <tr> <td>Models || Simple Span Alignment</td> <td>0.772</td> <td>0.851</td> </tr> <tr> <td>Models || Simple Span Alignment + External Parser</td> <td>0.780</td> <td>0.846</td> </tr> <tr> <td>Models || Structured Alignment (Shared Parameters)</td> <td>0.780</td> <td>0.860</td> </tr> <tr> <td>Models || Structured Alignment (Separated Parameters)</td> <td>0.786</td> <td>0.860</td> </tr> <tr> <td>Models || QA-LSTM (Tan et al. 2016b)</td> <td>0.730</td> <td>0.824</td> </tr> <tr> <td>Models || Attentive Pooling Network (Santos et al. 2016)</td> <td>0.753</td> <td>0.851</td> </tr> <tr> <td>Models || Pairwise Word Interaction (He and Lin 2016)</td> <td>0.777</td> <td>0.836</td> </tr> <tr> <td>Models || Lexical Decomposition and Composition (Wang et al. 2016)</td> <td>0.771</td> <td>0.845</td> </tr> <tr> <td>Models || Noise-Contrastive Estimation (Rao et al. 2016)</td> <td>0.801</td> <td>0.877</td> </tr> <tr> <td>Models || BiMPM (Wang et al. 2017b)</td> <td>0.802</td> <td>0.875</td> </tr> </tbody></table>
Table 1
table_1
D18-1184
5
emnlp2018
Experimental results are listed in Table 1. We measure performance by the mean average precision (MAP) and mean reciprocal rank (MRR) using the standard TREC evaluation script. In the first block of Table 1, we compare our model and variants thereof against several baselines. The first baseline is the Word-level Decomposable Attention model strengthened with a bidirectional LSTM for obtaining a contextualized representation for each word. The second baseline is a Simple Span Alignment model; we use an MLP layer over the LSTM outputs to calculate the unnormalized scores and replace the inside-outside algorithm with a simple softmax function to obtain the probability distribution over all candidate spans. We also introduce a pipelined baseline where we extract constituents from trees parsed by the CoreNLP (Manning et al. 2014) constituency parser, and use the Simple Span Alignment model to only align these constituents. As shown in Table 1, we use two variants of the Structured Alignment model, since the structure of the question and the answer sentence may be different;the first model shares parameters across the question and the answer for computing the structures, while the second one uses separate parameters. We view the sentence selection task as a binary classification problem and the final ranking is based on the predicted probability of the sentence containing the correct answer (positive label). We apply dropout to the output of the BiLSTM with dropout ratio set to 0.2. All parameters (including word embeddings) are updated with AdaGrad (Duchi et al. 2011), and the learning rate is set to 0.05. Table 1 (second block) also reports the performance of various comparison systems and stateof-the-art models. As can be seen, on both MAP and MRR metrics, structured alignment models perform better than the decomposable attention model, showing that structural bias is helpful for matching a question to the correct answer sentence. We also observe that using separate parameters achieves higher scores on both metrics. The simple span alignment model obtains results similar to the decomposable attention model, suggesting that the shallow softmax distribution is ineffective for capturing structural information and may even introduce redundant noise. The pipelined model with an external parser also slightly improves upon the baseline, but still cannot outperform the end-to-end trained structured alignment model which achieves results comparable with several strong baselines with fewer parameters. As mentioned earlier, our model could be used as a plug-in component for other more complex models, and may boost their performance by modeling the latent structures. At the same time, the structured alignment can provide better interpretability for sentence matching tasks, which is a defect of most neural models.
[1, 1, 1, 2, 2, 2, 1, 2, 2, 2, 1, 1, 1, 1, 1, 2, 2]
['Experimental results are listed in Table 1.', 'We measure performance by the mean average precision (MAP) and mean reciprocal rank (MRR) using the standard TREC evaluation script.', 'In the first block of Table 1, we compare our model and variants thereof against several baselines.', 'The first baseline is the Word-level Decomposable Attention model strengthened with a bidirectional LSTM for obtaining a contextualized representation for each word.', 'The second baseline is a Simple Span Alignment model; we use an MLP layer over the LSTM outputs to calculate the unnormalized scores and replace the inside-outside algorithm with a simple softmax function to obtain the probability distribution over all candidate spans.', 'We also introduce a pipelined baseline where we extract constituents from trees parsed by the CoreNLP (Manning et al. 2014) constituency parser, and use the Simple Span Alignment model to only align these constituents.', 'As shown in Table 1, we use two variants of the Structured Alignment model, since the structure of the question and the answer sentence may be different;the first model shares parameters across the question and the answer for computing the structures, while the second one uses separate parameters.', 'We view the sentence selection task as a binary classification problem and the final ranking is based on the predicted probability of the sentence containing the correct answer (positive label).', 'We apply dropout to the output of the BiLSTM with dropout ratio set to 0.2.', 'All parameters (including word embeddings) are updated with AdaGrad (Duchi et al. 2011), and the learning rate is set to 0.05.', 'Table 1 (second block) also reports the performance of various comparison systems and stateof-the-art models.', 'As can be seen, on both MAP and MRR metrics, structured alignment models perform better than the decomposable attention model, showing that structural bias is helpful for matching a question to the correct answer sentence.', 'We also observe that using separate parameters achieves higher scores on both metrics.', 'The simple span alignment model obtains results similar to the decomposable attention model, suggesting that the shallow softmax distribution is ineffective for capturing structural information and may even introduce redundant noise.', 'The pipelined model with an external parser also slightly improves upon the baseline, but still cannot outperform the end-to-end trained structured alignment model which achieves results comparable with several strong baselines with fewer parameters.', 'As mentioned earlier, our model could be used as a plug-in component for other more complex models, and may boost their performance by modeling the latent structures.', 'At the same time, the structured alignment can provide better interpretability\nfor sentence matching tasks, which is a defect of\nmost neural models.']
[None, ['MAP', 'MRR'], ['Word-level Attention', 'Simple Span Alignment', 'Simple Span Alignment + External Parser', 'Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'], ['Word-level Attention'], ['Simple Span Alignment'], ['Simple Span Alignment + External Parser'], ['Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'], ['Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'], ['Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'], ['Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'], ['QA-LSTM (Tan et al. 2016b)', 'Attentive Pooling Network (Santos et al. 2016)', 'Pairwise Word Interaction (He and Lin 2016)', 'Lexical Decomposition and Composition (Wang et al. 2016)', 'Noise-Contrastive Estimation (Rao et al. 2016)', 'BiMPM (Wang et al. 2017b)'], ['MAP', 'MRR', 'Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'], ['Structured Alignment (Separated Parameters)'], ['Word-level Attention', 'Simple Span Alignment'], ['Simple Span Alignment + External Parser', 'Word-level Attention', 'Structured Alignment (Shared Parameters)'], ['Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'], ['Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)']]
1
D18-1185table_2
Performance comparison (accuracy) on MultiNLI and SciTail. Models with †, # and ♭ are reported from (Weissenborn, 2017), (Khot et al., 2018) and (Williams et al., 2017) respectively.
2
[['Model', 'Majority'], ['Model', 'NGRAM#'], ['Model', 'CBOW♭'], ['Model', 'BiLSTM♭'], ['Model', 'ESIM#♭'], ['Model', 'DecompAtt# -'], ['Model', 'DGEM#'], ['Model', 'DGEM + Edge#'], ['Model', 'ESIM†'], ['Model', 'ESIM + Read†'], ['Model', 'CAFE'], ['Model', 'CAFE Ensemble']]
2
[['MultiNLI', 'Match'], ['MultiNLI', 'Mismatch'], ['SciTail', '-']]
[['36.5', '35.6', '60.3'], ['-', '-', '70.6'], ['65.2', '64.8', '-'], ['69.8', '69.4', '-'], ['72.4', '72.1', '70.6'], ['-', '-', '72.3'], ['-', '-', '70.8'], ['-', '-', '77.3'], ['76.3', '75.8', '-'], ['77.8', '77.0', '-'], ['78.7', '77.9', '83.3'], ['80.2', '79.0', '-']]
column
['accuracy', 'accuracy', 'accuracy']
['CAFE', 'CAFE Ensemble']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MultiNLI || Match</th> <th>MultiNLI || Mismatch</th> <th>SciTail || -</th> </tr> </thead> <tbody> <tr> <td>Model || Majority</td> <td>36.5</td> <td>35.6</td> <td>60.3</td> </tr> <tr> <td>Model || NGRAM#</td> <td>-</td> <td>-</td> <td>70.6</td> </tr> <tr> <td>Model || CBOW♭</td> <td>65.2</td> <td>64.8</td> <td>-</td> </tr> <tr> <td>Model || BiLSTM♭</td> <td>69.8</td> <td>69.4</td> <td>-</td> </tr> <tr> <td>Model || ESIM#♭</td> <td>72.4</td> <td>72.1</td> <td>70.6</td> </tr> <tr> <td>Model || DecompAtt# -</td> <td>-</td> <td>-</td> <td>72.3</td> </tr> <tr> <td>Model || DGEM#</td> <td>-</td> <td>-</td> <td>70.8</td> </tr> <tr> <td>Model || DGEM + Edge#</td> <td>-</td> <td>-</td> <td>77.3</td> </tr> <tr> <td>Model || ESIM†</td> <td>76.3</td> <td>75.8</td> <td>-</td> </tr> <tr> <td>Model || ESIM + Read†</td> <td>77.8</td> <td>77.0</td> <td>-</td> </tr> <tr> <td>Model || CAFE</td> <td>78.7</td> <td>77.9</td> <td>83.3</td> </tr> <tr> <td>Model || CAFE Ensemble</td> <td>80.2</td> <td>79.0</td> <td>-</td> </tr> </tbody></table>
Table 2
table_2
D18-1185
7
emnlp2018
Table 2 reports our results on the MultiNLI and SciTail datasets. On MultiNLI, CAFE significantly outperforms ESIM, a strong state-of-the-art model on both settings. We also outperform the ESIM + Read model (Weissenborn, 2017). An ensemble of CAFE models achieve competitive result on the MultiNLI dataset. On SciTail, our proposed CAFE model achieves state-of-the-art performance. The performance gain over strong baselines such as DecompAtt and ESIM are ≈ 10% − 13% in terms of accuracy. CAFE also outperforms DGEM, which uses a graph-based attention for improved performance, by a significant margin of 5%. As such, empirical results demonstrate the effectiveness of our proposed CAFE model on the challenging SciTail dataset.
[1, 1, 1, 1, 1, 1, 1, 1]
['Table 2 reports our results on the MultiNLI and SciTail datasets.', 'On MultiNLI, CAFE significantly outperforms ESIM, a strong state-of-the-art model on both settings.', 'We also outperform the ESIM + Read model (Weissenborn, 2017).', 'An ensemble of CAFE models achieve competitive result on the MultiNLI dataset.', 'On SciTail, our proposed CAFE model achieves state-of-the-art performance.', 'The performance gain over strong baselines such as DecompAtt and ESIM are ≈ 10% − 13% in terms of accuracy.', 'CAFE also outperforms DGEM, which uses a graph-based attention for improved performance, by a significant margin of 5%.', 'As such, empirical results demonstrate the effectiveness of our proposed CAFE model on the challenging SciTail dataset.']
[['MultiNLI', 'SciTail'], ['MultiNLI', 'CAFE', 'ESIM†', 'ESIM#♭'], ['MultiNLI', 'CAFE', 'ESIM + Read†'], ['CAFE Ensemble', 'MultiNLI'], ['CAFE', 'SciTail'], ['CAFE', 'SciTail', 'DecompAtt# -', 'ESIM#♭'], ['SciTail', 'CAFE', 'DGEM + Edge#'], ['CAFE', 'SciTail']]
1
D18-1186table_2
Performance on SNLI dataset.
2
[['Models', 'Handcrafted features (Bowman et al. 2015)'], ['Models', 'LSTM with attention (Rocktaschel et al. 2015)'], ['Models', 'Match-LSTM (Wang and Jiang 2016)'], ['Models', 'Decomposable attention model (Parikh et al. 2016)'], ['Models', 'BiMPM (Zhiguo Wang 2017)'], ['Models', 'NTI-SLSTM-LSTM (Munkhdalai and Yu 2017)'], ['Models', 'Re-read LSTM (Sha et al. 2016)'], ['Models', 'DIIN (Gong et al. 2017)'], ['Models', 'ESIM (Chen et al. 2017a)'], ['Models', 'CIN'], ['Models', 'ESIM (Chen et al. 2017a) (Ensemble)'], ['Models', 'BiMPM (Zhiguo Wang 2017) (Ensemble)'], ['Models', 'DIIN (Gong et al. 2017) (Ensemble)'], ['Models', 'CIN (Ensemble)']]
1
[['Train'], ['Test']]
[['99.7', '78.2'], ['85.3', '83.5'], ['92.0', '86.1'], ['90.5', '86.8'], ['90.9', '87.5'], ['88.5', '87.3'], ['90.7', '87.5'], ['91.2', '88.0'], ['92.6', '88.0'], ['93.2', '88.0'], ['93.5', '88.6'], ['93.2', '88.8'], ['92.3', '88.9'], ['94.3', '89.1']]
column
['accuracy', 'accuracy']
['CIN', 'CIN (Ensemble)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Models || Handcrafted features (Bowman et al. 2015)</td> <td>99.7</td> <td>78.2</td> </tr> <tr> <td>Models || LSTM with attention (Rocktaschel et al. 2015)</td> <td>85.3</td> <td>83.5</td> </tr> <tr> <td>Models || Match-LSTM (Wang and Jiang 2016)</td> <td>92.0</td> <td>86.1</td> </tr> <tr> <td>Models || Decomposable attention model (Parikh et al. 2016)</td> <td>90.5</td> <td>86.8</td> </tr> <tr> <td>Models || BiMPM (Zhiguo Wang 2017)</td> <td>90.9</td> <td>87.5</td> </tr> <tr> <td>Models || NTI-SLSTM-LSTM (Munkhdalai and Yu 2017)</td> <td>88.5</td> <td>87.3</td> </tr> <tr> <td>Models || Re-read LSTM (Sha et al. 2016)</td> <td>90.7</td> <td>87.5</td> </tr> <tr> <td>Models || DIIN (Gong et al. 2017)</td> <td>91.2</td> <td>88.0</td> </tr> <tr> <td>Models || ESIM (Chen et al. 2017a)</td> <td>92.6</td> <td>88.0</td> </tr> <tr> <td>Models || CIN</td> <td>93.2</td> <td>88.0</td> </tr> <tr> <td>Models || ESIM (Chen et al. 2017a) (Ensemble)</td> <td>93.5</td> <td>88.6</td> </tr> <tr> <td>Models || BiMPM (Zhiguo Wang 2017) (Ensemble)</td> <td>93.2</td> <td>88.8</td> </tr> <tr> <td>Models || DIIN (Gong et al. 2017) (Ensemble)</td> <td>92.3</td> <td>88.9</td> </tr> <tr> <td>Models || CIN (Ensemble)</td> <td>94.3</td> <td>89.1</td> </tr> </tbody></table>
Table 2
table_2
D18-1186
7
emnlp2018
SNLI. Table 2 shows the results of different models on the train set and test set of SNLI. The first row gives a baseline model with handcrafted features presented by Bowman et al. (2015). All the other models are attention-based neural networks. Wang and Jiang (2016) exploits the long short-term memory (LSTM) for NLI. Parikh et al. (2016) uses attention to decompose the problem into subproblems that can be solved separately. Chen et al. (2017a) incorporates the chain LSTM and tree LSTM jointly. Zhiguo Wang (2017) proposes a bilateral multi-perspective matching for NLI. In Table 2, the second block gives the single models. As we can see, our proposed model CIN achieves 88.0% in accuracy on SNLI test set. Compared to the previous work, CIN obtains competitive performance. To further improve the performance of NLI systems, researchers have built ensemble models. Ensemble systems obtained the best performance on SNLI. Our ensemble model obtains 89.1% in accuracy and outperforms the current state-of-the-art model. Overall, single model of CIN performs competitively well and outperforms the previous models on ensemble scenarios for the natural language inference task.
[2, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 2, 1, 1, 1]
['SNLI.', 'Table 2 shows the results of different models on the train set and test set of SNLI.', 'The first row gives a baseline model with handcrafted features presented by Bowman et al. (2015).', 'All the other models are attention-based neural networks.', 'Wang and Jiang (2016) exploits the long short-term memory (LSTM) for NLI.', 'Parikh et al. (2016) uses attention to decompose the problem into subproblems that can be solved separately.', 'Chen et al. (2017a) incorporates the chain LSTM and tree LSTM jointly.', 'Zhiguo Wang (2017) proposes a bilateral multi-perspective matching for NLI.', 'In Table 2, the second block gives the single models.', 'As we can see, our proposed model CIN achieves 88.0% in accuracy on SNLI test set.', 'Compared to the previous work, CIN obtains competitive performance.', 'To further improve the performance of NLI systems, researchers have built ensemble models.', 'Ensemble systems obtained the best performance on SNLI.', 'Our ensemble model obtains 89.1% in accuracy and outperforms the current state-of-the-art model.', 'Overall, single model of CIN performs competitively well and outperforms the previous models on ensemble scenarios for the natural language inference task.']
[None, ['Handcrafted features (Bowman et al. 2015)', 'LSTM with attention (Rocktaschel et al. 2015)', 'Match-LSTM (Wang and Jiang 2016)', 'Decomposable attention model (Parikh et al. 2016)', 'BiMPM (Zhiguo Wang 2017)', 'NTI-SLSTM-LSTM (Munkhdalai and Yu 2017)', 'Re-read LSTM (Sha et al. 2016)', 'DIIN (Gong et al. 2017)', 'ESIM (Chen et al. 2017a)', 'CIN', 'ESIM (Chen et al. 2017a) (Ensemble)', 'BiMPM (Zhiguo Wang 2017) (Ensemble)', 'DIIN (Gong et al. 2017) (Ensemble)', 'CIN (Ensemble)', 'Train', 'Test'], ['Handcrafted features (Bowman et al. 2015)'], ['LSTM with attention (Rocktaschel et al. 2015)', 'Match-LSTM (Wang and Jiang 2016)', 'Decomposable attention model (Parikh et al. 2016)', 'BiMPM (Zhiguo Wang 2017)', 'NTI-SLSTM-LSTM (Munkhdalai and Yu 2017)', 'Re-read LSTM (Sha et al. 2016)', 'DIIN (Gong et al. 2017)', 'ESIM (Chen et al. 2017a)', 'CIN', 'ESIM (Chen et al. 2017a) (Ensemble)', 'BiMPM (Zhiguo Wang 2017) (Ensemble)', 'DIIN (Gong et al. 2017) (Ensemble)', 'CIN (Ensemble)'], ['Match-LSTM (Wang and Jiang 2016)'], ['Decomposable attention model (Parikh et al. 2016)'], ['ESIM (Chen et al. 2017a) (Ensemble)'], ['BiMPM (Zhiguo Wang 2017)'], ['LSTM with attention (Rocktaschel et al. 2015)', 'Match-LSTM (Wang and Jiang 2016)', 'Decomposable attention model (Parikh et al. 2016)', 'BiMPM (Zhiguo Wang 2017)', 'NTI-SLSTM-LSTM (Munkhdalai and Yu 2017)', 'Re-read LSTM (Sha et al. 2016)', 'DIIN (Gong et al. 2017)', 'ESIM (Chen et al. 2017a)', 'CIN'], ['CIN', 'Test'], ['CIN'], ['ESIM (Chen et al. 2017a) (Ensemble)', 'BiMPM (Zhiguo Wang 2017) (Ensemble)', 'DIIN (Gong et al. 2017) (Ensemble)', 'CIN (Ensemble)'], ['ESIM (Chen et al. 2017a) (Ensemble)', 'BiMPM (Zhiguo Wang 2017) (Ensemble)', 'DIIN (Gong et al. 2017) (Ensemble)', 'CIN (Ensemble)'], ['CIN (Ensemble)', 'Test'], ['CIN']]
1
D18-1186table_3
Performance on MultiNLI test set.
2
[['Models', 'BiLSTM (Williams et al. 2017)'], ['Models', 'InnerAtt (Balazs et al. 2017)'], ['Models', 'ESIM (Chen et al. 2017a)'], ['Models', 'Gated-Att BiLSTM (Chen et al. 2017b)'], ['Models', 'ESIM (Chen et al. 2017a)'], ['Models', 'CIN']]
1
[['Match'], ['Mismatch']]
[['67.0', '67.6'], ['72.1', '72.1'], ['72.3', '72.1'], ['73.2', '73.6'], ['76.3', '75.8'], ['77.0', '77.6']]
column
['accuracy', 'accuracy']
['CIN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Match</th> <th>Mismatch</th> </tr> </thead> <tbody> <tr> <td>Models || BiLSTM (Williams et al. 2017)</td> <td>67.0</td> <td>67.6</td> </tr> <tr> <td>Models || InnerAtt (Balazs et al. 2017)</td> <td>72.1</td> <td>72.1</td> </tr> <tr> <td>Models || ESIM (Chen et al. 2017a)</td> <td>72.3</td> <td>72.1</td> </tr> <tr> <td>Models || Gated-Att BiLSTM (Chen et al. 2017b)</td> <td>73.2</td> <td>73.6</td> </tr> <tr> <td>Models || ESIM (Chen et al. 2017a)</td> <td>76.3</td> <td>75.8</td> </tr> <tr> <td>Models || CIN</td> <td>77.0</td> <td>77.6</td> </tr> </tbody></table>
Table 3
table_3
D18-1186
7
emnlp2018
MultiNLI. Table 3 shows the performance of different models on MultiNLI. The original aim of this dataset is to evaluate the quality of sentence representations. Recently this dataset is also used to evaluate the interaction model involving attention mechanism. The first line of Table 3 gives a baseline model without interaction. The second block of Table 3 gives the attention-based models. The proposed model, CIN, achieves the accuracies of 77.0% and 77.6% on the match and mismatch test sets respectively. The results show that our model outperforms the other models.
[2, 1, 2, 2, 1, 1, 1, 2]
['MultiNLI.', 'Table 3 shows the performance of different models on MultiNLI.', 'The original aim of this dataset is to evaluate the quality of sentence representations.', 'Recently this dataset is also used to evaluate the interaction model involving attention mechanism.', 'The first line of Table 3 gives a baseline model without interaction.', 'The second block of Table 3 gives the attention-based models.', 'The proposed model, CIN, achieves the accuracies of 77.0% and 77.6% on the match and mismatch test sets respectively.', 'The results show that our model outperforms the other models.']
[None, ['BiLSTM (Williams et al. 2017)', 'InnerAtt (Balazs et al. 2017)', 'ESIM (Chen et al. 2017a)', 'Gated-Att BiLSTM (Chen et al. 2017b)', 'CIN'], None, None, ['BiLSTM (Williams et al. 2017)'], ['InnerAtt (Balazs et al. 2017)', 'ESIM (Chen et al. 2017a)', 'Gated-Att BiLSTM (Chen et al. 2017b)'], ['CIN', 'Match', 'Mismatch'], ['CIN', 'BiLSTM (Williams et al. 2017)', 'InnerAtt (Balazs et al. 2017)', 'ESIM (Chen et al. 2017a)', 'Gated-Att BiLSTM (Chen et al. 2017b)']]
1
D18-1186table_4
Performance on Quora question pair dataset.
2
[['Models', 'Siamese-CNN'], ['Models', 'Multi-Perspective CNN'], ['Models', 'Siamese-LSTM'], ['Models', 'Multi-Perspective-LSTM'], ['Models', 'L.D.C'], ['Models', 'BiMPM (Zhiguo Wang 2017)'], ['Models', 'CIN']]
1
[['Test']]
[['79.60'], ['81.38'], ['82.58'], ['83.21'], ['85.55'], ['88.17'], ['88.62']]
column
['accuracy']
['CIN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Models || Siamese-CNN</td> <td>79.60</td> </tr> <tr> <td>Models || Multi-Perspective CNN</td> <td>81.38</td> </tr> <tr> <td>Models || Siamese-LSTM</td> <td>82.58</td> </tr> <tr> <td>Models || Multi-Perspective-LSTM</td> <td>83.21</td> </tr> <tr> <td>Models || L.D.C</td> <td>85.55</td> </tr> <tr> <td>Models || BiMPM (Zhiguo Wang 2017)</td> <td>88.17</td> </tr> <tr> <td>Models || CIN</td> <td>88.62</td> </tr> </tbody></table>
Table 4
table_4
D18-1186
7
emnlp2018
Quora. Table 4 shows the performance of different models on the Quora test set. The baselines in Table 4 are all implemented in Zhiguo Wang (2017). The Siamese-CNN model and Siamese-LSTM model encode sentences with CNN and LSTM respectively, and then predict the relationship between them based on the cosine similarity. Multi-Perspective-CNN and Multi-Perspective-LSTM are transformed from Siamese-CNN and Siamese-LSTM respectively by replacing the cosine similarity calculation layer with their multi-perspective cosine matching function. The L.D.C is a general compare-aggregate framework that performs word-level matching followed by an aggregation of convolutional neural networks. As we can see, our model outperforms the baselines and achieves 88.62% on the test set of the Quora corpus.
[2, 1, 1, 2, 2, 2, 1]
['Quora.', 'Table 4 shows the performance of different models on the Quora test set.', 'The baselines in Table 4 are all implemented in Zhiguo Wang (2017).', 'The Siamese-CNN model and Siamese-LSTM model encode sentences with CNN and LSTM respectively, and then predict the relationship between them based on the cosine similarity.', 'Multi-Perspective-CNN and Multi-Perspective-LSTM are transformed from Siamese-CNN and Siamese-LSTM respectively by replacing the cosine similarity calculation layer with their multi-perspective cosine matching function.', 'The L.D.C is a general compare-aggregate framework that performs word-level matching followed by an aggregation of convolutional neural networks.', 'As we can see, our model outperforms the baselines and achieves 88.62% on the test set of the Quora corpus.']
[None, ['Siamese-CNN', 'Multi-Perspective CNN', 'Siamese-LSTM', 'Multi-Perspective-LSTM', 'L.D.C', 'BiMPM (Zhiguo Wang 2017)', 'CIN', 'Test'], ['Siamese-CNN', 'Multi-Perspective CNN', 'Siamese-LSTM', 'Multi-Perspective-LSTM', 'L.D.C', 'BiMPM (Zhiguo Wang 2017)'], ['Siamese-CNN', 'Siamese-LSTM'], ['Multi-Perspective CNN', 'Multi-Perspective-LSTM'], ['L.D.C'], ['CIN', 'Test', 'Siamese-CNN', 'Multi-Perspective CNN', 'Siamese-LSTM', 'Multi-Perspective-LSTM', 'L.D.C', 'BiMPM (Zhiguo Wang 2017)']]
1
D18-1194table_1
Evaluation of results on the test set.
1
[['Pipeline'], ['Variant (a)'], ['Variant (b)'], ['Variant (c)'], ['Our model']]
2
[['S metric', 'Precision'], ['S metric', 'Recall'], ['S metric', 'F1'], ['BLEU INST', '-'], ['MAE SPR', '-'], ['MAE FACT', '-']]
[['35.08', '30.10', '32.39', '15.03', 'N/A', 'N/A'], ['39.31', '32.93', '35.84', '16.74', '0.75', '1.11'], ['42.76', '33.20', '37.38', '17.71', '0.74', '1.14'], ['41.74', '33.28', '37.03', '18.01', '0.80', '1.14'], ['45.33', '33.88', '38.78', '19.61', '0.71', '1.06']]
column
['S metric', 'S metric', 'S metric', 'BLEU INST', 'MAE SPR', 'MAE FACT']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S metric || Precision</th> <th>S metric || Recall</th> <th>S metric || F1</th> <th>BLEU INST || -</th> <th>MAE SPR || -</th> <th>MAE FACT || -</th> </tr> </thead> <tbody> <tr> <td>Pipeline</td> <td>35.08</td> <td>30.10</td> <td>32.39</td> <td>15.03</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Variant (a)</td> <td>39.31</td> <td>32.93</td> <td>35.84</td> <td>16.74</td> <td>0.75</td> <td>1.11</td> </tr> <tr> <td>Variant (b)</td> <td>42.76</td> <td>33.20</td> <td>37.38</td> <td>17.71</td> <td>0.74</td> <td>1.14</td> </tr> <tr> <td>Variant (c)</td> <td>41.74</td> <td>33.28</td> <td>37.03</td> <td>18.01</td> <td>0.80</td> <td>1.14</td> </tr> <tr> <td>Our model</td> <td>45.33</td> <td>33.88</td> <td>38.78</td> <td>19.61</td> <td>0.71</td> <td>1.06</td> </tr> </tbody></table>
Table 1
table_1
D18-1194
7
emnlp2018
6.3 Results. Table 1 reports the experimental results on the test set. Results on the in-domain test set are similar and shown in Appendix D. In Table 1, S metric (defined in Section 4) measures the similarity between predicted and reference graph representations. Based on the optimal variable mapping provided by the S metric, we are able to evaluate our model and the variants in different aspects: BLEUINST measures the BLEU score of all matched instance edges; MAESPR measures the mean absolute error of SPR property scores of all matched argument edges; and MAEFACT measures the mean absolute error of factuality scores of all matched attribute edges. Overall, our proposed model outperforms the variants in every aspect. Variants (a) and (b) use simple heuristics to solve coreference, and achieve reasonable results: they both employ sequence-tosequence models to predict graph representations, which can be considered a replica of state-ofthe-art approaches for structured prediction (Choe and Charniak, 2016; Barzdins and Gosko, 2016; Peng et al., 2017). Compared to our model which employs the coreference annotating mechanism, these two variants suffer notable loss in the precision of S metric. As a result, their performance drops on the other metrics. Variant (c) only uses the encoder-side information for token representation, resulting in significant loss in MAESPR and MAEFACT. In the pipeline approach, each component is trained independently. During test, residual errors from each component are propagated through the pipeline. As expected, it shows a significant performance drop.
[2, 1, 2, 2, 2, 1, 2, 1, 1, 2, 2, 2, 1]
['6.3 Results.', 'Table 1 reports the experimental results on the test set.', 'Results on the in-domain test set are similar and shown in Appendix D.', 'In Table 1, S metric (defined in Section 4) measures the similarity between predicted and reference graph representations.', 'Based on the optimal variable mapping provided by the S metric, we are able to evaluate our model and the variants in different aspects: BLEUINST measures the BLEU score of all matched instance edges; MAESPR measures the mean absolute error of SPR property scores of all matched argument edges; and MAEFACT measures the mean absolute error of factuality scores of all matched attribute edges.', 'Overall, our proposed model outperforms the variants in every aspect.', 'Variants (a) and (b) use simple heuristics to solve coreference, and achieve reasonable results: they both employ sequence-tosequence models to predict graph representations, which can be considered a replica of state-ofthe-art approaches for structured prediction (Choe and Charniak, 2016; Barzdins and Gosko, 2016; Peng et al., 2017).', 'Compared to our model which employs the coreference annotating mechanism, these two variants suffer notable loss in the precision of S metric.', 'As a result, their performance drops on the other metrics.', 'Variant (c) only uses the encoder-side information for token representation, resulting in significant loss in MAESPR and MAEFACT.', 'In the pipeline approach, each component is trained independently.', 'During test, residual errors from each component are propagated through the pipeline.', 'As expected, it shows a significant performance drop.']
[None, None, None, ['S metric'], ['BLEU INST', 'MAE SPR', 'MAE FACT'], ['Our model', 'Variant (a)', 'Variant (b)', 'Variant (c)', 'S metric', 'BLEU INST', 'MAE SPR', 'MAE FACT'], ['Variant (a)', 'Variant (b)'], ['Our model', 'Variant (a)', 'Variant (b)', 'S metric', 'Precision'], ['Variant (a)', 'Variant (b)', 'F1'], ['MAE SPR', 'MAE FACT'], ['Pipeline'], ['Pipeline'], ['Pipeline']]
1
D18-1199table_4
The performance of MinV+NN and models without soft label on all the idioms in the two corpora
2
[['Model', 'Gibbs'], ['Model', 'EM'], ['Model', 'MinV+NN']]
1
[['Avg. Ffig'], ['Avg. Acc']]
[['0.58 (0.31 – 0.78)', '0.57 (0.4 – 0.78)'], ['0.56 (0.31 – 0.71)', '0.6 (0.42 – 0.77)'], ['0.68 (0.41 – 0.83)', '0.67 (0.55 – 0.86)']]
column
['Avg. Ffig', 'Avg. Acc']
['MinV+NN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Avg. Ffig</th> <th>Avg. Acc</th> </tr> </thead> <tbody> <tr> <td>Model || Gibbs</td> <td>0.58 (0.31 – 0.78)</td> <td>0.57 (0.4 – 0.78)</td> </tr> <tr> <td>Model || EM</td> <td>0.56 (0.31 – 0.71)</td> <td>0.6 (0.42 – 0.77)</td> </tr> <tr> <td>Model || MinV+NN</td> <td>0.68 (0.41 – 0.83)</td> <td>0.67 (0.55 – 0.86)</td> </tr> </tbody></table>
Table 4
table_4
D18-1199
8
emnlp2018
Table 4 shows the performances of the new models, which are all worse than our full models MinV+infGibbs and MinV+infEM. This highlights the advantage of integrating distributional semantic information and local features into one single learning procedure. Without the informed prior (encoded by the soft labels), the Gibbs sampling and EM algorithms only seek to maximize the probability of the observed data, and may fail to learn the underlying usage structure. The model MinV+NN is not as competitive as our full models. It is too sensitive to the selected instances. Even though the training examples are instances that MinV is the most confident about, there are still mislabelled instances. These “noisy training examples” would lead the NN classifier to make unreliable predictions.
[1, 2, 2, 1, 2, 2, 2]
['Table 4 shows the performances of the new models, which are all worse than our full models MinV+infGibbs and MinV+infEM.', 'This highlights the advantage of integrating distributional semantic information and local features into one single learning procedure.', 'Without the informed prior (encoded by the soft labels), the Gibbs sampling and EM algorithms only seek to maximize the probability of the observed data, and may fail to learn the underlying usage structure.', 'The model MinV+NN is not as competitive as our full models.', 'It is too sensitive to the selected instances.', 'Even though the training examples are instances that MinV is the most confident about, there are still mislabelled instances.', 'These “noisy training examples” would lead the NN classifier to make unreliable predictions.']
[['Gibbs', 'EM', 'MinV+NN'], None, ['Gibbs', 'EM'], ['MinV+NN'], ['MinV+NN'], ['MinV+NN'], ['MinV+NN']]
1
D18-1201table_1
Results on development set (all metrics except MR are x100). M3GM lines use TRANSE as their association model. In M3GMαr, the graph component is tuned post-hoc against the local component per relation.
3
[['System', '-', 'RULE'], ['System', '1', 'DISTMULT'], ['System', '2', 'BILIN'], ['System', '3', 'TRANSE'], ['System', '4', 'M3GM'], ['System', '5', 'M3GMαr']]
1
[['MR'], ['MRR'], ['H@10'], ['H@1']]
[['13396', '35.26', '35.27', '35.23'], ['1111', '43.29', '50.73', '39.67'], ['738', '45.36', '52.93', '41.37'], ['2231', '46.07', '55.65', '41.41'], ['2231', '47.94', '57.72', '43.26'], ['2231', '48.30', '57.59', '43.78']]
column
['MR', 'MRR', 'H@10', 'H@1']
['M3GM', 'M3GMαr']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR</th> <th>MRR</th> <th>H@10</th> <th>H@1</th> </tr> </thead> <tbody> <tr> <td>System || - || RULE</td> <td>13396</td> <td>35.26</td> <td>35.27</td> <td>35.23</td> </tr> <tr> <td>System || 1 || DISTMULT</td> <td>1111</td> <td>43.29</td> <td>50.73</td> <td>39.67</td> </tr> <tr> <td>System || 2 || BILIN</td> <td>738</td> <td>45.36</td> <td>52.93</td> <td>41.37</td> </tr> <tr> <td>System || 3 || TRANSE</td> <td>2231</td> <td>46.07</td> <td>55.65</td> <td>41.41</td> </tr> <tr> <td>System || 4 || M3GM</td> <td>2231</td> <td>47.94</td> <td>57.72</td> <td>43.26</td> </tr> <tr> <td>System || 5 || M3GMαr</td> <td>2231</td> <td>48.30</td> <td>57.59</td> <td>43.78</td> </tr> </tbody></table>
Table 1
table_1
D18-1201
7
emnlp2018
5 Results. Table 1 presents the results on the development set. Lines 1-3 depict the results for local models using averaged FastText embedding initialization, showing that the best performance in terms of MRR and top-rank hits is achieved by TRANSE. Mean Rank does not align with the other metrics;. this is an interpretable tradeoff, as both BILIN and DISTMULT have an inherent preference for correlated synset embeddings, giving a stronger fallback for cases where the relation embedding is completely off, but allowing less freedom for separating strong cases from correlated false positives, compared to a translational objective. Effect of global score. There is a clear advantage to re-ranking the top local candidates using the score signal from the M3GM model (line 4). These results are further improved when the graph score is weighted against the association component per relation (line 5). We obtain similar improvements when re-ranking the predictions from DISTMULT and BILIN.
[2, 1, 1, 1, 2, 2, 1, 1, 2]
['5 Results.', 'Table 1 presents the results on the development set.', 'Lines 1-3 depict the results for local models using averaged FastText embedding initialization, showing that the best performance in terms of MRR and top-rank hits is achieved by TRANSE.', 'Mean Rank does not align with the other metrics;.', 'this is an interpretable tradeoff, as both BILIN and DISTMULT have an inherent preference for correlated synset embeddings, giving a stronger fallback for cases where the relation embedding is completely off, but allowing less freedom for separating strong cases from correlated false positives, compared to a translational objective.', 'Effect of global score.', 'There is a clear advantage to re-ranking the top local candidates using the score signal from the M3GM model (line 4).', 'These results are further improved when the graph score is weighted against the association component per relation (line 5).', 'We obtain similar improvements when re-ranking the predictions from DISTMULT and BILIN.']
[None, None, ['DISTMULT', 'BILIN', 'TRANSE', 'MRR', 'H@10', 'H@1'], ['MR', 'MRR', 'H@10', 'H@1'], ['DISTMULT', 'BILIN'], None, ['M3GM'], ['M3GMαr'], ['DISTMULT', 'BILIN']]
1
D18-1201table_2
Main results on test set. † These models were not re-implemented, and are reported as in Nguyen et al. (2018) and in Dettmers et al. (2018).
2
[['System', 'RULE'], ['System', 'COMPLEX†'], ['System', 'CONVE†'], ['System', 'CONVKB†'], ['System', 'TRANSE'], ['System', 'M3GMαr']]
1
[['MR'], ['MRR'], ['H@10'], ['H@1']]
[['13396', '35.26', '35.26', '35.26'], ['5261', '44', '51', '41'], ['5277', '46', '48', '39'], ['2554', '24.8', '52.5', ''], ['2195', '46.59', '55.55', '42.26'], ['2193', '49.83', '59.02', '45.37']]
column
['MR', 'MRR', 'H@10', 'H@1']
['M3GMαr']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR</th> <th>MRR</th> <th>H@10</th> <th>H@1</th> </tr> </thead> <tbody> <tr> <td>System || RULE</td> <td>13396</td> <td>35.26</td> <td>35.26</td> <td>35.26</td> </tr> <tr> <td>System || COMPLEX†</td> <td>5261</td> <td>44</td> <td>51</td> <td>41</td> </tr> <tr> <td>System || CONVE†</td> <td>5277</td> <td>46</td> <td>48</td> <td>39</td> </tr> <tr> <td>System || CONVKB†</td> <td>2554</td> <td>24.8</td> <td>52.5</td> <td></td> </tr> <tr> <td>System || TRANSE</td> <td>2195</td> <td>46.59</td> <td>55.55</td> <td>42.26</td> </tr> <tr> <td>System || M3GMαr</td> <td>2193</td> <td>49.83</td> <td>59.02</td> <td>45.37</td> </tr> </tbody></table>
Table 2
table_2
D18-1201
7
emnlp2018
Table 2 shows that our main results transfer onto the test set, with even a slightly larger margin. This could be the result of the greater edge density of the combined training and dev graphs, which enhance the global coherence of the graph structure captured by M3GM features. To support this theory, we tested the M3GM model trained on only the training set, and its test set performance was roughly one point worse on all metrics, as compared with the model trained on the training+dev data.
[1, 2, 2]
['Table 2 shows that our main results transfer onto the test set, with even a slightly larger margin.', 'This could be the result of the greater edge density of the combined training and dev graphs, which enhance the global coherence of the graph structure captured by M3GM features.', 'To support this theory, we tested the M3GM model trained on only the training set, and its test set performance was roughly one point worse on all metrics, as compared with the model trained on the training+dev data.']
[None, ['M3GMαr'], ['M3GMαr']]
1
D18-1205table_5
Comparsion results of sentence selection.
2
[['Method', 'SummaRuNNer-abs'], ['Method', 'SummaRuNNer'], ['Method', 'OurExtractive'], ['Method', '– distS'], ['Method', '– distS&gateF']]
1
[['Rouge-1'], ['Rouge-2'], ['Rouge-L']]
[['37.5', '14.5', '33.4'], ['39.6', '16.2', '35.3'], ['40.41', '18.30', '36.30'], ['37.06', '16.55', '33.23'], ['36.25', '16.22', '32.59']]
column
['Rouge-1', 'Rouge-2', 'Rouge-L']
['OurExtractive']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rouge-1</th> <th>Rouge-2</th> <th>Rouge-L</th> </tr> </thead> <tbody> <tr> <td>Method || SummaRuNNer-abs</td> <td>37.5</td> <td>14.5</td> <td>33.4</td> </tr> <tr> <td>Method || SummaRuNNer</td> <td>39.6</td> <td>16.2</td> <td>35.3</td> </tr> <tr> <td>Method || OurExtractive</td> <td>40.41</td> <td>18.30</td> <td>36.30</td> </tr> <tr> <td>Method || – distS</td> <td>37.06</td> <td>16.55</td> <td>33.23</td> </tr> <tr> <td>Method || – distS&amp;gateF</td> <td>36.25</td> <td>16.22</td> <td>32.59</td> </tr> </tbody></table>
Table 5
table_5
D18-1205
7
emnlp2018
Results in Table 5 show that our simple extractive method OurExtractive significantly outperforms state-of-the-art neural extractive baselines, which demonstrates the effectiveness of the information selection component in our model. Moreover, OurExtractive significantly outperforms the two comparison systems which remove different components of our model one by one. The results show that both the gated global information filtering and distant supervision training are effective for improving information selection in document summarization. Our proposed method effectively combines the strengths of extractive methods and abstractive methods into a unified framework.
[1, 1, 2, 2]
['Results in Table 5 show that our simple extractive method OurExtractive significantly outperforms state-of-the-art neural extractive baselines, which demonstrates the effectiveness of the information selection component in our model.', 'Moreover, OurExtractive significantly outperforms the two comparison systems which remove different components of our model one by one.', 'The results show that both the gated global information filtering and distant supervision training are effective for improving information selection in document summarization.', 'Our proposed method effectively combines the strengths of extractive methods and abstractive methods into a unified framework.']
[['OurExtractive', 'SummaRuNNer-abs', 'SummaRuNNer'], ['OurExtractive', '– distS', '– distS&gateF'], ['– distS', '– distS&gateF'], ['OurExtractive']]
1
D18-1206table_1
Comparison of summarization datasets with respect to overall corpus size, size of training, validation, and test set, average document (source) and summary (target) length (in terms of words and sentences), and vocabulary size on both on source and target. For CNN and DailyMail, we used the original splits of Hermann et al. (2015) and followed Narayan et al. (2018b) to preprocess them. For NY Times (Sandhaus, 2008), we used the splits and pre-processing steps of Paulus et al. (2018). For the vocabulary, we lowercase tokens.
2
[['Datasets', 'CNN'], ['Datasets', 'DailyMail'], ['Datasets', 'NY Times'], ['Datasets', 'XSum']]
2
[['# docs', 'train'], ['# docs', 'val'], ['# docs', 'test'], ['avg. document length', 'words'], ['avg. document length', 'sentences'], ['avg. summary length', 'words'], ['avg. summary length', 'sentences'], ['vocabulary size', 'document'], ['vocabulary size', 'summary']]
[['90266', '1220', '1093', '760.50', '33.98', '45.70', '3.59', '343,516', '89,051'], ['196961', '12148', '10397', '653.33', '29.33', '54.65', '3.86', '563,663', '179,966'], ['589284', '32736', '32739', '800.04', '35.55', '45.54', '2.44', '1,399,358', '294,011'], ['204045', '11332', '11334', '431.07', '19.77', '23.26', '1.00', '399,147', '81,092']]
column
['# docs', '# docs', '# docs', 'avg. document length', 'avg. document length', 'avg. summary length', 'avg. summary length', 'vocabulary size', 'vocabulary size']
['XSum']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th># docs || train</th> <th># docs || val</th> <th># docs || test</th> <th>avg. document length || words</th> <th>avg. document length || sentences</th> <th>avg. summary length || words</th> <th>avg. summary length || sentences</th> <th>vocabulary size || document</th> <th>vocabulary size || summary</th> </tr> </thead> <tbody> <tr> <td>Datasets || CNN</td> <td>90266</td> <td>1220</td> <td>1093</td> <td>760.50</td> <td>33.98</td> <td>45.70</td> <td>3.59</td> <td>343,516</td> <td>89,051</td> </tr> <tr> <td>Datasets || DailyMail</td> <td>196961</td> <td>12148</td> <td>10397</td> <td>653.33</td> <td>29.33</td> <td>54.65</td> <td>3.86</td> <td>563,663</td> <td>179,966</td> </tr> <tr> <td>Datasets || NY Times</td> <td>589284</td> <td>32736</td> <td>32739</td> <td>800.04</td> <td>35.55</td> <td>45.54</td> <td>2.44</td> <td>1,399,358</td> <td>294,011</td> </tr> <tr> <td>Datasets || XSum</td> <td>204045</td> <td>11332</td> <td>11334</td> <td>431.07</td> <td>19.77</td> <td>23.26</td> <td>1.00</td> <td>399,147</td> <td>81,092</td> </tr> </tbody></table>
Table 1
table_1
D18-1206
3
emnlp2018
Table 1 compares XSum with the CNN, DailyMail, and NY Times benchmarks. As can be seen, XSum contains a substantial number of training instances, similar to DailyMail; documents and summaries in XSum are shorter in relation to other datasets but the vocabulary size is sufficiently large, comparable to CNN.
[1, 1]
['Table 1 compares XSum with the CNN, DailyMail, and NY Times benchmarks.', 'As can be seen, XSum contains a substantial number of training instances, similar to DailyMail; documents and summaries in XSum are shorter in relation to other datasets but the vocabulary size is sufficiently large, comparable to CNN.']
[['CNN', 'DailyMail', 'NY Times', 'XSum'], ['XSum', '# docs', 'avg. summary length', 'vocabulary size', 'DailyMail', 'CNN']]
1
D18-1206table_2
Corpus bias towards extractive methods in the CNN, DailyMail, NY Times, and XSum datasets. We show the proportion of novel n-grams in gold summaries. We also report ROUGE scores for the LEAD baseline and the extractive oracle system EXT-ORACLE. Results are computed on the test set.
2
[['Datasets', 'CNN'], ['Datasets', 'DailyMail'], ['Datasets', 'NY Times'], ['Datasets', 'XSum']]
2
[['% of novel n-grams in gold summary', 'unigrams'], ['% of novel n-grams in gold summary', 'bigrams'], ['% of novel n-grams in gold summary', 'trigrams'], ['% of novel n-grams in gold summary', '4-grams'], ['LEAD', 'R1'], ['LEAD', 'R2'], ['LEAD', 'RL'], ['EXT-ORACLE', 'R1'], ['EXT-ORACLE', 'R2'], ['EXT-ORACLE', 'RL']]
[['16.75', '54.33', '72.42', '80.37', '29.15', '11.13', '25.95', '50.38', '28.55', '46.58'], ['17.03', '53.78', '72.14', '80.28', '40.68', '18.36', '37.25', '55.12', '30.55', '51.24'], ['22.64', '55.59', '71.93', '80.16', '31.85', '15.86', '23.75', '52.08', '31.59', '46.72'], ['35.76', '83.45', '95.50', '98.49', '16.30', '1.61', '11.95', '29.79', '8.81', '22.65']]
column
['unigrams', 'bigrams', 'trigrams', '4-grams', 'R1', 'R2', 'RL', 'R1', 'R2', 'RL']
['XSum']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>% of novel n-grams in gold summary || unigrams</th> <th>% of novel n-grams in gold summary || bigrams</th> <th>% of novel n-grams in gold summary || trigrams</th> <th>% of novel n-grams in gold summary || 4-grams</th> <th>LEAD || R1</th> <th>LEAD || R2</th> <th>LEAD || RL</th> <th>EXT-ORACLE || R1</th> <th>EXT-ORACLE || R2</th> <th>EXT-ORACLE || RL</th> </tr> </thead> <tbody> <tr> <td>Datasets || CNN</td> <td>16.75</td> <td>54.33</td> <td>72.42</td> <td>80.37</td> <td>29.15</td> <td>11.13</td> <td>25.95</td> <td>50.38</td> <td>28.55</td> <td>46.58</td> </tr> <tr> <td>Datasets || DailyMail</td> <td>17.03</td> <td>53.78</td> <td>72.14</td> <td>80.28</td> <td>40.68</td> <td>18.36</td> <td>37.25</td> <td>55.12</td> <td>30.55</td> <td>51.24</td> </tr> <tr> <td>Datasets || NY Times</td> <td>22.64</td> <td>55.59</td> <td>71.93</td> <td>80.16</td> <td>31.85</td> <td>15.86</td> <td>23.75</td> <td>52.08</td> <td>31.59</td> <td>46.72</td> </tr> <tr> <td>Datasets || XSum</td> <td>35.76</td> <td>83.45</td> <td>95.50</td> <td>98.49</td> <td>16.30</td> <td>1.61</td> <td>11.95</td> <td>29.79</td> <td>8.81</td> <td>22.65</td> </tr> </tbody></table>
Table 2
table_2
D18-1206
3
emnlp2018
Table 2 provides empirical analysis supporting our claim that XSum is less biased toward extractive methods compared to other summarization datasets. We report the percentage of novel n-grams in the target gold summaries that do not appear in their source documents. There are 36% novel unigrams in the XSum reference summaries compared to 17% in CNN, 17% in DailyMail, and 23% in NY Times. This indicates that XSum summaries are more abstractive. The proportion of novel constructions grows for larger n-grams across datasets, however, it is much steeper in XSum whose summaries exhibit approximately 83% novel bigrams, 96% novel trigrams, and 98% novel 4-grams (comparison datasets display around 47–55% new bigrams, 58–72% new trigrams, and 63–80% novel 4-grams). We further evaluated two extractive methods on these datasets. LEAD is often used as a strong lower bound for news summarization (Nenkova, 2005) and creates a summary by selecting the first few sentences or words in the document. We extracted the first 3 sentences for CNN documents and the first 4 sentences for DailyMail (Narayan et al., 2018b). Following previous work (Durrett et al., 2016; Paulus et al., 2018), we obtained LEAD summaries based on the first 100 words for NY Times documents. For XSum, we selected the first sentence in the document (excluding the one-line summary) to generate the LEAD. Our second method, EXT-ORACLE, can be viewed as an upper bound for extractive models (Nallapati et al., 2017; Narayan et al., 2018b). It creates an oracle summary by selecting the best possible set of sentences in the document that gives the highest ROUGE (Lin and Hovy, 2003) with respect to the gold summary. For XSum, we simply selected the single-best sentence in the document as summary. Table 2 reports the performance of the two extractive methods using ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) with the gold summaries as reference. The LEAD baseline performs extremely well on CNN, DailyMail and NY Times confirming that they are biased towards extractive methods. EXT-ORACLE further shows that improved sentence selection would bring further performance gains to extractive approaches. Abstractive systems trained on these datasets often have a hard time beating the LEAD, let alone EXT-ORACLE, or display a low degree of novelty in their summaries (See et al., 2017; Tan and Wan, 2017; Paulus et al., 2018; Pasunuru and Bansal, 2018; Celikyilmaz et al., 2018). Interestingly, LEAD and EXT-ORACLE perform poorly on XSum underlying the fact that it is less biased towards extractive methods.
[1, 1, 1, 1, 1, 1, 2, 2, 0, 2, 2, 2, 2, 1, 1, 1, 2, 1]
['Table 2 provides empirical analysis supporting our claim that XSum is less biased toward extractive methods compared to other summarization datasets.', 'We report the percentage of novel n-grams in the target gold summaries that do not appear in their source documents.', 'There are 36% novel unigrams in the XSum reference summaries compared to 17% in CNN, 17% in DailyMail, and 23% in NY Times.', 'This indicates that XSum summaries are more abstractive.', 'The proportion of novel constructions grows for larger n-grams across datasets, however, it is much steeper in XSum whose summaries exhibit approximately 83% novel bigrams, 96% novel trigrams, and 98% novel 4-grams (comparison datasets display around 47–55% new bigrams, 58–72% new trigrams, and 63–80% novel 4-grams).', 'We further evaluated two extractive methods on these datasets.', 'LEAD is often used as a strong lower bound for news summarization (Nenkova, 2005) and creates a summary by selecting the first few sentences or words in the document.', 'We extracted the first 3 sentences for CNN documents and the first 4 sentences for DailyMail (Narayan et al., 2018b).', 'Following previous work (Durrett et al., 2016; Paulus et al., 2018), we obtained LEAD summaries based on the first 100 words for NY Times documents.', 'For XSum, we selected the first sentence in the document (excluding the one-line summary) to generate the LEAD.', 'Our second method, EXT-ORACLE, can be viewed as an upper bound for extractive models (Nallapati et al., 2017; Narayan et al., 2018b).', 'It creates an oracle summary by selecting the best possible set of sentences in the document that gives the highest ROUGE (Lin and Hovy, 2003) with respect to the gold summary.', 'For XSum, we simply selected the single-best sentence in the document as summary.', 'Table 2 reports the performance of the two extractive methods using ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) with the gold summaries as reference.', 'The LEAD baseline performs extremely well on CNN, DailyMail and NY Times confirming that they are biased towards extractive methods.', 'EXT-ORACLE further shows that improved sentence selection would bring further performance gains to extractive approaches.', 'Abstractive systems trained on these datasets often have a hard time beating the LEAD, let alone EXT-ORACLE, or display a low degree of novelty in their summaries (See et al., 2017; Tan and Wan, 2017; Paulus et al., 2018; Pasunuru and Bansal, 2018; Celikyilmaz et al., 2018).', 'Interestingly, LEAD and EXT-ORACLE perform poorly on XSum underlying the fact that it is less biased towards extractive methods.']
[['CNN', 'DailyMail', 'NY Times', 'XSum'], ['% of novel n-grams in gold summary'], ['CNN', 'DailyMail', 'NY Times', 'XSum', 'unigrams'], ['XSum'], ['XSum', 'bigrams', 'trigrams', '4-grams'], ['LEAD', 'EXT-ORACLE'], ['LEAD'], ['LEAD', 'CNN', 'DailyMail'], None, ['XSum', 'LEAD'], ['EXT-ORACLE'], ['EXT-ORACLE'], ['XSum', 'EXT-ORACLE'], ['LEAD', 'EXT-ORACLE', 'R1', 'R2', 'RL'], ['LEAD', 'CNN', 'DailyMail', 'NY Times'], ['EXT-ORACLE'], ['LEAD', 'EXT-ORACLE'], ['XSum', 'LEAD', 'EXT-ORACLE']]
1
D18-1206table_4
ROUGE results on XSum test set. We report ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) F1 scores. Extractive systems are in the upper block, RNN-based abstractive systems are in the middle block, and convolutional abstractive systems are in the bottom block.
2
[['Models', 'Random'], ['Models', 'LEAD'], ['Models', 'EXT-ORACLE'], ['Models', 'SEQ2SEQ'], ['Models', 'PTGEN'], ['Models', 'PTGEN-COVG'], ['Models', 'CONVS2S'], ['Models', 'T-CONVS2SS (enct)'], ['Models', 'T-CONVS2S (enct dectD)'], ['Models', 'T-CONVS2S (enc(t tD))'], ['Models', 'T-CONVS2S (enc(t tD) dectD)']]
1
[['R1'], ['R2'], ['RL']]
[['15.16', '1.78', '11.27'], ['16.30', '1.60', '11.95'], ['29.79', '8.81', '22.66'], ['28.42', '8.77', '22.48'], ['29.70', '9.21', '23.24'], ['28.10', '8.02', '21.72'], ['31.27', '11.07', '25.23'], ['31.71', '11.38', '25.56'], ['31.71', '11.34', '25.61'], ['31.61', '11.30', '25.51'], ['31.89', '11.54', '25.75']]
column
['R1', 'R2', 'RL']
['T-CONVS2SS (enct)', 'T-CONVS2S (enct dectD)', 'T-CONVS2S (enc(t tD))', 'T-CONVS2S (enc(t tD) dectD)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>RL</th> </tr> </thead> <tbody> <tr> <td>Models || Random</td> <td>15.16</td> <td>1.78</td> <td>11.27</td> </tr> <tr> <td>Models || LEAD</td> <td>16.30</td> <td>1.60</td> <td>11.95</td> </tr> <tr> <td>Models || EXT-ORACLE</td> <td>29.79</td> <td>8.81</td> <td>22.66</td> </tr> <tr> <td>Models || SEQ2SEQ</td> <td>28.42</td> <td>8.77</td> <td>22.48</td> </tr> <tr> <td>Models || PTGEN</td> <td>29.70</td> <td>9.21</td> <td>23.24</td> </tr> <tr> <td>Models || PTGEN-COVG</td> <td>28.10</td> <td>8.02</td> <td>21.72</td> </tr> <tr> <td>Models || CONVS2S</td> <td>31.27</td> <td>11.07</td> <td>25.23</td> </tr> <tr> <td>Models || T-CONVS2SS (enct')</td> <td>31.71</td> <td>11.38</td> <td>25.56</td> </tr> <tr> <td>Models || T-CONVS2S (enct' dectD)</td> <td>31.71</td> <td>11.34</td> <td>25.61</td> </tr> <tr> <td>Models || T-CONVS2S (enc(t' tD))</td> <td>31.61</td> <td>11.30</td> <td>25.51</td> </tr> <tr> <td>Models || T-CONVS2S (enc(t' tD) dectD)</td> <td>31.89</td> <td>11.54</td> <td>25.75</td> </tr> </tbody></table>
Table 4
table_4
D18-1206
7
emnlp2018
Automatic Evaluation. We report results using automatic metrics in Table 4. We evaluated summarization quality using F1 ROUGE (Lin and Hovy, 2003). Unigram and bigram overlap (ROUGE-1 and ROUGE-2) are a proxy for assessing informativeness and the longest common subsequence (ROUGE-L) represents fluency. On the XSum dataset, SEQ2SEQ outperforms the LEAD and RANDOM baselines by a large margin. PTGEN, a SEQ2SEQ model with a “copying” mechanism outperforms EXT-ORACLE, a “perfect” extractive system on ROUGE-2 and ROUGE-L. This is in sharp contrast to the performance of these models on CNN/DailyMail (See et al., 2017) and Newsroom datasets (Grusky et al., 2018), where they fail to outperform the LEAD. The result provides further evidence that XSum is a good testbed for abstractive summarization. PTGEN-COVG, the best performing abstractive system on the CNN/DailyMail datasets, does not do well. We believe that the coverage mechanism is more useful when generating multi-line summaries and is basically redundant for extreme summarization. CONVS2S, the convolutional variant of SEQ2SEQ, significantly outperforms all RNN-based abstractive systems. We hypothesize that its superior performance stems from the ability to better represent document content (i.e., by capturing long-range dependencies). Table 4 shows several variants of T-CONVS2S including an encoder network enriched with information about how topical a word is on its own (enct') or in the document (enc(t',tD)). We also experimented with various decoders by conditioning every prediction on the topic of the document, basically encouraging the summary to be in the same theme as the document (dectD) or letting the decoder decide the theme of the summary. Interestingly, all four T-CONVS2S variants outperform CONVS2S. T-CONVS2S performs best when both encoder and decoder are constrained by the document topic (enc(t',tD),dectD).
[2, 1, 1, 2, 1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 1, 1]
['Automatic Evaluation.', 'We report results using automatic metrics in Table 4.', 'We evaluated summarization quality using F1 ROUGE (Lin and Hovy, 2003).', 'Unigram and bigram overlap (ROUGE-1 and ROUGE-2) are a proxy for assessing informativeness and the longest common subsequence (ROUGE-L) represents fluency.', 'On the XSum dataset, SEQ2SEQ outperforms the LEAD and RANDOM baselines by a large margin.', 'PTGEN, a SEQ2SEQ model with a “copying” mechanism outperforms EXT-ORACLE, a “perfect” extractive system on ROUGE-2 and ROUGE-L.', 'This is in sharp contrast to the performance of these models on CNN/DailyMail (See et al., 2017) and Newsroom datasets (Grusky et al., 2018), where they fail to outperform the LEAD.', 'The result provides further evidence that XSum is a good testbed for abstractive summarization.', 'PTGEN-COVG, the best performing abstractive system on the CNN/DailyMail datasets, does not do well.', 'We believe that the coverage mechanism is more useful when generating multi-line summaries and is basically redundant for extreme summarization.', 'CONVS2S, the convolutional variant of SEQ2SEQ, significantly outperforms all RNN-based abstractive systems.', 'We hypothesize that its superior performance stems from the ability to better represent document content (i.e., by capturing long-range dependencies).', "Table 4 shows several variants of T-CONVS2S including an encoder network enriched with information about how topical a word is on its own (enct') or in the document (enc(t',tD)).", 'We also experimented with various decoders by conditioning every prediction on the topic of the document, basically encouraging the summary to be in the same theme as the document (dectD) or letting the decoder decide the theme of the summary.', 'Interestingly, all four T-CONVS2S variants outperform CONVS2S.', "T-CONVS2S performs best when both encoder and decoder are constrained by the document topic (enc(t',tD),dectD)."]
[None, None, ['R1', 'R2', 'RL'], ['R1', 'R2', 'RL'], ['SEQ2SEQ', 'Random', 'LEAD'], ['PTGEN', 'EXT-ORACLE', 'R2', 'RL'], ['PTGEN', 'EXT-ORACLE', 'LEAD'], None, ['PTGEN-COVG'], ['PTGEN-COVG'], ['CONVS2S', 'SEQ2SEQ', 'PTGEN', 'PTGEN-COVG'], ['CONVS2S'], ['T-CONVS2SS (enct)', 'T-CONVS2S (enc(t tD))'], ['T-CONVS2S (enct dectD)', 'T-CONVS2S (enc(t tD) dectD)'], ['CONVS2S', 'T-CONVS2SS (enct)', 'T-CONVS2S (enct dectD)', 'T-CONVS2S (enc(t tD))', 'T-CONVS2S (enc(t tD) dectD)'], ['T-CONVS2S (enc(t tD) dectD)']]
1
D18-1208table_3
ROUGE-2 recall across sentence extractors when using fixed pretrained embeddings or when embeddings are updated during training. In both cases embeddings are initialized with pretrained GloVe embeddings. All extractors use the averaging sentence encoder. When both learned and fixed settings are bolded, there is no signifcant performance difference. RNN extractor is omitted for space but is similar to Seq2Seq. Difference in scores shown in parenthesis.
4
[['Ext.', 'Seq2Seq', 'Emb.', 'Fixed'], ['Ext.', 'Seq2Seq', 'Emb.', 'Learn'], ['Ext.', 'C&L', 'Emb.', 'Fixed'], ['Ext.', 'C&L', 'Emb.', 'Learn'], ['Ext.', 'Summa', 'Emb.', 'Fixed'], ['Ext.', 'Runner', 'Emb.', 'Learn']]
1
[['CNN/DM'], ['NYT'], ['DUC'], ['Reddit'], ['AMI'], [' PubMed']]
[['25.6', '35.7', '22.8', '13.6', '5.5', '17.7'], ['25.3 (0.3)', '35.7 (0.0)', '22.9 (-0.1)', '13.8 (-0.2)', '5.8 (-0.3)', '16.9 (0.8)'], ['25.3', '35.6', '23.1', '13.6', '6.1', '17.7'], ['24.9 (0.4)', '35.4 (0.2)', '23.0 (0.1)', '13.4 (0.2)', '6.2 (-0.1)', '16.4 (1.3)'], ['25.4', '35.4', '22.3', '13.4', '5.6', '17.2'], ['25.1 (0.3)', '35.2 (0.2)', '22.2 (0.1)', '12.6 (0.8)', '5.8 (-0.2)', '16.8 (0.4)']]
column
['ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2']
['Fixed', 'Learn']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CNN/DM</th> <th>NYT</th> <th>DUC</th> <th>Reddit</th> <th>AMI</th> <th>PubMed</th> </tr> </thead> <tbody> <tr> <td>Ext. || Seq2Seq || Emb. || Fixed</td> <td>25.6</td> <td>35.7</td> <td>22.8</td> <td>13.6</td> <td>5.5</td> <td>17.7</td> </tr> <tr> <td>Ext. || Seq2Seq || Emb. || Learn</td> <td>25.3 (0.3)</td> <td>35.7 (0.0)</td> <td>22.9 (-0.1)</td> <td>13.8 (-0.2)</td> <td>5.8 (-0.3)</td> <td>16.9 (0.8)</td> </tr> <tr> <td>Ext. || C&amp;L || Emb. || Fixed</td> <td>25.3</td> <td>35.6</td> <td>23.1</td> <td>13.6</td> <td>6.1</td> <td>17.7</td> </tr> <tr> <td>Ext. || C&amp;L || Emb. || Learn</td> <td>24.9 (0.4)</td> <td>35.4 (0.2)</td> <td>23.0 (0.1)</td> <td>13.4 (0.2)</td> <td>6.2 (-0.1)</td> <td>16.4 (1.3)</td> </tr> <tr> <td>Ext. || Summa || Emb. || Fixed</td> <td>25.4</td> <td>35.4</td> <td>22.3</td> <td>13.4</td> <td>5.6</td> <td>17.2</td> </tr> <tr> <td>Ext. || Runner || Emb. || Learn</td> <td>25.1 (0.3)</td> <td>35.2 (0.2)</td> <td>22.2 (0.1)</td> <td>12.6 (0.8)</td> <td>5.8 (-0.2)</td> <td>16.8 (0.4)</td> </tr> </tbody></table>
Table 3
table_3
D18-1208
6
emnlp2018
Word Embedding Learning. Given that learning a sentence encoder (averaging has no learned parameters) does not yield significant improvement, it is natural to consider whether learning word embeddings is also necessary. In Table 3 we compare the performance of different extractors using the averaging encoder, when the word embeddings are held fixed or learned during training. In both cases, word embeddings are initialized with GloVe embeddings trained on a combination of Gigaword and Wikipedia. When learning embeddings, words occurring fewer than three times in the training data are mapped to an unknown token (with learned embedding). In all but one case, fixed embeddings are as good or better than the learned embeddings. This is a somewhat surprising finding on the CNN/DM data since it is reasonably large, and learning embeddings should give the models more flexibility to identify important word features. This suggests that we cannot extract much generalizable learning signal from the content other than what is already present from initialization. Even on PubMed, where the language is quite different from the news/Wikipedia articles the GloVe embeddings were trained on, learning leads to significantly worse results.
[2, 2, 1, 2, 0, 1, 2, 2, 2]
['Word Embedding Learning.', 'Given that learning a sentence encoder (averaging has no learned parameters) does not yield significant improvement, it is natural to consider whether learning word embeddings is also necessary.', 'In Table 3 we compare the performance of different extractors using the averaging encoder, when the word embeddings are held fixed or learned during training.', 'In both cases, word embeddings are initialized with GloVe embeddings trained on a combination of Gigaword and Wikipedia.', 'When learning embeddings, words occurring fewer than three times in the training data are mapped to an unknown token (with learned embedding).', 'In all but one case, fixed embeddings are as good or better than the learned embeddings.', 'This is a somewhat surprising finding on the CNN/DM data since it is reasonably large, and learning embeddings should give the models more flexibility to identify important word features.', 'This suggests that we cannot extract much generalizable learning signal from the content other than what is already present from initialization.', 'Even on PubMed, where the language is quite different from the news/Wikipedia articles the GloVe embeddings were trained on, learning leads to significantly worse results.']
[None, None, ['Seq2Seq', 'C&L', 'Summa', 'Runner', 'Learn', 'Fixed'], ['Learn', 'Fixed'], None, ['Fixed'], None, None, [' PubMed']]
1
D18-1208table_5
ROUGE-2 recall using models trained on in-order and shuffled documents. Extractor uses the averaging sentence encoder. When both in-order and shuffled settings are bolded, there is no signifcant performance difference. Difference in scores shown in parenthesis.
4
[['Ext.', 'Seq2Seq', 'Order', 'In-Order'], ['Ext.', 'Seq2Seq', 'Order', 'Shuffled']]
1
[['CNN/DM'], ['NYT'], ['DUC'], ['Reddit'], ['AMI'], ['PubMed']]
[['25.6', '35.7', '22.8', '13.6', '5.5', '17.7'], ['21.7 (3.9)', '25.6 (10.1)', '21.2 (1.6)', '13.5 (0.1)', '6.0 (-0.5)', '14.9 (2.8)']]
column
['ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2']
['In-Order', 'Shuffled']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CNN/DM</th> <th>NYT</th> <th>DUC</th> <th>Reddit</th> <th>AMI</th> <th>PubMed</th> </tr> </thead> <tbody> <tr> <td>Ext. || Seq2Seq || Order || In-Order</td> <td>25.6</td> <td>35.7</td> <td>22.8</td> <td>13.6</td> <td>5.5</td> <td>17.7</td> </tr> <tr> <td>Ext. || Seq2Seq || Order || Shuffled</td> <td>21.7 (3.9)</td> <td>25.6 (10.1)</td> <td>21.2 (1.6)</td> <td>13.5 (0.1)</td> <td>6.0 (-0.5)</td> <td>14.9 (2.8)</td> </tr> </tbody></table>
Table 5
table_5
D18-1208
7
emnlp2018
Table 5 shows the results of the shuffling experiments. The news domains and PubMed suffer a significant drop in performance when the document order is shuffled. By comparison, there is no significant difference between the shuffled and in-order models on the Reddit domain, and shuffling actually improves performance on AMI. This suggests that position is being learned by the models in the news/journal article domain even when the model has no explicit position features, and that this feature is more important than either content or function words.
[1, 1, 1, 2]
['Table 5 shows the results of the shuffling experiments.', 'The news domains and PubMed suffer a significant drop in performance when the document order is shuffled.', 'By comparison, there is no significant difference between the shuffled and in-order models on the Reddit domain, and shuffling actually improves performance on AMI.', 'This suggests that position is being learned by the models in the news/journal article domain even when the model has no explicit position features, and that this feature is more important than either content or function words.']
[None, ['Shuffled', 'CNN/DM', 'NYT', 'PubMed'], ['In-Order', 'Shuffled', 'Reddit', 'AMI'], None]
1
D18-1215table_2
Comparison of sample precision and absolute recall (all instances and unique entity tuples) in test extraction on PMC. DPL + EMB is our full system using PubMed-trained word embedding, whereas DPL uses the original Wikipedia-trained word embedding in Peng et al. (2017). Ablation: DS (distant supervision), DP (data programming), JI (joint inference).
2
[['System', 'Peng 2017'], ['System', 'DPL + EMB'], ['System', 'DPL'], ['System', 'DPL -DS'], ['System', 'DPL -DP'], ['System', 'DPL -DP (ENTITY)'], ['System', 'DPL -JI']]
1
[['Prec.'], ['Abs. Rec.'], ['Unique']]
[['0.64', '6768', '2738'], ['0.74', '8478', '4821'], ['0.73', '7666', '4144'], ['0.29', '7555', '4912'], ['0.67', '4826', '2629'], ['0.70', '7638', '4074'], ['0.72', '7418', '4011']]
column
['Prec.', 'Abs. Rec.', 'Unique']
['DPL + EMB', 'DPL', 'DPL -DS', 'DPL -DP', 'DPL -DP (ENTITY)', 'DPL -JI']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Abs. Rec.</th> <th>Unique</th> </tr> </thead> <tbody> <tr> <td>System || Peng 2017</td> <td>0.64</td> <td>6768</td> <td>2738</td> </tr> <tr> <td>System || DPL + EMB</td> <td>0.74</td> <td>8478</td> <td>4821</td> </tr> <tr> <td>System || DPL</td> <td>0.73</td> <td>7666</td> <td>4144</td> </tr> <tr> <td>System || DPL -DS</td> <td>0.29</td> <td>7555</td> <td>4912</td> </tr> <tr> <td>System || DPL -DP</td> <td>0.67</td> <td>4826</td> <td>2629</td> </tr> <tr> <td>System || DPL -DP (ENTITY)</td> <td>0.70</td> <td>7638</td> <td>4074</td> </tr> <tr> <td>System || DPL -JI</td> <td>0.72</td> <td>7418</td> <td>4011</td> </tr> </tbody></table>
Table 2
table_2
D18-1215
7
emnlp2018
old in all cases (an instance is classified as positive if the normalized probability score is at least 0.5). For each system, sample precision was estimated by sampling 100 positive extractions and manually determining the proportion of correct extractions by an author knowledgeable about this domain. Absolute recall is estimated by multiplying sample precision with the number of positive extractions. Table 2 shows the results. DPL substantially outperformed Peng et al. (2017), improving sample precision by ten absolute points and raising absolute recall by 25%. Combining disparate indirect supervision strategies is key to this performance gain, as evident from the ablation results. While distant supervision remained the most potent source of indirect supervision, data programming and joint inference each contributed significantly. Replacing out-of-domain (Wikipedia) word embedding with in-domain (PubMed) word embedding (Pyysalo et al., 2013) also led to a small gain.
[2, 2, 2, 1, 1, 1, 1, 1]
['old in all cases (an instance is classified as positive if the normalized probability score is at least 0.5).', 'For each system, sample precision was estimated by sampling 100 positive extractions and manually determining the proportion of correct extractions by an author knowledgeable about this domain.', 'Absolute recall is estimated by multiplying sample precision with the number of positive extractions.', 'Table 2 shows the results.', 'DPL substantially outperformed Peng et al. (2017), improving sample precision by ten absolute points and raising absolute recall by 25%.', 'Combining disparate indirect supervision strategies is key to this performance gain, as evident from the ablation results.', 'While distant supervision remained the most potent source of indirect supervision, data programming and joint inference each contributed significantly.', 'Replacing out-of-domain (Wikipedia) word embedding with in-domain (PubMed) word embedding (Pyysalo et al., 2013) also led to a small gain.']
[None, None, None, None, ['DPL + EMB', 'Peng 2017', 'Prec.', 'Abs. Rec.'], ['DPL', 'DPL -DS', 'DPL -DP', 'DPL -DP (ENTITY)', 'DPL -JI'], ['DPL -DS', 'DPL -DP', 'DPL -DP (ENTITY)', 'DPL -JI'], ['DPL + EMB']]
1
D18-1215table_5
Comparison of gene entity linking results on a balanced test set. The string-matching baseline has low precision. By combining indirect supervision strategies, DPL substantially improved precision while retaining reasonably high recall.
2
[['System', 'String Match'], ['System', 'DS'], ['System', 'DS + DP'], ['System', 'DS + DP + JI']]
1
[['Acc.'], ['F1'], ['Prec.'], ['Rec.']]
[['0.18', '0.31', '0.18', '1.00'], ['0.64', '0.71', '0.62', '0.83'], ['0.66', '0.71', '0.62', '0.83'], ['0.70', '0.76', '0.68', '0.86']]
column
['Acc.', 'F1', 'Prec.', 'Rec.']
['DS', 'DS + DP', 'DS + DP + JI']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>F1</th> <th>Prec.</th> <th>Rec.</th> </tr> </thead> <tbody> <tr> <td>System || String Match</td> <td>0.18</td> <td>0.31</td> <td>0.18</td> <td>1.00</td> </tr> <tr> <td>System || DS</td> <td>0.64</td> <td>0.71</td> <td>0.62</td> <td>0.83</td> </tr> <tr> <td>System || DS + DP</td> <td>0.66</td> <td>0.71</td> <td>0.62</td> <td>0.83</td> </tr> <tr> <td>System || DS + DP + JI</td> <td>0.70</td> <td>0.76</td> <td>0.68</td> <td>0.86</td> </tr> </tbody></table>
Table 5
table_5
D18-1215
8
emnlp2018
Experiment results. For evaluation, we annotated a larger set of sample gene-mention candidates and then subsampled a balanced test set of 550 instances (half are true gene mentions, half not). These instances were excluded from training and development. Table 5 compares system performance on this test set. The string-matching baseline has a very low precision, as gene mentions are highly ambiguous, which explains why Peng et al. (2017) resorted to heavy filtering. By combining indirect supervision strategies, DPL improved precision by over 50 absolute points, while retaining a reasonably high recall (86%). All indirect supervision strategies contributed significantly, as the ablation tests show.
[2, 2, 2, 1, 1, 1, 2]
['Experiment results.', 'For evaluation, we annotated a larger set of sample gene-mention candidates and then subsampled a balanced test set of 550 instances (half are true gene mentions, half not).', 'These instances were excluded from training and development.', 'Table 5 compares system performance on this test set.', 'The string-matching baseline has a very low precision, as gene mentions are highly ambiguous, which explains why Peng et al. (2017) resorted to heavy filtering.', 'By combining indirect supervision strategies, DPL improved precision by over 50 absolute points, while retaining a reasonably high recall (86%).', 'All indirect supervision strategies contributed significantly, as the ablation tests show.']
[None, None, None, ['String Match', 'DS', 'DS + DP', 'DS + DP + JI'], ['String Match', 'Prec.'], ['DS', 'DS + DP', 'DS + DP + JI', 'Rec.'], ['DS', 'DS + DP', 'DS + DP + JI']]
1
D18-1218table_5
The quality of the coreference chains on the CoNLL-2012 test set. Each simulated scenario is randomly generated 10 times (summary reported in terms of average result and standard deviation)
6
[['CoNLL 2012 Test Dataset', 'Simulation', 'None', 'Method', 'Stanford', '-'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Uniform', 'Method', 'MV', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Uniform', 'Method', 'MV', 's.d.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Uniform', 'Method', 'MPA', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Uniform', 'Method', 'MPA', 's.d.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Sparse', 'Method', 'MV', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Sparse', 'Method', 'MV', 's.d.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Sparse', 'Method', 'MPA', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Sparse', 'Method', 'MPA', 's.d.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'PD-inspired Uniform', 'Method', 'MV', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'PD-inspired Uniform', 'Method', 'MV', 's.d.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'PD-inspired Uniform', 'Method', 'MPA', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'PD-inspired Uniform', 'Method', 'MPA', 's.d.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'PD-inspired Sparse', 'Method', 'MV', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'PD-inspired Sparse', 'Method', 'MV', 's.d.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'PD-inspired Sparse', 'Method', 'MPA', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'PD-inspired Sparse', 'Method', 'MPA', 's.d.']]
2
[['MUC', 'P'], ['MUC', 'R'], ['MUC', 'F1'], ['BCUB', 'P'], ['BCUB', 'R'], ['BCUB', 'F1'], ['CEAFE', 'P'], ['CEAFE', 'R'], ['CEAFE', 'F1'], ['Avg. F1', '-']]
[['89.78', '73.88', '81.06', '83.93', '59.22', '69.44', '73.87', '60.57', '66.56', '72.35'], ['88.27', '86.00', '87.12', '73.92', '70.81', '72.33', '70.62', '76.73', '73.55', '77.67'], ['0.38', '0.35', '0.36', '0.83', '0.52', '0.62', '0.49', '0.60', '0.50', '0.47'], ['90.92', '91.97', '91.44', '75.51', '80.14', '77.75', '81.98', '78.81', '80.37', '83.19'], ['0.48', '0.36', '0.41', '1.20', '0.67', '0.93', '0.74', '1.16', '0.93', '0.75'], ['81.99', '79.01', '80.47', '65.91', '62.64', '64.23', '60.61', '68.00', '64.09', '69.59'], ['0.32', '0.43', '0.38', '0.45', '0.51', '0.40', '0.39', '0.24', '0.28', '0.32'], ['87.90', '88.24', '88.07', '70.67', '73.91', '72.25', '74.62', '73.63', '74.12', '78.15'], ['0.47', '0.44', '0.45', '0.96', '0.65', '0.77', '0.75', '0.93', '0.82', '0.66'], ['91.84', '88.28', '90.02', '80.94', '74.19', '77.41', '75.13', '84.93', '79.73', '82.39'], ['0.36', '0.49', '0.42', '0.61', '0.84', '0.66', '0.88', '0.55', '0.72', '0.58'], ['97.42', '97.20', '97.31', '91.61', '91.53', '91.57', '93.87', '94.58', '94.23', '94.37'], ['0.27', '0.28', '0.27', '1.05', '1.28', '1.15', '0.67', '0.61', '0.63', '0.68'], ['86.86', '81.70', '84.20', '74.26', '65.42', '69.56', '65.45', '78.51', '71.39', '75.05'], ['0.49', '0.54', '0.51', '0.63', '0.48', '0.52', '0.67', '0.55', '0.60', '0.53'], ['94.86', '94.09', '94.47', '85.24', '84.42', '84.83', '87.52', '89.91', '88.70', '89.33'], ['0.32', '0.36', '0.34', '0.70', '0.75', '0.71', '0.60', '0.49', '0.54', '0.52']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'Avg.F1']
['MPA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MUC || P</th> <th>MUC || R</th> <th>MUC || F1</th> <th>BCUB || P</th> <th>BCUB || R</th> <th>BCUB || F1</th> <th>CEAFE || P</th> <th>CEAFE || R</th> <th>CEAFE || F1</th> <th>Avg. F1 || -</th> </tr> </thead> <tbody> <tr> <td>CoNLL 2012 Test Dataset || Simulation || None || Method || Stanford || -</td> <td>89.78</td> <td>73.88</td> <td>81.06</td> <td>83.93</td> <td>59.22</td> <td>69.44</td> <td>73.87</td> <td>60.57</td> <td>66.56</td> <td>72.35</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || Synthetic Uniform || Method || MV || avg.</td> <td>88.27</td> <td>86.00</td> <td>87.12</td> <td>73.92</td> <td>70.81</td> <td>72.33</td> <td>70.62</td> <td>76.73</td> <td>73.55</td> <td>77.67</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || Synthetic Uniform || Method || MV || s.d.</td> <td>0.38</td> <td>0.35</td> <td>0.36</td> <td>0.83</td> <td>0.52</td> <td>0.62</td> <td>0.49</td> <td>0.60</td> <td>0.50</td> <td>0.47</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || Synthetic Uniform || Method || MPA || avg.</td> <td>90.92</td> <td>91.97</td> <td>91.44</td> <td>75.51</td> <td>80.14</td> <td>77.75</td> <td>81.98</td> <td>78.81</td> <td>80.37</td> <td>83.19</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || Synthetic Uniform || Method || MPA || s.d.</td> <td>0.48</td> <td>0.36</td> <td>0.41</td> <td>1.20</td> <td>0.67</td> <td>0.93</td> <td>0.74</td> <td>1.16</td> <td>0.93</td> <td>0.75</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || Synthetic Sparse || Method || MV || avg.</td> <td>81.99</td> <td>79.01</td> <td>80.47</td> <td>65.91</td> <td>62.64</td> <td>64.23</td> <td>60.61</td> <td>68.00</td> <td>64.09</td> <td>69.59</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || Synthetic Sparse || Method || MV || s.d.</td> <td>0.32</td> <td>0.43</td> <td>0.38</td> <td>0.45</td> <td>0.51</td> <td>0.40</td> <td>0.39</td> <td>0.24</td> <td>0.28</td> <td>0.32</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || Synthetic Sparse || Method || MPA || avg.</td> <td>87.90</td> <td>88.24</td> <td>88.07</td> <td>70.67</td> <td>73.91</td> <td>72.25</td> <td>74.62</td> <td>73.63</td> <td>74.12</td> <td>78.15</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || Synthetic Sparse || Method || MPA || s.d.</td> <td>0.47</td> <td>0.44</td> <td>0.45</td> <td>0.96</td> <td>0.65</td> <td>0.77</td> <td>0.75</td> <td>0.93</td> <td>0.82</td> <td>0.66</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || PD-inspired Uniform || Method || MV || avg.</td> <td>91.84</td> <td>88.28</td> <td>90.02</td> <td>80.94</td> <td>74.19</td> <td>77.41</td> <td>75.13</td> <td>84.93</td> <td>79.73</td> <td>82.39</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || PD-inspired Uniform || Method || MV || s.d.</td> <td>0.36</td> <td>0.49</td> <td>0.42</td> <td>0.61</td> <td>0.84</td> <td>0.66</td> <td>0.88</td> <td>0.55</td> <td>0.72</td> <td>0.58</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || PD-inspired Uniform || Method || MPA || avg.</td> <td>97.42</td> <td>97.20</td> <td>97.31</td> <td>91.61</td> <td>91.53</td> <td>91.57</td> <td>93.87</td> <td>94.58</td> <td>94.23</td> <td>94.37</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || PD-inspired Uniform || Method || MPA || s.d.</td> <td>0.27</td> <td>0.28</td> <td>0.27</td> <td>1.05</td> <td>1.28</td> <td>1.15</td> <td>0.67</td> <td>0.61</td> <td>0.63</td> <td>0.68</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || PD-inspired Sparse || Method || MV || avg.</td> <td>86.86</td> <td>81.70</td> <td>84.20</td> <td>74.26</td> <td>65.42</td> <td>69.56</td> <td>65.45</td> <td>78.51</td> <td>71.39</td> <td>75.05</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || PD-inspired Sparse || Method || MV || s.d.</td> <td>0.49</td> <td>0.54</td> <td>0.51</td> <td>0.63</td> <td>0.48</td> <td>0.52</td> <td>0.67</td> <td>0.55</td> <td>0.60</td> <td>0.53</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || PD-inspired Sparse || Method || MPA || avg.</td> <td>94.86</td> <td>94.09</td> <td>94.47</td> <td>85.24</td> <td>84.42</td> <td>84.83</td> <td>87.52</td> <td>89.91</td> <td>88.70</td> <td>89.33</td> </tr> <tr> <td>CoNLL 2012 Test Dataset || Simulation || PD-inspired Sparse || Method || MPA || s.d.</td> <td>0.32</td> <td>0.36</td> <td>0.34</td> <td>0.70</td> <td>0.75</td> <td>0.71</td> <td>0.60</td> <td>0.49</td> <td>0.54</td> <td>0.52</td> </tr> </tbody></table>
Table 5
table_5
D18-1218
7
emnlp2018
In Table 5 we present the results obtained on simulated data from the CONLL-2012 test set. The results follow a similar trend to those observed using actual annotations: a much better quality of the chains produced using the mention pairs inferred by our MPA model, across all the simulated scenarios. Furthermore, the MV baseline achieves better chains compared to the Stanford system in 3 out of 4 simulation settings, again showcasing the potential of crowdsourced annotations.
[1, 1, 1]
['In Table 5 we present the results obtained on simulated data from the CONLL-2012 test set.', 'The results follow a similar trend to those observed using actual annotations: a much better quality of the chains produced using the mention pairs inferred by our MPA model, across all the simulated scenarios.', 'Furthermore, the MV baseline achieves better chains compared to the Stanford system in 3 out of 4 simulation settings, again showcasing the potential of crowdsourced annotations.']
[None, ['MPA', 'Synthetic Uniform', 'PD-inspired Uniform', 'Synthetic Sparse', 'PD-inspired Sparse'], ['MV', 'Stanford', 'Synthetic Uniform', 'PD-inspired Uniform', 'PD-inspired Sparse']]
1
D18-1219table_6
Results of using NP head plus modifications in different word representations for bridging anaphora resolution compared to the best results of two models from Hou et al. (2013b). Bold indicates statistically significant differences over the other models (two-sided paired approximate randomization test, p < 0.01).
2
[['models from Hou et al. (2013b)', 'pairwise model III'], ['models from Hou et al. (2013b)', 'MLN model II'], ['NP head + modifiers', 'GloVe GigaWiki14'], ['NP head + modifiers', 'GloVe Giga'], ['NP head + modifiers', 'embeddings PP'], ['NP head + modifiers', 'embeddings bridging']]
1
[['acc']]
[['36.35'], ['41.32'], ['20.52'], ['20.81'], ['31.67'], ['39.52']]
column
['acc']
['embeddings bridging']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>acc</th> </tr> </thead> <tbody> <tr> <td>models from Hou et al. (2013b) || pairwise model III</td> <td>36.35</td> </tr> <tr> <td>models from Hou et al. (2013b) || MLN model II</td> <td>41.32</td> </tr> <tr> <td>NP head + modifiers || GloVe GigaWiki14</td> <td>20.52</td> </tr> <tr> <td>NP head + modifiers || GloVe Giga</td> <td>20.81</td> </tr> <tr> <td>NP head + modifiers || embeddings PP</td> <td>31.67</td> </tr> <tr> <td>NP head + modifiers || embeddings bridging</td> <td>39.52</td> </tr> </tbody></table>
Table 6
table_6
D18-1219
7
emnlp2018
Table 6 lists the best results of the two models for bridging anaphora resolution from Hou et al. (2013b). pairwise model III is a pairwise mention-entity model based on various semantic, syntactic and lexical features. MLN model II is a joint inference framework based on Markov logic networks (Domingos and Lowd, 2009). It models that semantically or syntactically related anaphors are likely to share the same antecedent and achieves an accuracy of 41.32% on the ISNotes corpus. The results for GloVe GigaWiki14 and GloVe Giga are similar on two settings (using NP head vs. using NP head + modifiers). For embeddings PP, the result on using NP head + modifiers (31.67%) is worse than the result on using NP head (33.03%). However, if we apply embeddings PP to a bridging anaphor’s head and modifiers, and only apply embeddings PP to the head noun of an antecedent candidate, we get an accuracy of 34.53%. Although the differences are not significant, it confirms that the information from the modifiers of the antecedent candidates in embeddings PP hurts the performance. This corresponds to our observations in the previous section that the representations for words without the suffix “PP” in embeddings PP are not as good as in embeddings bridging due to less training instances. Finally, our method based on embeddings bridging achieves an accuracy of 39.52%, which is competitive to the best result (41.32%) reported in Hou et al. (2013b). There is no significant difference between NP head + modifiers based on embeddings bridging and MLN model II (randomization test with p < 0.01).
[1, 2, 2, 1, 1, 1, 2, 2, 2, 1, 2]
['Table 6 lists the best results of the two models for bridging anaphora resolution from Hou et al. (2013b).', 'pairwise model III is a pairwise mention-entity model based on various semantic, syntactic and lexical features.', 'MLN model II is a joint inference framework based on Markov logic networks (Domingos and Lowd, 2009).', 'It models that semantically or syntactically related anaphors are likely to share the same antecedent and achieves an accuracy of 41.32% on the ISNotes corpus.', 'The results for GloVe GigaWiki14 and GloVe Giga are similar on two settings (using NP head vs. using NP head + modifiers).', 'For embeddings PP, the result on using NP head + modifiers (31.67%) is worse than the result on using NP head (33.03%).', 'However, if we apply embeddings PP to a bridging anaphor’s head and modifiers, and only apply embeddings PP to the head noun of an antecedent candidate, we get an accuracy of 34.53%.', 'Although the differences are not significant, it confirms that the information from the modifiers of the antecedent candidates in embeddings PP hurts the performance.', 'This corresponds to our observations in the previous section that the representations for words without the suffix “PP” in embeddings PP are not as good as in embeddings bridging due to less training instances.', 'Finally, our method based on embeddings bridging achieves an accuracy of 39.52%, which is competitive to the best result (41.32%) reported in Hou et al. (2013b).', 'There is no significant difference between NP head + modifiers based on embeddings bridging and MLN model II (randomization test with p < 0.01).']
[['models from Hou et al. (2013b)', 'pairwise model III', 'MLN model II'], ['pairwise model III'], ['MLN model II'], ['pairwise model III', 'MLN model II', 'acc'], ['GloVe GigaWiki14', 'GloVe Giga'], ['embeddings PP', 'acc'], ['embeddings PP', 'acc'], ['embeddings PP'], ['embeddings PP', 'embeddings bridging'], ['embeddings bridging', 'MLN model II', 'acc'], ['embeddings bridging', 'MLN model II']]
1
D18-1219table_8
Results of different systems for bridging anaphora resolution in ISNotes. Bold indicates statistically significant differences over the other models (two-sided paired approximate randomization test, p < 0.01).
3
[['Baselines', 'System', 'Schulte im Walde (1998)'], ['Baselines', 'System', 'Poesio et al. (2004)'], ['Models from Hou et al. (2013b)', 'System', 'pairwise model III'], ['Models from Hou et al. (2013b)', 'System', 'MLN model II'], ['Hou (2018)', 'System', 'MLN model II + embeddings PP (NP head + noun pre-modifiers)'], ['This work', 'System', 'embeddings bridging (NP head + modifiers)'], ['This work', 'System', 'MLN model II + embeddings bridging (NP head + modifiers)']]
1
[['acc']]
[['13.68'], ['18.85'], ['36.35'], ['41.32'], ['45.85'], ['39.52'], ['46.46']]
column
['acc']
['embeddings bridging (NP head + modifiers)', 'MLN model II + embeddings bridging (NP head + modifiers)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>acc</th> </tr> </thead> <tbody> <tr> <td>Baselines || System || Schulte im Walde (1998)</td> <td>13.68</td> </tr> <tr> <td>Baselines || System || Poesio et al. (2004)</td> <td>18.85</td> </tr> <tr> <td>Models from Hou et al. (2013b) || System || pairwise model III</td> <td>36.35</td> </tr> <tr> <td>Models from Hou et al. (2013b) || System || MLN model II</td> <td>41.32</td> </tr> <tr> <td>Hou (2018) || System || MLN model II + embeddings PP (NP head + noun pre-modifiers)</td> <td>45.85</td> </tr> <tr> <td>This work || System || embeddings bridging (NP head + modifiers)</td> <td>39.52</td> </tr> <tr> <td>This work || System || MLN model II + embeddings bridging (NP head + modifiers)</td> <td>46.46</td> </tr> </tbody></table>
Table 8
table_8
D18-1219
9
emnlp2018
5.6 Combining NP Head + Modifiers with MLN II. For bridging anaphora resolution, Hou (2018) integrates a much simpler deterministic approach by combining an NP head with its noun modifiers (appearing before the head) based on embeddings PP into the MLN II system (Hou et al., 2013b). Similarly, we add a constraint on top of MLN II using our deterministic approach (NP head + modifiers) based on embeddings bridging. Table 8 lists the results of different systems for bridging anaphora resolution in ISNotes. It shows that combining our deterministic approach (NP Head + modifiers) with MLN II slightly improves the result compared to Hou (2018). Although combining NP Head + modifiers with MLN II achieves significant improvement over NP Head + modifiers, we think the latter has its own value. Our deterministic algorithm is simpler and more efficient compared to MLN model II + embeddings bridging, which contains many complicated features and might be hard to migrate to other bridging corpora. Moreover, our algorithm is “unsupervised” and requires no training when applied to other English bridging corpora.
[2, 2, 2, 1, 1, 1, 2, 2]
['5.6 Combining NP Head + Modifiers with MLN II.', 'For bridging anaphora resolution, Hou (2018) integrates a much simpler deterministic approach by combining an NP head with its noun modifiers (appearing before the head) based on embeddings PP into the MLN II system (Hou et al., 2013b).', 'Similarly, we add a constraint on top of MLN II using our deterministic approach (NP head + modifiers) based on embeddings bridging.', 'Table 8 lists the results of different systems for bridging anaphora resolution in ISNotes.', 'It shows that combining our deterministic approach (NP Head + modifiers) with MLN II slightly improves the result compared to Hou (2018).', 'Although combining NP Head + modifiers with MLN II achieves significant improvement over NP Head + modifiers, we think the latter has its own value.', 'Our deterministic algorithm is simpler and more efficient compared to MLN model II + embeddings bridging, which contains many complicated features and might be hard to migrate to other bridging corpora.', 'Moreover, our algorithm is “unsupervised” and requires no training when applied to other English bridging corpora.']
[None, ['Models from Hou et al. (2013b)', 'MLN model II + embeddings PP (NP head + noun pre-modifiers)'], ['MLN model II + embeddings PP (NP head + noun pre-modifiers)', 'MLN model II + embeddings bridging (NP head + modifiers)'], ['Schulte im Walde (1998)', 'Poesio et al. (2004)', 'pairwise model III', 'MLN model II', 'MLN model II + embeddings PP (NP head + noun pre-modifiers)', 'embeddings bridging (NP head + modifiers)', 'MLN model II + embeddings bridging (NP head + modifiers)'], ['MLN model II + embeddings PP (NP head + noun pre-modifiers)', 'MLN model II + embeddings bridging (NP head + modifiers)', 'acc'], ['MLN model II + embeddings bridging (NP head + modifiers)', 'embeddings bridging (NP head + modifiers)', 'acc'], ['embeddings bridging (NP head + modifiers)', 'MLN model II + embeddings bridging (NP head + modifiers)'], ['embeddings bridging (NP head + modifiers)']]
1
D18-1219table_9
Results of resolving bridging anaphors in other corpora. Number of bridging anaphors is reported after filtering out a few problematic cases on each corpus.
4
[['Corpus', 'BASHI', 'Bridging Type', 'referential, including comparative anaphora'], ['Corpus', 'BASHI', 'Bridging Type', 'referential, excluding comparative anaphora'], ['Corpus', 'ARRAU (RST Train)', 'Bridging Type', 'mostly lexical, some referential'], ['Corpus', 'ARRAU (RST Test)', 'Bridging Type', 'mostly lexical, some referential']]
1
[['# of Anaphors'], ['acc']]
[['452', '27.43'], ['344', '29.94'], ['2,325', '31.44'], ['639', '32.39']]
column
['# of Anaphors', 'acc']
['BASHI', 'BASHI', 'ARRAU (RST Train)', 'ARRAU (RST Test)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th># of Anaphors</th> <th>acc</th> </tr> </thead> <tbody> <tr> <td>Corpus || BASHI || Bridging Type || referential, including comparative anaphora</td> <td>452</td> <td>27.43</td> </tr> <tr> <td>Corpus || BASHI || Bridging Type || referential, excluding comparative anaphora</td> <td>344</td> <td>29.94</td> </tr> <tr> <td>Corpus || ARRAU (RST Train) || Bridging Type || mostly lexical, some referential</td> <td>2,325</td> <td>31.44</td> </tr> <tr> <td>Corpus || ARRAU (RST Test) || Bridging Type || mostly lexical, some referential</td> <td>639</td> <td>32.39</td> </tr> </tbody></table>
Table 9
table_9
D18-1219
9
emnlp2018
Table 9 lists the results of bridging anaphora resolution in the BASHI and ARRAU corpora, respectively. On the test set of the ARRAU (RST) corpus, Rosiger (2018b) proposed a modified rule-based system based on Hou et al. (2014)’s work and reported an accuracy of 39.8% for bridging anaphora resolution. And our algorithm achieves an accuracy of 32.39% using only embeddings bridging. Overall, the reasonable performance on these two corpora demonstrates that embeddings bridging is a general word representation resource for bridging.
[1, 2, 1, 1]
['Table 9 lists the results of bridging anaphora resolution in the BASHI and ARRAU corpora, respectively.', 'On the test set of the ARRAU (RST) corpus, Rosiger (2018b) proposed a modified rule-based system based on Hou et al. (2014)’s work and reported an accuracy of 39.8% for bridging anaphora resolution.', 'And our algorithm achieves an accuracy of 32.39% using only embeddings bridging.', 'Overall, the reasonable performance on these two corpora demonstrates that embeddings bridging is a general word representation resource for bridging.']
[['BASHI', 'ARRAU (RST Train)', 'ARRAU (RST Test)'], ['ARRAU (RST Train)', 'ARRAU (RST Test)'], ['ARRAU (RST Test)'], ['BASHI', 'ARRAU (RST Train)', 'ARRAU (RST Test)']]
1
D18-1221table_2
Results for the reverse dictionary task, compared with the highest numbers reported by Hill et al. (2016). TF vectors refers to textually enhanced vectors with λ = 1. For the MS-LSTM, k is set to 3.
3
[['Model', 'Seen (500 WordNet definitions)', 'OneLook (Hill et al. 2016)'], ['Model', 'Seen (500 WordNet definitions)', 'RNN cosine (Hill et al. 2016)'], ['Model', 'Seen (500 WordNet definitions)', 'Std LSTM (150 dim.) + TF vec.'], ['Model', 'Seen (500 WordNet definitions)', 'Std LSTM (k × 150 dim.) + TF vec.'], ['Model', 'Seen (500 WordNet definitions)', 'MS-LSTM +TF vectors'], ['Model', 'Seen (500 WordNet definitions)', 'MS-LSTM +TF vectors + anchors'], ['Model', 'Unseen (500 WordNet definitions)', 'RNN w2v cosine (Hill et al. 2016)'], ['Model', 'Unseen (500 WordNet definitions)', 'BOW w2v cosine (Hill et al. 2016)'], ['Model', 'Unseen (500 WordNet definitions)', 'Std LSTM (150 dim.) + TF vec.'], ['Model', 'Unseen (500 WordNet definitions)', 'Std LSTM (k × 150 dim.) + TF vec.'], ['Model', 'Unseen (500 WordNet definitions)', 'MS-LSTM + TF vectors'], ['Model', 'Unseen (500 WordNet definitions)', 'MS-LSTM + TF vectors + anchors']]
1
[['Acc-10'], ['Acc-100']]
[['0.89', '0.91'], ['0.48', '0.73'], ['0.86', '0.96'], ['0.93', '0.98'], ['0.95', '0.99'], ['0.96', '0.99'], ['0.44', '0.69'], ['0.46', '0.71'], ['0.72', '0.88'], ['0.77', '0.90'], ['0.79', '0.90'], ['0.80', '0.91']]
column
['Acc-10', 'Acc-100']
['MS-LSTM + TF vectors', 'MS-LSTM + TF vectors + anchors']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc-10</th> <th>Acc-100</th> </tr> </thead> <tbody> <tr> <td>Model || Seen (500 WordNet definitions) || OneLook (Hill et al. 2016)</td> <td>0.89</td> <td>0.91</td> </tr> <tr> <td>Model || Seen (500 WordNet definitions) || RNN cosine (Hill et al. 2016)</td> <td>0.48</td> <td>0.73</td> </tr> <tr> <td>Model || Seen (500 WordNet definitions) || Std LSTM (150 dim.) + TF vec.</td> <td>0.86</td> <td>0.96</td> </tr> <tr> <td>Model || Seen (500 WordNet definitions) || Std LSTM (k × 150 dim.) + TF vec.</td> <td>0.93</td> <td>0.98</td> </tr> <tr> <td>Model || Seen (500 WordNet definitions) || MS-LSTM +TF vectors</td> <td>0.95</td> <td>0.99</td> </tr> <tr> <td>Model || Seen (500 WordNet definitions) || MS-LSTM +TF vectors + anchors</td> <td>0.96</td> <td>0.99</td> </tr> <tr> <td>Model || Unseen (500 WordNet definitions) || RNN w2v cosine (Hill et al. 2016)</td> <td>0.44</td> <td>0.69</td> </tr> <tr> <td>Model || Unseen (500 WordNet definitions) || BOW w2v cosine (Hill et al. 2016)</td> <td>0.46</td> <td>0.71</td> </tr> <tr> <td>Model || Unseen (500 WordNet definitions) || Std LSTM (150 dim.) + TF vec.</td> <td>0.72</td> <td>0.88</td> </tr> <tr> <td>Model || Unseen (500 WordNet definitions) || Std LSTM (k × 150 dim.) + TF vec.</td> <td>0.77</td> <td>0.90</td> </tr> <tr> <td>Model || Unseen (500 WordNet definitions) || MS-LSTM + TF vectors</td> <td>0.79</td> <td>0.90</td> </tr> <tr> <td>Model || Unseen (500 WordNet definitions) || MS-LSTM + TF vectors + anchors</td> <td>0.80</td> <td>0.91</td> </tr> </tbody></table>
Table 2
table_2
D18-1221
7
emnlp2018
Table 2 shows the results, based on an MS-LSTM setup similar to that of §4.1. Note that the MS-LSTM achieves 0.95-0.96 top-10 accuracy for the seen evaluation, significantly higher not only than the best model of Hill et al. (2016), but also higher than OneLook, a commercial system with access to more than 1000 dictionaries. It also presents considerably higher performance in the unseen evaluation. We are not aware of any other models with higher performance on the specific task.
[1, 1, 1, 2]
['Table 2 shows the results, based on an MS-LSTM setup similar to that of §4.1.', 'Note that the MS-LSTM achieves 0.95-0.96 top-10 accuracy for the seen evaluation, significantly higher not only than the best model of Hill et al. (2016), but also higher than OneLook, a commercial system with access to more than 1000 dictionaries.', 'It also presents considerably higher performance in the unseen evaluation.', 'We are not aware of any other models with higher performance on the specific task.']
[['MS-LSTM +TF vectors', 'MS-LSTM +TF vectors + anchors'], ['MS-LSTM +TF vectors', 'MS-LSTM +TF vectors + anchors', 'Acc-10', 'Seen (500 WordNet definitions)', 'OneLook (Hill et al. 2016)', 'RNN cosine (Hill et al. 2016)'], ['MS-LSTM +TF vectors', 'MS-LSTM +TF vectors + anchors', 'Unseen (500 WordNet definitions)'], None]
1
D18-1221table_3
Results for the Cora dataset. TF vectors refers to textually enhanced KB vectors (λ = 0.5). Differences between our best models and GAT/GCN/TADW are not s.s.
3
[['Model', 'Evaluation 1 (training ratio=0.50)', 'PLSA (Hofmann 1999)'], ['Model', 'Evaluation 1 (training ratio=0.50)', 'NetPLSA (Mei et al. 2008)'], ['Model', 'Evaluation 1 (training ratio=0.50)', 'TADW (Yang et al. 2015)'], ['Model', 'Evaluation 1 (training ratio=0.50)', 'Linear SVM + DeepWalk vectors'], ['Model', 'Evaluation 1 (training ratio=0.50)', 'Linear SVM + TF vectors'], ['Model', 'Evaluation 2 (training ratio=0.05)', 'Planetoid (Yang et al. 2016)'], ['Model', 'Evaluation 2 (training ratio=0.05)', 'GCN (Kipf and Welling 2017)'], ['Model', 'Evaluation 2 (training ratio=0.05)', 'GAT (Veličkovic et al. 2018)'], ['Model', 'Evaluation 2 (training ratio=0.05)', 'Linear SVM + DeepWalk vectors'], ['Model', 'Evaluation 2 (training ratio=0.05)', 'Linear SVM + TF vectors']]
1
[['Accuracy']]
[['0.68'], ['0.85'], ['0.87'], ['0.85'], ['0.88'], ['0.76'], ['0.81'], ['0.83'], ['0.72'], ['0.82']]
column
['Accuracy']
['Linear SVM + DeepWalk vectors', 'Linear SVM + TF vectors']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || Evaluation 1 (training ratio=0.50) || PLSA (Hofmann 1999)</td> <td>0.68</td> </tr> <tr> <td>Model || Evaluation 1 (training ratio=0.50) || NetPLSA (Mei et al. 2008)</td> <td>0.85</td> </tr> <tr> <td>Model || Evaluation 1 (training ratio=0.50) || TADW (Yang et al. 2015)</td> <td>0.87</td> </tr> <tr> <td>Model || Evaluation 1 (training ratio=0.50) || Linear SVM + DeepWalk vectors</td> <td>0.85</td> </tr> <tr> <td>Model || Evaluation 1 (training ratio=0.50) || Linear SVM + TF vectors</td> <td>0.88</td> </tr> <tr> <td>Model || Evaluation 2 (training ratio=0.05) || Planetoid (Yang et al. 2016)</td> <td>0.76</td> </tr> <tr> <td>Model || Evaluation 2 (training ratio=0.05) || GCN (Kipf and Welling 2017)</td> <td>0.81</td> </tr> <tr> <td>Model || Evaluation 2 (training ratio=0.05) || GAT (Veličkovic et al. 2018)</td> <td>0.83</td> </tr> <tr> <td>Model || Evaluation 2 (training ratio=0.05) || Linear SVM + DeepWalk vectors</td> <td>0.72</td> </tr> <tr> <td>Model || Evaluation 2 (training ratio=0.05) || Linear SVM + TF vectors</td> <td>0.82</td> </tr> </tbody></table>
Table 3
table_3
D18-1221
8
emnlp2018
In Table 3 we report results for two evaluation settings. In Evaluation 1, we provide a comparison with the method of Yang et al. (2015) who include textual features in graph embeddings based on matrix factorisation, and two topic models used as baselines in their paper. Using the same classification algorithm (a linear SVM) and training ratio (0.50) with them, we present state-of-the-art results for vectors of 150 dimensions, prepared by a graph extended with 1422 textual features. We set λ = 0.5 by tuning on a dev set of 677 randomly selected entries from the training data. In Evaluation 2, using the same linear SVM classifier and λ as before, we reduce the training ratio to 0.05 in order to make our task comparable to the experiments reported by Velickovic et al. (2018) for a number of deep learning models: specifically, the graph attention network (GAT) of Veličkovic et al. (2018), the graph convolutional network (GCN) of Kipf and Welling (2017), and the Planetoid model of Yang et al. (2016). Again, our simple setting presents results within the state of the art range, comparable to (or better than) those of much more sophisticated models that have been specifically designed for the task of node classification. We consider this as a strong indication for the effectiveness of the textually enhanced vectors as representations of KB entities.
[1, 1, 2, 2, 1, 1, 2]
['In Table 3 we report results for two evaluation settings.', 'In Evaluation 1, we provide a comparison with the method of Yang et al. (2015) who include textual features in graph embeddings based on matrix factorisation, and two topic models used as baselines in their paper.', 'Using the same classification algorithm (a linear SVM) and training ratio (0.50) with them, we present state-of-the-art results for vectors of 150 dimensions, prepared by a graph extended with 1422 textual features.', 'We set λ = 0.5 by tuning on a dev set of 677 randomly selected entries from the training data.', 'In Evaluation 2, using the same linear SVM classifier and λ as before, we reduce the training ratio to 0.05 in order to make our task comparable to the experiments reported by Velickovic et al. (2018) for a number of deep learning models: specifically, the graph attention network (GAT) of Veličkovic et al. (2018), the graph convolutional network (GCN) of Kipf and Welling (2017), and the Planetoid model of Yang et al. (2016).', 'Again, our simple setting presents results within the state of the art range, comparable to (or better than) those of much more sophisticated models that have been specifically designed for the task of node classification.', 'We consider this as a strong indication for the effectiveness of the textually enhanced vectors as representations of KB entities.']
[['Evaluation 1 (training ratio=0.50)', 'Evaluation 2 (training ratio=0.05)'], ['PLSA (Hofmann 1999)', 'NetPLSA (Mei et al. 2008)', 'TADW (Yang et al. 2015)'], ['Linear SVM + DeepWalk vectors', 'Linear SVM + TF vectors'], ['Linear SVM + TF vectors'], ['Evaluation 2 (training ratio=0.05)', 'Planetoid (Yang et al. 2016)', 'GCN (Kipf and Welling 2017)', 'GAT (Veličkovic et al. 2018)'], ['Linear SVM + DeepWalk vectors', 'Linear SVM + TF vectors', 'Planetoid (Yang et al. 2016)', 'GCN (Kipf and Welling 2017)', 'GAT (Veličkovic et al. 2018)'], ['Linear SVM + TF vectors']]
1
D18-1222table_3
Experimental results on instanceOf triple classification(%).
2
[['Metric', 'TransE'], ['Metric', 'TransH'], ['Metric', 'TransR'], ['Metric', 'TransD'], ['Metric', 'HolE'], ['Metric', 'DistMult'], ['Metric', 'ComplEx'], ['Metric', 'TransC (unif)'], ['Metric', 'TransC (bern)']]
3
[['Datasets', 'YAGO39K', 'Accuracy'], ['Datasets', 'YAGO39K', 'Precision'], ['Datasets', 'YAGO39K', 'Recall'], ['Datasets', 'YAGO39K', 'F1-Score'], ['Datasets', 'M-YAGO39K', 'Accuracy'], ['Datasets', 'M-YAGO39K', 'Precision'], ['Datasets', 'M-YAGO39K', 'Recall'], ['Datasets', 'M-YAGO39K', 'F1-Score']]
[['82.6', '83.6', '81.0', '82.3', '71.0', '81.4', '54.4', '65.2'], ['82.9', '83.7', '81.7', '82.7', '70.1', '80.4', '53.2', '64.0'], ['80.6', '79.4', '82.5', '80.9', '70.9', '73.0', '66.3', '69.5'], ['83.2', '84.4', '81.5', '82.9', '72.5', '73.1', '71.4', '72.2'], ['82.3', '86.3', '76.7', '81.2', '74.2', '81.4', '62.7', '70.9'], ['83.9', '86.8', '80.1', '83.3', '70.5', '86.1', '49.0', '62.4'], ['83.3', '84.8', '81.1', '82.9', '70.2', '84.4', '49.5', '62.4'], ['80.2', '81.6', '80.0', '79.7', '85.5', '88.3', '81.8', '85.0'], ['79.7', '83.2', '74.4', '78.6', '85.3', '86.1', '84.2', '85.2']]
column
['Accuracy', 'Precision', 'Recall', 'F1-Score', 'Accuracy', 'Precision', 'Recall', 'F1-Score']
['TransC (unif)', 'TransC (bern)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Datasets || YAGO39K || Accuracy</th> <th>Datasets || YAGO39K || Precision</th> <th>Datasets || YAGO39K || Recall</th> <th>Datasets || YAGO39K || F1-Score</th> <th>Datasets || M-YAGO39K || Accuracy</th> <th>Datasets || M-YAGO39K || Precision</th> <th>Datasets || M-YAGO39K || Recall</th> <th>Datasets || M-YAGO39K || F1-Score</th> </tr> </thead> <tbody> <tr> <td>Metric || TransE</td> <td>82.6</td> <td>83.6</td> <td>81.0</td> <td>82.3</td> <td>71.0</td> <td>81.4</td> <td>54.4</td> <td>65.2</td> </tr> <tr> <td>Metric || TransH</td> <td>82.9</td> <td>83.7</td> <td>81.7</td> <td>82.7</td> <td>70.1</td> <td>80.4</td> <td>53.2</td> <td>64.0</td> </tr> <tr> <td>Metric || TransR</td> <td>80.6</td> <td>79.4</td> <td>82.5</td> <td>80.9</td> <td>70.9</td> <td>73.0</td> <td>66.3</td> <td>69.5</td> </tr> <tr> <td>Metric || TransD</td> <td>83.2</td> <td>84.4</td> <td>81.5</td> <td>82.9</td> <td>72.5</td> <td>73.1</td> <td>71.4</td> <td>72.2</td> </tr> <tr> <td>Metric || HolE</td> <td>82.3</td> <td>86.3</td> <td>76.7</td> <td>81.2</td> <td>74.2</td> <td>81.4</td> <td>62.7</td> <td>70.9</td> </tr> <tr> <td>Metric || DistMult</td> <td>83.9</td> <td>86.8</td> <td>80.1</td> <td>83.3</td> <td>70.5</td> <td>86.1</td> <td>49.0</td> <td>62.4</td> </tr> <tr> <td>Metric || ComplEx</td> <td>83.3</td> <td>84.8</td> <td>81.1</td> <td>82.9</td> <td>70.2</td> <td>84.4</td> <td>49.5</td> <td>62.4</td> </tr> <tr> <td>Metric || TransC (unif)</td> <td>80.2</td> <td>81.6</td> <td>80.0</td> <td>79.7</td> <td>85.5</td> <td>88.3</td> <td>81.8</td> <td>85.0</td> </tr> <tr> <td>Metric || TransC (bern)</td> <td>79.7</td> <td>83.2</td> <td>74.4</td> <td>78.6</td> <td>85.3</td> <td>86.1</td> <td>84.2</td> <td>85.2</td> </tr> </tbody></table>
Table 3
table_3
D18-1222
7
emnlp2018
Our datasets have three kinds of triples. Hence, we do experiments on them respectively. Experimental results for relational triples, instanceOf triples, and subClassOf triples are shown in Table 2, Table 3, and Table 4 respectively. In Table 3 and Table 4, a rising arrow means the performance of this model improves from YAGO39K to M-YAGO39K and a down arrow means a drop. From Table 2, we can learn that: (1) TransC outperforms all previous work in relational triple classification. (2) The “bern” sampling trick works better than “unif” in TransC. From Table 3 and Table 4, we can conclude that: (1) On YAGO39K, some compared models perform better than TransC in instanceOf triple classification. This is because instanceOf has the most triples (53.5%) among all relations in YAGO39K. This relation is trained superabundant times and nearly achieves the best performance, which has an adverse effect on other triples. TransC can find a balance between them and all triples achieve a good performance. (2) On YAGO39K, TransC outperforms other models in subClassOf triple classification. As shown in Table 1, subClassOf triples are far fewer than instanceOf triples. Hence, other models can not achieve the best performance under the bad influence of instanceOf triples. (3) On M-YAGO39K, TransC outperforms previous work in both instanceOf triple classification and subClassOf triple classification, which indicates that TransC can handle the transitivity of isA relations much better than other models. (4) After comparing experimental results in YAGO39K and M-YAGO39K, we can find that most previous work’s performance suffers a big drop in instanceOf triple classification and a small drop in subClassOf triple classification. This shows that previous work can not deal with instanceOf-subClassOf transitivity well. (5) In TransC, nearly all performances improve significantly from YAGO39K to M-YAGO39K. Both instanceOf-subClassOf transitivity and subClassOf-subClassOf transitivity are solved well in TransC.
[2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 2, 1, 1, 2, 1, 1]
['Our datasets have three kinds of triples.', 'Hence, we do experiments on them respectively.', 'Experimental results for relational triples, instanceOf triples, and subClassOf triples are shown in Table 2, Table 3, and Table 4 respectively.', 'In Table 3 and Table 4, a rising arrow means the performance of this model improves from YAGO39K to M-YAGO39K and a down arrow means a drop.', 'From Table 2, we can learn that: (1) TransC outperforms all previous work in relational triple classification. (2) The “bern” sampling trick works better than “unif” in TransC.', 'From Table 3 and Table 4, we can conclude that: (1) On YAGO39K, some compared models perform better than TransC in instanceOf triple classification.', 'This is because instanceOf has the most triples (53.5%) among all relations in YAGO39K.', 'This relation is trained superabundant times and nearly achieves the best performance, which has an adverse effect on other triples.', 'TransC can find a balance between them and all triples achieve a good performance.', '(2) On YAGO39K, TransC outperforms other models in subClassOf triple classification.', 'As shown in Table 1, subClassOf triples are far fewer than instanceOf triples.', 'Hence, other models can not achieve the best performance under the bad influence of instanceOf triples.', '(3) On M-YAGO39K, TransC outperforms previous work in both instanceOf triple classification and subClassOf triple classification, which indicates that TransC can handle the transitivity of isA relations much better than other models.', '(4) After comparing experimental results in YAGO39K and M-YAGO39K, we can find that most previous work’s performance suffers a big drop in instanceOf triple classification and a small drop in subClassOf triple classification.', 'This shows that previous work can not deal with instanceOf-subClassOf transitivity well.', '(5) In TransC, nearly all performances improve significantly from YAGO39K to M-YAGO39K.', 'Both instanceOf-subClassOf transitivity and subClassOf-subClassOf transitivity are solved well in TransC.']
[None, None, None, ['YAGO39K', 'M-YAGO39K'], ['TransC (unif)', 'TransC (bern)'], ['YAGO39K', 'TransE', 'TransH', 'TransR', 'TransD', 'HolE', 'DistMult', 'ComplEx', 'TransC (unif)', 'TransC (bern)'], None, None, ['TransC (unif)', 'TransC (bern)'], ['YAGO39K', 'TransC (unif)', 'TransC (bern)'], None, None, ['M-YAGO39K', 'TransC (unif)', 'TransC (bern)'], None, None, ['TransC (unif)', 'TransC (bern)', 'YAGO39K', 'M-YAGO39K'], ['TransC (unif)', 'TransC (bern)']]
1
D18-1225table_4
The predicted Mean Rank (lower the better) for temporal Scoping. The number of classes are 61 and 78 for YAGO11K and Wiki-data12k respectively. The results depict the effectiveness of TDNS. Please see Section 6.2
2
[['Negative Sampling', 'TANS (Equation 1)'], ['Negative Sampling', 'TDNS (Equation 2)']]
1
[['YAGO11K'], ['Wikidata12k']]
[['14.0', '29.3'], ['9.88', '17.6']]
column
['predicted Mean Rank', 'predicted Mean Rank']
['TANS (Equation 1)', 'TDNS (Equation 2)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>YAGO11K</th> <th>Wikidata12k</th> </tr> </thead> <tbody> <tr> <td>Negative Sampling || TANS (Equation 1)</td> <td>14.0</td> <td>29.3</td> </tr> <tr> <td>Negative Sampling || TDNS (Equation 2)</td> <td>9.88</td> <td>17.6</td> </tr> </tbody></table>
Table 4
table_4
D18-1225
7
emnlp2018
Temporal scoping of facts:. We report the rank of correct time instance of the triple. If the triple scope is an interval of time, we consider the lowest rank that corresponds to the time within that interval. The ranks are reported in table 4 for both the datasets. The results depict the effectiveness of TDNS.
[2, 2, 2, 1, 1]
['Temporal scoping of facts.', 'We report the rank of correct time instance of the triple.', 'If the triple scope is an interval of time, we consider the lowest rank that corresponds to the time within that interval.', 'The ranks are reported in table 4 for both the datasets.', 'The results depict the effectiveness of TDNS.']
[None, None, None, ['YAGO11K', 'Wikidata12k'], ['TDNS (Equation 2)']]
1
D18-1227table_2
Entity linking performance of different methods on Yelp-EL.
2
[['Method', 'DirectLink'], ['Method', 'ELT'], ['Method', 'SSRegu'], ['Method', 'LinkYelp']]
1
[['Accuracy (mean±std)']]
[['0.6684±0.008'], ['0.8451±0.012'], ['0.7970±0.013'], ['0.9034±0.014']]
column
['Accuracy (mean±std)']
['LinkYelp']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (mean±std)</th> </tr> </thead> <tbody> <tr> <td>Method || DirectLink</td> <td>0.6684±0.008</td> </tr> <tr> <td>Method || ELT</td> <td>0.8451±0.012</td> </tr> <tr> <td>Method || SSRegu</td> <td>0.7970±0.013</td> </tr> <tr> <td>Method || LinkYelp</td> <td>0.9034±0.014</td> </tr> </tbody></table>
Table 2
table_2
D18-1227
6
emnlp2018
5.3 Comparison Results. Table 2 shows the entity linking performance of different methods on Yelp-EL. Here, all three types of features described in Section 4 are fed into LinkYelp. Within the compared methods, LinkYelp performs substantially better. This shows that methods carefully designed for traditional entity linking problems may not work so well when applied to entity linking within a social media platform, and this new problem we propose is worth studying differently from the traditional entity linking problem. The accuracy of DirectLink means that many mentions (about 67%) in Yelp-EL simply refer to the corresponding reviewed businesses. However, this does not mean that our problem is less challenging than traditional entity linking, since simply using the popularity measure of entities can achieve an accuracy of about 82% in the latter task (Pan et al., 2015).
[2, 1, 2, 1, 2, 1, 2]
['5.3 Comparison Results.', 'Table 2 shows the entity linking performance of different methods on Yelp-EL.', 'Here, all three types of features described in Section 4 are fed into LinkYelp.', 'Within the compared methods, LinkYelp performs substantially better.', 'This shows that methods carefully designed for traditional entity linking problems may not work so well when applied to entity linking within a social media platform, and this new problem we propose is worth studying differently from the traditional entity linking problem.', 'The accuracy of DirectLink means that many mentions (about 67%) in Yelp-EL simply refer to the corresponding reviewed businesses.', 'However, this does not mean that our problem is less challenging than traditional entity linking, since simply using the popularity measure of entities can achieve an accuracy of about 82% in the latter task (Pan et al., 2015).']
[None, ['DirectLink', 'ELT', 'SSRegu', 'LinkYelp'], None, ['LinkYelp', 'Accuracy (mean±std)', 'DirectLink', 'ELT', 'SSRegu'], ['DirectLink', 'ELT', 'SSRegu'], ['DirectLink'], None]
1
D18-1230table_2
[Biomedical Domain] NER Performance Comparison. The supervised benchmarks on the BC5CDR and NCBI-Disease datasets are LM-LSTM-CRF and LSTM-CRF respectively (Wang et al., 2018). SwellShark has no annotated data, but for entity span extraction, it requires pre-trained POS taggers and extra human efforts of designing POS tag-based regular expressions and/or hand-tuning for special cases.
4
[['Method', 'Supervised Benchmark', 'Human Effort other than Dictionary', 'Gold Annotations'], ['Method', 'SwellShark', 'Human Effort other than Dictionary', 'Regex Design + Special Case Tuning'], ['Method', 'SwellShark', 'Human Effort other than Dictionary', 'Regex Design'], ['Method', 'Dictionary Match', 'Human Effort other than Dictionary', 'None'], ['Method', 'Fuzzy-LSTM-CRF', 'Human Effort other than Dictionary', 'None'], ['Method', 'AutoNER', 'Human Effort other than Dictionary', 'None']]
2
[['BC5CDR', 'Pre'], ['BC5CDR', 'Rec'], ['BC5CDR', 'F1'], ['NCBI-Disease', 'Pre'], ['NCBI-Disease', 'Rec'], ['NCBI-Disease', 'F1']]
[['88.84', '85.16', '86.96', '86.11', '85.49', '85.80'], ['86.11', '82.39', '84.21', '81.6', '80.1', '80.8'], ['84.98', '83.49', '84.23', '64.7', '69.7', '67.1'], ['93.93', '58.35', '71.98', '90.59', '56.15', '69.32'], ['88.27', '76.75', '82.11', '79.85', '67.71', '73.28'], ['88.96', '81.00', '84.8', '79.42', '71.98', '75.52']]
column
['Pre', 'Rec', 'F1', 'Pre', 'Rec', 'F1']
['AutoNER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BC5CDR || Pre</th> <th>BC5CDR || Rec</th> <th>BC5CDR || F1</th> <th>NCBI-Disease || Pre</th> <th>NCBI-Disease || Rec</th> <th>NCBI-Disease || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Supervised Benchmark || Human Effort other than Dictionary || Gold Annotations</td> <td>88.84</td> <td>85.16</td> <td>86.96</td> <td>86.11</td> <td>85.49</td> <td>85.80</td> </tr> <tr> <td>Method || SwellShark || Human Effort other than Dictionary || Regex Design + Special Case Tuning</td> <td>86.11</td> <td>82.39</td> <td>84.21</td> <td>81.6</td> <td>80.1</td> <td>80.8</td> </tr> <tr> <td>Method || SwellShark || Human Effort other than Dictionary || Regex Design</td> <td>84.98</td> <td>83.49</td> <td>84.23</td> <td>64.7</td> <td>69.7</td> <td>67.1</td> </tr> <tr> <td>Method || Dictionary Match || Human Effort other than Dictionary || None</td> <td>93.93</td> <td>58.35</td> <td>71.98</td> <td>90.59</td> <td>56.15</td> <td>69.32</td> </tr> <tr> <td>Method || Fuzzy-LSTM-CRF || Human Effort other than Dictionary || None</td> <td>88.27</td> <td>76.75</td> <td>82.11</td> <td>79.85</td> <td>67.71</td> <td>73.28</td> </tr> <tr> <td>Method || AutoNER || Human Effort other than Dictionary || None</td> <td>88.96</td> <td>81.00</td> <td>84.8</td> <td>79.42</td> <td>71.98</td> <td>75.52</td> </tr> </tbody></table>
Table 2
table_2
D18-1230
7
emnlp2018
5.3 NER Performance Comparison. We present F1, precision, and recall scores on all datasets in Table 2 and Table 3. From both tables, one can find that AutoNER achieves the best performance when there is no extra human effort. Fuzzy-LSTM-CRF does have some improvements over the Dictionary Match, but it is always worse than AutoNER. Even though SwellShark is designed for the biomedical domain and utilizes much more expert effort, AutoNER outperforms it in almost all cases. The only outlier happens on the NCBI-disease dataset when the entity span matcher in SwellShark is carefully tuned by experts for many special cases. It is worth mentioning that AutoNER beats Distant-LSTM-CRF, which is the previous state-of-the-art distantly supervised model on the LaptopReview dataset. Moreover, AutoNER’s performance is competitive with the supervised benchmarks. For example, on the BC5CDR dataset, its F1 score is only 2.16% away from the supervised benchmark.
[2, 1, 1, 1, 1, 2, 2, 1, 1]
['5.3 NER Performance Comparison.', 'We present F1, precision, and recall scores on all datasets in Table 2 and Table 3.', 'From both tables, one can find that AutoNER achieves the best performance when there is no extra human effort.', 'Fuzzy-LSTM-CRF does have some improvements over the Dictionary Match, but it is always worse than AutoNER.', 'Even though SwellShark is designed for the biomedical domain and utilizes much more expert effort, AutoNER outperforms it in almost all cases.', 'The only outlier happens on the NCBI-disease dataset when the entity span matcher in SwellShark is carefully tuned by experts for many special cases.', 'It is worth mentioning that AutoNER beats Distant-LSTM-CRF, which is the previous state-of-the-art distantly supervised model on the LaptopReview dataset.', 'Moreover, AutoNER’s performance is competitive with the supervised benchmarks.', 'For example, on the BC5CDR dataset, its F1 score is only 2.16% away from the supervised benchmark.']
[None, ['Pre', 'Rec', 'F1', 'BC5CDR', 'NCBI-Disease'], ['AutoNER', 'Human Effort other than Dictionary', 'None'], ['Fuzzy-LSTM-CRF', 'AutoNER', 'Dictionary Match'], ['AutoNER', 'SwellShark'], ['SwellShark', 'Regex Design + Special Case Tuning', 'NCBI-Disease'], ['AutoNER'], ['AutoNER', 'Supervised Benchmark'], ['AutoNER', 'Supervised Benchmark', 'BC5CDR', 'F1']]
1
D18-1231table_3
Evaluation of coarse entity-typing (§4.2): we compare two supervised entity-typers with our system. For the supervised systems, cells with gray color indicate in-domain evaluation. For each column, the best, out-of-domain and overall results are bold-faced and underlined, respectively. Numbers are F1 in percentage. In most of the out-of-domain settings our system outperforms the supervised system.
4
[['System', 'COGCOMPNLP', 'Trained on', 'OntoNotes'], ['System', 'COGCOMPNLP', 'Trained on', 'CoNLL'], ['System', 'ZOE (ours)', 'Trained on', '×']]
2
[['OntoNotes', 'PER'], ['OntoNotes', 'LOC'], ['OntoNotes', 'ORG'], ['CoNLL', 'PER'], ['CoNLL', 'LOC'], ['CoNLL', 'ORG'], ['MUC', 'PER'], ['MUC', 'LOC'], ['MUC', 'ORG']]
[['98.4', '91.9', '97.7', '83.7', '70.1', '68.3', '82.5', '76.9', '86.7'], ['94.4', '59.1', '87.8', '95.6', '92.9', '90.5', '90.8', '90.8', '90.9'], ['88.4', '70.0', '85.6', '90.1', '80.1', '73.9', '87.8', '90.9', '91.2']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['ZOE (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OntoNotes || PER</th> <th>OntoNotes || LOC</th> <th>OntoNotes || ORG</th> <th>CoNLL || PER</th> <th>CoNLL || LOC</th> <th>CoNLL || ORG</th> <th>MUC || PER</th> <th>MUC || LOC</th> <th>MUC || ORG</th> </tr> </thead> <tbody> <tr> <td>System || COGCOMPNLP || Trained on || OntoNotes</td> <td>98.4</td> <td>91.9</td> <td>97.7</td> <td>83.7</td> <td>70.1</td> <td>68.3</td> <td>82.5</td> <td>76.9</td> <td>86.7</td> </tr> <tr> <td>System || COGCOMPNLP || Trained on || CoNLL</td> <td>94.4</td> <td>59.1</td> <td>87.8</td> <td>95.6</td> <td>92.9</td> <td>90.5</td> <td>90.8</td> <td>90.8</td> <td>90.9</td> </tr> <tr> <td>System || ZOE (ours) || Trained on || ×</td> <td>88.4</td> <td>70.0</td> <td>85.6</td> <td>90.1</td> <td>80.1</td> <td>73.9</td> <td>87.8</td> <td>90.9</td> <td>91.2</td> </tr> </tbody></table>
Table 3
table_3
D18-1231
7
emnlp2018
4.2 Coarse Entity Typing. In Table 3 we study entity typing for the coarse types on three datasets. We focus on three types that are shared among the datasets: PER, LOC, ORG. In coarse-entity typing, the best available systems are heavily supervised. In this evaluation, we use gold mention spans; i.e., we force the decoding algorithm of the supervised systems to select the best of the three classes for each gold mention. As expected, the supervised systems have strong in-domain performance. However, they suffer a significant drop when evaluated in a different domain. Our system, while not trained on any supervised data, achieves better or comparable performance compared to other supervised baselines in the out-of-domain evaluations.
[2, 1, 1, 2, 2, 1, 1, 1]
['4.2 Coarse Entity Typing.', 'In Table 3 we study entity typing for the coarse types on three datasets.', 'We focus on three types that are shared among the datasets: PER, LOC, ORG.', 'In coarse-entity typing, the best available systems are heavily supervised.', 'In this evaluation, we use gold mention spans; i.e., we force the decoding algorithm of the supervised systems to select the best of the three classes for each gold mention.', 'As expected, the supervised systems have strong in-domain performance.', 'However, they suffer a significant drop when evaluated in a different domain.', 'Our system, while not trained on any supervised data, achieves better or comparable performance compared to other supervised baselines in the out-of-domain evaluations.']
[None, ['OntoNotes', 'CoNLL', 'MUC'], ['PER', 'LOC', 'ORG'], ['COGCOMPNLP'], ['COGCOMPNLP'], ['COGCOMPNLP', 'OntoNotes', 'CoNLL'], ['COGCOMPNLP', 'OntoNotes', 'CoNLL', 'MUC'], ['ZOE (ours)', 'Trained on', '×', 'COGCOMPNLP']]
1
D18-1231table_6
Ablation study of different ways in which concepts are generated in our system (§4.5). The first row shows the performance of our system on each dataset, followed by the change in the performance upon dropping a component. While both signals are crucial, contextual information is playing a more important role than the mention-surface signal.
2
[['Approach', 'ZOE (ours)'], ['Approach', 'no surface-based concepts'], ['Approach', 'no context-based concepts']]
2
[['FIGER', 'Acc.'], ['FIGER', 'F1ma'], ['FIGER', 'F1mi'], ['BBN', 'Acc.'], ['BBN', 'F1ma'], ['BBN', 'F1mi'], ['OntoNotesfine', 'Acc.'], ['OntoNotesfine', 'F1ma'], ['OntoNotesfine', 'F1mi']]
[['58.8', '74.8', '71.3', '61.8', '74.6', '74.9', '50.7', '66.9', '60.8'], ['-8.8', '-7.5', '-9.2', '-12.9', '-7.0', '-8.6', '-1.8', '-1.2', '-0.1'], ['-39.3', '-42.1', '-25.4', '-36.4', '-31.0', '-13.9', '-10.0', '-12.3', '-7.4']]
column
['Acc.', 'F1ma', 'F1mi', 'Acc.', 'F1ma', 'F1mi', 'Acc.', 'F1ma', 'F1mi']
['ZOE (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FIGER || Acc.</th> <th>FIGER || F1ma</th> <th>FIGER || F1mi</th> <th>BBN || Acc.</th> <th>BBN || F1ma</th> <th>BBN || F1mi</th> <th>OntoNotesfine || Acc.</th> <th>OntoNotesfine || F1ma</th> <th>OntoNotesfine || F1mi</th> </tr> </thead> <tbody> <tr> <td>Approach || ZOE (ours)</td> <td>58.8</td> <td>74.8</td> <td>71.3</td> <td>61.8</td> <td>74.6</td> <td>74.9</td> <td>50.7</td> <td>66.9</td> <td>60.8</td> </tr> <tr> <td>Approach || no surface-based concepts</td> <td>-8.8</td> <td>-7.5</td> <td>-9.2</td> <td>-12.9</td> <td>-7.0</td> <td>-8.6</td> <td>-1.8</td> <td>-1.2</td> <td>-0.1</td> </tr> <tr> <td>Approach || no context-based concepts</td> <td>-39.3</td> <td>-42.1</td> <td>-25.4</td> <td>-36.4</td> <td>-31.0</td> <td>-13.9</td> <td>-10.0</td> <td>-12.3</td> <td>-7.4</td> </tr> </tbody></table>
Table 6
table_6
D18-1231
9
emnlp2018
4.5 Ablation Study. We carry out ablation studies that quantify the contribution of surface information (§3.3) and context information (§3.2). As Table 6 shows, both factors are crucial and complementary for the system. However, the contextual information seems to have a bigger role overall. We complement our qualitative analysis with the quantitative share of each component. In 69.3%, 54.6%, and 69.7% of mentions, our system uses the context information (and ignores the surface), in FIGER, BBN, and OntoNotesfine datasets, respectively, underscoring the importance of contextual information.
[2, 2, 1, 1, 2, 2]
['4.5 Ablation Study.', 'We carry out ablation studies that quantify the contribution of surface information (§3.3) and context information (§3.2).', 'As Table 6 shows, both factors are crucial and complementary for the system.', 'However, the contextual information seems to have a bigger role overall.', 'We complement our qualitative analysis with the quantitative share of each component.', 'In 69.3%, 54.6%, and 69.7% of mentions, our system uses the context information (and ignores the surface), in FIGER, BBN, and OntoNotesfine datasets, respectively, underscoring the importance of contextual information.']
[None, None, ['ZOE (ours)', 'no surface-based concepts', 'no context-based concepts'], ['no context-based concepts'], None, ['ZOE (ours)', 'no surface-based concepts', 'no context-based concepts', 'FIGER', 'BBN', 'OntoNotesfine']]
1
D18-1233table_3
Selected Results of the baseline models on follow-up question generation.
2
[['Model', 'First Sent.'], ['Model', 'NMT-Copy'], ['Model', 'BiDAF'], ['Model', 'Rule-based']]
1
[['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4']]
[['0.221', '0.144', '0.119', '0.106'], ['0.339', '0.206', '0.139', '0.102'], ['0.450', '0.375', '0.338', '0.312'], ['0.533', '0.437', '0.379', '0.344']]
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4']
['Rule-based']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>Model || First Sent.</td> <td>0.221</td> <td>0.144</td> <td>0.119</td> <td>0.106</td> </tr> <tr> <td>Model || NMT-Copy</td> <td>0.339</td> <td>0.206</td> <td>0.139</td> <td>0.102</td> </tr> <tr> <td>Model || BiDAF</td> <td>0.450</td> <td>0.375</td> <td>0.338</td> <td>0.312</td> </tr> <tr> <td>Model || Rule-based</td> <td>0.533</td> <td>0.437</td> <td>0.379</td> <td>0.344</td> </tr> </tbody></table>
Table 3
table_3
D18-1233
9
emnlp2018
Results. Our results, shown in Table 3 indicate that systems that return contiguous spans from the rule text perform better according to our BLEU metric. We speculate that the logical forms in the data are challenging for existing models to extract and manipulate, which may suggest why the explicit rule-based system performed best. We further note that only the rule-based and NMT-Copy models are capable of generating genuine questions rather than spans or sentences.
[2, 1, 1, 2]
['Results.', 'Our results, shown in Table 3 indicate that systems that return contiguous spans from the rule text perform better according to our BLEU metric.', 'We speculate that the logical forms in the data are challenging for existing models to extract and manipulate, which may suggest why the explicit rule-based system performed best.', 'We further note that only the rule-based and NMT-Copy models are capable of generating genuine questions rather than spans or sentences.']
[None, ['BiDAF', 'Rule-based', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4'], ['Rule-based'], ['NMT-Copy', 'Rule-based']]
1
D18-1233table_4
Results of entailment models on ShARC.
2
[['Model', 'Random'], ['Model', 'Surface LR'], ['Model', 'DAM (SNLI)'], ['Model', 'DAM (ShARC)']]
1
[['Micro Acc.'], ['Macro Acc.']]
[['0.330', '0.326'], ['0.682', '0.333'], ['0.479', '0.362'], ['0.492', '0.322']]
column
['Micro Acc.', 'Macro Acc.']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Micro Acc.</th> <th>Macro Acc.</th> </tr> </thead> <tbody> <tr> <td>Model || Random</td> <td>0.330</td> <td>0.326</td> </tr> <tr> <td>Model || Surface LR</td> <td>0.682</td> <td>0.333</td> </tr> <tr> <td>Model || DAM (SNLI)</td> <td>0.479</td> <td>0.362</td> </tr> <tr> <td>Model || DAM (ShARC)</td> <td>0.492</td> <td>0.322</td> </tr> </tbody></table>
Table 4
table_4
D18-1233
9
emnlp2018
Results. Table 4 shows the result of our baseline models on the entailment corpus of the ShARC test set. Results show poor performance especially for the macro accuracy metric of both simple baselines and neural state-of-the-art entailment models. This performance highlights the challenges that the scenario interpretation task of ShARC presents, many of which are discussed in Section 4.2.2.
[2, 1, 1, 2]
['Results.', 'Table 4 shows the result of our baseline models on the entailment corpus of the ShARC test set.', 'Results show poor performance especially for the macro accuracy metric of both simple baselines and neural state-of-the-art entailment models.', 'This performance highlights the challenges that the scenario interpretation task of ShARC presents, many of which are discussed in Section 4.2.2.']
[None, ['Random', 'Surface LR', 'DAM (SNLI)', 'DAM (ShARC)'], ['Random', 'Surface LR', 'DAM (SNLI)', 'DAM (ShARC)', 'Macro Acc.'], ['Random', 'Surface LR', 'DAM (SNLI)', 'DAM (ShARC)', 'Macro Acc.']]
1
D18-1235table_2
Comparison among different choices for the loss function with multiple answers on the development set
2
[['Loss', 'single answer'], ['Loss', 'Lmin'], ['Loss', 'Lavg'], ['Loss', 'Lwavg']]
1
[['ROUGE-L'], ['Δ']]
[['48.93', '-'], ['49.05', '+0.12'], ['49.67', '+0.74'], ['49.77', '+0.84']]
column
['ROUGE-L', 'delta']
['Lmin', 'Lavg', 'Lwavg']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-L</th> <th>Δ</th> </tr> </thead> <tbody> <tr> <td>Loss || single answer</td> <td>48.93</td> <td>-</td> </tr> <tr> <td>Loss || Lmin</td> <td>49.05</td> <td>+0.12</td> </tr> <tr> <td>Loss || Lavg</td> <td>49.67</td> <td>+0.74</td> </tr> <tr> <td>Loss || Lwavg</td> <td>49.77</td> <td>+0.84</td> </tr> </tbody></table>
Table 2
table_2
D18-1235
7
emnlp2018
4.3.2 Different loss functions with multi-answer. Table 2 shows the experimental results with three different multi-answer loss functions introduced in Section 3.5.1. All of them offer improvement over the single-answer baseline, which shows the effectiveness of utilizing multiple answers. The average loss performs better than the min loss, which suggests that forcing the model to predict all possible answers is better than asking it to just find the easiest one. Not surprisingly, by taking into account the quality of different answer spans, the weighted average loss outperforms the average loss and achieves the best result among the three. All later experiments are conducted based on the weighted average loss.
[2, 1, 1, 1, 1, 2]
['4.3.2 Different loss functions with multi-answer.', 'Table 2 shows the experimental results with three different multi-answer loss functions introduced in Section 3.5.1.', 'All of them offer improvement over the single-answer baseline, which shows the effectiveness of utilizing multiple answers.', 'The average loss performs better than the min loss, which suggests that forcing the model to predict all possible answers is better than asking it to just find the easiest one.', 'Not surprisingly, by taking into account the quality of different answer spans, the weighted average loss outperforms the average loss and achieves the best result among the three.', 'All later experiments are conducted based on the weighted average loss.']
[None, ['Lmin', 'Lavg', 'Lwavg'], ['ROUGE-L', 'single answer', 'Lmin', 'Lavg', 'Lwavg'], ['ROUGE-L', 'Lmin', 'Lavg'], ['ROUGE-L', 'Lmin', 'Lavg', 'Lwavg'], ['Lwavg']]
1
D18-1235table_4
Performance of our model and competing models on the DuReader test set
2
[['Model', 'BiDAF (He et al. 2017)'], ['Model', 'Match-LSTM (He et al. 2017)'], ['Model', 'PR+BiDAF (Wang et al. 2018b)'], ['Model', 'PE+BiDAF (ours)'], ['Model', 'V-Net (Wang et al. 2018b)'], ['Model', 'Our complete model'], ['Model', 'Human']]
1
[['ROUGE-L'], ['BLEU-4']]
[['39.0', '31.8'], ['39.2', '31.9'], ['41.81', '37.55'], ['45.93', '38.86'], ['44.18', '40.97'], ['51.09', '43.76'], ['57.4', '56.1']]
column
['ROUGE-L', 'BLEU-4']
['PE+BiDAF (ours)', 'Our complete model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-L</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>Model || BiDAF (He et al. 2017)</td> <td>39.0</td> <td>31.8</td> </tr> <tr> <td>Model || Match-LSTM (He et al. 2017)</td> <td>39.2</td> <td>31.9</td> </tr> <tr> <td>Model || PR+BiDAF (Wang et al. 2018b)</td> <td>41.81</td> <td>37.55</td> </tr> <tr> <td>Model || PE+BiDAF (ours)</td> <td>45.93</td> <td>38.86</td> </tr> <tr> <td>Model || V-Net (Wang et al. 2018b)</td> <td>44.18</td> <td>40.97</td> </tr> <tr> <td>Model || Our complete model</td> <td>51.09</td> <td>43.76</td> </tr> <tr> <td>Model || Human</td> <td>57.4</td> <td>56.1</td> </tr> </tbody></table>
Table 4
table_4
D18-1235
8
emnlp2018
4.3.4 Comparison with State-of-the-art. Table 4 shows the performance of our model and other state-of-the-art models on the DuReader test set. First, we compare our passage extraction method with the paragraph ranking model from Wang et al. (2018b). Based on the same BiDAF model described in Section 3.4, our method (PE+BiDAF) significantly outperforms the trained model from Wang et al. (2018b) (PR+BiDAF) on the DuReader test set. As we can see, our complete model achieves the state-of-the-art performance in both ROUGE-L and BLEU-4, and greatly narrows the performance gap between MRC system and human in the challenging real-world setting.
[2, 1, 2, 1, 1]
['4.3.4 Comparison with State-of-the-art.', 'Table 4 shows the performance of our model and other state-of-the-art models on the DuReader test set.', 'First, we compare our passage extraction method with the paragraph ranking model from Wang et al. (2018b).', 'Based on the same BiDAF model described in Section 3.4, our method (PE+BiDAF) significantly outperforms the trained model from Wang et al. (2018b) (PR+BiDAF) on the DuReader test set.', 'As we can see, our complete model achieves the state-of-the-art performance in both ROUGE-L and BLEU-4, and greatly narrows the performance gap between MRC system and human in the challenging real-world setting.']
[None, ['BiDAF (He et al. 2017)', 'Match-LSTM (He et al. 2017)', 'PR+BiDAF (Wang et al. 2018b)', 'PE+BiDAF (ours)', 'V-Net (Wang et al. 2018b)', 'Our complete model'], None, ['PE+BiDAF (ours)', 'PR+BiDAF (Wang et al. 2018b)'], ['Our complete model', 'Human', 'ROUGE-L', 'BLEU-4']]
1
D18-1239table_1
Results for Short Questions (CLEVRGEN): Performance of our model compared to baseline models on the Short Questions test set. The LSTM (NO KG) has accuracy close to chance, showing that the questions lack trivial biases. Our model almost perfectly solves all questions showing its ability to learn challenging semantic operators, and parse questions only using weak end-to-end supervision.
2
[['Model', 'LSTM (NO KG)'], ['Model', 'LSTM'], ['Model', 'BI-LSTM'], ['Model', 'TREE-LSTM'], ['Model', 'TREE-LSTM (UNSUP.)'], ['Model', 'RELATION NETWORK'], ['Model', 'Our Model (Pre-parsed)'], ['Model', 'Our Model']]
1
[['Boolean Questions'], ['Entity Set Questions'], ['Relation Questions'], ['Overall']]
[['50.7', '14.4', '17.5', '27.2'], ['88.5', '99.9', '15.7', '84.9'], ['85.3', '99.6', '14.9', '83.6'], ['82.2', '97.0', '15.7', '81.2'], ['85.4', '99.4', '16.1', '83.6'], ['85.6', '89.7', '97.6', '89.4'], ['94.8', '93.4', '70.5', '90.8'], ['99.9', '100', '100', '99.9']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Our Model (Pre-parsed)', 'Our Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Boolean Questions</th> <th>Entity Set Questions</th> <th>Relation Questions</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM (NO KG)</td> <td>50.7</td> <td>14.4</td> <td>17.5</td> <td>27.2</td> </tr> <tr> <td>Model || LSTM</td> <td>88.5</td> <td>99.9</td> <td>15.7</td> <td>84.9</td> </tr> <tr> <td>Model || BI-LSTM</td> <td>85.3</td> <td>99.6</td> <td>14.9</td> <td>83.6</td> </tr> <tr> <td>Model || TREE-LSTM</td> <td>82.2</td> <td>97.0</td> <td>15.7</td> <td>81.2</td> </tr> <tr> <td>Model || TREE-LSTM (UNSUP.)</td> <td>85.4</td> <td>99.4</td> <td>16.1</td> <td>83.6</td> </tr> <tr> <td>Model || RELATION NETWORK</td> <td>85.6</td> <td>89.7</td> <td>97.6</td> <td>89.4</td> </tr> <tr> <td>Model || Our Model (Pre-parsed)</td> <td>94.8</td> <td>93.4</td> <td>70.5</td> <td>90.8</td> </tr> <tr> <td>Model || Our Model</td> <td>99.9</td> <td>100</td> <td>100</td> <td>99.9</td> </tr> </tbody></table>
Table 1
table_1
D18-1239
7
emnlp2018
Short Questions Performance. Table 1 shows that our model perfectly answers all test questions, demonstrating that it can learn challenging semantic operators and induce parse trees from end task supervision. Performance drops when using an external parser, showing that our model learns an effective syntactic model for this domain. The RELATION NETWORK also achieves good performance, particularly on questions involving relations. LSTM baselines work well on questions not involving relations.
[2, 1, 1, 1, 1]
['Short Questions Performance.', 'Table 1 shows that our model perfectly answers all test questions, demonstrating that it can learn challenging semantic operators and induce parse trees from end task supervision.', 'Performance drops when using an external parser, showing that our model learns an effective syntactic model for this domain.', 'The RELATION NETWORK also achieves good performance, particularly on questions involving relations.', 'LSTM baselines work well on questions not involving relations.']
[None, ['Our Model (Pre-parsed)', 'Our Model', 'Boolean Questions', 'Entity Set Questions', 'Relation Questions'], ['Our Model (Pre-parsed)', 'Our Model'], ['RELATION NETWORK', 'Relation Questions'], ['LSTM', 'Relation Questions']]
1
D18-1239table_3
Results for Human Queries (GENX): Our model outperforms LSTM and semantic parsing models on complex human-generated queries, showing it is robust enough to work on natural language. Better performance than TREE-LSTM (UNSUP.) shows the efficacy in representing sub-phrases using explicit denotations. Our model also performs better without an external parser, showing the advantages of latent syntax.
2
[['Model', 'LSTM (NO KG)'], ['Model', 'LSTM'], ['Model', 'BI-LSTM'], ['Model', 'TREE-LSTM'], ['Model', 'TREE-LSTM (UNSUP.)'], ['Model', 'SEMPRE'], ['Model', 'Our Model (Pre-parsed)'], ['Model', 'Our Model']]
1
[['Accuracy']]
[['0.0'], ['64.9'], ['64.6'], ['43.5'], ['67.7'], ['48.1'], ['67.1'], ['73.7']]
column
['Accuracy']
['Our Model (Pre-parsed)', 'Our Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM (NO KG)</td> <td>0.0</td> </tr> <tr> <td>Model || LSTM</td> <td>64.9</td> </tr> <tr> <td>Model || BI-LSTM</td> <td>64.6</td> </tr> <tr> <td>Model || TREE-LSTM</td> <td>43.5</td> </tr> <tr> <td>Model || TREE-LSTM (UNSUP.)</td> <td>67.7</td> </tr> <tr> <td>Model || SEMPRE</td> <td>48.1</td> </tr> <tr> <td>Model || Our Model (Pre-parsed)</td> <td>67.1</td> </tr> <tr> <td>Model || Our Model</td> <td>73.7</td> </tr> </tbody></table>
Table 3
table_3
D18-1239
9
emnlp2018
Performance on Human-generated Language:. Table 3 shows the performance of our model on complex human-generated queries in GENX. Our approach outperforms strong LSTM and semantic parsing baselines, despite the semantic parser’s use of hard-coded operators. These results suggest that our method represents an attractive middle ground between minimally structured and highly structured approaches to interpretation. Our model learns to interpret operators such as except that were not considered during development. This shows that our model can learn to parse human language, which contains greater lexical and structural diversity than synthetic questions. Trees induced by the model are linguistically plausible (see Appendix D).
[2, 1, 1, 2, 2, 2, 2]
['Performance on Human-generated Language:.', 'Table 3 shows the performance of our model on complex human-generated queries in GENX.', 'Our approach outperforms strong LSTM and semantic parsing baselines, despite the semantic parser’s use of hard-coded operators.', 'These results suggest that our method represents an attractive middle ground between minimally structured and highly structured approaches to interpretation.', 'Our model learns to interpret operators such as except that were not considered during development.', 'This shows that our model can learn to parse human language, which contains greater lexical and structural diversity than synthetic questions.', 'Trees induced by the model are linguistically plausible (see Appendix D).']
[None, ['Our Model'], ['Our Model', 'LSTM', 'SEMPRE'], ['Our Model'], ['Our Model'], ['Our Model'], ['Our Model']]
1
D18-1240table_7
Comparison to the SoA on SemEval-2016
1
[['Kelp [#1] (Filice et al. 2016)'], ['Conv-KN [#2] (Barron-Cedeno et al. 2016)'], ['CTKC +VQF (Tymoshenko et al. 2016b)'], ['HyperQA (Tay et al. 2018)'], ['AI-CNN (Zhang et al. 2017)'], ['Our model (V+Bcr+Ecr+E+SST)']]
1
[['MRR'], ['MAP']]
[['86.42', '79.19'], ['84.93', '77.6'], ['86.26', '78.78'], ['n/a', '79.5'], ['n/a', '80.14'], ['86.52', '79.79']]
column
['MRR', 'MAP']
['Our model (V+Bcr+Ecr+E+SST)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MRR</th> <th>MAP</th> </tr> </thead> <tbody> <tr> <td>Kelp [#1] (Filice et al. 2016)</td> <td>86.42</td> <td>79.19</td> </tr> <tr> <td>Conv-KN [#2] (Barron-Cedeno et al. 2016)</td> <td>84.93</td> <td>77.6</td> </tr> <tr> <td>CTKC +VQF (Tymoshenko et al. 2016b)</td> <td>86.26</td> <td>78.78</td> </tr> <tr> <td>HyperQA (Tay et al. 2018)</td> <td>n/a</td> <td>79.5</td> </tr> <tr> <td>AI-CNN (Zhang et al. 2017)</td> <td>n/a</td> <td>80.14</td> </tr> <tr> <td>Our model (V+Bcr+Ecr+E+SST)</td> <td>86.52</td> <td>79.79</td> </tr> </tbody></table>
Table 7
table_7
D18-1240
9
emnlp2018
Semeval. Table 7 compares the performance of the Bcr + Ecr + V + E + SST system on Semeval to that of KeLP and ConvKN, the two top systems in the SemEval 2016 competition, and also to the performance of the recent DNN-based HyperQA and AI-CNN systems. In the SemEval 2016 competition, our model would have been first, with the #1 KeLP system being 0.6 MAP points behind. Moreover, it would have outperformed the state-of-the-art AI-CNN system by 0.35 MAP points.
[2, 1, 1, 1]
['Semeval.', 'Table 7 compares the performance of the Bcr + Ecr + V + E + SST system on Semeval to that of KeLP and ConvKN, the two top systems in the SemEval 2016 competition, and also to the performance of the recent DNN-based HyperQA and AI-CNN systems.', 'In the SemEval 2016 competition, our model would have been first, with the #1 KeLP system being 0.6 MAP points behind.', 'Moreover, it would have outperformed the state-of-the-art AI-CNN system by 0.35 MAP points.']
[None, ['Kelp [#1] (Filice et al. 2016)', 'Conv-KN [#2] (Barron-Cedeno et al. 2016)', 'CTKC +VQF (Tymoshenko et al. 2016b)', 'HyperQA (Tay et al. 2018)', 'AI-CNN (Zhang et al. 2017)', 'Our model (V+Bcr+Ecr+E+SST)'], ['Our model (V+Bcr+Ecr+E+SST)', 'Kelp [#1] (Filice et al. 2016)', 'MAP'], ['Our model (V+Bcr+Ecr+E+SST)', 'AI-CNN (Zhang et al. 2017)', 'MAP']]
1
D18-1244table_1
Results on TACRED. Underscore marks highest number among single models; bold marks highest among all. † marks results reported in (Zhang et al., 2017); ‡ marks results produced with our implementation. ⇤ marks statistically significant improvements over PA-LSTM with p < .01 under a bootstrap test.
2
[['System', 'LR (Zhang+2017)'], ['System', 'SDP-LSTM (Xu+2015b)'], ['System', 'Tree-LSTM (Tai+2015)'], ['System', 'PA-LSTM (Zhang+2017)'], ['System', 'GCN'], ['System', 'C-GCN'], ['System', 'GCN + PA-LSTM'], ['System', 'C-GCN + PA-LSTM']]
1
[['P'], ['R'], ['F1']]
[['73.5', '49.9', '59.4'], ['66.3', '52.7', '58.7'], ['66', '59.2', '62.4'], ['65.7', '64.5', '65.1'], ['69.8', '59', '64'], ['69.9', '63.3', '66.4'], ['71.7', '63', '67.1'], ['71.3', '65.4', '68.2']]
column
['P', 'R', 'F1']
['GCN', 'C-GCN', 'GCN + PA-LSTM', 'C-GCN + PA-LSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System || LR (Zhang+2017)</td> <td>73.5</td> <td>49.9</td> <td>59.4</td> </tr> <tr> <td>System || SDP-LSTM (Xu+2015b)</td> <td>66.3</td> <td>52.7</td> <td>58.7</td> </tr> <tr> <td>System || Tree-LSTM (Tai+2015)</td> <td>66</td> <td>59.2</td> <td>62.4</td> </tr> <tr> <td>System || PA-LSTM (Zhang+2017)</td> <td>65.7</td> <td>64.5</td> <td>65.1</td> </tr> <tr> <td>System || GCN</td> <td>69.8</td> <td>59</td> <td>64</td> </tr> <tr> <td>System || C-GCN</td> <td>69.9</td> <td>63.3</td> <td>66.4</td> </tr> <tr> <td>System || GCN + PA-LSTM</td> <td>71.7</td> <td>63</td> <td>67.1</td> </tr> <tr> <td>System || C-GCN + PA-LSTM</td> <td>71.3</td> <td>65.4</td> <td>68.2</td> </tr> </tbody></table>
Table 1
table_1
D18-1244
6
emnlp2018
5.3 Results on the TACRED Dataset. We present our main results on the TACRED test set in Table 1. We observe that our GCN model outperforms all dependency-based models by at least 1.6 F1. By using contextualized word representations, the C-GCN model further outperforms the strong PA-LSTM model by 1.3 F1, and achieves a new state of the art. In addition, we find our model improves upon other dependency-based models in both precision and recall. Comparing the C-GCN model with the GCN model, we find that the gain mainly comes from improved recall. We hypothesize that this is because the C-GCN is more robust to parse errors by capturing local word patterns (see also Section 6.2). As we will show in Section 6.2, we find that our GCN models have complementary strengths when compared to the PA-LSTM. To leverage this result, we experiment with a simple interpolation strategy to combine these models. Given the output probabilities PG(r|x) from a GCN model and PS(r|x) from the sequence model for any relation r, we calculate the interpolated probability as P(r|x) = α · PG(r|x) + (1 - α) · PS(r|x) where α ∈ [0, 1] is chosen on the dev set and set to 0.6. This simple interpolation between a GCN and a PA-LSTM achieves an F1 score of 67.1, outperforming each model alone by at least 2.0 F1. An interpolation between a C-GCN and a PA-LSTM further improves the result to 68.2. This complementary performance explains the gain we see in Table 1 when the two models are combined.
[2, 1, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1]
['5.3 Results on the TACRED Dataset.', 'We present our main results on the TACRED test set in Table 1.', 'We observe that our GCN model outperforms all dependency-based models by at least 1.6 F1.', 'By using contextualized word representations, the C-GCN model further outperforms the strong PA-LSTM model by 1.3 F1, and achieves a new state of the art.', 'In addition, we find our model improves upon other dependency-based models in both precision and recall.', 'Comparing the C-GCN model with the GCN model, we find that the gain mainly comes from improved recall.', 'We hypothesize that this is because the C-GCN is more robust to parse errors by capturing local word patterns (see also Section 6.2).', 'As we will show in Section 6.2, we find that our GCN models have complementary strengths when compared to the PA-LSTM.', 'To leverage this result, we experiment with a simple interpolation strategy to combine these models.', 'Given the output probabilities PG(r|x) from a GCN model and PS(r|x) from the sequence model for any relation r, we calculate the interpolated probability as P(r|x) = α · PG(r|x) + (1 - α) · PS(r|x) where α ∈ [0, 1] is chosen on the dev set and set to 0.6.', 'This simple interpolation between a GCN and a PA-LSTM achieves an F1 score of 67.1, outperforming each model alone by at least 2.0 F1.', 'An interpolation between a C-GCN and a PA-LSTM further improves the result to 68.2.', 'This complementary performance explains the gain we see in Table 1 when the two models are combined.']
[None, ['GCN', 'C-GCN', 'GCN + PA-LSTM', 'C-GCN + PA-LSTM'], ['GCN', 'LR (Zhang+2017)', 'SDP-LSTM (Xu+2015b)', 'Tree-LSTM (Tai+2015)', 'F1'], ['C-GCN', 'PA-LSTM (Zhang+2017)', 'F1'], ['C-GCN', 'LR (Zhang+2017)', 'SDP-LSTM (Xu+2015b)', 'Tree-LSTM (Tai+2015)', 'P', 'R'], ['GCN', 'C-GCN', 'R'], ['C-GCN'], ['GCN', 'PA-LSTM (Zhang+2017)'], ['GCN', 'PA-LSTM (Zhang+2017)'], ['GCN'], ['GCN + PA-LSTM', 'F1', 'PA-LSTM (Zhang+2017)', 'GCN'], ['C-GCN + PA-LSTM', 'F1'], ['GCN + PA-LSTM', 'C-GCN + PA-LSTM']]
1
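The record above quotes the interpolation formula P(r|x) = α · PG(r|x) + (1 - α) · PS(r|x) with α = 0.6. The following minimal Python sketch only illustrates that quoted formula; it is not the authors' code, and the function name and example relation labels are hypothetical.

def interpolate_probs(p_gcn, p_seq, alpha=0.6):
    # p_gcn and p_seq map each relation r to P_G(r|x) and P_S(r|x), respectively.
    # Returns P(r|x) = alpha * P_G(r|x) + (1 - alpha) * P_S(r|x).
    return {r: alpha * p_gcn[r] + (1.0 - alpha) * p_seq[r] for r in p_gcn}

# Hypothetical two-relation example; the predicted relation is the argmax of the mixture.
mixed = interpolate_probs({"per:title": 0.7, "no_relation": 0.3},
                          {"per:title": 0.4, "no_relation": 0.6})
predicted = max(mixed, key=mixed.get)  # "per:title" (0.58 vs. 0.42)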
D18-1259table_4
Main results: the performance of question answering and supporting fact prediction in the two benchmark settings. We encourage researchers to report these metrics when evaluating their methods.
4
[['Setting', 'distractor', 'Split', 'dev'], ['Setting', 'distractor', 'Split', 'test'], ['Setting', 'full wiki', 'Split', 'dev'], ['Setting', 'full wiki', 'Split', 'test']]
2
[['Answer', 'EM'], ['Answer', 'F1'], ['Sup Fact', 'EM'], ['Sup Fact', 'F1'], ['Joint', 'EM'], ['Joint', 'F1']]
[['44.44', '58.28', '21.95', '66.66', '11.56', '40.86'], ['45.46', '58.99', '22.24', '66.62', '12.04', '41.37'], ['24.68', '34.36', '5.28', '40.98', '2.54', '17.73'], ['25.23', '34.40', '5.07', '40.69', '2.63', '17.85']]
column
['EM', 'F1', 'EM', 'F1', 'EM', 'F1']
['distractor', 'full wiki']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Answer || EM</th> <th>Answer || F1</th> <th>Sup Fact || EM</th> <th>Sup Fact || F1</th> <th>Joint || EM</th> <th>Joint || F1</th> </tr> </thead> <tbody> <tr> <td>Setting || distractor || Split || dev</td> <td>44.44</td> <td>58.28</td> <td>21.95</td> <td>66.66</td> <td>11.56</td> <td>40.86</td> </tr> <tr> <td>Setting || distractor || Split || test</td> <td>45.46</td> <td>58.99</td> <td>22.24</td> <td>66.62</td> <td>12.04</td> <td>41.37</td> </tr> <tr> <td>Setting || full wiki || Split || dev</td> <td>24.68</td> <td>34.36</td> <td>5.28</td> <td>40.98</td> <td>2.54</td> <td>17.73</td> </tr> <tr> <td>Setting || full wiki || Split || test</td> <td>25.23</td> <td>34.40</td> <td>5.07</td> <td>40.69</td> <td>2.63</td> <td>17.85</td> </tr> </tbody></table>
Table 4
table_4
D18-1259
8
emnlp2018
The performance of our model on the benchmark settings is reported in Table 4, where all numbers are obtained with strong supervision over supporting facts. From the distractor setting to the full wiki setting, expanding the scope of the context increases the difficulty of question answering. The performance in the full wiki setting is substantially lower, which poses a challenge to existing techniques on retrieval-based question answering. Overall, model performance in all settings is significantly lower than human performance as shown in Section 5.3, which indicates that more technical advancements are needed in future work. We also investigate the explainability of our model by measuring supporting fact prediction performance. Our model achieves 60+ supporting fact prediction F1 and ∼40 joint F1, which indicates there is room for further improvement in terms of explainability.
[1, 1, 1, 2, 1, 1]
['The performance of our model on the benchmark settings is reported in Table 4, where all numbers are obtained with strong supervision over supporting facts.', 'From the distractor setting to the full wiki setting, expanding the scope of the context increases the difficulty of question answering.', 'The performance in the full wiki setting is substantially lower, which poses a challenge to existing techniques on retrieval-based question answering.', 'Overall, model performance in all settings is significantly lower than human performance as shown in Section 5.3, which indicates that more technical advancements are needed in future work.', 'We also investigate the explainability of our model by measuring supporting fact prediction performance.', 'Our model achieves 60+ supporting fact prediction F1 and ∼40 joint F1, which indicates there is room for further improvement in terms of explainability.']
[None, ['distractor', 'full wiki', 'Answer'], ['full wiki', 'Answer', 'Sup Fact', 'Joint', 'EM', 'F1'], ['distractor', 'full wiki'], ['Sup Fact'], ['Sup Fact', 'F1', 'Joint', 'distractor', 'full wiki']]
1
D18-1259table_9
Retrieval performance comparison on full wiki setting for train-medium, dev and test with 1,000 random samples each. MAP is in %. Mean Rank averages over retrieval ranks of two gold paragraphs. CorAns Rank refers to the rank of the gold paragraph containing the answer.
2
[['Set', 'train-medium'], ['Set', 'dev'], ['Set', 'test']]
1
[['MAP'], ['Mean Rank'], ['CorAns Rank']]
[['41.89', '288.19', '82.76'], ['42.79', '304.30', '97.93'], ['45.92', '286.20', '74.85']]
column
['MAP', 'Mean Rank', 'CorAns Rank']
['train-medium']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>Mean Rank</th> <th>CorAns Rank</th> </tr> </thead> <tbody> <tr> <td>Set || train-medium</td> <td>41.89</td> <td>288.19</td> <td>82.76</td> </tr> <tr> <td>Set || dev</td> <td>42.79</td> <td>304.30</td> <td>97.93</td> </tr> <tr> <td>Set || test</td> <td>45.92</td> <td>286.20</td> <td>74.85</td> </tr> </tbody></table>
Table 9
table_9
D18-1259
12
emnlp2018
Table 9 shows the comparison between the train-medium split and hard examples like dev and test under retrieval metrics in the full wiki setting. As we can see, the performance gap between the train-medium split and dev/test is small, which implies that the train-medium split has a similar level of difficulty to hard examples under the full wiki setting, in which a retrieval model is necessary as the first processing step.
[1, 1]
['Table 9 shows the comparison between the train-medium split and hard examples like dev and test under retrieval metrics in the full wiki setting.', 'As we can see, the performance gap between the train-medium split and dev/test is small, which implies that the train-medium split has a similar level of difficulty to hard examples under the full wiki setting, in which a retrieval model is necessary as the first processing step.']
[['train-medium', 'dev', 'test', 'MAP', 'Mean Rank', 'CorAns Rank'], ['train-medium', 'dev', 'test']]
1
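The caption of the retrieval record above defines Mean Rank as the average rank of the two gold paragraphs and reports MAP in %. The sketch below illustrates these two quantities under the standard average-precision definition; it is not the paper's evaluation script, and the paragraph ids are hypothetical.

def average_precision(ranked_ids, gold_ids):
    # Precision at each rank where a gold paragraph appears, averaged over the gold paragraphs.
    hits, precisions = 0, []
    for rank, pid in enumerate(ranked_ids, start=1):
        if pid in gold_ids:
            hits += 1
            precisions.append(hits / rank)
    return 100.0 * sum(precisions) / len(gold_ids)  # in %, as in the table

def mean_rank(ranked_ids, gold_ids):
    # Average of the 1-based ranks of the gold paragraphs (both assumed retrieved).
    return sum(ranked_ids.index(pid) + 1 for pid in gold_ids) / len(gold_ids)

# Hypothetical question whose two gold paragraphs are retrieved at ranks 2 and 5.
ranked = ["p7", "p3", "p9", "p1", "p4"]
gold = {"p3", "p4"}
print(average_precision(ranked, gold))  # 45.0
print(mean_rank(ranked, gold))          # 3.5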
D18-1262table_3
Results on the English out-of-domain test set.
3
[['System', 'Local model', 'Lei et al. (2015)'], ['System', 'Local model', 'FitzGerald et al. (2015)'], ['System', 'Local model', 'Roth and Lapata (2016)'], ['System', 'Local model', 'Marcheggiani et al. (2017)'], ['System', 'Local model', 'Marcheggiani and Titov (2017)'], ['System', 'Local model', 'He et al. (2018)'], ['System', 'Local model', 'Cai et al. (2018)'], ['System', 'Local model', 'Ours (Syn-GCN)'], ['System', 'Local model', 'Ours (SA-LSTM)'], ['System', 'Local model', 'Ours (Tree-LSTM)'], ['System', 'Global model', 'Bjorkelund et al. (2010)'], ['System', 'Global model', 'FitzGerald et al. (2015)'], ['System', 'Global model', 'Roth and Lapata (2016)'], ['System', 'Ensemble model', 'FitzGerald et al. (2015)'], ['System', 'Ensemble model', 'Roth and Lapata (2016)'], ['System', 'Ensemble model', 'Marcheggiani and Titov (2017)']]
1
[['P'], ['R'], ['F1']]
[['-', '-', '75.6'], ['-', '-', '75.2'], ['76.9', '73.8', '75.3'], ['79.4', '76.2', '77.7'], ['78.5', '75.9', '77.2'], ['81.9', '76.9', '79.3'], ['79.8', '78.3', '79.0'], ['80.6', '79.0', '79.8'], ['81.0', '78.2', '79.6'], ['80.4', '78.7', '79.5'], ['77.9', '73.6', '75.7'], ['-', '-', '75.2'], ['78.6', '73.8', '76.1'], ['-', '-', '75.5'], ['79.7', '73.6', '76.5'], ['80.8', '77.1', '78.9']]
column
['P', 'R', 'F1']
['Ours (Syn-GCN)', 'Ours (SA-LSTM)', 'Ours (Tree-LSTM)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System || Local model || Lei et al. (2015)</td> <td>-</td> <td>-</td> <td>75.6</td> </tr> <tr> <td>System || Local model || FitzGerald et al. (2015)</td> <td>-</td> <td>-</td> <td>75.2</td> </tr> <tr> <td>System || Local model || Roth and Lapata (2016)</td> <td>76.9</td> <td>73.8</td> <td>75.3</td> </tr> <tr> <td>System || Local model || Marcheggiani et al. (2017)</td> <td>79.4</td> <td>76.2</td> <td>77.7</td> </tr> <tr> <td>System || Local model || Marcheggiani and Titov (2017)</td> <td>78.5</td> <td>75.9</td> <td>77.2</td> </tr> <tr> <td>System || Local model || He et al. (2018)</td> <td>81.9</td> <td>76.9</td> <td>79.3</td> </tr> <tr> <td>System || Local model || Cai et al. (2018)</td> <td>79.8</td> <td>78.3</td> <td>79.0</td> </tr> <tr> <td>System || Local model || Ours (Syn-GCN)</td> <td>80.6</td> <td>79.0</td> <td>79.8</td> </tr> <tr> <td>System || Local model || Ours (SA-LSTM)</td> <td>81.0</td> <td>78.2</td> <td>79.6</td> </tr> <tr> <td>System || Local model || Ours (Tree-LSTM)</td> <td>80.4</td> <td>78.7</td> <td>79.5</td> </tr> <tr> <td>System || Global model || Bjorkelund et al. (2010)</td> <td>77.9</td> <td>73.6</td> <td>75.7</td> </tr> <tr> <td>System || Global model || FitzGerald et al. (2015)</td> <td>-</td> <td>-</td> <td>75.2</td> </tr> <tr> <td>System || Global model || Roth and Lapata (2016)</td> <td>78.6</td> <td>73.8</td> <td>76.1</td> </tr> <tr> <td>System || Ensemble model || FitzGerald et al. (2015)</td> <td>-</td> <td>-</td> <td>75.5</td> </tr> <tr> <td>System || Ensemble model || Roth and Lapata (2016)</td> <td>79.7</td> <td>73.6</td> <td>76.5</td> </tr> <tr> <td>System || Ensemble model || Marcheggiani and Titov (2017)</td> <td>80.8</td> <td>77.1</td> <td>78.9</td> </tr> </tbody></table>
Table 3
table_3
D18-1262
6
emnlp2018
Table 3 presents the results on the English out-of-domain test set. Our models outperform the highest records achieved by He et al. (2018), with absolute improvements of 0.2-0.5% in F1 scores. These favorable results on both in-domain and out-of-domain data demonstrate the effectiveness and robustness of our proposed unified framework.
[1, 1, 1]
['Table 3 presents the results on the English out-of-domain test set.', 'Our models outperform the highest records achieved by He et al. (2018), with absolute improvements of 0.2-0.5% in F1 scores.', 'These favorable results on both in-domain and out-of-domain data demonstrate the effectiveness and robustness of our proposed unified framework.']
[None, ['Ours (Syn-GCN)', 'Ours (SA-LSTM)', 'Ours (Tree-LSTM)', 'F1', 'He et al. (2018)'], ['Ours (Syn-GCN)', 'Ours (SA-LSTM)', 'Ours (Tree-LSTM)']]
1
D18-1262table_6
Comparison of models with deep encoder and M&T encoder (Marcheggiani and Titov, 2017) on the English test set.
2
[['Our system', 'Baseline (syntax-agnostic)'], ['Our system', 'Syn-GCN'], ['Our system', 'SA-LSTM'], ['Our system', 'Tree-LSTM'], ['Our system', 'Syn-GCN (M&T encoder)'], ['Our system', 'SA-LSTM (M&T encoder)'], ['Our system', 'Tree-LSTM (M&T encoder)']]
1
[['P'], ['R'], ['F1']]
[['89.5', '87.9', '88.7'], ['90.3', '89.3', '89.8'], ['90.8', '88.6', '89.7'], ['90.0', '88.8', '89.4'], ['89.2', '88.0', '88.6'], ['89.8', '88.8', '89.3'], ['90.0', '87.8', '88.9']]
column
['P', 'R', 'F1']
['Syn-GCN', 'SA-LSTM', 'Tree-LSTM', 'Syn-GCN (M&T encoder)', 'SA-LSTM (M&T encoder)', 'Tree-LSTM (M&T encoder)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Our system || Baseline (syntax-agnostic)</td> <td>89.5</td> <td>87.9</td> <td>88.7</td> </tr> <tr> <td>Our system || Syn-GCN</td> <td>90.3</td> <td>89.3</td> <td>89.8</td> </tr> <tr> <td>Our system || SA-LSTM</td> <td>90.8</td> <td>88.6</td> <td>89.7</td> </tr> <tr> <td>Our system || Tree-LSTM</td> <td>90.0</td> <td>88.8</td> <td>89.4</td> </tr> <tr> <td>Our system || Syn-GCN (M&amp;T encoder)</td> <td>89.2</td> <td>88.0</td> <td>88.6</td> </tr> <tr> <td>Our system || SA-LSTM (M&amp;T encoder)</td> <td>89.8</td> <td>88.8</td> <td>89.3</td> </tr> <tr> <td>Our system || Tree-LSTM (M&amp;T encoder)</td> <td>90.0</td> <td>87.8</td> <td>88.9</td> </tr> </tbody></table>
Table 6
table_6
D18-1262
7
emnlp2018
To further investigate the impact of the deep encoder, we run our Syn-GCN, SA-LSTM and Tree-LSTM models with an alternative configuration, using the same encoder as (Marcheggiani and Titov, 2017) (M&T encoder for short), which removes the residual connections from our framework. The corresponding results of our models are also summarized in Table 6 for comparison. Note that the first row shows the results of our syntax-agnostic model. Surprisingly, we observe a dramatic performance decline of 1.2% F1 for our Syn-GCN model with the M&T encoder. A less significant performance loss for our SA-LSTM (−0.4%) and Tree-LSTM (−0.5%) models shows that the Syn-GCN is more sensitive to contextual information. Nevertheless, the overall results show that applying the deep encoder brings higher gains.
[2, 1, 1, 1, 1, 2]
['To further investigate the impact of the deep encoder, we run our Syn-GCN, SA-LSTM and Tree-LSTM models with an alternative configuration, using the same encoder as (Marcheggiani and Titov, 2017) (M&T encoder for short), which removes the residual connections from our framework.', 'The corresponding results of our models are also summarized in Table 6 for comparison.', 'Note that the first row shows the results of our syntax-agnostic model.', 'Surprisingly, we observe a dramatic performance decline of 1.2% F1 for our Syn-GCN model with the M&T encoder.', 'A less significant performance loss for our SA-LSTM (−0.4%) and Tree-LSTM (−0.5%) models shows that the Syn-GCN is more sensitive to contextual information.', 'Nevertheless, the overall results show that applying the deep encoder brings higher gains.']
[['Syn-GCN', 'SA-LSTM', 'Tree-LSTM'], ['Syn-GCN', 'SA-LSTM', 'Tree-LSTM'], ['Baseline (syntax-agnostic)'], ['Syn-GCN (M&T encoder)', 'Syn-GCN', 'F1'], ['SA-LSTM', 'Tree-LSTM', 'SA-LSTM (M&T encoder)', 'Tree-LSTM (M&T encoder)', 'F1'], None]
1
D18-1262table_7
Results on English test set, in terms of labeled attachment score for syntactic dependencies (LAS), semantic precision (P), semantic recall (R), semantic labeled F1 score (Sem-F1), the ratio Sem-F1/LAS. All numbers are in percent. A superscript * indicates LAS results from our personal communication with the authors.
2
[['System', 'Zhao et al. (2009c) [SRL-only]'], ['System', 'Zhao et al. (2009a) [Joint]'], ['System', 'Bjorkelund et al.(2010)'], ['System', 'Lei et al. (2015)'], ['System', 'Roth and Lapata (2016)'], ['System', 'Marcheggiani and Titov (2017)'], ['System', 'He et al. (2018) [CoNLL-2009 predicted]'], ['System', 'He et al. (2018) [Gold syntax]'], ['System', 'Our Syn-GCN (CoNLL-2009 predicted)'], ['System', 'Our Syn-GCN (Biaffine Parser)'], ['System', 'Our Syn-GCN (BIST Parser)'], ['System', 'Our Syn-GCN (Gold syntax)']]
1
[['LAS'], ['P'], ['R'], ['Sem-F1'], ['Sem-F1/LAS']]
[['86.0', '-', '-', '85.4', '99.3'], ['89.2', '-', '-', '86.2', '96.6'], ['89.8', '87.1', '84.5', '85.8', '95.6'], ['90.4', '-', '-', '86.6', '95.8'], ['89.8', '88.1', '85.3', '86.7', '96.5'], ['90.34*', '89.1', '86.8', '88.0', '97.41'], ['86.0', '89.7', '89.3', '89.5', '104.0'], ['100', '91.0', '89.7', '90.3', '90.3'], ['86.0', '90.5', '88.5', '89.5', '104.07'], ['90.22', '90.3', '89.3', '89.8', '99.53'], ['90.05', '90.3', '89.1', '89.7', '99.61'], ['100.0', '91.0', '90.0', '90.5', '90.50']]
column
['LAS', 'P', 'R', 'Sem-F1', 'Sem-F1/LAS']
['Our Syn-GCN (CoNLL-2009 predicted)', 'Our Syn-GCN (Biaffine Parser)', 'Our Syn-GCN (BIST Parser)', 'Our Syn-GCN (Gold syntax)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LAS</th> <th>P</th> <th>R</th> <th>Sem-F1</th> <th>Sem-F1/LAS</th> </tr> </thead> <tbody> <tr> <td>System || Zhao et al. (2009c) [SRL-only]</td> <td>86.0</td> <td>-</td> <td>-</td> <td>85.4</td> <td>99.3</td> </tr> <tr> <td>System || Zhao et al. (2009a) [Joint]</td> <td>89.2</td> <td>-</td> <td>-</td> <td>86.2</td> <td>96.6</td> </tr> <tr> <td>System || Bjorkelund et al.(2010)</td> <td>89.8</td> <td>87.1</td> <td>84.5</td> <td>85.8</td> <td>95.6</td> </tr> <tr> <td>System || Lei et al. (2015)</td> <td>90.4</td> <td>-</td> <td>-</td> <td>86.6</td> <td>95.8</td> </tr> <tr> <td>System || Roth and Lapata (2016)</td> <td>89.8</td> <td>88.1</td> <td>85.3</td> <td>86.7</td> <td>96.5</td> </tr> <tr> <td>System || Marcheggiani and Titov (2017)</td> <td>90.34*</td> <td>89.1</td> <td>86.8</td> <td>88.0</td> <td>97.41</td> </tr> <tr> <td>System || He et al. (2018) [CoNLL-2009 predicted]</td> <td>86.0</td> <td>89.7</td> <td>89.3</td> <td>89.5</td> <td>104.0</td> </tr> <tr> <td>System || He et al. (2018) [Gold syntax]</td> <td>100</td> <td>91.0</td> <td>89.7</td> <td>90.3</td> <td>90.3</td> </tr> <tr> <td>System || Our Syn-GCN (CoNLL-2009 predicted)</td> <td>86.0</td> <td>90.5</td> <td>88.5</td> <td>89.5</td> <td>104.07</td> </tr> <tr> <td>System || Our Syn-GCN (Biaffine Parser)</td> <td>90.22</td> <td>90.3</td> <td>89.3</td> <td>89.8</td> <td>99.53</td> </tr> <tr> <td>System || Our Syn-GCN (BIST Parser)</td> <td>90.05</td> <td>90.3</td> <td>89.1</td> <td>89.7</td> <td>99.61</td> </tr> <tr> <td>System || Our Syn-GCN (Gold syntax)</td> <td>100.0</td> <td>91.0</td> <td>90.0</td> <td>90.5</td> <td>90.50</td> </tr> </tbody></table>
Table 7
table_7
D18-1262
8
emnlp2018
Comparison and Discussion. Table 7 presents the comprehensive results of our Syn-GCN model on the four aforementioned syntactic inputs of different quality, together with previous SRL models. A number of observations can be made from these results. First, our model gives quite stable SRL performance even as the syntactic input quality varies over a broad range, obtaining overall higher scores compared to previous state-of-the-art systems. Second, it is interesting to note that the Sem-F1/LAS score of our model becomes relatively smaller as the syntactic input becomes better. Though not so surprising, these results show that our SRL component itself is comparatively strong. Third, when we adopt a syntactic parser with higher parsing accuracy, our SRL system achieves better performance. Notably, our model yields a Sem-F1 of 90.5% when taking gold syntax as input. This suggests that a high-quality syntactic parse may indeed enhance SRL, which is consistent with the conclusion in (He et al., 2017).
[2, 1, 2, 1, 1, 2, 1, 1, 2]
['Comparison and Discussion.', 'Table 7 presents the comprehensive results of our Syn-GCN model on the four aforementioned syntactic inputs of different quality, together with previous SRL models.', 'A number of observations can be made from these results.', 'First, our model gives quite stable SRL performance even as the syntactic input quality varies over a broad range, obtaining overall higher scores compared to previous state-of-the-art systems.', 'Second, it is interesting to note that the Sem-F1/LAS score of our model becomes relatively smaller as the syntactic input becomes better.', 'Though not so surprising, these results show that our SRL component itself is comparatively strong.', 'Third, when we adopt a syntactic parser with higher parsing accuracy, our SRL system achieves better performance.', 'Notably, our model yields a Sem-F1 of 90.5% when taking gold syntax as input.', 'This suggests that a high-quality syntactic parse may indeed enhance SRL, which is consistent with the conclusion in (He et al., 2017).']
[None, ['Our Syn-GCN (CoNLL-2009 predicted)', 'Our Syn-GCN (Biaffine Parser)', 'Our Syn-GCN (BIST Parser)', 'Our Syn-GCN (Gold syntax)'], None, ['Our Syn-GCN (CoNLL-2009 predicted)', 'Our Syn-GCN (Biaffine Parser)', 'Our Syn-GCN (BIST Parser)', 'Our Syn-GCN (Gold syntax)', 'P', 'R', 'Sem-F1'], ['Our Syn-GCN (CoNLL-2009 predicted)', 'Our Syn-GCN (Biaffine Parser)', 'Our Syn-GCN (BIST Parser)', 'Our Syn-GCN (Gold syntax)', 'Sem-F1/LAS'], None, ['Our Syn-GCN (CoNLL-2009 predicted)', 'Our Syn-GCN (Biaffine Parser)', 'Our Syn-GCN (BIST Parser)', 'Our Syn-GCN (Gold syntax)', 'P', 'R', 'Sem-F1'], ['Our Syn-GCN (Gold syntax)', 'Sem-F1'], None]
1
D18-1263table_3
Evaluation of different DFS orderings, in labeled F1 score, across the different tasks.
1
[['Random'], ['Sentence order'], ['Closest words'], ['Smaller-first']]
1
[['DM'], ['PAS'], ['PSD'], ['Avg.']]
[['86.1', '87.7', '78.4', '84.1'], ['87.2', '90.3', '79.9', '85.8'], ['87.5', '89.8', '79.7', '85.8'], ['87.9', '90.9', '80.3', '86.2']]
column
['F1', 'F1', 'F1', 'F1']
['Random', 'Sentence order', 'Closest words', 'Smaller-first']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DM</th> <th>PAS</th> <th>PSD</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Random</td> <td>86.1</td> <td>87.7</td> <td>78.4</td> <td>84.1</td> </tr> <tr> <td>Sentence order</td> <td>87.2</td> <td>90.3</td> <td>79.9</td> <td>85.8</td> </tr> <tr> <td>Closest words</td> <td>87.5</td> <td>89.8</td> <td>79.7</td> <td>85.8</td> </tr> <tr> <td>Smaller-first</td> <td>87.9</td> <td>90.9</td> <td>80.3</td> <td>86.2</td> </tr> </tbody></table>
Table 3
table_3
D18-1263
8
emnlp2018
DFS order matters. Table 3 depicts our model's performance when linearizing the graphs according to the different traversal orders discussed and exemplified in Table 2. Overall, we find that the “smaller-first” approach performs best across all datasets, and that imposing one of our orders is always preferable to random permutations. Intuitively, the “smaller-first” approach presents shorter, and likely easier, paths first, thus minimizing the amount of error propagation in subsequent decoding steps.
[2, 1, 1, 2]
['DFS order matters.', "Table 3 depicts our model's performance when linearizing the graphs according to the different traversal orders discussed and exemplified in Table 2.", 'Overall, we find that the “smaller-first” approach performs best across all datasets, and that imposing one of our orders is always preferable to random permutations.', 'Intuitively, the “smaller-first” approach presents shorter, and likely easier, paths first, thus minimizing the amount of error propagation in subsequent decoding steps.']
[None, ['Random', 'Sentence order', 'Closest words', 'Smaller-first'], ['Smaller-first', 'Random'], ['Smaller-first']]
1
D18-1263table_4
Evaluation of our model (labeled F1 score) versus the current state of the art. “Single” denotes training a different encoder-decoder for each task. “MTL PRIMARY” reports the performance of multi-task learning on only the PRIMARY tasks. “MTL PRIMARY+AUX” shows the performance of our full model, including MTL with the AUXILIARY tasks.
1
[['Peng et al. (2017a)'], ['Single'], ['MTL PRIMARY'], ['MTL PRIMARY+AUX']]
1
[['DM'], ['PAS'], ['PSD'], ['Avg.']]
[['90.4', '92.7', '78.5', '87.2'], ['70.1', '73.6', '63.6', '69.1'], ['82.4', '87.2', '71.4', '80.3'], ['87.9', '90.9', '80.3', '86.2']]
column
['F1', 'F1', 'F1', 'F1']
['Single', 'MTL PRIMARY', 'MTL PRIMARY+AUX']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DM</th> <th>PAS</th> <th>PSD</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Peng et al. (2017a)</td> <td>90.4</td> <td>92.7</td> <td>78.5</td> <td>87.2</td> </tr> <tr> <td>Single</td> <td>70.1</td> <td>73.6</td> <td>63.6</td> <td>69.1</td> </tr> <tr> <td>MTL PRIMARY</td> <td>82.4</td> <td>87.2</td> <td>71.4</td> <td>80.3</td> </tr> <tr> <td>MTL PRIMARY+AUX</td> <td>87.9</td> <td>90.9</td> <td>80.3</td> <td>86.2</td> </tr> </tbody></table>
Table 4
table_4
D18-1263
9
emnlp2018
From English to SDP. Table 4 presents the performance of our complete model (“MTL PRIMARY+AUX”) versus Peng et al. (2017a). On average, our model performs within 1 F1 point of the state of the art (outperforming it on the harder PSD task), despite using the more general sequence-to-sequence approach instead of a dedicated graph-parsing algorithm. In addition, an ablation study shows that multi-tasking the PRIMARY tasks is beneficial over a single-task setting, which in turn is outperformed by the inclusion of the AUXILIARY tasks.
[2, 1, 1, 1]
['From English to SDP.', 'Table 4 presents the performance of our complete model (“MTL PRIMARY+AUX”) versus Peng et al. (2017a).', 'On average, our model performs within 1 F1 point of the state of the art (outperforming it on the harder PSD task), despite using the more general sequence-to-sequence approach instead of a dedicated graph-parsing algorithm.', 'In addition, an ablation study shows that multi-tasking the PRIMARY tasks is beneficial over a single-task setting, which in turn is outperformed by the inclusion of the AUXILIARY tasks.']
[None, ['MTL PRIMARY+AUX', 'Peng et al. (2017a)'], ['Peng et al. (2017a)', 'MTL PRIMARY+AUX', 'Avg.', 'PSD'], ['Single', 'MTL PRIMARY', 'MTL PRIMARY+AUX']]
1
D18-1263table_5
Performance (labeled F1 score) of our model versus the state of the art, when reducing the amount of overlap in the training data to 10%.
1
[['Peng et al. (2017a)'], ['MTL PRIMARY+AUX']]
1
[['DM'], ['PAS'], ['PSD'], ['Avg.']]
[['86.8', '90.5', '77.3', '84.9'], ['87.1', '89.6', '79.1', '85.3']]
column
['F1', 'F1', 'F1', 'F1']
['MTL PRIMARY+AUX']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DM</th> <th>PAS</th> <th>PSD</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Peng et al. (2017a)</td> <td>86.8</td> <td>90.5</td> <td>77.3</td> <td>84.9</td> </tr> <tr> <td>MTL PRIMARY+AUX</td> <td>87.1</td> <td>89.6</td> <td>79.1</td> <td>85.3</td> </tr> </tbody></table>
Table 5
table_5
D18-1263
9
emnlp2018
Simulating disjoint annotations. In contrast with SDP's complete overlap of annotated sentences, multi-task learning often deals with disjoint training data. To simulate such a scenario, we retrained the models on a randomly selected set of 33% of the train sentences for each representation (11,886 sentences), such that the three representations overlap on only 10% (3,565 sentences). The results in Table 5 show that our approach is more resilient to the decrease in annotation overlap, outperforming the state-of-the-art model on the DM and PSD tasks, as well as on the average score. We hypothesize that this is in part thanks to our ability to use the inter-task translations, even when these exist only for part of the annotations.
[2, 2, 2, 1, 2]
['Simulating disjoint annotations.', "In contrast with SDP's complete overlap of annotated sentences, multi-task learning often deals with disjoint training data.", 'To simulate such a scenario, we retrained the models on a randomly selected set of 33% of the train sentences for each representation (11,886 sentences), such that the three representations overlap on only 10% (3,565 sentences).', 'The results in Table 5 show that our approach is more resilient to the decrease in annotation overlap, outperforming the state-of-the-art model on the DM and PSD tasks, as well as on the average score.', 'We hypothesize that this is in part thanks to our ability to use the inter-task translations, even when these exist only for part of the annotations.']
[None, None, ['Peng et al. (2017a)', 'MTL PRIMARY+AUX'], ['MTL PRIMARY+AUX', 'Peng et al. (2017a)', 'DM', 'PSD', 'Avg.'], ['MTL PRIMARY+AUX']]
1
D18-1264table_3
The intrinsic evaluation results.
2
[['Aligner', 'JAMR'], ['Aligner', 'Our']]
1
[['Alignment F1 (on hand-align)'], ['Oracle’s Smatch (on dev. dataset)']]
[['90.6', '91.7'], ['95.2', '94.7']]
column
['Alignment F1 (on hand-align)', 'Oracle’s Smatch (on dev. dataset)']
['Our']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Alignment F1 (on hand-align)</th> <th>Oracle’s Smatch (on dev. dataset)</th> </tr> </thead> <tbody> <tr> <td>Aligner || JAMR</td> <td>90.6</td> <td>91.7</td> </tr> <tr> <td>Aligner || Our</td> <td>95.2</td> <td>94.7</td> </tr> </tbody></table>
Table 3
table_3
D18-1264
7
emnlp2018
Intrinsic Evaluation. Table 3 shows the intrinsic evaluation results, in which our alignment intrinsically outperforms the JAMR aligner by achieving a better alignment F1 score and leading to a higher-scoring oracle parser.
[2, 1]
['Intrinsic Evaluation.', 'Table 3 shows the intrinsic evaluation results, in which our alignment intrinsically outperforms the JAMR aligner by achieving a better alignment F1 score and leading to a higher-scoring oracle parser.']
[None, ['Our', 'JAMR', 'Alignment F1 (on hand-align)']]
1
D18-1264table_4
The parsing results.
3
[['model', 'JAMR parser: Word POS NER DEP', '+ JAMR aligner'], ['model', 'JAMR parser: Word POS NER DEP', '+ Our aligner'], ['model', 'CAMR parser: Word POS NER DEP', '+ JAMR aligner'], ['model', 'CAMR parser: Word POS NER DEP', '+ Our aligner']]
1
[['newswire'], ['all']]
[['71.3', '65.9'], ['73.1', '67.6'], ['68.4', '64.6'], ['68.8', '65.1']]
column
['accuracy', 'accuracy']
['+ Our aligner']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>newswire</th> <th>all</th> </tr> </thead> <tbody> <tr> <td>model || JAMR parser: Word POS NER DEP || + JAMR aligner</td> <td>71.3</td> <td>65.9</td> </tr> <tr> <td>model || JAMR parser: Word POS NER DEP || + Our aligner</td> <td>73.1</td> <td>67.6</td> </tr> <tr> <td>model || CAMR parser: Word POS NER DEP || + JAMR aligner</td> <td>68.4</td> <td>64.6</td> </tr> <tr> <td>model || CAMR parser: Word POS NER DEP || + Our aligner</td> <td>68.8</td> <td>65.1</td> </tr> </tbody></table>
Table 4
table_4
D18-1264
7
emnlp2018
Extrinsic Evaluation. Table 4 shows the results. From this table, we can see that our alignment consistently improves all the parsers by a margin ranging from 0.5 to 1.7. Both the intrinsic and the extrinsic evaluations show the effectiveness of our aligner.
[2, 1, 1, 2]
['Extrinsic Evaluation.', 'Table 4 shows the results.', 'From this table, we can see that our alignment consistently improves all the parsers by a margin ranging from 0.5 to 1.7.', 'Both the intrinsic and the extrinsic evaluations show the effectiveness of our aligner.']
[None, None, ['JAMR parser: Word POS NER DEP', 'CAMR parser: Word POS NER DEP', '+ JAMR aligner', '+ Our aligner', 'all'], ['+ Our aligner']]
1
D18-1268table_3
The accuracy@k scores of all methods in bilingual lexicon induction on LEX-C. The best score for each language pair is bold-faced for the supervised and unsupervised categories, respectively. Languages are paired among English (en), Bulgarian (bg), Catalan (ca), Swedish (sv) and Latvian (lv). “-” means that during training, the model failed to converge to a reasonable local minimum and hence the result is omitted from the table.
3
[['Methods', 'Supervised', 'Mikolov et al. (2013)'], ['Methods', 'Supervised', 'Zhang et al. (2016)'], ['Methods', 'Supervised', 'Xing et al. (2015)'], ['Methods', 'Supervised', 'Shigeto et al. (2015)'], ['Methods', 'Supervised', 'Artetxe et al. (2016)'], ['Methods', 'Supervised', 'Artetxe et al. (2017)'], ['Methods', 'Unsupervised', 'Conneau et al. (2017)'], ['Methods', 'Unsupervised', 'Zhang et al. (2017a)'], ['Methods', 'Unsupervised', 'Ours']]
1
[['bg-en'], ['en-bg'], ['ca-en'], ['en-ca'], ['sv-en'], ['en-sv'], ['lv-en'], ['en-lv']]
[['44.80', '48.47', '57.73', '66.20', '43.73', '63.73', '26.53', '28.93'], ['50.60', '39.73', '63.40', '58.73', '50.87', '53.93', '34.53', '22.87'], ['50.33', '40.00', '63.40', '58.53', '51.13', '53.73', '34.27', '21.60'], ['61.00', '33.80', '69.33', '53.60', '61.27', '41.67', '42.20', '13.87'], ['53.27', '43.40', '65.27', '60.87', '54.07', '55.93', '35.80', '26.47'], ['47.27', '34.40', '61.27', '56.73', '38.07', '44.20', '24.07', '12.20'], ['26.47', '13.87', '41.00', '33.07', '24.27', '24.47', '-', '-'], ['-', '-', '-', '-', '-', '-', '-', '-'], ['50.33', '34.27', '58.60', '54.60', '48.13', '50.47', '27.73', '13.53']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>bg-en</th> <th>en-bg</th> <th>ca-en</th> <th>en-ca</th> <th>sv-en</th> <th>en-sv</th> <th>lv-en</th> <th>en-lv</th> </tr> </thead> <tbody> <tr> <td>Methods || Supervised || Mikolov et al. (2013)</td> <td>44.80</td> <td>48.47</td> <td>57.73</td> <td>66.20</td> <td>43.73</td> <td>63.73</td> <td>26.53</td> <td>28.93</td> </tr> <tr> <td>Methods || Supervised || Zhang et al. (2016)</td> <td>50.60</td> <td>39.73</td> <td>63.40</td> <td>58.73</td> <td>50.87</td> <td>53.93</td> <td>34.53</td> <td>22.87</td> </tr> <tr> <td>Methods || Supervised || Xing et al. (2015)</td> <td>50.33</td> <td>40.00</td> <td>63.40</td> <td>58.53</td> <td>51.13</td> <td>53.73</td> <td>34.27</td> <td>21.60</td> </tr> <tr> <td>Methods || Supervised || Shigeto et al. (2015)</td> <td>61.00</td> <td>33.80</td> <td>69.33</td> <td>53.60</td> <td>61.27</td> <td>41.67</td> <td>42.20</td> <td>13.87</td> </tr> <tr> <td>Methods || Supervised || Artetxe et al. (2016)</td> <td>53.27</td> <td>43.40</td> <td>65.27</td> <td>60.87</td> <td>54.07</td> <td>55.93</td> <td>35.80</td> <td>26.47</td> </tr> <tr> <td>Methods || Supervised || Artetxe et al. (2017)</td> <td>47.27</td> <td>34.40</td> <td>61.27</td> <td>56.73</td> <td>38.07</td> <td>44.20</td> <td>24.07</td> <td>12.20</td> </tr> <tr> <td>Methods || Unsupervised || Conneau et al. (2017)</td> <td>26.47</td> <td>13.87</td> <td>41.00</td> <td>33.07</td> <td>24.27</td> <td>24.47</td> <td>-</td> <td>-</td> </tr> <tr> <td>Methods || Unsupervised || Zhang et al. (2017a)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Methods || Unsupervised || Ours</td> <td>50.33</td> <td>34.27</td> <td>58.60</td> <td>54.60</td> <td>48.13</td> <td>50.47</td> <td>27.73</td> <td>13.53</td> </tr> </tbody></table>
Table 3
table_3
D18-1268
7
emnlp2018
Table 3 and Table 4 summarize the results of all the methods on the LEX-C dataset. Several points may be worth noticing. Firstly, the performance scores on LEX-C are not necessarily consistent with those on LEX-Z (Table 2) even if the methods and the language pairs are the same; this is not surprising as the two datasets differ in query words, word embedding quality, and training-set sizes. Secondly, the performance gap between the best supervised methods and the best unsupervised methods in both Table 3 and Table 4 is larger than that in Table 2. This is attributed to the large amount of good-quality supervision in LEX-C (5,000 human-annotated word pairs) and the larger candidate size in WE-C (200,000 candidates). Thirdly, the average performance in Table 3 is lower than that in Table 4, indicating that the language pairs in the former are more difficult than those in the latter. Nevertheless, we can see that our method has much stronger performance than other unsupervised methods in Table 3, i.e., on the harder language pairs, and that it performs comparably with the model by Conneau et al. (2017) in Table 4 on the easier language pairs. Combining all these observations, we see that our method is highly robust for various language pairs and under different training conditions.
[1, 2, 1, 1, 2, 1, 1, 2]
['Table 3 and Table 4 summarize the results of all the methods on the LEX-C dataset.', 'Several points may be worth noticing.', 'Firstly, the performance scores on LEX-C are not necessarily consistent with those on LEX-Z (Table 2) even if the methods and the language pairs are the same; this is not surprising as the two datasets differ in query words, word embedding quality, and training-set sizes.', 'Secondly, the performance gap between the best supervised methods and the best unsupervised methods in both Table 3 and Table 4 is larger than that in Table 2.', 'This is attributed to the large amount of good-quality supervision in LEX-C (5,000 human-annotated word pairs) and the larger candidate size in WE-C (200,000 candidates).', 'Thirdly, the average performance in Table 3 is lower than that in Table 4, indicating that the language pairs in the former are more difficult than those in the latter.', 'Nevertheless, we can see that our method has much stronger performance than other unsupervised methods in Table 3, i.e., on the harder language pairs, and that it performs comparably with the model by Conneau et al. (2017) in Table 4 on the easier language pairs.', 'Combining all these observations, we see that our method is highly robust for various language pairs and under different training conditions.']
[['Mikolov et al. (2013)', 'Zhang et al. (2016)', 'Xing et al. (2015)', 'Shigeto et al. (2015)', 'Artetxe et al. (2016)', 'Artetxe et al. (2017)', 'Conneau et al. (2017)', 'Zhang et al. (2017a)', 'Ours'], None, None, ['Supervised', 'Unsupervised'], None, ['bg-en', 'en-bg', 'ca-en', 'en-ca', 'sv-en', 'en-sv', 'lv-en', 'en-lv'], ['Unsupervised', 'Ours', 'bg-en', 'en-bg', 'ca-en', 'en-ca', 'sv-en', 'en-sv', 'lv-en', 'en-lv'], ['Ours']]
1
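The accuracy@k numbers in the bilingual lexicon induction records above and below are conventionally the percentage of query words whose gold translation appears among the top-k retrieved candidates. The sketch below only illustrates that convention; it is not the authors' evaluation code, and the toy word lists are hypothetical.

def accuracy_at_k(ranked_candidates, gold_translations, k=1):
    # ranked_candidates: query word -> ranked list of candidate translations.
    # gold_translations: query word -> set of acceptable translations.
    hits = sum(1 for w, ranked in ranked_candidates.items()
               if any(c in gold_translations[w] for c in ranked[:k]))
    return 100.0 * hits / len(ranked_candidates)

# Hypothetical es-en example: only one of the two queries is answered at rank 1.
ranked = {"gato": ["cat", "dog"], "perro": ["house", "dog"]}
gold = {"gato": {"cat"}, "perro": {"dog"}}
print(accuracy_at_k(ranked, gold, k=1))  # 50.0
print(accuracy_at_k(ranked, gold, k=2))  # 100.0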
D18-1268table_4
The accuracy@k scores of all methods in bilingual lexicon induction on LEX-C. The best score for each language pair is bold-faced for the supervised and unsupervised categories, respectively. Languages are paired among English (en), German (de), Spanish (es), French (fr) and Italian (it). “-” means that during training, the model failed to converge to a reasonable local minimum and hence the result is omitted from the table.
3
[['Methods', 'Supervised', 'Mikolov et al. (2013)'], ['Methods', 'Supervised', 'Zhang et al. (2016)'], ['Methods', 'Supervised', 'Xing et al. (2015)'], ['Methods', 'Supervised', 'Shigeto et al. (2015)'], ['Methods', 'Supervised', 'Artetxe et al. (2016)'], ['Methods', 'Supervised', 'Artetxe et al. (2017)'], ['Methods', 'Unsupervised', 'Conneau et al. (2017)'], ['Methods', 'Unsupervised', 'Zhang et al. (2017a)'], ['Methods', 'Unsupervised', 'Ours']]
1
[['de-en'], ['en-de'], ['es-en'], ['en-es'], ['fr-en'], ['en-fr'], ['it-en'], ['en-it']]
[['61.93', '73.07', '74.00', '80.73', '71.33', '82.20', '68.93', '77.60'], ['67.67', '69.87', '77.27', '78.53', '76.07', '78.20', '72.40', '73.40'], ['67.73', '69.53', '77.20', '78.60', '76.33', '78.67', '72.00', '73.33'], ['71.07', '63.73', '81.07', '74.53', '79.93', '73.13', '76.47', '68.13'], ['69.13', '72.13', '78.27', '80.07', '77.73', '79.20', '73.60', '74.47'], ['68.07', '69.20', '75.60', '78.20', '74.47', '77.67', '70.53', '71.67'], ['69.87', '71.53', '78.53', '79.40', '77.67', '78.33', '74.60', '75.80'], ['-', '-', '-', '-', '-', '-', '-', '-'], ['67.00', '69.33', '77.80', '79.53', '75.47', '77.93', '72.60', '73.47']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>de-en</th> <th>en-de</th> <th>es-en</th> <th>en-es</th> <th>fr-en</th> <th>en-fr</th> <th>it-en</th> <th>en-it</th> </tr> </thead> <tbody> <tr> <td>Methods || Supervised || Mikolov et al. (2013)</td> <td>61.93</td> <td>73.07</td> <td>74.00</td> <td>80.73</td> <td>71.33</td> <td>82.20</td> <td>68.93</td> <td>77.60</td> </tr> <tr> <td>Methods || Supervised || Zhang et al. (2016)</td> <td>67.67</td> <td>69.87</td> <td>77.27</td> <td>78.53</td> <td>76.07</td> <td>78.20</td> <td>72.40</td> <td>73.40</td> </tr> <tr> <td>Methods || Supervised || Xing et al. (2015)</td> <td>67.73</td> <td>69.53</td> <td>77.20</td> <td>78.60</td> <td>76.33</td> <td>78.67</td> <td>72.00</td> <td>73.33</td> </tr> <tr> <td>Methods || Supervised || Shigeto et al. (2015)</td> <td>71.07</td> <td>63.73</td> <td>81.07</td> <td>74.53</td> <td>79.93</td> <td>73.13</td> <td>76.47</td> <td>68.13</td> </tr> <tr> <td>Methods || Supervised || Artetxe et al. (2016)</td> <td>69.13</td> <td>72.13</td> <td>78.27</td> <td>80.07</td> <td>77.73</td> <td>79.20</td> <td>73.60</td> <td>74.47</td> </tr> <tr> <td>Methods || Supervised || Artetxe et al. (2017)</td> <td>68.07</td> <td>69.20</td> <td>75.60</td> <td>78.20</td> <td>74.47</td> <td>77.67</td> <td>70.53</td> <td>71.67</td> </tr> <tr> <td>Methods || Unsupervised || Conneau et al. (2017)</td> <td>69.87</td> <td>71.53</td> <td>78.53</td> <td>79.40</td> <td>77.67</td> <td>78.33</td> <td>74.60</td> <td>75.80</td> </tr> <tr> <td>Methods || Unsupervised || Zhang et al. (2017a)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Methods || Unsupervised || Ours</td> <td>67.00</td> <td>69.33</td> <td>77.80</td> <td>79.53</td> <td>75.47</td> <td>77.93</td> <td>72.60</td> <td>73.47</td> </tr> </tbody></table>
Table 4
table_4
D18-1268
8
emnlp2018
Table 3 and Table 4 summarize the results of all the methods on the LEX-C dataset. Several points may be worth noticing. Firstly, the performance scores on LEX-C are not necessarily consistent with those on LEX-Z (Table 2) even if the methods and the language pairs are the same; this is not surprising as the two datasets differ in query words, word embedding quality, and training-set sizes. Secondly, the performance gap between the best supervised methods and the best unsupervised methods in both Table 3 and Table 4 is larger than that in Table 2. This is attributed to the large amount of good-quality supervision in LEX-C (5,000 human-annotated word pairs) and the larger candidate size in WE-C (200,000 candidates). Thirdly, the average performance in Table 3 is lower than that in Table 4, indicating that the language pairs in the former are more difficult than those in the latter. Nevertheless, we can see that our method has much stronger performance than other unsupervised methods in Table 3, i.e., on the harder language pairs, and that it performs comparably with the model by Conneau et al. (2017) in Table 4 on the easier language pairs. Combining all these observations, we see that our method is highly robust for various language pairs and under different training conditions.
[1, 2, 1, 1, 2, 1, 1, 2]
['Table 3 and Table 4 summarize the results of all the methods on the LEX-C dataset.', 'Several points may be worth noticing.', 'Firstly, the performance scores on LEX-C are not necessarily consistent with those on LEX-Z (Table 2) even if the methods and the language pairs are the same; this is not surprising as the two datasets differ in query words, word embedding quality, and training-set sizes.', 'Secondly, the performance gap between the best supervised methods and the best unsupervised methods in both Table 3 and Table 4 is larger than that in Table 2.', 'This is attributed to the large amount of good-quality supervision in LEX-C (5,000 human-annotated word pairs) and the larger candidate size in WE-C (200,000 candidates).', 'Thirdly, the average performance in Table 3 is lower than that in Table 4, indicating that the language pairs in the former are more difficult than those in the latter.', 'Nevertheless, we can see that our method has much stronger performance than other unsupervised methods in Table 3, i.e., on the harder language pairs, and that it performs comparably with the model by Conneau et al. (2017) in Table 4 on the easier language pairs.', 'Combining all these observations, we see that our method is highly robust for various language pairs and under different training conditions.']
[['Mikolov et al. (2013)', 'Zhang et al. (2016)', 'Xing et al. (2015)', 'Shigeto et al. (2015)', 'Artetxe et al. (2016)', 'Artetxe et al. (2017)', 'Conneau et al. (2017)', 'Zhang et al. (2017a)', 'Ours'], None, None, ['Supervised', 'Unsupervised'], None, ['de-en', 'en-de', 'es-en', 'en-es', 'fr-en', 'en-fr', 'it-en', 'en-it'], ['Ours', 'Conneau et al. (2017)', 'de-en', 'en-de', 'es-en', 'en-es', 'fr-en', 'en-fr', 'it-en', 'en-it'], ['Ours']]
1
D18-1268table_5
Performance (measured using Pearson correlation) of all the methods in cross-lingual semantic word similarity prediction on the benchmark data from Conneau et al. (2017). The best score in each of the supervised and unsupervised categories is bold-faced. The languages include English (en), German (de), Spanish (es), Persian (fa) and Italian (it). “-” means that the model failed to converge to a reasonable local minimum during the training process.
3
[['Methods', 'Supervised', 'Mikolov et al. (2013)'], ['Methods', 'Supervised', 'Zhang et al. (2016)'], ['Methods', 'Supervised', 'Xing et al. (2015)'], ['Methods', 'Supervised', 'Shigeto et al. (2015)'], ['Methods', 'Supervised', 'Artetxe et al. (2016)'], ['Methods', 'Supervised', 'Artetxe et al. (2017)'], ['Methods', 'Unsupervised', 'Conneau et al. (2017)'], ['Methods', 'Unsupervised', 'Zhang et al. (2017a)'], ['Methods', 'Unsupervised', 'Ours']]
1
[['de-en'], ['es-en'], ['fa-en'], ['it-en']]
[['0.71', '0.72', '0.68', '0.71'], ['0.71', '0.71', '0.69', '0.71'], ['0.72', '0.71', '0.69', '0.72'], ['0.72', '0.72', '0.69', '0.71'], ['0.73', '0.72', '0.70', '0.73'], ['0.70', '0.70', '0.67', '0.71'], ['0.71', '0.71', '0.68', '0.71'], ['-', '-', '-', '-'], ['0.71', '0.71', '0.67', '0.71']]
column
['correlation', 'correlation', 'correlation', 'correlation']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>de-en</th> <th>es-en</th> <th>fa-en</th> <th>it-en</th> </tr> </thead> <tbody> <tr> <td>Methods || Supervised || Mikolov et al. (2013)</td> <td>0.71</td> <td>0.72</td> <td>0.68</td> <td>0.71</td> </tr> <tr> <td>Methods || Supervised || Zhang et al. (2016)</td> <td>0.71</td> <td>0.71</td> <td>0.69</td> <td>0.71</td> </tr> <tr> <td>Methods || Supervised || Xing et al. (2015)</td> <td>0.72</td> <td>0.71</td> <td>0.69</td> <td>0.72</td> </tr> <tr> <td>Methods || Supervised || Shigeto et al. (2015)</td> <td>0.72</td> <td>0.72</td> <td>0.69</td> <td>0.71</td> </tr> <tr> <td>Methods || Supervised || Artetxe et al. (2016)</td> <td>0.73</td> <td>0.72</td> <td>0.70</td> <td>0.73</td> </tr> <tr> <td>Methods || Supervised || Artetxe et al. (2017)</td> <td>0.70</td> <td>0.70</td> <td>0.67</td> <td>0.71</td> </tr> <tr> <td>Methods || Unsupervised || Conneau et al. (2017)</td> <td>0.71</td> <td>0.71</td> <td>0.68</td> <td>0.71</td> </tr> <tr> <td>Methods || Unsupervised || Zhang et al. (2017a)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Methods || Unsupervised || Ours</td> <td>0.71</td> <td>0.71</td> <td>0.67</td> <td>0.71</td> </tr> </tbody></table>
Table 5
table_5
D18-1268
8
emnlp2018
Table 5 summarizes the performance of all the methods in cross-lingual word similarity prediction. We can see that the unsupervised methods, including ours, perform equally well as the supervised methods, which is highly encouraging.
[1, 1]
['Table 5 summarizes the performance of all the methods in cross-lingual word similarity prediction.', 'We can see that the unsupervised methods, including ours, perform equally well as the supervised methods, which is highly encouraging.']
[['Mikolov et al. (2013)', 'Zhang et al. (2016)', 'Xing et al. (2015)', 'Shigeto et al. (2015)', 'Artetxe et al. (2016)', 'Artetxe et al. (2017)', 'Conneau et al. (2017)', 'Zhang et al. (2017a)', 'Ours'], ['Unsupervised', 'Supervised', 'Ours']]
1
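The cross-lingual word similarity record above reports Pearson correlation between model similarity scores and human ratings. The following standard-library sketch shows how such a correlation is computed; the score lists are hypothetical and this is not the authors' code.

from math import sqrt

def pearson(xs, ys):
    # Pearson correlation between predicted similarities and gold ratings.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: cosine similarities of four word pairs vs. human judgments.
predicted = [0.82, 0.10, 0.55, 0.30]
human = [9.0, 1.5, 6.0, 4.0]
print(round(pearson(predicted, human), 2))  # close to 1.0 for this toy example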
D18-1270table_7
Linking accuracy of the zero-shot (Z-S) approach on different datasets. Zero-shot (w/ prior) is close to SoTA for datasets like TAC15-Test, but performance drops in the more realistic setting of zero-shot (w/o prior) (§6.1) on all datasets, indicating most of the performance can be attributed to the presence of prior probabilities. The slight drop in MCN-TEST is due to trivial mentions, which only have a single candidate.
2
[['Approach', 'XELMS (Z-S w/ prior)'], ['Approach', 'XELMS (Z-S w/o prior)'], ['Approach', 'SoTA']]
3
[['Dataset', 'TAC15-Test', '(es)'], ['Dataset', 'TAC15-Test', ' (zh)'], ['Dataset', 'TH-Test', ' (avg)'], ['Dataset', 'McN-Test', ' (avg)']]
[['80.3', '83.9', '43.5', '88.1'], ['53.5', '55.9', '41.1', '86.0'], ['83.9', '85.9', '54.7', '89.4']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['XELMS (Z-S w/ prior)', 'XELMS (Z-S w/o prior)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dataset || TAC15-Test || (es)</th> <th>Dataset || TAC15-Test || (zh)</th> <th>Dataset || TH-Test || (avg)</th> <th>Dataset || McN-Test || (avg)</th> </tr> </thead> <tbody> <tr> <td>Approach || XELMS (Z-S w/ prior)</td> <td>80.3</td> <td>83.9</td> <td>43.5</td> <td>88.1</td> </tr> <tr> <td>Approach || XELMS (Z-S w/o prior)</td> <td>53.5</td> <td>55.9</td> <td>41.1</td> <td>86.0</td> </tr> <tr> <td>Approach || SoTA</td> <td>83.9</td> <td>85.9</td> <td>54.7</td> <td>89.4</td> </tr> </tbody></table>
Table 7
table_7
D18-1270
8
emnlp2018
Is zero-shot XEL really effective?. To evaluate the effectiveness of the zero-shot XEL approach, we perform zero-shot XEL using XELMS on all datasets. Table 7 shows zero-shot XEL results on all datasets, both with and without using the prior during inference. Note that zero-shot XEL (with prior) is close to SoTA (Sil et al. (2018)) on TAC15-TEST, which also uses the prior for zero-shot XEL. However, for zero-shot XEL (without prior) performance drops by more than 20% for TAC15-Test, 2.4% for TH-Test and by 2.1% for McN-Test. This indicates that zero-shot XEL is not effective in a realistic zero-shot setting (i.e., when the prior is unavailable for inference).
[2, 2, 1, 1, 1, 2]
['Is zero-shot XEL really effective?.', 'To evaluate the effectiveness of the zero-shot XEL approach, we perform zero-shot XEL using XELMS on all datasets.', 'Table 7 shows zero-shot XEL results on all datasets, both with and without using the prior during inference.', 'Note that zero-shot XEL (with prior) is close to SoTA (Sil et al. (2018)) on TAC15-TEST, which also uses the prior for zero-shot XEL.', 'However, for zero-shot XEL (without prior) performance drops by more than 20% for TAC15-Test, 2.4% for TH-Test and by 2.1% for McN-Test.', 'This indicates that zero-shot XEL is not effective in a realistic zero-shot setting (i.e., when the prior is unavailable for inference).']
[None, ['XELMS (Z-S w/ prior)', 'XELMS (Z-S w/o prior)', 'TAC15-Test', 'TH-Test', 'McN-Test'], ['XELMS (Z-S w/ prior)', 'XELMS (Z-S w/o prior)', 'TAC15-Test', 'TH-Test', 'McN-Test'], ['XELMS (Z-S w/ prior)', 'SoTA', 'TAC15-Test'], ['XELMS (Z-S w/ prior)', 'XELMS (Z-S w/o prior)', 'TAC15-Test', 'TH-Test', 'McN-Test'], ['XELMS (Z-S w/o prior)']]
1
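As a sanity check on the description above, the accuracy drops it quotes can be recomputed directly from the contents field of this record. Below is a minimal Python sketch; the variable names are illustrative, and the numbers are simply copied from the with-prior and without-prior rows.

# Linking accuracies copied from the contents field of the D18-1270 table_7 record:
# row 1 = XELMS (Z-S w/ prior), row 2 = XELMS (Z-S w/o prior).
columns = ["TAC15-Test (es)", "TAC15-Test (zh)", "TH-Test (avg)", "McN-Test (avg)"]
with_prior = [80.3, 83.9, 43.5, 88.1]
without_prior = [53.5, 55.9, 41.1, 86.0]

for name, w, wo in zip(columns, with_prior, without_prior):
    # Absolute drop when the prior is unavailable at inference time.
    print(f"{name}: {w:.1f} -> {wo:.1f} (drop of {w - wo:.1f})")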
D18-1273table_5
Correlation results of Trn13, Trn14, Trn15 and D with Tst13, Tst14, Tst15.
2
[['Train:Test', 'Trn13 : Tst13'], ['Train:Test', 'Trn14 : Tst14'], ['Train:Test', 'Trn15 : Tst15'], ['Train:Test', 'D : Tst13'], ['Train:Test', 'D : Tst14'], ['Train:Test', 'D : Tst15']]
1
[['C (%)']]
[['16.2'], ['53.9'], ['46.7'], ['74.1'], ['80.6'], ['84.2']]
column
['C (%)']
['D : Tst13', 'D : Tst14', 'D : Tst15']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>C (%)</th> </tr> </thead> <tbody> <tr> <td>Train:Test || Trn13 : Tst13</td> <td>16.2</td> </tr> <tr> <td>Train:Test || Trn14 : Tst14</td> <td>53.9</td> </tr> <tr> <td>Train:Test || Trn15 : Tst15</td> <td>46.7</td> </tr> <tr> <td>Train:Test || D : Tst13</td> <td>74.1</td> </tr> <tr> <td>Train:Test || D : Tst14</td> <td>80.6</td> </tr> <tr> <td>Train:Test || D : Tst15</td> <td>84.2</td> </tr> </tbody></table>
Table 5
table_5
D18-1273
5
emnlp2018
Table 5 illustrates that, for the three different testing datasets, the entire generated corpus D achieves 74.1%, 80.6% and 84.2% on C_train:test, respectively, which are much higher than those of Trn13, Trn14 and Trn15. This difference may indicate the validity of the generated corpus and its adequate coverage of spelling errors.
[1, 2]
['Table 5 illustrates that, for the three different testing datasets, the entire generated corpus D achieves 74.1%, 80.6% and 84.2% on C_train:test, respectively, which are much higher than those of Trn13, Trn14 and Trn15.', 'This difference may indicate the validity of the generated corpus and its adequate coverage of spelling errors.']
[['D : Tst13', 'D : Tst14', 'D : Tst15', 'C (%)', 'Trn13 : Tst13', 'Trn14 : Tst14', 'Trn15 : Tst15'], None]
1
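The gap described above between training on the generated corpus D and training on the shared-task data can be read off the contents field; the short Python sketch below recomputes it (the hard-coded values and names are illustrative copies of this record's data).

# Correlation C (%) per test set, copied from the contents field above.
trained_on_trn = {"Tst13": 16.2, "Tst14": 53.9, "Tst15": 46.7}
trained_on_d = {"Tst13": 74.1, "Tst14": 80.6, "Tst15": 84.2}

for test in ("Tst13", "Tst14", "Tst15"):
    gain = trained_on_d[test] - trained_on_trn[test]
    print(f"{test}: D is {gain:.1f} points higher than the shared-task training set")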
D18-1273table_7
The performance of Chinese spelling error detection with BiLSTM on Tst13, Tst14 and Tst15 (%). Best results are in bold. Trn represents the training dataset provided in the corresponding shared task, e.g., Trn denotes Trn13 in Tst13.
1
[['Trn'], ['D-10k'], ['D-20k'], ['D-30k'], ['D-40k'], ['D-50k']]
2
[['Tst13', 'P'], ['Tst13', 'R'], ['Tst13', 'F1'], ['Tst14', 'P'], ['Tst14', 'R'], ['Tst14', 'F1'], ['Tst15', 'P'], ['Tst15', 'R'], ['Tst15', 'F1']]
[['24.4', '27.3', '25.8', '49.8', '51.5', '50.6', '40.1', '43.2', '41.6'], ['33.3', '39.6', '36.1', '31.1', '35.1', '32.9', '31.0', '37.0', '33.7'], ['41.1', '50.2', '45.2', '41.1', '50.2', '45.2', '43.0', '54.9', '48.2'], ['47.2', '59.1', '52.5', '40.9', '48.0', '44.2', '50.3', '62.3', '55.7'], ['53.4', '65.0', '58.6', '52.3', '64.3', '57.7', '56.6', '66.5', '61.2'], ['54.0', '69.3', '60.7', '51.9', '66.2', '58.2', '56.6', '69.4', '62.3']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Tst13 || P</th> <th>Tst13 || R</th> <th>Tst13 || F1</th> <th>Tst14 || P</th> <th>Tst14 || R</th> <th>Tst14 || F1</th> <th>Tst15 || P</th> <th>Tst15 || R</th> <th>Tst15 || F1</th> </tr> </thead> <tbody> <tr> <td>Trn</td> <td>24.4</td> <td>27.3</td> <td>25.8</td> <td>49.8</td> <td>51.5</td> <td>50.6</td> <td>40.1</td> <td>43.2</td> <td>41.6</td> </tr> <tr> <td>D-10k</td> <td>33.3</td> <td>39.6</td> <td>36.1</td> <td>31.1</td> <td>35.1</td> <td>32.9</td> <td>31.0</td> <td>37.0</td> <td>33.7</td> </tr> <tr> <td>D-20k</td> <td>41.1</td> <td>50.2</td> <td>45.2</td> <td>41.1</td> <td>50.2</td> <td>45.2</td> <td>43.0</td> <td>54.9</td> <td>48.2</td> </tr> <tr> <td>D-30k</td> <td>47.2</td> <td>59.1</td> <td>52.5</td> <td>40.9</td> <td>48.0</td> <td>44.2</td> <td>50.3</td> <td>62.3</td> <td>55.7</td> </tr> <tr> <td>D-40k</td> <td>53.4</td> <td>65.0</td> <td>58.6</td> <td>52.3</td> <td>64.3</td> <td>57.7</td> <td>56.6</td> <td>66.5</td> <td>61.2</td> </tr> <tr> <td>D-50k</td> <td>54.0</td> <td>69.3</td> <td>60.7</td> <td>51.9</td> <td>66.2</td> <td>58.2</td> <td>56.6</td> <td>69.4</td> <td>62.3</td> </tr> </tbody></table>
Table 7
table_7
D18-1273
7
emnlp2018
Table 7 shows the detection performance on three different testing datasets. We have the following observations. The size of the training dataset is important for model training. For Tst13, D-10k achieves a better F1 score than Trn13. A major reason may be the size of Trn13 (=350, see Table 3), which is much smaller than the testing dataset. In this situation, the model cannot learn enough information and is therefore unable to detect unseen spelling errors. Besides, we can see that the detection performance improves steadily as the size of our generated corpus is enlarged. Therefore, for data-driven approaches, it is of great importance to train the model with enough instances containing different spelling errors. The precision may be compromised if the training dataset contains too many “noisy” spelling errors. From Table 7, although the overall performance (F1 score) keeps improving as the size of our generated corpus increases, the precision and the recall show different trends. It is observed that as the size of the training dataset increases, the model achieves better recall. A possible reason is that with more instances containing different spelling errors included in the training dataset, the number of unseen spelling errors in the testing dataset is reduced, which helps the model detect more spelling errors. However, the improvement in precision is not as pronounced as that in recall. Specifically, on Tst14 and Tst15, D-50k does not achieve a higher precision than D-40k. A possible explanation is that a larger training dataset containing more spelling error instances may lead the model to misidentify more correct characters, resulting in lower precision. Compared with the limited training datasets manually annotated by humans, our generated large-scale corpus achieves better performance. From Table 7, we can see that with a sufficiently large portion of our generated corpus, we can train a model that achieves better detection performance than with the manually annotated datasets provided in the corresponding shared tasks. To some extent, this demonstrates the effectiveness of our generated corpus and thus confirms the validity of our approach.
[1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2, 1, 1, 2, 2, 1, 2]
['Table 7 shows the detection performance on three different testing datasets.', 'We have the following observations.', 'The size of the training dataset is important for model training.', 'For Tst13, D-10k achieves a better F1 score than Trn13.', 'A major reason may be the size of Trn13 (=350, see Table 3), which is much smaller than the testing dataset.', 'In this situation, the model cannot learn enough information and is therefore unable to detect unseen spelling errors.', 'Besides, we can see that the detection performance improves steadily as the size of our generated corpus is enlarged.', 'Therefore, for data-driven approaches, it is of great importance to train the model with enough instances containing different spelling errors.', 'The precision may be compromised if the training dataset contains too many “noisy” spelling errors.', 'From Table 7, although the overall performance (F1 score) keeps improving as the size of our generated corpus increases, the precision and the recall show different trends.', 'It is observed that as the size of the training dataset increases, the model achieves better recall.', 'A possible reason is that with more instances containing different spelling errors included in the training dataset, the number of unseen spelling errors in the testing dataset is reduced, which helps the model detect more spelling errors.', 'However, the improvement in precision is not as pronounced as that in recall.', 'Specifically, on Tst14 and Tst15, D-50k does not achieve a higher precision than D-40k.', 'A possible explanation is that a larger training dataset containing more spelling error instances may lead the model to misidentify more correct characters, resulting in lower precision.', 'Compared with the limited training datasets manually annotated by humans, our generated large-scale corpus achieves better performance.', 'From Table 7, we can see that with a sufficiently large portion of our generated corpus, we can train a model that achieves better detection performance than with the manually annotated datasets provided in the corresponding shared tasks.', 'To some extent, this demonstrates the effectiveness of our generated corpus and thus confirms the validity of our approach.']
[['Tst13', 'Tst14', 'Tst15'], None, None, ['D-10k', 'Tst13', 'Trn'], ['Trn'], ['Trn'], ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k'], None, None, ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k', 'P', 'R', 'F1'], ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k', 'R'], None, ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k', 'P', 'R'], ['D-50k', 'D-40k', 'Tst14', 'Tst15', 'P'], None, None, ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k'], ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k']]
1
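The P, R, and F1 columns in this record are mutually consistent: F1 is the harmonic mean of precision and recall. The following Python check, using three Tst15 rows copied from the contents field (purely illustrative), recovers the reported F1 values up to rounding.

# (P, R, reported F1) on Tst15, copied from the contents field above.
rows = {
    "Trn": (40.1, 43.2, 41.6),
    "D-30k": (50.3, 62.3, 55.7),
    "D-50k": (56.6, 69.4, 62.3),
}

for name, (p, r, f1) in rows.items():
    recomputed = 2 * p * r / (p + r)  # harmonic mean of precision and recall
    print(f"{name}: reported F1 = {f1}, 2PR/(P+R) = {recomputed:.1f}")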
D18-1277table_7
Effect of using different tuning sets. As usual with early stopping, the best tuning set performance was used to evaluate the test set. Here, we evaluated the same experimental runs at two points: when the performance was best on the WSJ development set, and again when the performance was best on the Noun-Verb development set. The increase in Noun-Verb results is significant at the p < 0.001(†) and p < 0.01(‡) levels.
3
[['Model', 'WSJ Test Set', 'Bohnet et al. (2018)'], ['Model', 'WSJ Test Set', '+ELMo'], ['Model', 'WSJ Test Set', '+NV Data'], ['Model', 'WSJ Test Set', '+ELMo+NV Data'], ['Model', 'Noun-Verb Test Set', 'Bohnet et al. (2018)'], ['Model', 'Noun-Verb Test Set', '+ELMo'], ['Model', 'Noun-Verb Test Set', '+NV Data'], ['Model', 'Noun-Verb Test Set', '+ELMo+NV Data']]
2
[['Tuning Set', 'WSJ'], ['Tuning Set', 'NV']]
[['98.00±0.12', '97.98±0.13'], ['97.94±0.08', '97.85±0.16'], ['97.98±0.11', '97.94±0.14'], ['97.97±0.09', '97.94±0.13'], ['74.0±1.2', '76.9±0.6†'], ['82.1±0.9', '83.4±0.5†'], ['86.4±0.4', '86.8±0.4'], ['88.9±0.3', '89.3±0.2‡']]
column
['accuracy', 'accuracy']
['Tuning Set']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Tuning Set || WSJ</th> <th>Tuning Set || NV</th> </tr> </thead> <tbody> <tr> <td>Model || WSJ Test Set || Bohnet et al. (2018)</td> <td>98.00±0.12</td> <td>97.98±0.13</td> </tr> <tr> <td>Model || WSJ Test Set || +ELMo</td> <td>97.94±0.08</td> <td>97.85±0.16</td> </tr> <tr> <td>Model || WSJ Test Set || +NV Data</td> <td>97.98±0.11</td> <td>97.94±0.14</td> </tr> <tr> <td>Model || WSJ Test Set || +ELMo+NV Data</td> <td>97.97±0.09</td> <td>97.94±0.13</td> </tr> <tr> <td>Model || Noun-Verb Test Set || Bohnet et al. (2018)</td> <td>74.0±1.2</td> <td>76.9±0.6†</td> </tr> <tr> <td>Model || Noun-Verb Test Set || +ELMo</td> <td>82.1±0.9</td> <td>83.4±0.5†</td> </tr> <tr> <td>Model || Noun-Verb Test Set || +NV Data</td> <td>86.4±0.4</td> <td>86.8±0.4</td> </tr> <tr> <td>Model || Noun-Verb Test Set || +ELMo+NV Data</td> <td>88.9±0.3</td> <td>89.3±0.2‡</td> </tr> </tbody></table>
Table 7
table_7
D18-1277
7
emnlp2018
Impact of Tuning Set. Table 7 compares the performance of the same experiments on the WSJ and Noun-Verb Challenge test sets, tuned using either the WSJ or the Noun-Verb development set. The only effect of the change in tuning set was for the Noun-Verb tuning to cause the early stopping to sometimes be a little earlier. When we tuned on the Noun-Verb development set, the WSJ results remained almost unchanged, while the Noun-Verb test set results increased significantly. We see that the performance on each dataset is best when matched with its tuning data. The effect was greatest on the unenhanced model, which improved 2.9% absolute on the Noun-Verb evaluation. The best overall Noun-Verb test set result was 89.3±0.2 when tuned this way.
[2, 1, 2, 1, 2, 1, 2]
['Impact of Tuning Set.', 'Table 7 compares the performance of the same experiments on the WSJ and Noun-Verb Challenge test sets, tuned using either the WSJ or the Noun-Verb development set.', 'The only effect of the change in tuning set was for the Noun-Verb tuning to cause the early stopping to sometimes be a little earlier.', 'When we tuned on the Noun-Verb development set, the WSJ results remained almost unchanged, while the Noun-Verb test set results increased significantly.', 'We see that the performance on each dataset is best when matched with its tuning data.', 'The effect was greatest on the unenhanced model, which improved 2.9% absolute on the Noun-Verb evaluation.', 'The best overall Noun-Verb test set result was 89.3±0.2 when tuned this way.']
[None, ['WSJ Test Set', 'Noun-Verb Test Set', 'WSJ', 'NV'], ['Tuning Set', 'WSJ', 'NV'], ['NV', 'WSJ Test Set', 'Noun-Verb Test Set', 'Bohnet et al. (2018)', '+ELMo', '+NV Data', '+ELMo+NV Data'], ['Tuning Set'], ['Noun-Verb Test Set', 'Bohnet et al. (2018)', 'WSJ', 'NV'], ['Noun-Verb Test Set', '+ELMo+NV Data', 'NV']]
1
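The 2.9-point absolute improvement mentioned in the description can be recovered from the mean±std cells of this record. A minimal sketch follows, assuming the cells are plain "mean±std" strings as shown above; the helper name is hypothetical.

# Noun-Verb test-set accuracy of the unenhanced Bohnet et al. (2018) model,
# copied from the contents field above (WSJ-tuned vs. NV-tuned).
wsj_tuned = "74.0±1.2"
nv_tuned = "76.9±0.6"

def mean_of(cell: str) -> float:
    # Keep only the mean part of a "mean±std" cell.
    return float(cell.split("±")[0])

print(f"Absolute gain from NV tuning: {mean_of(nv_tuned) - mean_of(wsj_tuned):.1f}")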
D18-1278table_6
LAS results when case information is added. We use bold to highlight the best results for models without explicit access to gold annotations.
4
[['Language', 'Czech', 'Input', 'char'], ['Language', 'Czech', 'Input', 'char (multi-task)'], ['Language', 'Czech', 'Input', 'char + predicted case'], ['Language', 'Czech', 'Input', 'char + gold case'], ['Language', 'Czech', 'Input', 'oracle'], ['Language', 'German', 'Input', 'char'], ['Language', 'German', 'Input', 'char (multi-task)'], ['Language', 'German', 'Input', 'char + predicted case'], ['Language', 'German', 'Input', 'char + gold case'], ['Language', 'German', 'Input', 'oracle'], ['Language', 'Russian', 'Input', 'char'], ['Language', 'Russian', 'Input', 'char (multi-task)'], ['Language', 'Russian', 'Input', 'char + predicted case'], ['Language', 'Russian', 'Input', 'char + gold case'], ['Language', 'Russian', 'Input', 'oracle']]
1
[['Dev'], ['Test']]
[['91.2', '90.6'], ['91.6', '91.0'], ['92.2', '91.8'], ['92.3', '91.9'], ['92.5', '92.0'], ['87.5', '84.5'], ['87.9', '84.4'], ['87.8', '86.4'], ['90.2', '86.9'], ['89.7', '86.5'], ['91.6', '92.4'], ['92.2', '92.6'], ['92.5', '93.3'], ['92.8', '93.5'], ['92.6', '93.3']]
column
['LAS', 'LAS']
['char', 'char (multi-task)', 'char + predicted case', 'char + gold case', 'oracle']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Language || Czech || Input || char</td> <td>91.2</td> <td>90.6</td> </tr> <tr> <td>Language || Czech || Input || char (multi-task)</td> <td>91.6</td> <td>91.0</td> </tr> <tr> <td>Language || Czech || Input || char + predicted case</td> <td>92.2</td> <td>91.8</td> </tr> <tr> <td>Language || Czech || Input || char + gold case</td> <td>92.3</td> <td>91.9</td> </tr> <tr> <td>Language || Czech || Input || oracle</td> <td>92.5</td> <td>92.0</td> </tr> <tr> <td>Language || German || Input || char</td> <td>87.5</td> <td>84.5</td> </tr> <tr> <td>Language || German || Input || char (multi-task)</td> <td>87.9</td> <td>84.4</td> </tr> <tr> <td>Language || German || Input || char + predicted case</td> <td>87.8</td> <td>86.4</td> </tr> <tr> <td>Language || German || Input || char + gold case</td> <td>90.2</td> <td>86.9</td> </tr> <tr> <td>Language || German || Input || oracle</td> <td>89.7</td> <td>86.5</td> </tr> <tr> <td>Language || Russian || Input || char</td> <td>91.6</td> <td>92.4</td> </tr> <tr> <td>Language || Russian || Input || char (multi-task)</td> <td>92.2</td> <td>92.6</td> </tr> <tr> <td>Language || Russian || Input || char + predicted case</td> <td>92.5</td> <td>93.3</td> </tr> <tr> <td>Language || Russian || Input || char + gold case</td> <td>92.8</td> <td>93.5</td> </tr> <tr> <td>Language || Russian || Input || oracle</td> <td>92.6</td> <td>93.3</td> </tr> </tbody></table>
Table 6
table_6
D18-1278
7
emnlp2018
Table 6 summarizes the results on Czech, German, and Russian. We find that augmenting the char-lstm model with either oracle or predicted case improves its accuracy, although the effect differs across languages. The improvements from predicted case are interesting, since in non-neural parsers, predicted case usually harms accuracy (Tsarfaty et al., 2010). However, we note that our taggers use gold POS, which might help. The MTL models achieve similar or slightly better performance than the character-only models, suggesting that supplying case in this way is beneficial. Curiously, the MTL parser is worse than the pipeline parser, but the MTL case tagger is better than the pipeline case tagger (Table 7). This suggests that the MTL model does encode case in its representation, but does not learn to use it effectively for parsing. Finally, we observe that augmenting the char-lstm with either gold or predicted case improves the parsing performance for all languages, and indeed closes the performance gap with the full oracle, which has access to all morphological features. This is especially interesting, because it shows that carefully targeted linguistic analyses can improve accuracy as much as wholesale linguistic analysis.
[1, 1, 2, 2, 1, 2, 2, 1, 2]
['Table 6 summarizes the results on Czech, German, and Russian.', 'We find that augmenting the char-lstm model with either oracle or predicted case improves its accuracy, although the effect differs across languages.', 'The improvements from predicted case are interesting, since in non-neural parsers, predicted case usually harms accuracy (Tsarfaty et al., 2010).', 'However, we note that our taggers use gold POS, which might help.', 'The MTL models achieve similar or slightly better performance than the character-only models, suggesting that supplying case in this way is beneficial.', 'Curiously, the MTL parser is worse than the pipeline parser, but the MTL case tagger is better than the pipeline case tagger (Table 7).', 'This suggests that the MTL model does encode case in its representation, but does not learn to use it effectively for parsing.', 'Finally, we observe that augmenting the char-lstm with either gold or predicted case improves the parsing performance for all languages, and indeed closes the performance gap with the full oracle, which has access to all morphological features.', 'This is especially interesting, because it shows that carefully targeted linguistic analyses can improve accuracy as much as wholesale linguistic analysis.']
[['Czech', 'German', 'Russian'], ['char + predicted case', 'oracle', 'Dev', 'Test', 'Czech', 'German', 'Russian'], ['char + predicted case'], None, ['char'], None, None, ['char + gold case', 'char + predicted case', 'Czech', 'German', 'Russian', 'oracle'], None]
1
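The claim that predicted case "closes the performance gap with the full oracle" can be checked against the test-set LAS values in the contents field; the Python sketch below is illustrative, with values copied from this record.

# Test-set LAS copied from the contents field above.
test_las = {
    "Czech": {"char": 90.6, "char + predicted case": 91.8, "oracle": 92.0},
    "German": {"char": 84.5, "char + predicted case": 86.4, "oracle": 86.5},
    "Russian": {"char": 92.4, "char + predicted case": 93.3, "oracle": 93.3},
}

for lang, scores in test_las.items():
    oracle = scores["oracle"]
    for system in ("char", "char + predicted case"):
        gap = oracle - scores[system]
        print(f"{lang}: {system} is {gap:.1f} LAS below the oracle")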
D18-1279table_2
F1 score of our proposed models in comparison with state-of-the-art results.
2
[['Model', 'Conv-CRF+Lexicon (Collobert et al. 2011)'], ['Model', 'LSTM-CRF+Lexicon (Huang et al. 2015)'], ['Model', 'LSTM-CRF+Lexicon+char-CNN (Chiu and Nichols 2016)'], ['Model', 'LSTM-Softmax+char-LSTM (Ling et al. 2015)'], ['Model', 'LSTM-CRF+char-LSTM (Lample et al. 2016)'], ['Model', 'LSTM-CRF+char-CNN (Ma and Hovy 2016)'], ['Model', 'GRM-CRF+char-GRU (Yang et al. 2017)'], ['Model', 'LSTM-CRF'], ['Model', 'LSTM-CRF+char-LSTM'], ['Model', 'LSTM-CRF+char-CNN'], ['Model', 'LSTM-CRF+char-IntNet-9'], ['Model', 'LSTM-CRF+char-IntNet-5']]
1
[['Spanish'], ['Dutch'], ['English'], ['German'], ['Chunking'], ['POS']]
[['-', '-', '89.59', '-', '94.32', '97.29'], ['-', '-', '90.10', '-', '94.46', '97.43'], ['-', '-', '90.77', '-', '-', '-'], ['-', '-', '-', '-', '-', '97.55'], ['85.75', '81.74', '90.94', '78.76', '-', '-'], ['-', '-', '91.21', '-', '-', '97.55'], ['84.69', '85.00', '91.20', '-', '94.66', '97.55'], ['80.33±0.37', '79.87±0.28', '88.41±0.22', '73.42±0.39', '94.29±0.11', '96.63±0.08'], ['86.12±0.34', '87.13±0.25', '91.13±0.15', '78.31±0.35', '94.97±0.09', '97.49±0.04'], ['85.91±0.38', '86.69±0.22', '91.11±0.14', '78.15±0.31', '94.91±0.08', '97.45±0.03'], ['85.71±0.39', '87.38±0.27', '91.39±0.16', '79.43±0.33', '95.08±0.07', '97.51±0.04'], ['86.68±0.35', '87.81±0.24', '91.64±0.17', '78.58±0.32', '95.29±0.08', '97.58±0.02']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['LSTM-CRF+char-IntNet-9', 'LSTM-CRF+char-IntNet-5']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Spanish</th> <th>Dutch</th> <th>English</th> <th>German</th> <th>Chunking</th> <th>POS</th> </tr> </thead> <tbody> <tr> <td>Model || Conv-CRF+Lexicon (Collobert et al. 2011)</td> <td>-</td> <td>-</td> <td>89.59</td> <td>-</td> <td>94.32</td> <td>97.29</td> </tr> <tr> <td>Model || LSTM-CRF+Lexicon (Huang et al. 2015)</td> <td>-</td> <td>-</td> <td>90.10</td> <td>-</td> <td>94.46</td> <td>97.43</td> </tr> <tr> <td>Model || LSTM-CRF+Lexicon+char-CNN (Chiu and Nichols 2016)</td> <td>-</td> <td>-</td> <td>90.77</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LSTM-Softmax+char-LSTM (Ling et al. 2015)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>97.55</td> </tr> <tr> <td>Model || LSTM-CRF+char-LSTM (Lample et al. 2016)</td> <td>85.75</td> <td>81.74</td> <td>90.94</td> <td>78.76</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LSTM-CRF+char-CNN (Ma and Hovy 2016)</td> <td>-</td> <td>-</td> <td>91.21</td> <td>-</td> <td>-</td> <td>97.55</td> </tr> <tr> <td>Model || GRM-CRF+char-GRU (Yang et al. 2017)</td> <td>84.69</td> <td>85.00</td> <td>91.20</td> <td>-</td> <td>94.66</td> <td>97.55</td> </tr> <tr> <td>Model || LSTM-CRF</td> <td>80.33±0.37</td> <td>79.87±0.28</td> <td>88.41±0.22</td> <td>73.42±0.39</td> <td>94.29±0.11</td> <td>96.63±0.08</td> </tr> <tr> <td>Model || LSTM-CRF+char-LSTM</td> <td>86.12±0.34</td> <td>87.13±0.25</td> <td>91.13±0.15</td> <td>78.31±0.35</td> <td>94.97±0.09</td> <td>97.49±0.04</td> </tr> <tr> <td>Model || LSTM-CRF+char-CNN</td> <td>85.91±0.38</td> <td>86.69±0.22</td> <td>91.11±0.14</td> <td>78.15±0.31</td> <td>94.91±0.08</td> <td>97.45±0.03</td> </tr> <tr> <td>Model || LSTM-CRF+char-IntNet-9</td> <td>85.71±0.39</td> <td>87.38±0.27</td> <td>91.39±0.16</td> <td>79.43±0.33</td> <td>95.08±0.07</td> <td>97.51±0.04</td> </tr> <tr> <td>Model || LSTM-CRF+char-IntNet-5</td> <td>86.68±0.35</td> <td>87.81±0.24</td> <td>91.64±0.17</td> <td>78.58±0.32</td> <td>95.29±0.08</td> <td>97.58±0.02</td> </tr> </tbody></table>
Table 2
table_2
D18-1279
7
emnlp2018
5.2 State-of-the-art Results. Table 2 presents our proposed model in comparison with state-of-the-art results. LSTM-CRF is our baseline, which uses fine-tuned pre-trained word embeddings. Its comparison with LSTM-CRF using random initializations for word embeddings, as shown in Table 1, confirms that pre-trained word embeddings are useful for sequence labeling. Since the training corpus for sequence labeling is relatively small, pre-trained embeddings learned from a huge unlabeled corpus can help to enhance word semantics. Furthermore, we adopt and re-implement two state-of-the-art character models, char-LSTM and char-CNN, by combining them with LSTM-CRF, which we refer to as LSTM-CRF-char-LSTM and LSTM-CRF-char-CNN. These experiments show that our char-IntNet generally improves results across different models and datasets. The improvement is more pronounced for non-English datasets; for example, IntNet improves the F1 score over the state-of-the-art results by more than 2% for Dutch and Spanish. It also shows that the results of LSTM-CRF are significantly improved after adding character-to-word models, which confirms that word shape information is very important for sequence labeling. Figure 2 presents the details of training epochs in comparison with other state-of-the-art character models for different languages. It shows that char-CNN and char-LSTM converge early, whereas char-IntNet takes more epochs to converge and generally performs better. This suggests that IntNet helps reduce overfitting, since we have used early stopping while training.
[2, 1, 2, 2, 2, 2, 1, 1, 1, 2, 2, 2]
['5.2 State-of-the-art Results.', 'Table 2 presents our proposed model in comparison with state-of-the-art results.', 'LSTM-CRF is our baseline, which uses fine-tuned pre-trained word embeddings.', 'Its comparison with LSTM-CRF using random initializations for word embeddings, as shown in Table 1, confirms that pre-trained word embeddings are useful for sequence labeling.', 'Since the training corpus for sequence labeling is relatively small, pre-trained embeddings learned from a huge unlabeled corpus can help to enhance word semantics.', 'Furthermore, we adopt and re-implement two state-of-the-art character models, char-LSTM and char-CNN, by combining them with LSTM-CRF, which we refer to as LSTM-CRF-char-LSTM and LSTM-CRF-char-CNN.', 'These experiments show that our char-IntNet generally improves results across different models and datasets.', 'The improvement is more pronounced for non-English datasets; for example, IntNet improves the F1 score over the state-of-the-art results by more than 2% for Dutch and Spanish.', 'It also shows that the results of LSTM-CRF are significantly improved after adding character-to-word models, which confirms that word shape information is very important for sequence labeling.', 'Figure 2 presents the details of training epochs in comparison with other state-of-the-art character models for different languages.', 'It shows that char-CNN and char-LSTM converge early, whereas char-IntNet takes more epochs to converge and generally performs better.', 'This suggests that IntNet helps reduce overfitting, since we have used early stopping while training.']
[None, ['Conv-CRF+Lexicon (Collobert et al. 2011)', 'LSTM-CRF+Lexicon (Huang et al. 2015)', 'LSTM-CRF+Lexicon+char-CNN (Chiu and Nichols 2016)', 'LSTM-Softmax+char-LSTM (Ling et al. 2015)', 'LSTM-CRF+char-LSTM (Lample et al. 2016)', 'LSTM-CRF+char-CNN (Ma and Hovy 2016)', 'GRM-CRF+char-GRU (Yang et al. 2017)', 'LSTM-CRF', 'LSTM-CRF+char-LSTM', 'LSTM-CRF+char-CNN', 'LSTM-CRF+char-IntNet-9', 'LSTM-CRF+char-IntNet-5'], ['LSTM-CRF'], ['LSTM-CRF'], None, ['LSTM-CRF+char-LSTM', 'LSTM-CRF+char-CNN'], ['LSTM-CRF+char-IntNet-9', 'LSTM-CRF+char-IntNet-5'], ['LSTM-CRF+char-IntNet-9', 'LSTM-CRF+char-IntNet-5', 'Spanish', 'Dutch', 'German', 'LSTM-CRF+char-LSTM (Lample et al. 2016)', 'GRM-CRF+char-GRU (Yang et al. 2017)'], ['LSTM-CRF', 'LSTM-CRF+char-LSTM', 'LSTM-CRF+char-CNN'], None, ['LSTM-CRF+char-LSTM', 'LSTM-CRF+char-CNN', 'LSTM-CRF+char-IntNet-9', 'LSTM-CRF+char-IntNet-5'], None]
1
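The description's point that adding character-to-word models substantially improves the plain LSTM-CRF baseline can be quantified from the mean±std cells of this record. The sketch below, with values copied from the LSTM-CRF and LSTM-CRF+char-IntNet-5 rows, is illustrative only.

# Mean scores (the part before "±") for the baseline and the char-IntNet-5 model,
# copied from the contents field above.
tasks = ["Spanish", "Dutch", "English", "German", "Chunking", "POS"]
lstm_crf = ["80.33±0.37", "79.87±0.28", "88.41±0.22", "73.42±0.39", "94.29±0.11", "96.63±0.08"]
intnet_5 = ["86.68±0.35", "87.81±0.24", "91.64±0.17", "78.58±0.32", "95.29±0.08", "97.58±0.02"]

def mean_of(cell: str) -> float:
    # Keep only the mean part of a "mean±std" cell.
    return float(cell.split("±")[0])

for task, base, ours in zip(tasks, lstm_crf, intnet_5):
    print(f"{task}: +{mean_of(ours) - mean_of(base):.2f} over plain LSTM-CRF")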
D18-1280table_4
Performance of ICON on the IEMOCAP dataset. † represents statistical significance over state-of-the-art scores under
2
[['Models', 'memnet'], ['Models', 'cLSTM'], ['Models', 'TFN'], ['Models', 'MFN'], ['Models', 'CMN'], ['Models', 'ICON']]
3
[['IEMOCAP: Emotion Categories', 'Happy', 'acc.'], ['IEMOCAP: Emotion Categories', 'Happy', 'F1'], ['IEMOCAP: Emotion Categories', 'Sad', 'acc.'], ['IEMOCAP: Emotion Categories', 'Sad', 'F1'], ['IEMOCAP: Emotion Categories', 'Neutral', 'acc.'], ['IEMOCAP: Emotion Categories', 'Neutral', 'F1'], ['IEMOCAP: Emotion Categories', 'Angry', 'acc.'], ['IEMOCAP: Emotion Categories', 'Angry', 'F1'], ['IEMOCAP: Emotion Categories', 'Excited', 'acc.'], ['IEMOCAP: Emotion Categories', 'Excited', 'F1'], ['IEMOCAP: Emotion Categories', 'Frustrated', 'acc.'], ['IEMOCAP: Emotion Categories', 'Frustrated', 'F1'], ['IEMOCAP: Emotion Categories', 'Avg.', 'acc.'], ['IEMOCAP: Emotion Categories', 'Avg.', 'F1']]
[['24.4', '33.0', '60.4', '69.3', '56.8', '55.0', '67.1', '66.1', '65.2', '62.3', '68.4', '63.0', '59.9', '59.5'], ['25.5', '35.6', '58.6', '69.2', '56.5', '53.5', '70.0', '66.3', '58.8', '61.1', '67.4', '62.4', '59.8', '59.0'], ['23.2', '33.7', '58.0', '68.6', '56.6', '55.1', '69.1', '64.2', '63.1', '62.4', '65.5', '61.2', '58.8', '58.5'], ['24.0', '34.1', '65.6', '70.5', '55.5', '52.1', '72.3†', '66.8', '64.3', '62.1', '67.9', '62.5', '60.1', '59.9'], ['25.7', '32.6', '66.5', '72.9', '53.9', '56.2', '67.6', '64.6', '69.9', '67.9', '71.7', '63.1', '61.9', '61.4'], ['23.6', '32.8', '70.6†', '74.4†', '59.9', '60.6†', '68.2', '68.2', '72.2†', '68.4', '71.9', '66.2†', '64.0†', '63.5†']]
column
['acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1']
['ICON']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IEMOCAP: Emotion Categories || Happy || acc.</th> <th>IEMOCAP: Emotion Categories || Happy || F1</th> <th>IEMOCAP: Emotion Categories || Sad || acc.</th> <th>IEMOCAP: Emotion Categories || Sad || F1</th> <th>IEMOCAP: Emotion Categories || Neutral || acc.</th> <th>IEMOCAP: Emotion Categories || Neutral || F1</th> <th>IEMOCAP: Emotion Categories || Angry || acc.</th> <th>IEMOCAP: Emotion Categories || Angry || F1</th> <th>IEMOCAP: Emotion Categories || Excited || acc.</th> <th>IEMOCAP: Emotion Categories || Excited || F1</th> <th>IEMOCAP: Emotion Categories || Frustrated || acc.</th> <th>IEMOCAP: Emotion Categories || Frustrated || F1</th> <th>IEMOCAP: Emotion Categories || Avg. || acc.</th> <th>IEMOCAP: Emotion Categories || Avg. || F1</th> </tr> </thead> <tbody> <tr> <td>Models || memnet</td> <td>24.4</td> <td>33.0</td> <td>60.4</td> <td>69.3</td> <td>56.8</td> <td>55.0</td> <td>67.1</td> <td>66.1</td> <td>65.2</td> <td>62.3</td> <td>68.4</td> <td>63.0</td> <td>59.9</td> <td>59.5</td> </tr> <tr> <td>Models || cLSTM</td> <td>25.5</td> <td>35.6</td> <td>58.6</td> <td>69.2</td> <td>56.5</td> <td>53.5</td> <td>70.0</td> <td>66.3</td> <td>58.8</td> <td>61.1</td> <td>67.4</td> <td>62.4</td> <td>59.8</td> <td>59.0</td> </tr> <tr> <td>Models || TFN</td> <td>23.2</td> <td>33.7</td> <td>58.0</td> <td>68.6</td> <td>56.6</td> <td>55.1</td> <td>69.1</td> <td>64.2</td> <td>63.1</td> <td>62.4</td> <td>65.5</td> <td>61.2</td> <td>58.8</td> <td>58.5</td> </tr> <tr> <td>Models || MFN</td> <td>24.0</td> <td>34.1</td> <td>65.6</td> <td>70.5</td> <td>55.5</td> <td>52.1</td> <td>72.3†</td> <td>66.8</td> <td>64.3</td> <td>62.1</td> <td>67.9</td> <td>62.5</td> <td>60.1</td> <td>59.9</td> </tr> <tr> <td>Models || CMN</td> <td>25.7</td> <td>32.6</td> <td>66.5</td> <td>72.9</td> <td>53.9</td> <td>56.2</td> <td>67.6</td> <td>64.6</td> <td>69.9</td> <td>67.9</td> <td>71.7</td> <td>63.1</td> <td>61.9</td> <td>61.4</td> </tr> <tr> <td>Models || ICON</td> <td>23.6</td> <td>32.8</td> <td>70.6†</td> <td>74.4†</td> <td>59.9</td> <td>60.6†</td> <td>68.2</td> <td>68.2</td> <td>72.2†</td> <td>68.4</td> <td>71.9</td> <td>66.2†</td> <td>64.0†</td> <td>63.5†</td> </tr> </tbody></table>
Table 4
table_4
D18-1280
7
emnlp2018
6 Results. Tables 4 and 5 present the results on the IEMOCAP and SEMAINE testing sets, respectively. In Table 4, we evaluate the mean classification performance using Weighted Accuracy (acc.) and F1-Score (F1) on the discrete emotion categories. ICON performs better than the compared models, with a significant performance increase across emotions (∼2.1% acc.). For each emotion, ICON outperforms all the compared models except for the happiness emotion. However, its performance is still on par with cLSTM, without a significant gap. Also, ICON manages to correctly identify the relatively similar excitement emotion by a large margin.
[2, 1, 1, 1, 1, 1, 2]
['6 Results.', 'Tables 4 and 5 present the results on the IEMOCAP and SEMAINE testing sets, respectively.', 'In Table 4, we evaluate the mean classification performance using Weighted Accuracy (acc.) and F1-Score (F1) on the discrete emotion categories.', 'ICON performs better than the compared models, with a significant performance increase across emotions (∼2.1% acc.).', 'For each emotion, ICON outperforms all the compared models except for the happiness emotion.', 'However, its performance is still on par with cLSTM, without a significant gap.', 'Also, ICON manages to correctly identify the relatively similar excitement emotion by a large margin.']
[None, ['IEMOCAP: Emotion Categories'], ['acc.', 'F1'], ['ICON', 'acc.'], ['ICON', 'Sad', 'Neutral', 'Angry', 'Excited', 'Frustrated'], ['ICON', 'cLSTM', 'Happy'], ['ICON']]
1
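The "∼2.1% acc." increase cited in the description corresponds to the Avg. columns of this record; a minimal Python sketch (values copied from the contents field, names illustrative) recomputes ICON's margin over the strongest compared model.

# Average (acc., F1) over the six IEMOCAP emotion categories,
# copied from the "Avg." columns of the contents field above.
baselines = {
    "memnet": (59.9, 59.5),
    "cLSTM": (59.8, 59.0),
    "TFN": (58.8, 58.5),
    "MFN": (60.1, 59.9),
    "CMN": (61.9, 61.4),
}
icon_acc, icon_f1 = 64.0, 63.5

best_acc = max(acc for acc, _ in baselines.values())
best_f1 = max(f1 for _, f1 in baselines.values())
print(f"ICON margin over best baseline: +{icon_acc - best_acc:.1f} acc., +{icon_f1 - best_f1:.1f} F1")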