Columns: id (string, length 40), pid (string, length 42), input (string, length 8.37k to 169k), output (string, length 1 to 1.63k)
26b5c090f72f6d51e5d9af2e470d06b2d7fc4a98
26b5c090f72f6d51e5d9af2e470d06b2d7fc4a98_0
Q: what are the baselines?

Text:

Introduction

Neural machine translation (NMT) is a challenging task that has attracted a great deal of attention in recent years. Starting from the encoder-decoder framework BIBREF0, NMT has begun to show promising results on many language pairs. The evolving structures of NMT models in recent years have pushed scores higher and made these models increasingly popular. The attention mechanism BIBREF1, added on top of the encoder-decoder framework, has proven very useful for automatically finding alignment structure, and single-layer RNN-based structures have evolved into deeper models with more efficient transformation functions BIBREF2, BIBREF3, BIBREF4.

One major challenge of NMT is that its models are generally hard to train, owing to the complexity of both the deep models and the languages themselves. From the optimization perspective, gradients are hard to back-propagate efficiently through deep models, and this phenomenon, together with its solutions, has been explored more thoroughly in the computer vision community. Residual networks (ResNet) BIBREF5 achieve great performance on a wide range of tasks, including image classification and image segmentation. Residual connections allow features from previous layers to be accumulated into the next layer easily, and let the optimization focus efficiently on refining upper-layer features.

NMT is considered a challenging problem because of its sequence-to-sequence generation framework and the need to comprehend and reorganize content from one language into the other. Apart from the encoder block, which acts as a feature generator, the decoder network combined with the attention mechanism brings new challenges to the optimization of the models. While today's best-performing NMT systems use residual connections, we question whether this is the most efficient way to propagate information through deep models. In this paper, inspired by the idea of using dense connections for computer vision tasks BIBREF6, we propose a densely connected NMT framework (DenseNMT) that efficiently propagates information from the encoder to the decoder through the attention component. Taking a CNN-based deep architecture as an example, we verify the efficiency of DenseNMT.

Our contributions in this work include: (i) by comparing loss curves, we show that DenseNMT allows the model to pass information more efficiently and speeds up training; (ii) we show through an ablation study that dense connections in all three blocks together help improve performance, while not increasing the number of parameters; (iii) DenseNMT allows models to achieve similar performance with a much smaller embedding size; (iv) DenseNMT achieves new benchmark BLEU scores on the IWSLT14 German-English and Turkish-English translation tasks, and its result on the WMT14 English-German task is more competitive than the residual-connection-based baseline model.

DenseNMT

In this section, we introduce our DenseNMT architecture. In general, compared with residual-connected NMT models, DenseNMT allows each layer to provide its information directly to all subsequent layers. Figures FIGREF9-FIGREF15 show the design of our model structure by parts. We start with the formulation of a regular NMT model. Given a set of sentence pairs INLINEFORM0, an NMT model learns parameters INLINEFORM1 by maximizing the log-likelihood function: DISPLAYFORM0 For every sentence pair INLINEFORM0, INLINEFORM1 is calculated based on the decomposition: DISPLAYFORM0 where INLINEFORM0 is the length of sentence INLINEFORM1.
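The rendered equations above were lost in extraction (the DISPLAYFORM0 placeholders). For reference, the standard maximum-likelihood objective and its token-level decomposition that this description matches can be written as follows; the symbols here are generic stand-ins, not the paper's original notation:

```latex
% Training objective over a corpus D of sentence pairs (x, y)
\theta^{*} = \arg\max_{\theta} \sum_{(x, y) \in D} \log P(y \mid x; \theta)

% Token-level decomposition, with m the length of the target sentence y
\log P(y \mid x; \theta) = \sum_{t=1}^{m} \log P\bigl(y_t \mid y_{<t}, x; \theta\bigr)
```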
Typically, NMT models use the encoder-attention-decoder framework BIBREF1, and potentially use a multi-layer structure for both the encoder and the decoder. Given a source sentence INLINEFORM2 of length INLINEFORM3, the encoder calculates hidden representations layer by layer. We denote the representation in the INLINEFORM4-th layer as INLINEFORM5, with dimension INLINEFORM6, where INLINEFORM7 is the dimension of features in layer INLINEFORM8. The hidden representation at each position INLINEFORM9 is calculated either by: DISPLAYFORM0 for a recurrent transformation INLINEFORM0 such as an LSTM or GRU, or by: DISPLAYFORM0 for a parallel transformation INLINEFORM0. The decoder layers INLINEFORM1 follow a similar structure, while receiving extra representations from the encoder side. These extra representations are also called attention, and are especially useful for capturing alignment information.

In our experiments, we use a convolution-based transformation for INLINEFORM0 due to both its efficiency and its high performance; more formally, DISPLAYFORM0 where INLINEFORM0 is the gated linear unit proposed in BIBREF11 and the kernel size is INLINEFORM1. DenseNMT is agnostic to the transformation function, and we expect it to also work well in combination with other transformations, such as LSTMs, self-attention and depthwise separable convolutions.

Dense encoder and decoder

Unlike with residual connections, later layers in the dense encoder are able to use features from all previous layers by concatenating them: DISPLAYFORM0 Here, INLINEFORM0 is defined in Eq. (EQREF10), and INLINEFORM1 represents the concatenation operation. Although this brings extra connections to the network, with a smaller number of features per layer the architecture encourages feature reuse and can be more compact and expressive. As shown in Figure FIGREF9, when designing the model, the hidden size of each layer is much smaller than the hidden size of the corresponding layer in the residual-connected model.

While each encoder layer perceives information from its previous layers, each decoder layer INLINEFORM0 has two information sources: previous layers INLINEFORM1 and attention values INLINEFORM2. Therefore, in order to allow dense information flow, we redefine the generation of the INLINEFORM3-th layer as a nonlinear function over all its previous decoder layers and previous attentions. This can be written as: DISPLAYFORM0 where INLINEFORM0 is the attention value computed using the INLINEFORM1-th decoder layer and information from the encoder side, which will be specified later. Figure FIGREF13 compares a dense decoder with a regular residual decoder. The dimensions of both the attention values and the hidden layers are set to smaller values, yet the information perceived by each layer forms a higher-dimensional vector with more representation power. By default, the output of the decoder is a linear transformation of the concatenation of all layers. To cope with the growth in dimensions, we use summary layers, which will be introduced in Section 3.3. With summary layers, the output of the decoder is only a linear transformation of the concatenation of the upper few layers.

Dense attention

Prior work shows a trend toward designing more expressive attention mechanisms (as discussed in Section 2). However, most of these only use the last encoder layer. In order to pass richer information from the encoder side to the decoder side, the attention block needs to be more expressive.
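Before turning to the attention block, the dense encoder update described above can be illustrated with a minimal sketch. This is an assumption-laden PyTorch illustration, not the authors' code: the class, function and size names are invented, and the convolution/GLU details follow only the textual description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseConvEncoderLayer(nn.Module):
    """One densely connected encoder layer: it reads the concatenation of the
    embedding and every previous layer's output, applies a 1-D convolution,
    and uses a gated linear unit (GLU) as the nonlinearity."""

    def __init__(self, in_dim, hidden_dim, kernel_size=3):
        super().__init__()
        # GLU halves the channel dimension, so emit 2 * hidden_dim channels.
        self.conv = nn.Conv1d(in_dim, 2 * hidden_dim, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, prev_features):
        # prev_features: list of tensors shaped (batch, channels_i, src_len)
        x = torch.cat(prev_features, dim=1)   # dense concatenation of all inputs
        return F.glu(self.conv(x), dim=1)     # (batch, hidden_dim, src_len)


def dense_encode(embeddings, layers):
    """Run the stack; every layer sees the embedding plus all earlier outputs."""
    features = [embeddings]                   # embeddings: (batch, embed_dim, src_len)
    for layer in layers:
        features.append(layer(features))
    return torch.cat(features, dim=1)         # concatenated features, later used by attention


# Example construction matching sizes reported later in the text:
# embedding size 256 and per-layer hidden size 128 for the dense 4-layer model.
embed_dim, hidden_dim, num_layers = 256, 128, 4
encoder_layers = nn.ModuleList(
    DenseConvEncoderLayer(embed_dim + i * hidden_dim, hidden_dim)
    for i in range(num_layers)
)
```

A residual baseline would instead keep a fixed hidden size (e.g., 256) and add each layer's output to its input, which is the design this paper compares against.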
Following recent developments in the design of attention architectures, we propose DenseAtt as the dense attention block, which provides the dense connection between the encoder and the decoder side. More specifically, we propose two options. For each decoding step in the corresponding decoder layer, both options calculate attention using multiple encoder layers. The first option is more compressed, while the second is more expressive and flexible. We name them DenseAtt-1 and DenseAtt-2, respectively. Figure FIGREF15 shows the architectures of (a) multi-step attention BIBREF2, (b) DenseAtt-1, and (c) DenseAtt-2, in that order. In general, a popular multiplicative attention module can be written as: DISPLAYFORM0 where INLINEFORM0 represent the query, key and value, respectively. We will use this function INLINEFORM1 in the following descriptions.

In the decoding phase, we use a layer-wise attention mechanism, such that each decoder layer absorbs different attention information to adjust its output. Instead of treating the last hidden layer as the encoder's output, we treat the concatenation of all hidden layers from the encoder side as the output. The decoder layer is multiplied with the encoder output to obtain the attention weights, which are then multiplied by a linear combination of the encoder output and the sentence embedding. The attention output of each layer INLINEFORM0 can be formally written as: DISPLAYFORM0 where INLINEFORM0 is the multiplicative attention function, INLINEFORM1 is a concatenation operation that combines all features, and INLINEFORM2 is a linear transformation function that maps each variable to a fixed dimension in order to calculate the attention value. Notice that we explicitly write the INLINEFORM3 term in (EQREF19) to stay consistent with the multi-step attention mechanism, as shown pictorially in Figure FIGREF15(a).

Notice that the transformation INLINEFORM0 in DenseAtt-1 forces the encoder layers to be mixed before attention is applied. Since we use multiple hidden layers from the encoder side to obtain an attention value, we can alternatively calculate multiple attention values before combining them. In other words, the decoder layer can get different attention values from different encoder layers. This can be formally expressed as: DISPLAYFORM0 where the only difference from Eq. (EQREF19) is that the concatenation operation is replaced by a summation, which is applied after the attention function INLINEFORM0. This method further increases the representation power of the attention block while maintaining the same number of parameters in the model.

Summary layers

Since the number of features fed into the nonlinear operation accumulates along the path, the parameter size increases accordingly. For example, for the INLINEFORM0-th encoder layer, the input feature dimension is INLINEFORM1, where INLINEFORM2 is the feature dimension of the previous layers and INLINEFORM3 is the embedding size. In order to avoid a computational bottleneck in later layers due to a large INLINEFORM4, we introduce the summary layer for deeper models. It summarizes the features of all previous layers and projects them back to the embedding size, so that later layers on both the encoder and the decoder side do not need to look back any further. The summary layers can be considered contextualized word vectors for a given sentence BIBREF12. We add one summary layer after every INLINEFORM5 layers, where INLINEFORM6 is a hyperparameter we introduce. Accordingly, the input feature dimension is at most INLINEFORM7 for the last layer of the encoder. Moreover, combined with the summary-layer setting, our DenseAtt mechanism allows each decoder layer to calculate its attention value focusing on the last few encoder layers, which consist of the last contextual embedding layer and several densely connected layers of low dimension. In practice, we set INLINEFORM8 to 5 or 6.
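The two attention options described above (DenseAtt-1 and DenseAtt-2) can be sketched as follows. This is an illustrative PyTorch sketch under assumptions: the class and variable names are invented, and the source-embedding term that the paper adds to the value (the INLINEFORM3 term above) is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def multiplicative_attention(query, key, value):
    # query: (batch, tgt_len, d); key, value: (batch, src_len, d)
    weights = F.softmax(torch.bmm(query, key.transpose(1, 2)), dim=-1)
    return torch.bmm(weights, value)          # (batch, tgt_len, d)

class DenseAtt1(nn.Module):
    """Option 1: mix all encoder layers with one linear map, then attend once."""
    def __init__(self, enc_dims, dec_dim, att_dim):
        super().__init__()
        self.enc_proj = nn.Linear(sum(enc_dims), att_dim)
        self.dec_proj = nn.Linear(dec_dim, att_dim)

    def forward(self, dec_states, enc_layers):
        # enc_layers: list of (batch, src_len, d_i) tensors from the encoder
        mixed = self.enc_proj(torch.cat(enc_layers, dim=-1))
        return multiplicative_attention(self.dec_proj(dec_states), mixed, mixed)

class DenseAtt2(nn.Module):
    """Option 2: attend to each encoder layer separately, then sum the results.
    The per-layer projections hold the same total parameter count as the single
    projection over the concatenation in DenseAtt-1."""
    def __init__(self, enc_dims, dec_dim, att_dim):
        super().__init__()
        self.enc_projs = nn.ModuleList(nn.Linear(d, att_dim) for d in enc_dims)
        self.dec_proj = nn.Linear(dec_dim, att_dim)

    def forward(self, dec_states, enc_layers):
        query = self.dec_proj(dec_states)
        outputs = [multiplicative_attention(query, proj(h), proj(h))
                   for proj, h in zip(self.enc_projs, enc_layers)]
        return torch.stack(outputs, dim=0).sum(dim=0)
```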
Analysis of information flow

Figure FIGREF9 and Figure FIGREF13 show how the information flow differs from that of a residual-based encoder/decoder. In residual-based models, each layer can absorb only a single high-dimensional vector from its previous layer, while in DenseNMT each layer can use several low-dimensional vectors from its previous layers together with a high-dimensional vector from the first (embedding) layer. In DenseNMT, each layer directly provides information to its later layers. The structure therefore allows feature reuse and encourages upper layers to focus on creating new features. Furthermore, the attention block allows the embedding vectors (as well as the other hidden layers) to guide the decoder's generation more directly; consequently, during back-propagation, gradient information can be passed directly to all encoder layers simultaneously.

Datasets

We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German. We preprocess the IWSLT14 German-English dataset with the byte-pair-encoding (BPE) method BIBREF13. We learn 25k BPE codes on the joint corpus of source and target languages. We randomly select 7k sentence pairs from IWSLT14 German-English as the development set, and the test set is a concatenation of dev2010, tst2010, tst2011 and tst2012, which is widely used in prior work BIBREF14, BIBREF15, BIBREF16. For the Turkish-English translation task, we use the data provided by IWSLT14 BIBREF17 and the SETimes corpus BIBREF17, following BIBREF18. After removing sentence pairs with a length ratio over 9, we obtain 360k sentence pairs. Since there is little commonality between the two languages, we learn 30k BPE codes separately for Turkish and English. In addition, we apply an alternative preprocessing to the Turkish sentences and use a word-level English corpus: following BIBREF19 and BIBREF18, we use the morphology tool Zemberek, with disambiguation by morphological analysis BIBREF20 and removal of non-surface tokens. Following BIBREF18, we concatenate tst2011, tst2012, tst2013 and tst2014 as our test set, and concatenate dev2010 and tst2010 as the development set. We preprocess the WMT14 English-German dataset using a BPE code size of 40k, and use the concatenation of newstest2013 and newstest2012 as the development set.

Model and architecture design

As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a residual-connected model with a 4-layer encoder and a 4-layer decoder, with the embedding and hidden size set to 256 by default. As a comparison, we design a densely connected model with the same number of layers, but with the hidden size set to 128 in order to keep the model size consistent. The models adopting DenseAtt-1 and DenseAtt-2 are named DenseNMT-4L-1 and DenseNMT-4L-2, respectively. In order to check the effect of dense connections on deeper models, we also construct a series of 8-layer models. We set the hidden size to 192, such that the 4-layer and 8-layer models have a similar number of parameters.
For the dense-structured models, we set the dimension of the hidden states to 96. Since an NMT model usually allocates a large proportion of its parameters to the source/target sentence embeddings and the softmax matrix, we explore in our experiments to what extent decreasing the dimensions of these three parts would harm the BLEU score. We simultaneously reduce the dimensions of the source embedding, the target embedding and the softmax matrix, and then project each word back to the original embedding dimension through a linear transformation. This significantly reduces the total number of parameters while not affecting the upper-layer structure of the model.

We also introduce three additional models used for the ablation study, all with a 4-layer structure. Based on the residual-connected BASE-4L model, (1) DenseENC-4L only makes the encoder side dense, (2) DenseDEC-4L only makes the decoder side dense, and (3) DenseAtt-4L only makes the attention dense, using DenseAtt-2. There is no summary layer in these models, and both DenseENC-4L and DenseDEC-4L use a hidden size of 128. Again, by reducing the hidden size, we ensure that the different 4-layer models have similar model sizes.

Our design for the WMT14 English-German model follows the best-performing model provided in BIBREF2. The construction of our model is straightforward: our 15-layer model DenseNMT-En-De-15 uses dense connections with DenseAtt-2, INLINEFORM0. The hidden size of each layer is INLINEFORM1 that of the original model, while the kernel size stays the same.

Training setting

We use Nesterov Accelerated Gradient (NAG) BIBREF21 as our optimizer, and the initial learning rate is set to INLINEFORM0. For the German-English and Turkish-English experiments, the learning rate shrinks by a factor of 10 every time the validation loss increases. For the English-German dataset, consistent with BIBREF2, the learning rate shrinks by a factor of 10 every epoch after the first increase of the validation loss. Training stops once the learning rate is less than INLINEFORM1. All models are trained end-to-end without any warm-start techniques. We set the batch size for the WMT14 English-German dataset to 48 and additionally tune the length penalty parameter, consistent with BIBREF2. For the other datasets, we set the batch size to 32. During inference, we use a beam size of 5.

Training curve

We first show that DenseNMT helps information flow more efficiently by examining the training loss curves. All hyperparameters are fixed within each plot; only the models differ. In Figure FIGREF30, the loss curves on both the training and dev sets (before entering the finetuning period) are provided for De-En, Tr-En and Tr-En-morph. For clarity, we compare DenseNMT-4L-2 with BASE-4L. We observe that the DenseNMT models are consistently better than the residual-connected models, since their loss curves always lie below those of the baselines. The effect is even more evident on the WMT14 English-German dataset. We rerun the best model provided by BIBREF2 and compare it with our model. In Figure FIGREF33, where the train/test loss curves are provided, DenseNMT-En-De-15 reaches the same level of loss and starts finetuning (i.e., validation loss starts to increase) at epoch 13, which is 35% faster than the baseline.

Adding dense connections changes the architecture and slightly influences training speed. For the WMT14 En-De experiments, the computing throughput of DenseNMT and the baseline (with a similar number of parameters and the same batch size), measured on a single M40 GPU, is 1571 and 1710 words/s, respectively. While adding dense connections slows per-iteration training slightly (an 8.1% reduction in speed), the model needs many fewer epochs and achieves a better BLEU score. In terms of training time, DenseNMT uses 29.3% (before finetuning) / 22.9% (total) less time than the baseline.
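For concreteness, the learning-rate annealing schedule described under "Training setting" above (the German-English and Turkish-English variant) can be sketched as a simple loop. This is an illustrative sketch, not the authors' code: the initial learning rate and stopping threshold were lost in extraction (the INLINEFORM placeholders), so the values below are placeholders, and the train/validate helpers are assumed to exist.

```python
def anneal_training(model, optimizer, train_one_epoch, validate,
                    initial_lr=0.25, shrink_factor=10.0, min_lr=1e-4):
    """Divide the learning rate by `shrink_factor` whenever validation loss
    rises, and stop once it falls below `min_lr` (placeholder values)."""
    lr = initial_lr
    best_val_loss = float("inf")
    while lr >= min_lr:
        for group in optimizer.param_groups:   # standard PyTorch optimizer API
            group["lr"] = lr
        train_one_epoch(model, optimizer)
        val_loss = validate(model)
        if val_loss > best_val_loss:
            lr /= shrink_factor                # anneal on validation-loss increase
        best_val_loss = min(best_val_loss, val_loss)
```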
DenseNMT improves accuracy with similar architectures and model sizes

Table TABREF32 shows the results on the De-En, Tr-En and Tr-En-morph datasets, where the best accuracy among models with the same depth and of similar size is marked in boldface. In almost all settings, the DenseNMT models are significantly better than the baselines. With embedding size 256, where all models achieve their best scores, DenseNMT outperforms the baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, and 0.8-1.5 BLEU on Tr-En-morph. We observe significant gains at other embedding sizes as well.

Furthermore, in Table TABREF36, we investigate DenseNMT models through an ablation study. To make the comparison fair, the six models listed have roughly the same number of parameters. On De-En, Tr-En and Tr-En-morph, we see improvements from making the encoder dense, making the decoder dense, and making the attention dense. The fully dense-connected model DenseNMT-4L-1 further improves translation accuracy. By allowing more flexibility in the dense attention, DenseNMT-4L-2 provides the highest BLEU scores for all three experiments. From these experiments, we see that enlarging the information flow in the attention block benefits the models. The dense attention block provides multi-layer information transmission from the encoder to the decoder, and to the output as well. Meanwhile, as the ablation study shows, the dense-connected encoder and decoder both give more powerful representations than their residual-connected counterparts. As a result, integrating the three parts improves accuracy significantly.

DenseNMT with smaller embedding size

From Table TABREF32, we also observe that DenseNMT with small embedding sizes performs better than residual-connected models with regular embedding sizes. For example, on Tr-En, the 8-layer DenseNMT-8L-2 model with embedding size 64 matches the BLEU score of the 8-layer BASE model with embedding size 256, while the former has only INLINEFORM0 of the latter's parameters. In all settings, the DenseNMT model with embedding size 128 is comparable to or even better than the baseline model with embedding size 256. While overly large embedding sizes hurt accuracy because of overfitting, smaller sizes are usually not preferable because of insufficient representation power. Our dense models, however, show that with better model design the embedding information can be concentrated in fewer dimensions, e.g., 64. This is extremely helpful for building models on mobile and small devices, where model size is critical. While other works address efficiency using techniques such as separable convolution BIBREF3 and shared embeddings BIBREF4, our DenseNMT framework is orthogonal to those approaches. We believe such techniques would produce even more efficient models when combined with our DenseNMT framework.

DenseNMT compares with state-of-the-art results

For the IWSLT14 German-English dataset, we compare with the best results reported in the literature.
To be consistent with prior work, we also provide results from applying our model directly to the dataset without BPE preprocessing. As shown in Table TABREF39, DenseNMT outperforms the phrase-structure-based network NPMT BIBREF16 (with beam size 10) by 1.2 BLEU while using a smaller beam size, and outperforms the actor-critic-based algorithm BIBREF15 by 2.8 BLEU. For reference, our model trained on the BPE-preprocessed dataset achieves 32.26 BLEU, which is 1.93 BLEU higher than our word-based model. For the Turkish-English task, we compare with BIBREF19, which uses the same morphology preprocessing as our Tr-En-morph setup. As shown in Table TABREF37, our baseline is higher than the previous result, and we further achieve a new benchmark result with an average score of 24.36 BLEU. For WMT14 English-German, Table TABREF41 shows that DenseNMT outperforms the ConvS2S model by 0.36 BLEU while using 35% fewer training iterations and 20% fewer parameters. We also compare with another convolution-based NMT model, SliceNet BIBREF3, which explores depthwise separable convolution architectures. SliceNet-Full matches our result, and SliceNet-Super outperforms it by 0.58 BLEU; however, both models have 2.2x more parameters than our model. We expect the DenseNMT structure could help improve their performance as well.

Conclusion

In this work, we have proposed DenseNMT, a dense-connection framework for translation tasks that uses the information from embeddings more efficiently and passes abundant information from the encoder side to the decoder side. Our experiments have shown that DenseNMT speeds up the information flow and improves translation accuracy. In future work, we will combine dense connections with other deep architectures, such as RNNs BIBREF7 and self-attention networks BIBREF4.
4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256
8c0621016e96d86a7063cb0c9ec20c76a2dba678
8c0621016e96d86a7063cb0c9ec20c76a2dba678_0
Q: did they outperform previous methods?

Text: (same passage as above)
Yes
f1214a05cc0e6d870c789aed24a8d4c768e1db2f
f1214a05cc0e6d870c789aed24a8d4c768e1db2f_0
Q: what language pairs are explored? Text: Introduction Neural machine translation (NMT) is a challenging task that attracts lots of attention in recent years. Starting from the encoder-decoder framework BIBREF0 , NMT starts to show promising results in many language pairs. The evolving structures of NMT models in recent years have made them achieve higher scores and become more favorable. The attention mechanism BIBREF1 added on top of encoder-decoder framework is shown to be very useful to automatically find alignment structure, and single-layer RNN-based structure has evolved into deeper models with more efficient transformation functions BIBREF2 , BIBREF3 , BIBREF4 . One major challenge of NMT is that its models are hard to train in general due to the complexity of both the deep models and languages. From the optimization perspective, deeper models are hard to efficiently back-propagate the gradients, and this phenomenon as well as its solution is better explored in the computer vision society. Residual networks (ResNet) BIBREF5 achieve great performance in a wide range of tasks, including image classification and image segmentation. Residual connections allow features from previous layers to be accumulated to the next layer easily, and make the optimization of the model efficiently focus on refining upper layer features. NMT is considered as a challenging problem due to its sequence-to-sequence generation framework, and the goal of comprehension and reorganizing from one language to the other. Apart from the encoder block that works as a feature generator, the decoder network combining with the attention mechanism bring new challenges to the optimization of the models. While nowadays best-performing NMT systems use residual connections, we question whether this is the most efficient way to propagate information through deep models. In this paper, inspired by the idea of using dense connections for training computer vision tasks BIBREF6 , we propose a densely connected NMT framework (DenseNMT) that efficiently propagates information from the encoder to the decoder through the attention component. Taking the CNN-based deep architecture as an example, we verify the efficiency of DenseNMT. Our contributions in this work include: (i) by comparing the loss curve, we show that DenseNMT allows the model to pass information more efficiently, and speeds up training; (ii) we show through ablation study that dense connections in all three blocks altogether help improve the performance, while not increasing the number of parameters; (iii) DenseNMT allows the models to achieve similar performance with much smaller embedding size; (iv) DenseNMT on IWSLT14 German-English and Turkish-English translation tasks achieves new benchmark BLEU scores, and the result on WMT14 English-German task is more competitive than the residual connections based baseline model. DenseNMT In this section, we introduce our DenseNMT architecture. In general, compared with residual connected NMT models, DenseNMT allows each layer to provide its information to all subsequent layers directly. Figure FIGREF9 - FIGREF15 show the design of our model structure by parts. We start with the formulation of a regular NMT model. Given a set of sentence pairs INLINEFORM0 , an NMT model learns parameter INLINEFORM1 by maximizing the log-likelihood function: DISPLAYFORM0 For every sentence pair INLINEFORM0 , INLINEFORM1 is calculated based on the decomposition: DISPLAYFORM0 where INLINEFORM0 is the length of sentence INLINEFORM1 . 
Typically, NMT models use the encoder-attention-decoder framework BIBREF1 , and potentially use multi-layer structure for both encoder and decoder. Given a source sentence INLINEFORM2 with length INLINEFORM3 , the encoder calculates hidden representations by layer. We denote the representation in the INLINEFORM4 -th layer as INLINEFORM5 , with dimension INLINEFORM6 , where INLINEFORM7 is the dimension of features in layer INLINEFORM8 . The hidden representation at each position INLINEFORM9 is either calculated by: DISPLAYFORM0 for recurrent transformation INLINEFORM0 such as LSTM and GRU, or by: DISPLAYFORM0 for parallel transformation INLINEFORM0 . On the other hand, the decoder layers INLINEFORM1 follow similar structure, while getting extra representations from the encoder side. These extra representations are also called attention, and are especially useful for capturing alignment information. In our experiments, we use convolution based transformation for INLINEFORM0 due to both its efficiency and high performance, more formally, DISPLAYFORM0 INLINEFORM0 is the gated linear unit proposed in BIBREF11 and the kernel size is INLINEFORM1 . DenseNMT is agnostic to the transformation function, and we expect it to also work well combining with other transformations, such as LSTM, self-attention and depthwise separable convolution. Dense encoder and decoder Different from residual connections, later layers in the dense encoder are able to use features from all previous layers by concatenating them: DISPLAYFORM0 Here, INLINEFORM0 is defined in Eq. ( EQREF10 ), INLINEFORM1 represents concatenation operation. Although this brings extra connections to the network, with smaller number of features per layer, the architecture encourages feature reuse, and can be more compact and expressive. As shown in Figure FIGREF9 , when designing the model, the hidden size in each layer is much smaller than the hidden size of the corresponding layer in the residual-connected model. While each encoder layer perceives information from its previous layers, each decoder layer INLINEFORM0 has two information sources: previous layers INLINEFORM1 , and attention values INLINEFORM2 . Therefore, in order to allow dense information flow, we redefine the generation of INLINEFORM3 -th layer as a nonlinear function over all its previous decoder layers and previous attentions. This can be written as: DISPLAYFORM0 where INLINEFORM0 is the attention value using INLINEFORM1 -th decoder layer and information from encoder side, which will be specified later. Figure FIGREF13 shows the comparison of a dense decoder with a regular residual decoder. The dimensions of both attention values and hidden layers are chosen with smaller values, yet the perceived information for each layer consists of a higher dimension vector with more representation power. The output of the decoder is a linear transformation of the concatenation of all layers by default. To compromise to the increment of dimensions, we use summary layers, which will be introduced in Section 3.3. With summary layers, the output of the decoder is only a linear transformation of the concatenation of the upper few layers. Dense attention Prior works show a trend of designing more expressive attention mechanisms (as discussed in Section 2). However, most of them only use the last encoder layer. In order to pass more abundant information from the encoder side to the decoder side, the attention block needs to be more expressive. 
Following the recent development of designing attention architectures, we propose DenseAtt as the dense attention block, which serves for the dense connection between the encoder and the decoder side. More specifically, two options are proposed accordingly. For each decoding step in the corresponding decoder layer, the two options both calculate attention using multiple encoder layers. The first option is more compressed, while the second option is more expressive and flexible. We name them as DenseAtt-1 and DenseAtt-2 respectively. Figure FIGREF15 shows the architecture of (a) multi-step attention BIBREF2 , (b) DenseAtt-1, and (c) DenseAtt-2 in order. In general, a popular multiplicative attention module can be written as: DISPLAYFORM0 where INLINEFORM0 represent query, key, value respectively. We will use this function INLINEFORM1 in the following descriptions. In the decoding phase, we use a layer-wise attention mechanism, such that each decoder layer absorbs different attention information to adjust its output. Instead of treating the last hidden layer as the encoder's output, we treat the concatenation of all hidden layers from encoder side as the output. The decoder layer multiplies with the encoder output to obtain the attention weights, which is then multiplied by a linear combination of the encoder output and the sentence embedding. The attention output of each layer INLINEFORM0 can be formally written as: DISPLAYFORM0 where INLINEFORM0 is the multiplicative attention function, INLINEFORM1 is a concatenation operation that combines all features, and INLINEFORM2 is a linear transformation function that maps each variable to a fixed dimension in order to calculate the attention value. Notice that we explicitly write the INLINEFORM3 term in ( EQREF19 ) to keep consistent with the multi-step attention mechanism, as pictorially shown in Figure FIGREF15 (a). Notice that the transformation INLINEFORM0 in DenseAtt-1 forces the encoder layers to be mixed before doing attention. Since we use multiple hidden layers from the encoder side to get an attention value, we can alternatively calculate multiple attention values before concatenating them. In another word, the decoder layer can get different attention values from different encoder layers. This can be formally expressed as: DISPLAYFORM0 where the only difference from Eq. ( EQREF19 ) is that the concatenation operation is substituted by a summation operation, and is put after the attention function INLINEFORM0 . This method further increases the representation power in the attention block, while maintaining the same number of parameters in the model. Summary layers Since the number of features fed into nonlinear operation is accumulated along the path, the parameter size increases accordingly. For example, for the INLINEFORM0 -th encoder layer, the input dimension of features is INLINEFORM1 , where INLINEFORM2 is the feature dimension in previous layers, INLINEFORM3 is the embedding size. In order to avoid the calculation bottleneck for later layers due to large INLINEFORM4 , we introduce the summary layer for deeper models. It summarizes the features for all previous layers and projects back to the embedding size, so that later layers of both the encoder and the decoder side do not need to look back further. The summary layers can be considered as contextualized word vectors in a given sentence BIBREF12 . We add one summary layer after every INLINEFORM5 layers, where INLINEFORM6 is the hyperparameter we introduce. 
Accordingly, the input dimension of features is at most INLINEFORM7 for the last layer of the encoder. Moreover, combined with the summary layer setting, our DenseAtt mechanism allows each decoder layer to calculate the attention value focusing on the last few encoder layers, which consists of the last contextual embedding layer and several dense connected layers with low dimension. In practice, we set INLINEFORM8 as 5 or 6. Analysis of information flow Figure FIGREF9 and Figure FIGREF13 show the difference of information flow compared with a residual-based encoder/decoder. For residual-based models, each layer can absorb a single high-dimensional vector from its previous layer as the only information, while for DenseNMT, each layer can utilize several low-dimensional vectors from its previous layers and a high-dimensional vector from the first layer (embedding layer) as its information. In DenseNMT, each layer directly provides information to its later layers. Therefore, the structure allows feature reuse, and encourages upper layers to focus on creating new features. Furthermore, the attention block allows the embedding vectors (as well as other hidden layers) to guide the decoder's generation more directly; therefore, during back-propagation, the gradient information can be passed directly to all encoder layers simultaneously. Datasets We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German. We preprocess the IWSLT14 German-English dataset following byte-pair-encoding (BPE) method BIBREF13 . We learn 25k BPE codes using the joint corpus of source and target languages. We randomly select 7k from IWSLT14 German-English as the development set , and the test set is a concatenation of dev2010, tst2010, tst2011 and tst2012, which is widely used in prior works BIBREF14 , BIBREF15 , BIBREF16 . For the Turkish-English translation task, we use the data provided by IWSLT14 BIBREF17 and the SETimes corpus BIBREF17 following BIBREF18 . After removing sentence pairs with length ratio over 9, we obtain 360k sentence pairs. Since there is little commonality between the two languages, we learn 30k size BPE codes separately for Turkish and English. In addition to this, we give another preprocessing for Turkish sentences and use word-level English corpus. For Turkish sentences, following BIBREF19 , BIBREF18 , we use the morphology tool Zemberek with disambiguation by the morphological analysis BIBREF20 and removal of non-surface tokens. Following BIBREF18 , we concatenate tst2011, tst2012, tst2013, tst2014 as our test set. We concatenate dev2010 and tst2010 as the development set. We preprocess the WMT14 English-German dataset using a BPE code size of 40k. We use the concatenation of newstest2013 and newstest2012 as the development set. Model and architect design As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256 by default. As a comparison, we design a densely connected model with same number of layers, but the hidden size is set as 128 in order to keep the model size consistent. The models adopting DenseAtt-1, DenseAtt-2 are named as DenseNMT-4L-1 and DenseNMT-4L-2 respectively. In order to check the effect of dense connections on deeper models, we also construct a series of 8-layer models. We set the hidden number to be 192, such that both 4-layer models and 8-layer models have similar number of parameters. 
For dense structured models, we set the dimension of hidden states to be 96. Since NMT model usually allocates a large proportion of its parameters to the source/target sentence embedding and softmax matrix, we explore in our experiments to what extent decreasing the dimensions of the three parts would harm the BLEU score. We change the dimensions of the source embedding, the target embedding as well as the softmax matrix simultaneously to smaller values, and then project each word back to the original embedding dimension through a linear transformation. This significantly reduces the number of total parameters, while not influencing the upper layer structure of the model. We also introduce three additional models we use for ablation study, all using 4-layer structure. Based on the residual connected BASE-4L model, (1) DenseENC-4L only makes encoder side dense, (2) DenseDEC-4L only makes decoder side dense, and (3) DenseAtt-4L only makes the attention dense using DenseAtt-2. There is no summary layer in the models, and both DenseENC-4L and DenseDEC-4L use hidden size 128. Again, by reducing the hidden size, we ensure that different 4-layer models have similar model sizes. Our design for the WMT14 English-German model follows the best performance model provided in BIBREF2 . The construction of our model is straightforward: our 15-layer model DenseNMT-En-De-15 uses dense connection with DenseAtt-2, INLINEFORM0 . The hidden number in each layer is INLINEFORM1 that of the original model, while the kernel size maintains the same. Training setting We use Nesterov Accelerated Gradient (NAG) BIBREF21 as our optimizer, and the initial learning rate is set to INLINEFORM0 . For German-English and Turkish-English experiments, the learning rate will shrink by 10 every time the validation loss increases. For the English-German dataset, in consistent with BIBREF2 , the learning rate will shrink by 10 every epoch since the first increment of validation loss. The system stops training until the learning rate is less than INLINEFORM1 . All models are trained end-to-end without any warmstart techniques. We set our batch size for the WMT14 English-German dataset to be 48, and additionally tune the length penalty parameter, in consistent with BIBREF2 . For other datasets, we set batch size to be 32. During inference, we use a beam size of 5. Training curve We first show that DenseNMT helps information flow more efficiently by presenting the training loss curve. All hyperparameters are fixed in each plot, only the models are different. In Figure FIGREF30 , the loss curves for both training and dev sets (before entering the finetuning period) are provided for De-En, Tr-En and Tr-En-morph. For clarity, we compare DenseNMT-4L-2 with BASE-4L. We observe that DenseNMT models are consistently better than residual-connected models, since their loss curves are always below those of the baseline models. The effect is more obvious on the WMT14 English-German dataset. We rerun the best model provided by BIBREF2 and compare with our model. In Figure FIGREF33 , where train/test loss curve are provided, DenseNMT-En-De-15 reaches the same level of loss and starts finetuning (validation loss starts to increase) at epoch 13, which is 35% faster than the baseline. Adding dense connections changes the architecture, and would slightly influence training speed. 
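A minimal sketch of the learning-rate schedule described above for the German-English and Turkish-English experiments (shrink by 10 whenever validation loss increases, stop once the rate falls below a threshold) is given below, assuming the model returns its training loss directly; the momentum value, stopping threshold, and epoch cap are placeholders rather than the paper's exact settings.

```python
import torch


def train(model, train_batches, valid_batches, lr0=0.25, min_lr=1e-4, max_epochs=100):
    """Nesterov accelerated gradient with the 'shrink by 10 on validation increase,
    stop below min_lr' schedule; the model is assumed to return its training loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr0, momentum=0.99, nesterov=True)
    best_valid = float("inf")
    for epoch in range(max_epochs):
        if opt.param_groups[0]["lr"] < min_lr:
            break                                   # schedule finished: stop training
        model.train()
        for src, tgt in train_batches:
            opt.zero_grad()
            model(src, tgt).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            valid = sum(model(s, t).item() for s, t in valid_batches)
        if valid > best_valid:                      # validation loss went up
            for group in opt.param_groups:
                group["lr"] /= 10.0
        best_valid = min(best_valid, valid)
```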
For the WMT14 En-De experiments, the training throughput of DenseNMT and the baseline (with a similar number of parameters and the same batch size), tested on a single M40 GPU card, is 1571 and 1710 words/s, respectively. While adding dense connections slows per-iteration training slightly (an 8.1% reduction in speed), it requires many fewer epochs and achieves a better BLEU score. In terms of training time, DenseNMT uses 29.3% (before finetuning) / 22.9% (total) less time than the baseline. DenseNMT improves accuracy with similar architectures and model sizes Table TABREF32 shows the results for the De-En, Tr-En, and Tr-En-morph datasets, where the best accuracy among models with the same depth and of similar size is marked in boldface. In almost all settings, DenseNMT models are significantly better than the baselines. With embedding size 256, where all models achieve their best scores, DenseNMT outperforms the baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, and 0.8-1.5 BLEU on Tr-En-morph. We observe significant gains with other embedding sizes as well. Furthermore, in Table TABREF36 , we investigate DenseNMT models through an ablation study. In order to make the comparison fair, the six models listed have roughly the same number of parameters. On De-En, Tr-En and Tr-En-morph, we see improvement from making the encoder dense, making the decoder dense, and making the attention dense. The fully dense-connected model DenseNMT-4L-1 further improves translation accuracy. By allowing more flexibility in dense attention, DenseNMT-4L-2 provides the highest BLEU scores in all three experiments. From the experiments, we have seen that enlarging the information flow in the attention block benefits the models. The dense attention block provides multi-layer information transmission from the encoder to the decoder, and to the output as well. Meanwhile, as shown by the ablation study, the dense-connected encoder and decoder both give more powerful representations than their residual-connected counterparts. As a result, the integration of the three parts improves accuracy significantly. DenseNMT with smaller embedding size From Table TABREF32 , we also observe that DenseNMT performs better with small embedding sizes than residual-connected models with a regular embedding size. For example, on Tr-En, the 8-layer DenseNMT-8L-2 model with embedding size 64 matches the BLEU score of the 8-layer BASE model with embedding size 256, while the number of parameters of the former is only INLINEFORM0 that of the latter. In all settings, the DenseNMT model with embedding size 128 is comparable to or even better than the baseline model with embedding size 256. While overlarge embedding sizes hurt accuracy because of overfitting, smaller sizes are not preferable because of insufficient representation power. However, our dense models show that with better model design, the embedding information can be concentrated in fewer dimensions, e.g., 64. This is extremely helpful when building models for mobile and small devices, where model size is critical. While other works address the efficiency issue using techniques such as separable convolution BIBREF3 and shared embeddings BIBREF4 , our DenseNMT framework is orthogonal to those approaches. We believe that combining these techniques with our DenseNMT framework would produce even more efficient models. DenseNMT compares with state-of-the-art results For the IWSLT14 German-English dataset, we compare with the best results reported in the literature.
To be consistent with prior work, we also provide results using our model directly on the dataset without BPE preprocessing. As shown in Table TABREF39 , DenseNMT outperforms the phrase-structure based network NPMT BIBREF16 (with beam size 10) by 1.2 BLEU while using a smaller beam size, and outperforms the actor-critic based algorithm BIBREF15 by 2.8 BLEU. For reference, our model trained on the BPE-preprocessed dataset achieves 32.26 BLEU, which is 1.93 BLEU higher than our word-based model. For the Turkish-English task, we compare with BIBREF19 , which uses the same morphology preprocessing as our Tr-En-morph setting. As shown in Table TABREF37 , our baseline is higher than the previous result, and we further achieve a new benchmark result with an average score of 24.36 BLEU. For WMT14 English-German, Table TABREF41 shows that DenseNMT outperforms the ConvS2S model by 0.36 BLEU while using 35% fewer training iterations and 20% fewer parameters. We also compare with another convolution-based NMT model, SliceNet BIBREF3 , which explores depthwise separable convolution architectures. SliceNet-Full matches our result, and SliceNet-Super outperforms ours by 0.58 BLEU. However, both models have 2.2x more parameters than our model. We expect the DenseNMT structure to help improve their performance as well. Conclusion In this work, we have proposed DenseNMT as a dense-connection framework for translation tasks, which uses the information from embeddings more efficiently and passes abundant information from the encoder side to the decoder side. Our experiments have shown that DenseNMT is able to speed up the information flow and improve translation accuracy. In future work, we will combine dense connections with other deep architectures, such as RNNs BIBREF7 and self-attention networks BIBREF4 .
Residual-connected encoder-decoder models (BASE-4L and the 8-layer BASE model); for WMT14 English-German, the ConvS2S model of BIBREF2
41d3ab045ef8e52e4bbe5418096551a22c5e9c43
41d3ab045ef8e52e4bbe5418096551a22c5e9c43_0
Q: what datasets were used?
IWSLT14 German-English, IWSLT14 Turkish-English, WMT14 English-German
62736ad71c76a20aee8e003c462869bab4ab4b1e
62736ad71c76a20aee8e003c462869bab4ab4b1e_0
Q: How is order of binomials tracked across time? Text: Introduction Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture. The orderings of lists in text and speech is a subject that has been repeatedly touched upon for more than a century. By far the most frequently studied aspect of list ordering is the binomial, a list of two words usually separated by a conjunction such as `and' or `or', which is the focus of our paper. The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0, who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words. In the case of a binomial consisting of a monosyllable and a disyllable, the prediction was that the monosyllable would appear first followed by the conjunction `and'. The idea was that this would give a much more standard and familiar syllable stress to the overall phrase, e.g., the binomial `bread and butter' would have the preferable rhythm compared to `butter and bread.' This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering. Binomials like this that appear in strictly one order (perhaps within the confines of some text corpus), are commonly termed frozen binomials BIBREF1, BIBREF2. Examples of frozen binomials include `salt and pepper' and `pros and cons', and explanations for their ordering in English and other languages have become increasingly complex. Early work focused almost exclusively on common frozen binomials, often drawn from everyday speech. More recent work has expanded this view to include nearly frozen binomials, binomials from large data sets such as books, and binomials of particular types such as food, names, and descriptors BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. Additionally, explanations have increasingly focused on meaning rather than just sound, implying value systems inherent to the speaker or the culture of the language's speakers (one such example is that men are usually listed before women in English BIBREF9). The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics, and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3. Thus, it is possible that understanding these orderings could reveal biases or values held by the speaker. Overall, this prior research has largely been confined to pristine examples, often relying on small samples of lists to form conclusions. Many early studies simply drew a small sample of what the author(s) considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10, BIBREF1, BIBREF11, BIBREF0, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF3. Other researchers have used books or news articles BIBREF2, BIBREF4, or small samples from the Web (web search results and Google books) BIBREF5. Many of these have lacked a large-scale text corpus and have relied on a focused set of statistics about word orderings. 
Thus, despite the long history of this line of inquiry, there is an opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people. Such an analysis could produce at least two types of benefits. First, such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time. Second, such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context. The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means. We begin our analysis by introducing several new key measures for the study of binomials, including a quantity we call asymmetry that measures how frequently a given binomial appears in some ordering. By looking at the distribution of asymmetries across a wide range of binomials, we find that most binomials are not frozen, barring a few strong exceptions. At the same time, there may still be an ordering preference. For example, `10 and 20' is not a frozen binomial; instead, the binomial ordering `10 and 20' appears 60% of the time and `20 and 10' appears 40% of time. We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time. While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable. Given this stability, one might expect that the dominant ordinality of a given binomial is still predictable, even if the binomial is not frozen. 
For example, one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases. However, we find that these simple predictors are quite poor at determining binomial ordering. On the other hand, we find that a notion of `proximity' is robust at predicting ordering in some cases. Here, the idea is that the person producing the text will list the word that is conceptually “closer” to them first — a phenomenon related to a “Me First” principle of binomial orderings suggested by Cooper and Ross BIBREF3. One way in which we study this notion of proximity is through sports team subreddits. For example, we find that when two NBA team names form a binomial on a specific team's subreddit, the team that is the subject of the subreddit tends to appear first. The other source of improved predictions comes from using word embeddings BIBREF16: we find that a model based on the positions of words in a standard pre-trained word embedding can be a remarkably reliable predictor of binomial orderings. While not applicable to all words, such as names, this type of model is strongly predictive in most cases. Since binomial orderings are in general difficult to predict individually, we explore a new way of representing the global binomial ordering structure, we form a directed graph where an edge from $i$ to $j$ means that $i$ tends to come before $j$ in binomials. These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities. For instance, in a graph formed from the binomials in a sports community, the names of sports teams and cities are closely clustered, showing that they are often used together in binomials. Similarly, we identify clusters of names, numbers, and years. The presence of cycles in these graphs are also informative. For example, cycles are rare in graphs formed from proper names in politics, suggesting a possible hierarchy of names, and at the same time very common for other binomials. This suggests that no such hierarchy exists for most of the English language, further complicating attempts to predict binomial order. Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale. Introduction ::: Related Work Interest in list orderings spans the last century BIBREF10, BIBREF1, with a focus almost exclusively on binomials. This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11, although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11. 
One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules. Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17. Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3. Frequency. More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering. Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5. Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20. Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22. 
Data We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments. Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits. We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics). The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists. No metadata is captured for these lists beyond the month and year of posting. The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL player active during 1980–2019 from basketball-reference.com and pro-football-reference.com. For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or' and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5). Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information. We additionally used wine reviews and a variety of news paper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017. 
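A simplified sketch of the "All Words" extraction rule described above is given below, assuming lowercased whitespace tokenization and an illustrative stop-word list (the paper's curated list is not reproduced); the Names Only variant, with its richer separators and metadata, is omitted.

```python
import re
from collections import Counter

# Placeholder stop-word list; the paper's curated list is not reproduced here.
STOP = {"the", "a", "an", "of", "to", "in", "is", "it"}
PATTERN = re.compile(r"\b([a-z]+)\s+(?:and|or)\s+([a-z]+)\b")


def extract_binomials(comments, min_count=30):
    """Count ordered pairs [A, B] joined by 'and'/'or' in an iterable of comments,
    keeping only pairs whose two orientations together appear at least min_count times."""
    ordered = Counter()
    for text in comments:
        for a, b in PATTERN.findall(text.lower()):
            if a not in STOP and b not in STOP and a != b:
                ordered[(a, b)] += 1
    return {(a, b): n for (a, b), n in ordered.items()
            if n + ordered.get((b, a), 0) >= min_count}
```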
There are 20 different reviewers included, but the amount of reviews each has ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017 with a few from 2015. The articles are scraped from home-page headline and RSS feeds BIBREF26. Metadata was limited for both of these data sets. Dimensions of Binomials In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions. For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}. Dimensions of Binomials ::: Definitions Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8. Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. [Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\lbrace x,y\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise. Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the word appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0. [Asymmetry] The asymmetry of an unordered list $\lbrace x,y\rbrace $ is $A_{x,y} = 2 \cdot \vert o_{x,y} - 0.5 \vert $. The Reddit data described above gives us access to new dimensions of binomials not previously addressed. We define movement as how the ordinality of a list changes over time [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in year $t \in T$. The movement of $\lbrace x,y\rbrace $ is $M_{x,y} = \max _{t \in T} o_{x,y,t} - \min _{t \in T} o_{x,y,t}$. 
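The ordinality and asymmetry definitions translate directly into code; the sketch below assumes a dictionary of ordered-pair counts such as the one produced by the extraction step, and the small example reuses the cat/dog numbers from the definition above.

```python
def ordinality(counts, x, y):
    """o_{x,y}: fraction of occurrences of {x, y} that follow alphabetical order.
    counts[(a, b)] is the number of times the ordered list [a, b] was observed."""
    first, second = sorted([x, y])
    n_fwd = counts.get((first, second), 0)
    n_rev = counts.get((second, first), 0)
    return n_fwd / (n_fwd + n_rev)


def asymmetry(counts, x, y):
    """A_{x,y} = 2 * |o_{x,y} - 0.5|: 0 for a 50/50 pair, 1 for a frozen binomial."""
    return 2 * abs(ordinality(counts, x, y) - 0.5)


# 40 x [cat, dog] and 10 x [dog, cat] gives ordinality 0.8 and asymmetry 0.6
example = {("cat", "dog"): 40, ("dog", "cat"): 10}
assert ordinality(example, "dog", "cat") == 0.8
assert round(asymmetry(example, "cat", "dog"), 6) == 0.6
```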
And agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list ${x,y}$ for data in community (subreddit) $c \in C$. The agreement of $\lbrace x,y\rbrace $ is $A_{x,y} = 1 - (\max _{c \in C} o_{x,y,c} - \min _{c \in C} o_{x,y,c})$. Dimensions of Binomials ::: Dimensions Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\lbrace x,y\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points with coordinates are entirely made up of 0s or 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement and is clearly visible as an isolated point in Fig. FIGREF8. Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials with an additional increase of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also functions for other binomials is a core piece of our analysis. Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other. In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, ['dems', 'obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes. Note also that the covariance for movement across subreddits is quite low TABREF10, and movement in one subreddit is not necessarily reflected by movement in another. Agreement. Most binomials have high agreement (Table TABREF11) but again the counterexamples are informative. For instance, [`score', `kick'] has ordinality of 0.921 in r/nba and 0.204 in r/nfl. 
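Movement and agreement follow the same pattern; the sketch below assumes per-year and per-community ordinalities have already been computed, and the function names and the tuple layout of the (asymmetry, movement, agreement) point are illustrative choices.

```python
def movement(ordinality_by_year):
    """M_{x,y}: spread of a pair's ordinality across years (year -> o_{x,y,t})."""
    values = list(ordinality_by_year.values())
    return max(values) - min(values)


def agreement(ordinality_by_community):
    """Agreement: 1 minus the spread of a pair's ordinality over communities."""
    values = list(ordinality_by_community.values())
    return 1 - (max(values) - min(values))


def dimension_point(asym, by_year, by_community):
    """The (asymmetry, movement, agreement) coordinates of one binomial in the unit cube."""
    return (asym, movement(by_year), agreement(by_community))


# e.g. the r/politics shift for ['daughter', 'son']: ordinality 0.07 in 2009, 0.36 in 2018
print(round(movement({2009: 0.07, 2018: 0.36}), 2))   # 0.29
```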
This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27. There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/poltics, 0.55 in r/The_Donald, and 0.73 in r/Conservative. Models And Predictions In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. With this, we would expect to see variation in binomial asymmetry. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success. Models And Predictions ::: Stability of Asymmetry Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal centered around $0.5$ (Fig. FIGREF9). One way of seeing this type of distribution would be if binomials are ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\lbrace x,y\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \sim \text{Bernoulli}(0.5)$. If $\lbrace x,y\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \sim \text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5. One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We first omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \in L$ in community $c \in C$ by $\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $C$. Next, for all $l \in L$ let $p^*_{l} = \max _{c \in C}(\hat{p}_{l, c}) - \min _{ c \in C}(\hat{p}_{l, c})$. The distribution of $p^*_{l}$ over $l \in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\hat{p}_{l, y} = o_{l,y}$ for $y \in Y$ our estimates. The distribution of $p^{\prime }_{l} = \max _{y \in Y}(\hat{p}_{l, y}) - \min _{y \in Y}(\hat{p}_{l, y})$ over $l \in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed. We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years. Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$. 
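The observed-variation statistics defined above can be sketched as follows: the across-community spread p*_l for a single list, and the mean standard deviation s-bar over (community, year) slices. Whether the paper uses sample or population standard deviation is not stated; population standard deviation is assumed here, and the input layouts are illustrative.

```python
import statistics


def community_spread(ordinality_by_community):
    """p*_l = max_c p_hat_{l,c} - min_c p_hat_{l,c} for one list,
    given a mapping community -> o_{l,c}."""
    vals = list(ordinality_by_community.values())
    return max(vals) - min(vals)


def mean_slice_std(ordinality_by_list):
    """s_bar: the average over lists of the standard deviation of each list's
    ordinality across its (community, year) slices.
    ordinality_by_list maps list -> {(community, year): o_{l,c,y}}."""
    stds = [statistics.pstdev(slices.values())
            for slices in ordinality_by_list.values() if len(slices) > 1]
    return statistics.mean(stds)
```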
We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years. Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$. We re-sample the data by randomizing the order of each binomial instance, sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeat this process to get sample estimates $\lbrace \bar{s}_1, \ldots , \bar{s}_{k}\rbrace $, where $k$ is the size of the set of seasons and communities. These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13).
Models And Predictions ::: Prediction Results
Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted. To formalize these notions, first consider any unordered list $\lbrace x,y\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\lbrace x,y\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy. If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$.
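These four scores can be pinned down in a few lines. The sketch below uses hypothetical inputs: a table of ordered-pair counts and a `predict(x, y)` callable implementing whatever rule is being evaluated. It is meant to make the macro/micro distinction precise, not to reproduce the exact evaluation code used here.

```python
def score_predictions(counts, predict):
    # counts[(x, y)]: number of instances observed in the order [x, y].
    # predict(x, y): the ordered pair a rule guesses for the unordered list {x, y}.
    per_list_acc, per_list_n, hits, total = [], [], 0, 0
    for pair in {frozenset(k) for k in counts}:
        x, y = sorted(pair)
        n_xy, n_yx = counts.get((x, y), 0), counts.get((y, x), 0)
        n = n_xy + n_yx
        correct = n_xy if predict(x, y) == (x, y) else n_yx
        per_list_acc.append(correct / n)
        per_list_n.append(n)
        hits += correct
        total += n
    uo = sum(per_list_acc) / len(per_list_acc)                    # macro, by token
    wo = hits / total                                             # micro, by token
    in_s = [acc >= 0.5 for acc in per_list_acc]
    ut = sum(in_s) / len(in_s)                                    # macro, by type
    wt = sum(n for ok, n in zip(in_s, per_list_n) if ok) / total  # micro, by type
    return {"UO": uo, "WO": wo, "UT": ut, "WT": wt}
```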
Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials. Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0. Within r/politics, we gathered an estimate of the number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 (Tables TABREF16 and TABREF17). In a weak sense, Jespersen was correct that monosyllabic words come before disyllabic words more often than not; and more generally, shorter words come before longer words more often than not. However, as predictors, these principles are close to random guessing.
Paired Predictions. Another measure of predictive power is predicting which of two binomials has higher asymmetry. In this case, we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures (we use the top-1000 and bottom-1000 binomials in terms of asymmetry for these tasks). For instance, we may predict that [`red', `turquoise'] is more asymmetric than [`red', `blue'] because the difference in lengths is more extreme. Overall, the basic predictors from the literature are not very successful (Table TABREF18).
Word Embeddings. If we turn to more modern approaches to text analysis, one of the most common is word embeddings BIBREF16. Word embeddings assign a vector $x_i$ to each word $i$ in the corpus, such that the relative positions of these vectors in space encode linguistically relevant relationships among the words. Using the Google News word embeddings, via a simple logistic model, we produce a vector $v^*$ and predict the ordering of a binomial on words $i$ and $j$ from $v^* \cdot (x_i - x_j)$. In this sense, $v^*$ can be thought of as a “sweep-line” direction through the space containing the word vectors, such that the ordering along this sweep-line is the predicted ordering of all binomials in the corpus. This yields surprisingly accurate results, with accuracy ranging from 70% to 85% across various subreddits (Table TABREF20), and 80–100% accuracy on frozen binomials. This is by far the best prediction method we tested. It is important to note that not all words in our binomials could be associated with an embedding, so it was necessary to remove binomials containing words such as names or slang. However, retesting our basic features on this data set did not show any improvement, implying that the drastic change in predictive power is not due to the changed data set.
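A minimal version of this sweep-line model can be written with scikit-learn and a pre-trained embedding. The snippet below is a sketch under stated assumptions: `emb` stands in for however the Google News vectors are loaded (for example via gensim), binomials containing out-of-vocabulary words are assumed to have been removed already, and each training label encodes a list's dominant order.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_sweep_line(binomials, emb):
    # binomials: list of ((x, y), label) with label = 1 if the dominant order
    # is [x, y] and 0 otherwise; emb: dict mapping word -> embedding vector.
    X = np.array([emb[x] - emb[y] for (x, y), _ in binomials])
    y = np.array([label for _, label in binomials])
    model = LogisticRegression(max_iter=1000, fit_intercept=False).fit(X, y)
    return model.coef_[0]  # the sweep-line direction v*

def predict_order(x, y, v_star, emb):
    # Predicted ordered pair for the unordered list {x, y}.
    return (x, y) if np.dot(v_star, emb[x] - emb[y]) > 0 else (y, x)
```

Fitting without an intercept keeps the model antisymmetric, so swapping $x$ and $y$ flips the prediction, which matches the sweep-line picture described above.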
Proper Nouns and the Proximity Principle
Proper nouns, and names in particular, have been a focus within the literature on frozen binomials BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20, BIBREF28, but these studies have largely concentrated on the effect of gender in ordering BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20. With Reddit data, however, we have many conversations about large numbers of celebrities, with significant background information on each. As such, we can investigate proper nouns in three subreddits: r/nba, r/nfl, and r/politics. The names we used are from NBA and NFL players (1970–2019) and congresspeople (pre-1800 and 2000–2019) respectively. We also investigated names of entities for which users might feel a strong sense of identification, such as a team or political group they support, or a subreddit to which they subscribe. We hypothesized that the group with which the user identifies the most would come first in binomial orderings. Inspired by the `Me First Principle', we call this the Proximity Principle.
Proper Nouns and the Proximity Principle ::: NBA Names
First, we examined names in r/nba. One advantage of using NBA players is that we have detailed statistics for every player in every year. We tested a number of these statistics, and while all of them predicted statistically significant numbers ($p <$ 1e-6) of binomials, they were still not very predictive in a practical sense (Table TABREF23). The best predictor was actually how often the player's team was mentioned. Interestingly, the unigram frequency (the number of times the player's name was mentioned overall) was not a good predictor. It is relevant to these observations that some team subreddits (and thus, presumably, fanbases) are significantly larger than others.
Proper Nouns and the Proximity Principle ::: Subreddit and team names
We also investigated lists of names of sports teams and subreddits as proper nouns. In this case, we exploit an interesting structure of the r/nba subreddit which is not evident at scale in other subreddits we examined. In addition to r/nba, there exist a number of subreddits that are affiliated with a particular NBA team, with the purpose of allowing discussion between fans of that team. This implies that most users in a team subreddit are fans of that team. We are then able to look for lists of NBA teams by name, city, and abbreviation. We found 2520 instances of the subreddit team coming first, and 1894 instances of the subreddit team coming second. While this is not a particularly strong predictor, correctly predicting 57% of lists, it is one of the strongest we found, and a clear illustration of the Proximity Principle. We can do a similar calculation with subreddit names, by looking between subreddits. While the team subreddits are not large enough for this calculation, many of the other subreddits are. We find that lists of subreddits in r/nba that include `r/nba' often start with `r/nba', and a similar result holds for r/nfl (Table TABREF25). While NBA team subreddits show a fairly strong preference to name themselves first, this preference is slightly less strong among sport subreddits, and even less strong among politics subreddits. One potential factor here is that r/politics is a more general subreddit, while the rest are more specific — perhaps akin to r/nba and the team subreddits.
Proper Nouns and the Proximity Principle ::: Political Names
In our case, political names are drawn from every congressperson (and their nicknames) in both houses of the US Congress through the 2018 election. It is worth noting that one of these people is Philadelph Van Trump. It is presumed that most references to `trump' refer to Donald Trump. There may be additional instances of mistaken identities. We restrict the names to only congresspeople that served before 1801 or after 1999, also including `trump'. One might guess that political subreddits refer to politicians of their preferred party first. However, this was not the case, as Republicans are mentioned first only about 43%–46% of the time in all subreddits (Table TABREF27). On the other hand, the Proximity Principle does seem to come into play when discussing ideology. For instance, r/politics — a left-leaning subreddit — is more likely to say `democrats and republicans', while the other political subreddits in our study — which are right-leaning — are more likely to say `republicans and democrats'.
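Each of these Proximity Principle tests reduces to the same simple count. A minimal sketch, with hypothetical inputs, is below; applied to the team-subreddit lists, it reproduces the split reported above, 2520 / (2520 + 1894) ≈ 0.57.

```python
def in_group_first_rate(ordered_pairs, in_group):
    # ordered_pairs: iterable of (first, second) names as they appear in lists.
    # in_group: set of aliases for the entity the community identifies with,
    # e.g. a team subreddit's own team names or a subreddit's preferred party.
    # Both inputs are hypothetical layouts used only for illustration.
    first = second = 0
    for a, b in ordered_pairs:
        if (a in in_group) != (b in in_group):  # exactly one in-group item
            if a in in_group:
                first += 1
            else:
                second += 1
    return first / (first + second) if first + second else float("nan")

# With the counts above: 2520 / (2520 + 1894) ≈ 0.571.
```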
Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name. We restrict our investigation to names that are not also English words, and only names that have a unigram frequency of at least 30. The average ratio is 0.0535, but there is significant variation across names. It is conceivable that this list ratio is revealing about how often people are talked about alone instead of in company.
Formal Text
While Reddit provides a very large corpus of informal text, McGuire and McGuire make a distinct separation between informal and formal text BIBREF28. As such, we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications. Both data sets follow the same basic principles outlined above.
Formal Text ::: Wine
Wine reviews are a highly stylized form of text. In this case, reviews are often just a few sentences, and they use a specialized vocabulary meant for wine tasting. While one might hypothesize that such stylized text exhibits more frozen binomials, this is not the case (Table TABREF28). There is some evidence of an additional freezing effect in binomials such as (`aromas', `flavors') and (`scents', `flavors'), which are both frozen in the wine reviews but not frozen on Reddit. However, this does not seem to have a more general effect. Additionally, there are a number of binomials which appear frozen on Reddit but have low asymmetry in the wine reviews, such as [`lemon', `lime'].
Formal Text ::: News
We focused our analysis on NYT, Buzzfeed, Reuters, CNN, the Washington Post, NPR, Breitbart, and the Atlantic. Much like in political subreddits, one might expect to see a split between various publications based upon ideology. However, this is not obviously the case. While there are certainly examples of binomials that seem to differ significantly for one publication or for a group of publications (Buzzfeed, in particular, frequently goes against the grain), there does not seem to be a sharp divide. Individual examples are difficult to draw conclusions from, but can suggest trends. (`China', `Russia') is a particularly controversial binomial. While the publications vary quite a bit, only Breitbart has an ordinality of above 0.5. In fact, country pairs are among the most controversial binomials within the publications (e.g. (`iraq', `syria'), (`afghanistan', `iraq')), while most other highly controversial binomials reflect other political structures, such as (`house', `senate'), (`migrants', `refugees'), and (`left', `right'). That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications. Additionally, the close similarity between Breitbart and more mainstream publications could be due to a similar effect we saw with r/The_Donald: mainly large amounts of quoted text.
Global Structure
We can discover new structure in binomial orderings by taking a more global view. We do this by building directed graphs based on ordinality. In these graphs, nodes are words and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order [A,B] at least 50% of the time. For our visualizations, the size of a node indicates how many distinct lists the word appears in, and color indicates how many list instances contain the word in total. If we examine the global structure for r/nba, we can pinpoint a number of patterns (Fig. FIGREF31). First, most nodes within the purple circle correspond to names, while most nodes outside of it are not names.
The cluster of circles in the lower left is a combination of numbers and years, where dark green corresponds to numbers, purple corresponds to years, and pink corresponds to years represented as two-digit numbers (e.g., `96'). On the right, the brown circle contains adjectives, while the blue circle above it contains heights (e.g., 6'5"), and in the two circles in the lower middle, the left contains cities while the right contains team names. The darkest red node in the center of the graph corresponds to `lebron'. Constructing a similar graph for our wines dataset, we can see clusters of words. In Fig. FIGREF32, the colors represent clusters as formed through modularity. These clusters are quite distinct. Green nodes mostly refer to the structure or body of a wine, red are adjectives describing taste, teal and purple are fruits, dark green is wine varietals, gold is senses, and light blue is time (e.g., `year', `decade', etc.).
We can also consider the graph as we change the asymmetry threshold for which an edge is included. If the asymmetry threshold is large enough, the graph is acyclic, and we can consider how small the threshold must be in order to introduce a cycle. These cycles reveal the non-global ordering of binomials. The graph for r/nba begins to show cycles with an asymmetry threshold of 0.97. Three cycles exist at this threshold: [`ball', `catch', `shooter'], [`court', `pass', `set', `athleticism'], and [`court', `plays', `set', `athleticism']. Restricting the nodes to be names is also revealing. Acyclic graphs in this context suggest a global partial hierarchy of individuals. For r/nba, the graph is no longer acyclic at an asymmetry threshold of 0.76, with the cycle [`blake', `jordan', `bryant', `kobe']. Similarly, the graph for r/nfl (only including names) is acyclic until the threshold reaches 0.73, with cycles [`tannehill', `miller', `jj watt', `aaron rodgers', `brady'] and [`hoyer', `savage', `watson', `hopkins', `miller', `jj watt', `aaron rodgers', `brady']. Figure FIGREF33 shows these graphs for the three political subreddits, where the nodes are the 30 most common politician names. The graph visualizations immediately show that these communities view politicians differently. We can also consider cycles in these graphs and find that the graph is completely acyclic when the asymmetry threshold is at least 0.9. Again, this suggests that, at least among frozen binomials, there is in fact a global partial order of names that might signal hierarchy. (Including non-names, though, causes the r/politics graph to never be acyclic for any asymmetry threshold, since the cycle [`furious', `benghazi', `fast'] consists of completely frozen binomials.) We find similar results for r/Conservative and r/Libertarian, which are acyclic with thresholds of 0.58 and 0.66, respectively. Some of these cycles at high asymmetry might be due to English words that are also names (e.g., `law'), but one particularly notable cycle from r/Conservative is [`rubio', `bush', `obama', `trump', `cruz'].
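Both the graph construction and the threshold analysis are straightforward with networkx. The sketch below assumes the same hypothetical ordered-pair counts used earlier: it draws an edge A -> B whenever the pair occurs at least 30 times with [A, B] as the majority order, and then finds the largest asymmetry threshold at which a cycle appears by adding edges from most to least asymmetric.

```python
import networkx as nx

def ordinality_graph(counts, min_count=30):
    # counts[(a, b)]: number of instances of the ordered list [a, b]
    # (a hypothetical layout). Edge attribute `asym` stores the asymmetry.
    G = nx.DiGraph()
    for a, b in {tuple(sorted(k)) for k in counts}:
        n_ab, n_ba = counts.get((a, b), 0), counts.get((b, a), 0)
        if n_ab + n_ba < min_count:
            continue
        o = n_ab / (n_ab + n_ba)
        src, dst = (a, b) if o >= 0.5 else (b, a)
        G.add_edge(src, dst, asym=2 * abs(o - 0.5))
    return G

def cycle_threshold(G):
    # Largest asymmetry threshold at which the thresholded graph has a cycle:
    # add edges from most to least asymmetric and stop at the first cycle.
    H = nx.DiGraph()
    for u, v, d in sorted(G.edges(data=True), key=lambda e: -e[2]["asym"]):
        H.add_edge(u, v)
        try:
            return d["asym"], nx.find_cycle(H)
        except nx.NetworkXNoCycle:
            continue
    return None, None
```

Restricting `counts` to name pairs before building the graph gives the name-only hierarchies discussed above.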
Multinomials
Binomials are the most studied type of list, but trinomials — lists of three — are also common enough in our dataset to analyze. Studying trinomials adds new aspects to the set of questions: for example, while binomials have only two possible orderings, trinomials have six possible orderings. However, very few trinomials show up in all six orderings. In fact, many trinomials show up in exactly one ordering: about 36% of trinomials appearing at least 30 times in the data are completely frozen. To get a baseline comparison, we found an equal number of the most common binomials, and then subsampled instances of those binomials to equate the number of instances with the trinomials. In this case, only 21% of binomials are frozen. For trinomials that show up in at least two orderings, it is most common for the last word to keep the same position (e.g., [a, b, c] and [b, a, c]). For example, in our data, [`fraud', `waste', `abuse'] appears 34 times, and [`waste', `fraud', `abuse'] appears 20 times. This may partially be explained by many lists that contain words such as `other', `whatever', or `more'; for instance, [`smarter', `better', `more'] and [`better', `smarter', `more'] are the only two orderings we observe for this set of three words. Additionally, each trinomial [a, b, c] contains three binomials within it: [a, b], [b, c], and [a, c]. It is natural to compare orderings of {a, b} in general with orderings of occurrences of {a, b} that lie inside trinomials. We use this comparison to define the compatibility of {a, b}, as follows. [Compatibility] Let {a, b} be a binomial with dominant ordering [a, b]; that is, [a, b] is at least as frequent as [b, a]. We define the compatibility of {a, b} to be the fraction of instances of {a, b} occurring inside trinomials that have the order [a,b]. There are only a few cases where binomials have compatibility less than 0.5, and for most binomials, the asymmetry is remarkably consistent between binomials and trinomials (Fig. FIGREF37). In general, asymmetry is larger than compatibility — this occurs for 4569 binomials, compared to 3575 where compatibility was greater and 690 where the two values are the same. An extreme example is the binomial {`fairness', `accuracy'}, which has asymmetry 0.77 and compatibility 0.22. It would be natural to consider these questions for tetranomials and longer lists, but these are rarer in our data and correspondingly harder to draw conclusions from.
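Compatibility is again a ratio of counts. A minimal sketch is below, assuming hypothetical ordered binomial and trinomial count tables.

```python
def compatibility(a, b, pair_counts, trinomial_counts):
    # pair_counts[(x, y)]: instances of the ordered binomial [x, y];
    # trinomial_counts[(x, y, z)]: instances of the ordered trinomial [x, y, z].
    # Both layouts are hypothetical. Returns the fraction of trinomial
    # occurrences of {a, b} that follow the binomial's dominant order.
    if pair_counts.get((b, a), 0) > pair_counts.get((a, b), 0):
        a, b = b, a  # make [a, b] the dominant order
    in_order = out_of_order = 0
    for tri, n in trinomial_counts.items():
        if a in tri and b in tri:
            if tri.index(a) < tri.index(b):
                in_order += n
            else:
                out_of_order += n
    if in_order + out_of_order == 0:
        return float("nan")  # the pair never co-occurs inside a trinomial
    return in_order / (in_order + out_of_order)
```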
Discussion
Analyzing binomial orderings on a large scale has led to surprising results. Although most binomials are not frozen in the traditional sense, there is little movement in their ordinality across time or communities. A list that appears in the order [A, B] 60% of the time in one subreddit in one year is likely to show up as [A, B] very close to 60% of the time in all subreddits in all years. This suggests that binomial order should be predictable, but there is evidence that this is difficult: the most common theories on frozen binomial ordering were largely ineffective at predicting binomial ordering in general. Given the challenge in predicting orderings, we searched for methods or principles that could yield better performance, and identified two promising approaches. First, models built on standard word embeddings produce predictions of binomial orders that are much more effective than simpler existing theories. Second, we established the Proximity Principle: the proper noun with which a speaker identifies more will tend to come first. This is evidenced when commenters refer to their sports team first, or politicians refer to their party first. Further analysis of the global structure of binomials reveals interesting patterns and a surprising acyclic nature in names. Analysis of longer lists in the form of multinomials suggests that the rules governing their orders may be different. We have also found promising results in some special cases. We expect that more domain-specific studies will offer rich structure. It is a challenge to adapt the long history of work on the question of frozen binomials to the large, messy environment of online text and social media. However, such data sources offer a unique opportunity to re-explore and redefine these questions. It seems that binomial orderings offer new insights into language, culture, and human cognition. Understanding what changes in these highly stable conventions mean — and whether or not they can be predicted — is an interesting avenue for future research.
Acknowledgements
The authors thank members of the Cornell AI, Policy, and Practice Group, and (alphabetically by first name) Cristian Danescu-Niculescu-Mizil, Ian Lomeli, Justine Zhang, and Kate Donahue for aid in accessing data and their thoughtful insight. This research was supported by NSF Award DMS-1830274, ARO Award W911NF19-1-0057, a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, and ARO MURI.
Q: What types of various community texts have been investigated for exploring global structure of binomials? Text: Introduction Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture. The orderings of lists in text and speech is a subject that has been repeatedly touched upon for more than a century. By far the most frequently studied aspect of list ordering is the binomial, a list of two words usually separated by a conjunction such as `and' or `or', which is the focus of our paper. The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0, who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words. In the case of a binomial consisting of a monosyllable and a disyllable, the prediction was that the monosyllable would appear first followed by the conjunction `and'. The idea was that this would give a much more standard and familiar syllable stress to the overall phrase, e.g., the binomial `bread and butter' would have the preferable rhythm compared to `butter and bread.' This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering. Binomials like this that appear in strictly one order (perhaps within the confines of some text corpus), are commonly termed frozen binomials BIBREF1, BIBREF2. Examples of frozen binomials include `salt and pepper' and `pros and cons', and explanations for their ordering in English and other languages have become increasingly complex. Early work focused almost exclusively on common frozen binomials, often drawn from everyday speech. More recent work has expanded this view to include nearly frozen binomials, binomials from large data sets such as books, and binomials of particular types such as food, names, and descriptors BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. Additionally, explanations have increasingly focused on meaning rather than just sound, implying value systems inherent to the speaker or the culture of the language's speakers (one such example is that men are usually listed before women in English BIBREF9). The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics, and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3. Thus, it is possible that understanding these orderings could reveal biases or values held by the speaker. Overall, this prior research has largely been confined to pristine examples, often relying on small samples of lists to form conclusions. Many early studies simply drew a small sample of what the author(s) considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10, BIBREF1, BIBREF11, BIBREF0, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF3. Other researchers have used books or news articles BIBREF2, BIBREF4, or small samples from the Web (web search results and Google books) BIBREF5. 
Many of these have lacked a large-scale text corpus and have relied on a focused set of statistics about word orderings. Thus, despite the long history of this line of inquiry, there is an opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people. Such an analysis could produce at least two types of benefits. First, such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time. Second, such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context. The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means. We begin our analysis by introducing several new key measures for the study of binomials, including a quantity we call asymmetry that measures how frequently a given binomial appears in some ordering. By looking at the distribution of asymmetries across a wide range of binomials, we find that most binomials are not frozen, barring a few strong exceptions. At the same time, there may still be an ordering preference. For example, `10 and 20' is not a frozen binomial; instead, the binomial ordering `10 and 20' appears 60% of the time and `20 and 10' appears 40% of time. We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time. While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable. 
Given this stability, one might expect that the dominant ordinality of a given binomial is still predictable, even if the binomial is not frozen. For example, one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases. However, we find that these simple predictors are quite poor at determining binomial ordering. On the other hand, we find that a notion of `proximity' is robust at predicting ordering in some cases. Here, the idea is that the person producing the text will list the word that is conceptually “closer” to them first — a phenomenon related to a “Me First” principle of binomial orderings suggested by Cooper and Ross BIBREF3. One way in which we study this notion of proximity is through sports team subreddits. For example, we find that when two NBA team names form a binomial on a specific team's subreddit, the team that is the subject of the subreddit tends to appear first. The other source of improved predictions comes from using word embeddings BIBREF16: we find that a model based on the positions of words in a standard pre-trained word embedding can be a remarkably reliable predictor of binomial orderings. While not applicable to all words, such as names, this type of model is strongly predictive in most cases. Since binomial orderings are in general difficult to predict individually, we explore a new way of representing the global binomial ordering structure, we form a directed graph where an edge from $i$ to $j$ means that $i$ tends to come before $j$ in binomials. These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities. For instance, in a graph formed from the binomials in a sports community, the names of sports teams and cities are closely clustered, showing that they are often used together in binomials. Similarly, we identify clusters of names, numbers, and years. The presence of cycles in these graphs are also informative. For example, cycles are rare in graphs formed from proper names in politics, suggesting a possible hierarchy of names, and at the same time very common for other binomials. This suggests that no such hierarchy exists for most of the English language, further complicating attempts to predict binomial order. Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale. Introduction ::: Related Work Interest in list orderings spans the last century BIBREF10, BIBREF1, with a focus almost exclusively on binomials. 
This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11, although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11. One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules. Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17. Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3. Frequency. More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering. Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5. Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20. Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. 
There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22. Data We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments. Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits. We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics). The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists. No metadata is captured for these lists beyond the month and year of posting. The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL player active during 1980–2019 from basketball-reference.com and pro-football-reference.com. For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or' and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5). 
Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information. We additionally used wine reviews and a variety of news paper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017. There are 20 different reviewers included, but the amount of reviews each has ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017 with a few from 2015. The articles are scraped from home-page headline and RSS feeds BIBREF26. Metadata was limited for both of these data sets. Dimensions of Binomials In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions. For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}. Dimensions of Binomials ::: Definitions Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8. Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. [Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\lbrace x,y\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise. Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the word appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0. [Asymmetry] The asymmetry of an unordered list $\lbrace x,y\rbrace $ is $A_{x,y} = 2 \cdot \vert o_{x,y} - 0.5 \vert $. 
The Reddit data described above gives us access to new dimensions of binomials not previously addressed. We define movement as how the ordinality of a list changes over time [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in year $t \in T$. The movement of $\lbrace x,y\rbrace $ is $M_{x,y} = \max _{t \in T} o_{x,y,t} - \min _{t \in T} o_{x,y,t}$. And agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list ${x,y}$ for data in community (subreddit) $c \in C$. The agreement of $\lbrace x,y\rbrace $ is $A_{x,y} = 1 - (\max _{c \in C} o_{x,y,c} - \min _{c \in C} o_{x,y,c})$. Dimensions of Binomials ::: Dimensions Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\lbrace x,y\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points with coordinates are entirely made up of 0s or 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement and is clearly visible as an isolated point in Fig. FIGREF8. Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials with an additional increase of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also functions for other binomials is a core piece of our analysis. Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other. In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, ['dems', 'obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes. 
Note also that the covariance for movement across subreddits is quite low TABREF10, and movement in one subreddit is not necessarily reflected by movement in another. Agreement. Most binomials have high agreement (Table TABREF11) but again the counterexamples are informative. For instance, [`score', `kick'] has ordinality of 0.921 in r/nba and 0.204 in r/nfl. This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27. There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/poltics, 0.55 in r/The_Donald, and 0.73 in r/Conservative. Models And Predictions In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. With this, we would expect to see variation in binomial asymmetry. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success. Models And Predictions ::: Stability of Asymmetry Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal centered around $0.5$ (Fig. FIGREF9). One way of seeing this type of distribution would be if binomials are ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\lbrace x,y\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \sim \text{Bernoulli}(0.5)$. If $\lbrace x,y\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \sim \text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5. One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We first omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \in L$ in community $c \in C$ by $\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $C$. Next, for all $l \in L$ let $p^*_{l} = \max _{c \in C}(\hat{p}_{l, c}) - \min _{ c \in C}(\hat{p}_{l, c})$. The distribution of $p^*_{l}$ over $l \in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\hat{p}_{l, y} = o_{l,y}$ for $y \in Y$ our estimates. The distribution of $p^{\prime }_{l} = \max _{y \in Y}(\hat{p}_{l, y}) - \min _{y \in Y}(\hat{p}_{l, y})$ over $l \in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed. We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years. 
Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$. We re-sample data by randomizing the order of each binomial instance, sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeated this process to get samples estimates $\lbrace \bar{s}_1, \ldots , \bar{s}_{k}\rbrace $, where $k$ is the size of the set of seasons and communities. These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13). Models And Predictions ::: Prediction Results Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted. To formalize these notions, first consider any unordered list $\lbrace x,y\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\lbrace x,y\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy. If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$. Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials. Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0. 
Within r/politics, we gathered an estimate of number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 (Tables TABREF16 and TABREF17). In a weak sense, Jespersen was correct that monosyllabic words come before disyllabic words more often than not; and more generally, shorter words come before longer words more often than not. However, as predictors, these principles are close to random guessing. Paired Predictions. Another measure of predictive power is predicting which of two binomials has higher asymmetry. In this case, we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures (we use the top-1000 and bottom-1000 binomials in terms of asymmetry for these tasks). For instance, we may predict that [`red', `turquoise'] is more asymmetric than [`red', `blue'] because the differences in lengths is more extreme. Overall, the basic predictors from the literature are not very successful (Table TABREF18). Word Embeddings. If we turn to more modern approaches to text analysis, one of the most common is word embeddings BIBREF16. Word embeddings assign a vector $x_i$ to each word $i$ in the corpus, such that the relative position of these vectors in space encode information lingustically relevant relationships among the words. Using the Google News word embeddings, via a simple logistic model, we produce a vector $v^*$ and predict the ordering of a binomial on words $i$ and $j$ from $v^* \cdot (x_i - x_j)$. In this sense, $v^*$ can be thought of as a “sweep-line” direction through the space containing the word vectors, such that the ordering along this sweep-line is the predicted ordering of all binomials in the corpus. This yields surprisingly accurate results, with accuracy ranging from 70% to 85% across various subreddits (Table TABREF20), and 80-100% accuracy on frozen binomials. This is by far the best prediction method we tested. It is important to note that not all words in our binomials could be associated with an embedding, so it was necessary to remove binomials containing words such as names or slang. However, retesting our basic features on this data set did not show any improvement, implying that the drastic change in predictive power is not due to the changed data set. Proper Nouns and the Proximity Principle Proper nouns, and names in particular, have been a focus within the literature on frozen binomials BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20, BIBREF28, but these studies have largely concentrated on the effect of gender in ordering BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20. With Reddit data, however, we have many conversations about large numbers of celebrities, with significant background information on each. As such, we can investigate proper nouns in three subreddits: r/nba, r/nfl, and r/politics. The names we used are from NBA and NFL players (1970–2019) and congresspeople (pre-1800 and 2000–2019) respectively. We also investigated names of entities for which users might feel a strong sense of identification, such as a team or political group they support, or a subreddit to which they subscribe. We hypothesized that the group with which the user identifies the most would come first in binomial orderings. Inspired by the `Me First Principle', we call this the Proximity Principle. Proper Nouns and the Proximity Principle ::: NBA Names First, we examined names in r/nba. 
One advantage of using NBA players is that we have detailed statistics for ever player in every year. We tested a number of these statistics, and while all of them predicted statistically significant numbers ($p <$ 1e-6) of binomials, they were still not very predictive in a practical sense (Table TABREF23). The best predictor was actually how often the player's team was mentioned. Interestingly, the unigram frequency (number of times the player's name was mentioned overall) was not a good predictor. It is relevant to these observations that some team subreddits (and thus, presumably, fanbases) are significantly larger than others. Proper Nouns and the Proximity Principle ::: Subreddit and team names Additionally, we also investigated lists of names of sports teams and subreddits as proper nouns. In this case we exploit an interesting structure of the r/nba subreddit which is not evident at scale in other subreddits we examined. In addition to r/nba, there exists a number of subreddits that are affiliated with a particular NBA team, with the purpose of allowing discussion between fans of that team. This implies that most users in a team subreddit are fans of that team. We are then able to look for lists of NBA teams by name, city, and abbreviation. We found 2520 instances of the subreddit team coming first, and 1894 instances of the subreddit team coming second. While this is not a particularly strong predictor, correctly predicting 57% of lists, it is one of the strongest we found, and a clear illustration of the Proximity Principle. We can do a similar calculation with subreddit names, by looking between subreddits. While the team subreddits are not large enough for this calculation, many of the other subreddits are. We find that lists of subreddits in r/nba that include `r/nba' often start with `r/nba', and a similar result holds for r/nfl (Table TABREF25). While NBA team subreddits show a fairly strong preference to name themselves first, this preference is slightly less strong among sport subreddits, and even less strong among politics subreddits. One potential factor here is that r/politics is a more general subreddit, while the rest are more specific — perhaps akin to r/nba and the team subreddits. Proper Nouns and the Proximity Principle ::: Political Names In our case, political names are drawn from every congressperson (and their nicknames) in both houses of the US Congress through the 2018 election. It is worth noting that one of these people is Philadelph Van Trump. It is presumed that most references to `trump' refer to Donald Trump. There may be additional instances of mistaken identities. We restrict the names to only congresspeople that served before 1801 or after 1999, also including `trump'. One might guess that political subreddits refer to politicians of their preferred party first. However, this was not the case, as Republicans are mentioned first only about 43%–46% of the time in all subreddits (Table TABREF27). On the other hand, the Proximity Principle does seem to come into play when discussing ideology. For instance, r/politics — a left-leaning subreddit — is more likely to say `democrats and republicans' while the other political subreddits in our study — which are right-leaning — are more likely to say `republicans and democrats'. Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name. 
We restrict our investigation to names that are not also English words, and only names that have a unigram frequency of at least 30. The average ratio is 0.0535, but there is significant variation across names. It is conceivable that this list ratio is revealing about how often people are talked about alone instead of in company. Formal Text While Reddit provides a very large corpus of informal text, McGuire and McGuire make a distinct separation between informal and formal text BIBREF28. As such, we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications. Both data sets follow the same basic principles outlined above. Formal Text ::: Wine Wine reviews are a highly stylized form of text. In this case reviews are often just a few sentences, and they use a specialized vocabulary meant for wine tasting. While one might hypothesize that such stylized text exhibits more frozen binomials, this is not the case (Table TABREF28). There is some evidence of an additional freezing effect in binomials such as (`aromas', `flavors') and (`scents', `flavors'), which are both frozen in the wine reviews but are not frozen on Reddit. However, this does not seem to have a more general effect. Additionally, there are a number of binomials which appear frozen on Reddit, but have low asymmetry in the wine reviews, such as [`lemon', `lime']. Formal Text ::: News We focused our analysis on NYT, Buzzfeed, Reuters, CNN, the Washington Post, NPR, Breitbart, and the Atlantic. Much like in political subreddits, one might expect to see a split between various publications based upon ideology. However, this is not obviously the case. While there are certainly examples of binomials that seem to differ significantly for one publication or for a group of publications (Buzzfeed, in particular, frequently goes against the grain), there does not seem to be a sharp divide. Individual examples are difficult to draw conclusions from, but can suggest trends. (`China', `Russia') is a particularly controversial binomial. While the publications vary quite a bit, only Breitbart has an ordinality of above 0.5. In fact, country pairs are among the most controversial binomials within the publications (e.g., (`iraq', `syria'), (`afghanistan', `iraq')), while most other highly controversial binomials reflect other political structures, such as (`house', `senate'), (`migrants', `refugees'), and (`left', `right'). That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications. Additionally, the close similarity between Breitbart and more mainstream publications could be due to a similar effect we saw with r/The_Donald: mainly large amounts of quoted text. Global Structure We can discover new structure in binomial orderings by taking a more global view. We do this by building directed graphs based on ordinality. In these graphs, nodes are words and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order [A,B] at least 50% of the time. For our visualizations, the size of the node indicates how many distinct lists the word appears in, and color indicates how many list instances contain the word in total. If we examine the global structure for r/nba, we can pinpoint a number of patterns (Fig. FIGREF31). First, most nodes within the purple circle correspond to names, while most nodes outside of it are not names.
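The construction of these ordinality graphs can be sketched directly from ordered-pair counts. The counts below are hypothetical, and networkx is an assumed choice of graph library, not one named by the authors.

```python
import networkx as nx
from collections import Counter

# Hypothetical ordered-pair counts: n[(A, B)] = number of list instances with order [A, B].
n = Counter({("salt", "pepper"): 95, ("pepper", "salt"): 5,
             ("lebron", "kobe"): 40, ("kobe", "lebron"): 35})

G = nx.DiGraph()
for a, b in {tuple(sorted(p)) for p in n}:      # visit each unordered pair once
    total = n[(a, b)] + n[(b, a)]
    if total < 30:                              # frequency threshold from the text
        continue
    # The edge points from the word that comes first at least 50% of the time.
    if n[(a, b)] >= n[(b, a)]:
        G.add_edge(a, b, weight=n[(a, b)] / total)
    else:
        G.add_edge(b, a, weight=n[(b, a)] / total)

print(list(G.edges(data=True)))
```

Here the edge attribute stores the dominant-order fraction, which makes it easy to later filter edges by asymmetry.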
The cluster of circles in the lower left is a combination of numbers and years, where dark green corresponds to numbers, purple corresponds to years, and pink corresponds to years represented as two-digit numbers (e.g., `96'). On the right, the brown circle contains adjectives, while the blue circle above it contains heights (e.g., 6'5"), and in the two circles in the lower middle, the left contains cities while the right contains team names. The darkest red node in the center of the graph corresponds to `lebron'. Constructing a similar graph for our wines dataset, we can see clusters of words. In Fig. FIGREF32, the colors represent clusters as formed through modularity. These clusters are quite distinct. Green nodes mostly refer to the structure or body of a wine, red are adjectives describing taste, teal and purple are fruits, dark green is wine varietals, gold is senses, and light blue is time (e.g., `year', `decade', etc.). We can also consider the graph as we change the threshold of asymmetry for which an edge is included. If the asymmetry is large enough, the graph is acyclic, and we can consider how small the asymmetry threshold must be in order to introduce a cycle. These cycles reveal the non-global ordering of binomials. The graph for r/nba begins to show cycles with a threshold asymmetry of 0.97. Three cycles exist at this threshold: [`ball', `catch', `shooter'], [`court', `pass', `set', `athleticism'], and [`court', `plays', `set', `athleticism']. Restricting the nodes to be names is also revealing. Acyclic graphs in this context suggest a global partial hierarchy of individuals. For r/nba, the graph is no longer acyclic at an asymmetry threshold of 0.76, with the cycle [`blake', `jordan', `bryant', `kobe']. Similarly, the graph for r/nfl (only including names) is acyclic until the threshold reaches 0.73, with cycles [`tannehill', `miller', `jj watt', `aaron rodgers', `brady'] and [`hoyer', `savage', `watson', `hopkins', `miller', `jj watt', `aaron rodgers', `brady']. Figure FIGREF33 shows these graphs for the three political subreddits, where the nodes are the 30 most common politician names. The graph visualizations immediately show that these communities view politicians differently. We can also consider cycles in these graphs and find that the graph is completely acyclic when the asymmetry threshold is at least 0.9. Again, this suggests that, at least among frozen binomials, there is in fact a global partial order of names that might signal hierarchy. (Including non-names, though, causes the r/politics graph to never be acyclic for any asymmetry threshold, since the cycle [`furious', `benghazi', `fast'] consists of completely frozen binomials.) We find similar results for r/Conservative and r/Libertarian, which are acyclic with thresholds of 0.58 and 0.66, respectively. Some of these cycles at high asymmetry might be due to English words that are also names (e.g., `law'), but one particularly notable cycle from r/Conservative is [`rubio', `bush', `obama', `trump', `cruz']. Multinomials Binomials are the most studied type of list, but trinomials — lists of three — are also common enough in our dataset to analyze. Studying trinomials adds new aspects to the set of questions: for example, while binomials have only two possible orderings, trinomials have six possible orderings. However, very few trinomials show up in all six orderings.
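Before examining trinomials in detail, here is a sketch of the threshold analysis used above: keep only edges whose asymmetry meets a threshold, and lower the threshold until the filtered graph first contains a cycle. It assumes a graph like the one built in the previous sketch, with each edge's dominant-order fraction stored as `weight`.

```python
import networkx as nx

def acyclic_threshold(G, step=0.01):
    """Largest threshold (scanning downward from 1.0) at which the asymmetry-filtered
    graph first contains a cycle; returns None if even the unfiltered graph is acyclic.
    Assumes each edge carries 'weight' = fraction of instances in the dominant order."""
    t = 1.0
    while t >= 0.0:
        edges = [(u, v) for u, v, d in G.edges(data=True)
                 if 2 * d["weight"] - 1 >= t]        # asymmetry = 2 * fraction - 1
        H = nx.DiGraph(edges)
        if not nx.is_directed_acyclic_graph(H):
            return round(t, 2)
        t -= step
    return None

# Example usage with a graph G built as in the previous sketch:
# print(acyclic_threshold(G))
```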
In fact, many trinomials show up in exactly one ordering: about 36% of trinomials appearing at least 30 times in the data are completely frozen. To get a baseline comparison, we found an equal number of the most common binomials, and then subsampled instances of those binomials to equate the number of instances with the trinomials. In this case, only 21% of binomials are frozen. For trinomials that show up in at least two orderings, it is most common for the last word to keep the same position (e.g., [a, b, c] and [b, a, c]). For example, in our data, [`fraud', `waste', `abuse'] appears 34 times, and [`waste', `fraud', `abuse'] appears 20 times. This may partially be explained by many lists that contain words such as `other', `whatever', or `more'; for instance, [`smarter', `better', `more'] and [`better', `smarter', `more'] are the only two orderings we observe for this set of three words. Additionally, each trinomial [a, b, c] contains three binomials within it: [a, b], [b, c], and [a, c]. It is natural to compare orderings of {a, b} in general with orderings of occurrences of {a, b} that lie inside trinomials. We use this comparison to define the compatibility of {a, b}, as follows. Compatibility Let {a, b} be a binomial with dominant ordering [a, b]; that is, [a, b] is at least as frequent as [b, a]. We define the compatibility of {a, b} to be the fraction of instances of {a, b} occurring inside trinomials that have the order [a,b]. There are only a few cases where binomials have compatibility less than 0.5, and for most binomials, the asymmetry is remarkably consistent between binomials and trinomials (Fig. FIGREF37). In general, asymmetry is larger than compatibility — this occurs for 4569 binomials, compared to 3575 where compatibility was greater and 690 where the two values are the same. An extreme example is the binomial {`fairness', `accuracy'}, which has asymmetry 0.77 and compatibility 0.22. It would be natural to consider these questions for tetranomials and longer lists, but these are rarer in our data and correspondingly harder to draw conclusions from. Discussion Analyzing binomial orderings on a large scale has led to surprising results. Although most binomials are not frozen in the traditional sense, there is little movement in their ordinality across time or communities. A list that appears in the order [A, B] 60% of the time in one subreddit in one year is likely to show up as [A, B] very close to 60% of the time in all subreddits in all years. This suggests that binomial order should be predictable, but there is evidence that this is difficult: the most common theories on frozen binomial ordering were largely ineffective at predicting binomial ordering in general. Given the challenge in predicting orderings, we searched for methods or principles that could yield better performance, and identified two promising approaches. First, models built on standard word embeddings produce predictions of binomial orders that are much more effective than simpler existing theories. Second, we established the Proximity Principle: the proper noun with which a speaker identifies more will tend to come first. This is evidenced when commenters refer to their sports team first, or politicians refer to their party first. Further analysis of the global structure of binomials reveals interesting patterns and a surprising acyclic nature in names. Analysis of longer lists in the form of multinomials suggests that the rules governing their orders may be different.
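To make the binomial-versus-trinomial comparison concrete, the compatibility measure defined above can be computed with a short sketch; the instance lists are hypothetical stand-ins for the corpus data.

```python
from collections import Counter

# Hypothetical observed instances (ordered tuples).
binomial_instances  = [("fraud", "waste")] * 9 + [("waste", "fraud")] * 1
trinomial_instances = [("fraud", "waste", "abuse")] * 34 + [("waste", "fraud", "abuse")] * 20

def dominant_order(pair, binomials):
    """Return the pair's more frequent ordering among standalone binomial instances."""
    c = Counter(binomials)
    a, b = pair
    return (a, b) if c[(a, b)] >= c[(b, a)] else (b, a)

def compatibility(pair, binomials, trinomials):
    """Fraction of trinomial occurrences of the pair that respect its dominant binomial order."""
    a, b = dominant_order(pair, binomials)
    agree = total = 0
    for tri in trinomials:
        if a in tri and b in tri:
            total += 1
            agree += tri.index(a) < tri.index(b)
    return agree / total if total else None

print(compatibility(("fraud", "waste"), binomial_instances, trinomial_instances))  # 34/54, about 0.63
```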
We have also found promising results in some special cases. We expect that more domain-specific studies will offer rich structure. It is a challenge to adapt the long history of work on the question of frozen binomials to the large, messy environment of online text and social media. However, such data sources offer a unique opportunity to re-explore and redefine these questions. It seems that binomial orderings offer new insights into language, culture, and human cognition. Understanding what changes in these highly stable conventions mean — and whether or not they can be predicted — is an interesting avenue for future research. Acknowledgements The authors thank members of the Cornell AI, Policy, and Practice Group, and (alphabetically by first name) Cristian Danescu-Niculescu-Mizil, Ian Lomeli, Justine Zhang, and Kate Donahue for aid in accessing data and their thoughtful insight. This research was supported by NSF Award DMS-1830274, ARO Award W911NF19-1-0057, a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, and ARO MURI.
news publications, wine reviews, and Reddit
a1917232441890a89b9a268ad8f987184fa50f7a
a1917232441890a89b9a268ad8f987184fa50f7a_0
Q: Are there any new finding in analasys of trinomials that was not present binomials? Text: Introduction Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture. The orderings of lists in text and speech is a subject that has been repeatedly touched upon for more than a century. By far the most frequently studied aspect of list ordering is the binomial, a list of two words usually separated by a conjunction such as `and' or `or', which is the focus of our paper. The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0, who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words. In the case of a binomial consisting of a monosyllable and a disyllable, the prediction was that the monosyllable would appear first followed by the conjunction `and'. The idea was that this would give a much more standard and familiar syllable stress to the overall phrase, e.g., the binomial `bread and butter' would have the preferable rhythm compared to `butter and bread.' This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering. Binomials like this that appear in strictly one order (perhaps within the confines of some text corpus), are commonly termed frozen binomials BIBREF1, BIBREF2. Examples of frozen binomials include `salt and pepper' and `pros and cons', and explanations for their ordering in English and other languages have become increasingly complex. Early work focused almost exclusively on common frozen binomials, often drawn from everyday speech. More recent work has expanded this view to include nearly frozen binomials, binomials from large data sets such as books, and binomials of particular types such as food, names, and descriptors BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. Additionally, explanations have increasingly focused on meaning rather than just sound, implying value systems inherent to the speaker or the culture of the language's speakers (one such example is that men are usually listed before women in English BIBREF9). The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics, and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3. Thus, it is possible that understanding these orderings could reveal biases or values held by the speaker. Overall, this prior research has largely been confined to pristine examples, often relying on small samples of lists to form conclusions. Many early studies simply drew a small sample of what the author(s) considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10, BIBREF1, BIBREF11, BIBREF0, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF3. Other researchers have used books or news articles BIBREF2, BIBREF4, or small samples from the Web (web search results and Google books) BIBREF5. 
Many of these have lacked a large-scale text corpus and have relied on a focused set of statistics about word orderings. Thus, despite the long history of this line of inquiry, there is an opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people. Such an analysis could produce at least two types of benefits. First, such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time. Second, such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context. The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means. We begin our analysis by introducing several new key measures for the study of binomials, including a quantity we call asymmetry that measures how frequently a given binomial appears in some ordering. By looking at the distribution of asymmetries across a wide range of binomials, we find that most binomials are not frozen, barring a few strong exceptions. At the same time, there may still be an ordering preference. For example, `10 and 20' is not a frozen binomial; instead, the binomial ordering `10 and 20' appears 60% of the time and `20 and 10' appears 40% of time. We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time. While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable. 
Given this stability, one might expect that the dominant ordinality of a given binomial is still predictable, even if the binomial is not frozen. For example, one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases. However, we find that these simple predictors are quite poor at determining binomial ordering. On the other hand, we find that a notion of `proximity' is robust at predicting ordering in some cases. Here, the idea is that the person producing the text will list the word that is conceptually “closer” to them first — a phenomenon related to a “Me First” principle of binomial orderings suggested by Cooper and Ross BIBREF3. One way in which we study this notion of proximity is through sports team subreddits. For example, we find that when two NBA team names form a binomial on a specific team's subreddit, the team that is the subject of the subreddit tends to appear first. The other source of improved predictions comes from using word embeddings BIBREF16: we find that a model based on the positions of words in a standard pre-trained word embedding can be a remarkably reliable predictor of binomial orderings. While not applicable to all words, such as names, this type of model is strongly predictive in most cases. Since binomial orderings are in general difficult to predict individually, we explore a new way of representing the global binomial ordering structure: we form a directed graph where an edge from $i$ to $j$ means that $i$ tends to come before $j$ in binomials. These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities. For instance, in a graph formed from the binomials in a sports community, the names of sports teams and cities are closely clustered, showing that they are often used together in binomials. Similarly, we identify clusters of names, numbers, and years. The presence of cycles in these graphs is also informative. For example, cycles are rare in graphs formed from proper names in politics, suggesting a possible hierarchy of names, and at the same time very common for other binomials. This suggests that no such hierarchy exists for most of the English language, further complicating attempts to predict binomial order. Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale. Introduction ::: Related Work Interest in list orderings spans the last century BIBREF10, BIBREF1, with a focus almost exclusively on binomials.
This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11, although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11. One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules. Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17. Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3. Frequency. More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering. Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5. Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20. Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. 
There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22. Data We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments. Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits. We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics). The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists. No metadata is captured for these lists beyond the month and year of posting. The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL players active during 1980–2019 from basketball-reference.com and pro-football-reference.com. For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or', and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5).
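A simplified version of the All Words extraction can be written as a single regular-expression pass over each comment. The pattern and the tiny stop-word set below are illustrative assumptions, not the authors' exact filters.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "i", "you"}   # illustrative, not the paper's list

# Two words separated by 'and' or 'or'; words are runs of letters here for simplicity.
BINOMIAL_RE = re.compile(r"\b([a-z]+)\s+(?:and|or)\s+([a-z]+)\b")

def extract_binomials(comment):
    """Return ordered word pairs found in one comment, dropping stop-word pairs."""
    return [(a, b) for a, b in BINOMIAL_RE.findall(comment.lower())
            if a not in STOPWORDS and b not in STOPWORDS]

counts = Counter()
for comment in ["Salt and pepper are great", "I like pepper and salt", "bread and butter"]:
    counts.update(extract_binomials(comment))

print(counts)   # each observed ordering is counted separately
```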
Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information. We additionally used wine reviews and a variety of news paper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017. There are 20 different reviewers included, but the number of reviews each has ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017 with a few from 2015. The articles are scraped from home-page headlines and RSS feeds BIBREF26. Metadata was limited for both of these data sets. Dimensions of Binomials In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions. For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}. Dimensions of Binomials ::: Definitions Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how to describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8. Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. [Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\lbrace x,y\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise. Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the list appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0. [Asymmetry] The asymmetry of an unordered list $\lbrace x,y\rbrace $ is $A_{x,y} = 2 \cdot \vert o_{x,y} - 0.5 \vert $.
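To make these definitions concrete, here is a minimal sketch of ordinality and asymmetry computed from ordered-pair counts; the counts are hypothetical.

```python
from collections import Counter

# Hypothetical counts of ordered pairs: n[(x, y)] = number of times [x, y] was observed.
n = Counter({("cat", "dog"): 40, ("dog", "cat"): 10, ("bow", "arrow"): 120})

def ordinality(x, y, counts):
    """o_{x,y}: fraction of instances in the alphabetical (reference) order."""
    a, b = sorted((x, y))
    n_ab, n_ba = counts[(a, b)], counts[(b, a)]   # Counter returns 0 for unseen orders
    return n_ab / (n_ab + n_ba)

def asymmetry(x, y, counts):
    """A_{x,y} = 2 * |o_{x,y} - 0.5|; 1 means frozen, 0 means perfectly balanced."""
    return 2 * abs(ordinality(x, y, counts) - 0.5)

print(ordinality("cat", "dog", n))   # 0.8
print(asymmetry("cat", "dog", n))    # about 0.6
print(asymmetry("bow", "arrow", n))  # 1.0 (frozen)
```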
The Reddit data described above gives us access to new dimensions of binomials not previously addressed. We define movement as how the ordinality of a list changes over time. [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in year $t \in T$. The movement of $\lbrace x,y\rbrace $ is $M_{x,y} = \max _{t \in T} o_{x,y,t} - \min _{t \in T} o_{x,y,t}$. And agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in community (subreddit) $c \in C$. The agreement of $\lbrace x,y\rbrace $ is $G_{x,y} = 1 - (\max _{c \in C} o_{x,y,c} - \min _{c \in C} o_{x,y,c})$. Dimensions of Binomials ::: Dimensions Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\lbrace x,y\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points whose coordinates are entirely made up of 0s or 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement and is clearly visible as an isolated point in Fig. FIGREF8. Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials with an additional increase of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also function for other binomials is a core piece of our analysis. Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other. In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, [`dems', `obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile, the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes.
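Movement and agreement, as defined earlier in this section, follow the same pattern and can be computed from per-year and per-community ordinalities; the values below are hypothetical illustrations.

```python
# Hypothetical per-slice ordinalities for one unordered list, by year and by subreddit.
o_by_year = {("daughter", "son"): {2009: 0.07, 2013: 0.21, 2018: 0.36}}
o_by_sub  = {("daughter", "son"): {"r/politics": 0.36, "r/nba": 0.41, "r/nfl": 0.39}}

def movement(l, per_year):
    """M_l = max_t o_{l,t} - min_t o_{l,t}."""
    vals = per_year[l].values()
    return max(vals) - min(vals)

def agreement(l, per_community):
    """G_l = 1 - (max_c o_{l,c} - min_c o_{l,c})."""
    vals = per_community[l].values()
    return 1 - (max(vals) - min(vals))

l = ("daughter", "son")
print(movement(l, o_by_year))   # about 0.29
print(agreement(l, o_by_sub))   # about 0.95
```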
Note also that the covariance for movement across subreddits is quite low (Table TABREF10), and movement in one subreddit is not necessarily reflected by movement in another. Agreement. Most binomials have high agreement (Table TABREF11), but again the counterexamples are informative. For instance, [`score', `kick'] has ordinality of 0.921 in r/nba and 0.204 in r/nfl. This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27. There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/politics, 0.55 in r/The_Donald, and 0.73 in r/Conservative. Models And Predictions In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. Under this model, we would expect to see variation in binomial asymmetry. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success. Models And Predictions ::: Stability of Asymmetry Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal centered around $0.5$ (Fig. FIGREF9). One way of seeing this type of distribution would be if binomials are ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\lbrace x,y\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \sim \text{Bernoulli}(0.5)$. If $\lbrace x,y\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \sim \text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5. One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We first omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \in L$ in community $c \in C$ by $\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $c$. Next, for all $l \in L$ let $p^*_{l} = \max _{c \in C}(\hat{p}_{l, c}) - \min _{c \in C}(\hat{p}_{l, c})$. The distribution of $p^*_{l}$ over $l \in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\hat{p}_{l, y} = o_{l,y}$ for $y \in Y$ our estimates. The distribution of $p^{\prime }_{l} = \max _{y \in Y}(\hat{p}_{l, y}) - \min _{y \in Y}(\hat{p}_{l, y})$ over $l \in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed. We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years.
Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$. We re-sample data by randomizing the order of each binomial instance, sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeated this process to get samples estimates $\lbrace \bar{s}_1, \ldots , \bar{s}_{k}\rbrace $, where $k$ is the size of the set of seasons and communities. These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13). Models And Predictions ::: Prediction Results Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted. To formalize these notions, first consider any unordered list $\lbrace x,y\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\lbrace x,y\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy. If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$. Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials. Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0. 
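The four evaluation measures described above can be pinned down with a short sketch; the counts and the alphabetical-order baseline predictor are hypothetical choices used only to show how UO, WO, UT, and WT are computed.

```python
from collections import Counter

# Hypothetical ordered-pair counts and a simple alphabetical-order baseline predictor.
n = Counter({("cat", "dog"): 40, ("dog", "cat"): 10,
             ("salt", "pepper"): 95, ("pepper", "salt"): 5})
pairs = {tuple(sorted(p)) for p in n}

def predict(a, b):
    return (a, b) if a < b else (b, a)   # guess alphabetical order

def scores(pairs, counts, predict):
    accs, sizes = [], []
    for a, b in pairs:
        correct = counts[predict(a, b)]
        size = counts[(a, b)] + counts[(b, a)]
        accs.append(correct / size)
        sizes.append(size)
    total = sum(sizes)
    UO = sum(accs) / len(accs)                             # unweighted token score (macro)
    WO = sum(a * s for a, s in zip(accs, sizes)) / total   # weighted token score (micro)
    wins = [a >= 0.5 for a in accs]                        # types predicted correctly at least half the time
    UT = sum(wins) / len(wins)                             # unweighted type score
    WT = sum(s for w, s in zip(wins, sizes) if w) / total  # weighted type score
    return UO, WO, UT, WT

print(scores(pairs, n, predict))   # the alphabetical baseline fails on {pepper, salt}
```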
Trinomials are likely to appear in exactly one order
574f17134e4dd041c357ebb75a7ef590da294d22
574f17134e4dd041c357ebb75a7ef590da294d22_0
Q: What new model is proposed for binomial lists? Text: Introduction Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture. The orderings of lists in text and speech is a subject that has been repeatedly touched upon for more than a century. By far the most frequently studied aspect of list ordering is the binomial, a list of two words usually separated by a conjunction such as `and' or `or', which is the focus of our paper. The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0, who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words. In the case of a binomial consisting of a monosyllable and a disyllable, the prediction was that the monosyllable would appear first followed by the conjunction `and'. The idea was that this would give a much more standard and familiar syllable stress to the overall phrase, e.g., the binomial `bread and butter' would have the preferable rhythm compared to `butter and bread.' This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering. Binomials like this that appear in strictly one order (perhaps within the confines of some text corpus), are commonly termed frozen binomials BIBREF1, BIBREF2. Examples of frozen binomials include `salt and pepper' and `pros and cons', and explanations for their ordering in English and other languages have become increasingly complex. Early work focused almost exclusively on common frozen binomials, often drawn from everyday speech. More recent work has expanded this view to include nearly frozen binomials, binomials from large data sets such as books, and binomials of particular types such as food, names, and descriptors BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. Additionally, explanations have increasingly focused on meaning rather than just sound, implying value systems inherent to the speaker or the culture of the language's speakers (one such example is that men are usually listed before women in English BIBREF9). The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics, and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3. Thus, it is possible that understanding these orderings could reveal biases or values held by the speaker. Overall, this prior research has largely been confined to pristine examples, often relying on small samples of lists to form conclusions. Many early studies simply drew a small sample of what the author(s) considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10, BIBREF1, BIBREF11, BIBREF0, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF3. Other researchers have used books or news articles BIBREF2, BIBREF4, or small samples from the Web (web search results and Google books) BIBREF5. Many of these have lacked a large-scale text corpus and have relied on a focused set of statistics about word orderings. 
Thus, despite the long history of this line of inquiry, there is an opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people. Such an analysis could produce at least two types of benefits. First, such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time. Second, such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context. The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means. We begin our analysis by introducing several new key measures for the study of binomials, including a quantity we call asymmetry that measures how frequently a given binomial appears in some ordering. By looking at the distribution of asymmetries across a wide range of binomials, we find that most binomials are not frozen, barring a few strong exceptions. At the same time, there may still be an ordering preference. For example, `10 and 20' is not a frozen binomial; instead, the binomial ordering `10 and 20' appears 60% of the time and `20 and 10' appears 40% of time. We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time. While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable. Given this stability, one might expect that the dominant ordinality of a given binomial is still predictable, even if the binomial is not frozen. 
For example, one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases. However, we find that these simple predictors are quite poor at determining binomial ordering. On the other hand, we find that a notion of `proximity' is robust at predicting ordering in some cases. Here, the idea is that the person producing the text will list the word that is conceptually “closer” to them first — a phenomenon related to a “Me First” principle of binomial orderings suggested by Cooper and Ross BIBREF3. One way in which we study this notion of proximity is through sports team subreddits. For example, we find that when two NBA team names form a binomial on a specific team's subreddit, the team that is the subject of the subreddit tends to appear first. The other source of improved predictions comes from using word embeddings BIBREF16: we find that a model based on the positions of words in a standard pre-trained word embedding can be a remarkably reliable predictor of binomial orderings. While not applicable to all words, such as names, this type of model is strongly predictive in most cases. Since binomial orderings are in general difficult to predict individually, we explore a new way of representing the global binomial ordering structure: we form a directed graph where an edge from $i$ to $j$ means that $i$ tends to come before $j$ in binomials. These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities. For instance, in a graph formed from the binomials in a sports community, the names of sports teams and cities are closely clustered, showing that they are often used together in binomials. Similarly, we identify clusters of names, numbers, and years. The presence of cycles in these graphs is also informative. For example, cycles are rare in graphs formed from proper names in politics, suggesting a possible hierarchy of names, and at the same time very common for other binomials. This suggests that no such hierarchy exists for most of the English language, further complicating attempts to predict binomial order. Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale. Introduction ::: Related Work Interest in list orderings spans the last century BIBREF10, BIBREF1, with a focus almost exclusively on binomials.
One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules. Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17. Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3. Frequency. More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering. Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5. Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20. Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22. 
Data We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments. Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits. We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics). The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists (a rough sketch of this extraction appears at the end of this section). No metadata is captured for these lists beyond the month and year of posting. The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL players active during 1980–2019 from basketball-reference.com and pro-football-reference.com. For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or', and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5). Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information. We additionally used wine reviews and a variety of newspaper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017.
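A rough sketch of the All Words step described above (this is not the exact pipeline; the regular expression and stop-word list here are simplified placeholders):

```python
import re
from collections import Counter

# Placeholder stop-word list; the curated list used in the study is not reproduced here.
STOP_WORDS = {"the", "a", "an", "if", "then", "so", "not"}

# Two word-like tokens joined by `and' or `or'.
BINOMIAL_RE = re.compile(r"\b([\w']+)\s+(?:and|or)\s+([\w']+)\b")

def extract_binomials(comments, min_count=30):
    """Count ordered [A, B] instances in raw comment text, dropping stop-words
    and any pair seen fewer than min_count times over both orientations."""
    ordered = Counter()
    for text in comments:
        for a, b in BINOMIAL_RE.findall(text.lower()):
            if a in STOP_WORDS or b in STOP_WORDS or a == b:
                continue
            ordered[(a, b)] += 1
    pair_totals = Counter()
    for (a, b), n in ordered.items():
        pair_totals[frozenset((a, b))] += n
    return {pair: n for pair, n in ordered.items()
            if pair_totals[frozenset(pair)] >= min_count}
```

The Names Only variant would instead match against the curated name list and record the richer metadata described above.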
The wine data includes 20 different reviewers, but the number of reviews per reviewer ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017 with a few from 2015. The articles are scraped from home-page headline and RSS feeds BIBREF26. Metadata was limited for both of these data sets. Dimensions of Binomials In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions. For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}. Dimensions of Binomials ::: Definitions Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how to describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8. Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. [Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\lbrace x,y\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise. Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the list appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0. [Asymmetry] The asymmetry of an unordered list $\lbrace x,y\rbrace $ is $A_{x,y} = 2 \cdot \vert o_{x,y} - 0.5 \vert $. The Reddit data described above gives us access to new dimensions of binomials not previously addressed. We define movement as how the ordinality of a list changes over time. [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in year $t \in T$. The movement of $\lbrace x,y\rbrace $ is $M_{x,y} = \max _{t \in T} o_{x,y,t} - \min _{t \in T} o_{x,y,t}$.
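These three quantities translate directly into code; the following is a minimal sketch (not the authors' implementation), using alphabetical order as the reference order, as in the definitions above.

```python
def ordinality(n_xy, n_yx, x, y):
    """Ordinality of {x, y}: the fraction of instances appearing in the
    reference (alphabetical) order, given counts for [x, y] and [y, x]."""
    total = n_xy + n_yx
    return (n_xy if x < y else n_yx) / total

def asymmetry(o):
    """Asymmetry of a pair with ordinality o: 0 for an even split, 1 for a
    frozen binomial."""
    return 2 * abs(o - 0.5)

def movement(ordinality_by_year):
    """Movement: the range of a pair's ordinality across years."""
    values = list(ordinality_by_year.values())
    return max(values) - min(values)

# Example from the text: [`cat', `dog'] seen 40 times, [`dog', `cat'] 10 times.
o = ordinality(40, 10, "cat", "dog")   # 0.8
a = asymmetry(o)                       # 0.6
```

Agreement, defined next, is computed the same way as movement but over communities rather than years, and then subtracted from one.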
Finally, agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in community (subreddit) $c \in C$. The agreement of $\lbrace x,y\rbrace $ is $G_{x,y} = 1 - (\max _{c \in C} o_{x,y,c} - \min _{c \in C} o_{x,y,c})$. Dimensions of Binomials ::: Dimensions Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\lbrace x,y\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points whose coordinates are entirely made up of 0s or 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement, and is clearly visible as an isolated point in Fig. FIGREF8. Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials with an additional increase of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also function for other binomials is a core piece of our analysis. Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba, a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other. In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, [`dems', `obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile, the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes. Note also that the covariance for movement across subreddits is quite low (Table TABREF10), and movement in one subreddit is not necessarily reflected by movement in another. Agreement. Most binomials have high agreement (Table TABREF11), but again the counterexamples are informative. For instance, [`score', `kick'] has ordinality of 0.921 in r/nba and 0.204 in r/nfl.
This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27. There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/politics, 0.55 in r/The_Donald, and 0.73 in r/Conservative. Models And Predictions In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. Under this model, we would expect to see some variation in binomial asymmetry across communities and time. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success. Models And Predictions ::: Stability of Asymmetry Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal centered around $0.5$ (Fig. FIGREF9). One way this type of distribution could arise is if binomials were ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\lbrace x,y\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \sim \text{Bernoulli}(0.5)$. If $\lbrace x,y\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \sim \text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5. One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We first omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \in L$ in community $c \in C$ by $\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $c$. Next, for all $l \in L$ let $p^*_{l} = \max _{c \in C}(\hat{p}_{l, c}) - \min _{ c \in C}(\hat{p}_{l, c})$. The distribution of $p^*_{l}$ over $l \in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\hat{p}_{l, y} = o_{l,y}$ for $y \in Y$ our estimates. The distribution of $p^{\prime }_{l} = \max _{y \in Y}(\hat{p}_{l, y}) - \min _{y \in Y}(\hat{p}_{l, y})$ over $l \in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed. We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years. Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$.
We re-sample data by randomizing the order of each binomial instance, sampling its ordering by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeated this process to get sample estimates $\lbrace \bar{s}_1, \ldots , \bar{s}_{k}\rbrace $, where $k$ is the size of the set of seasons and communities (a sketch of this resampling appears at the end of this passage). These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than for the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13). Models And Predictions ::: Prediction Results Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted. To formalize these notions, first consider any unordered list $\lbrace x,y\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\lbrace x,y\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy. If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$. Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials. Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0. Within r/politics, we gathered an estimate of the number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 (Tables TABREF16 and TABREF17).
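As a minimal sketch of the resampling comparison described at the start of this subsection (not the authors' code; the input structures are hypothetical), one bootstrap replicate of $\bar{s}$ can be drawn as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def resampled_sbar(instance_counts, global_p):
    """One bootstrap replicate of the mean ordinality standard deviation,
    redrawing each (community, year) cell under the null model in which every
    instance of list l is ordered independently with probability global_p[l].

    instance_counts: {list_id: {(community, year): number of instances}}
    global_p:        {list_id: ordinality pooled over all cells}
    """
    s_values = []
    for l, cells in instance_counts.items():
        ordinalities = [rng.binomial(n, global_p[l]) / n for n in cells.values()]
        s_values.append(np.std(ordinalities))
    return float(np.mean(s_values))

# Toy illustration with two lists observed in three community-year cells each.
counts = {"cat|dog": {("r/nba", 2016): 40, ("r/nba", 2017): 55, ("r/nfl", 2017): 38},
          "10|20": {("r/politics", 2016): 90, ("r/nba", 2016): 70, ("r/nfl", 2017): 64}}
pooled = {"cat|dog": 0.8, "10|20": 0.6}
null_draws = [resampled_sbar(counts, pooled) for _ in range(1000)]
# The observed mean standard deviation is then compared against this distribution.
```

An observed $\bar{s}$ that falls below essentially all of these draws is what the comparison above reports.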
Returning to these syllable counts: in a weak sense, Jespersen was correct that monosyllabic words come before disyllabic words more often than not; and more generally, shorter words come before longer words more often than not. However, as predictors, these principles are close to random guessing. Paired Predictions. Another measure of predictive power is predicting which of two binomials has higher asymmetry. In this case, we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures (we use the top-1000 and bottom-1000 binomials in terms of asymmetry for these tasks). For instance, we may predict that [`red', `turquoise'] is more asymmetric than [`red', `blue'] because the difference in lengths is more extreme. Overall, the basic predictors from the literature are not very successful (Table TABREF18). Word Embeddings. If we turn to more modern approaches to text analysis, one of the most common is word embeddings BIBREF16. Word embeddings assign a vector $x_i$ to each word $i$ in the corpus, such that the relative positions of these vectors in space encode linguistically relevant relationships among the words. Using the Google News word embeddings, via a simple logistic model, we produce a vector $v^*$ and predict the ordering of a binomial on words $i$ and $j$ from $v^* \cdot (x_i - x_j)$. In this sense, $v^*$ can be thought of as a “sweep-line” direction through the space containing the word vectors, such that the ordering along this sweep-line is the predicted ordering of all binomials in the corpus (a minimal sketch of this predictor appears below). This yields surprisingly accurate results, with accuracy ranging from 70% to 85% across various subreddits (Table TABREF20), and 80–100% accuracy on frozen binomials. This is by far the best prediction method we tested. It is important to note that not all words in our binomials could be associated with an embedding, so it was necessary to remove binomials containing words such as names or slang. However, retesting our basic features on this data set did not show any improvement, implying that the drastic change in predictive power is not due to the changed data set. Proper Nouns and the Proximity Principle Proper nouns, and names in particular, have been a focus within the literature on frozen binomials BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20, BIBREF28, but these studies have largely concentrated on the effect of gender in ordering BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20. With Reddit data, however, we have many conversations about large numbers of celebrities, with significant background information on each. As such, we can investigate proper nouns in three subreddits: r/nba, r/nfl, and r/politics. The names we used are from NBA and NFL players (1970–2019) and congresspeople (pre-1800 and 2000–2019), respectively. We also investigated names of entities for which users might feel a strong sense of identification, such as a team or political group they support, or a subreddit to which they subscribe. We hypothesized that the group with which the user identifies the most would come first in binomial orderings. Inspired by the `Me First Principle', we call this the Proximity Principle. Proper Nouns and the Proximity Principle ::: NBA Names First, we examined names in r/nba. One advantage of using NBA players is that we have detailed statistics for every player in every year.
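A minimal sketch of the sweep-line predictor described above, assuming pre-trained vectors have already been loaded into a dict-like `embedding` (for example with gensim's KeyedVectors); the function names and training setup here are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_sweep_line(pairs, labels, embedding):
    """Fit the direction v* from difference vectors x_i - x_j.

    pairs:  (word_i, word_j) tuples with both words present in the embedding
    labels: 1 if the dominant ordering is [word_i, word_j], else 0
    """
    X = np.array([embedding[i] - embedding[j] for i, j in pairs])
    return LogisticRegression(max_iter=1000).fit(X, labels)

def predict_order(model, embedding, a, b):
    """Order {a, b} by which side of the sweep-line its difference vector falls on."""
    score = model.decision_function((embedding[a] - embedding[b]).reshape(1, -1))[0]
    return (a, b) if score >= 0 else (b, a)
```

As in the text, binomials containing out-of-vocabulary words (many names and slang terms) would have to be dropped before fitting.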
We tested a number of these statistics, and while all of them predicted statistically significant numbers ($p <$ 1e-6) of binomials, they were still not very predictive in a practical sense (Table TABREF23). The best predictor was actually how often the player's team was mentioned. Interestingly, the unigram frequency (number of times the player's name was mentioned overall) was not a good predictor. It is relevant to these observations that some team subreddits (and thus, presumably, fanbases) are significantly larger than others. Proper Nouns and the Proximity Principle ::: Subreddit and team names Additionally, we also investigated lists of names of sports teams and subreddits as proper nouns. In this case we exploit an interesting structure of the r/nba subreddit which is not evident at scale in other subreddits we examined. In addition to r/nba, there exists a number of subreddits that are affiliated with a particular NBA team, with the purpose of allowing discussion between fans of that team. This implies that most users in a team subreddit are fans of that team. We are then able to look for lists of NBA teams by name, city, and abbreviation. We found 2520 instances of the subreddit team coming first, and 1894 instances of the subreddit team coming second. While this is not a particularly strong predictor, correctly predicting 57% of lists, it is one of the strongest we found, and a clear illustration of the Proximity Principle. We can do a similar calculation with subreddit names, by looking between subreddits. While the team subreddits are not large enough for this calculation, many of the other subreddits are. We find that lists of subreddits in r/nba that include `r/nba' often start with `r/nba', and a similar result holds for r/nfl (Table TABREF25). While NBA team subreddits show a fairly strong preference to name themselves first, this preference is slightly less strong among sport subreddits, and even less strong among politics subreddits. One potential factor here is that r/politics is a more general subreddit, while the rest are more specific — perhaps akin to r/nba and the team subreddits. Proper Nouns and the Proximity Principle ::: Political Names In our case, political names are drawn from every congressperson (and their nicknames) in both houses of the US Congress through the 2018 election. It is worth noting that one of these people is Philadelph Van Trump. It is presumed that most references to `trump' refer to Donald Trump. There may be additional instances of mistaken identities. We restrict the names to only congresspeople that served before 1801 or after 1999, also including `trump'. One might guess that political subreddits refer to politicians of their preferred party first. However, this was not the case, as Republicans are mentioned first only about 43%–46% of the time in all subreddits (Table TABREF27). On the other hand, the Proximity Principle does seem to come into play when discussing ideology. For instance, r/politics — a left-leaning subreddit — is more likely to say `democrats and republicans' while the other political subreddits in our study — which are right-leaning — are more likely to say `republicans and democrats'. Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name. We restrict our investigation to names that are not also English words, and only names that have a unigram frequency of at least 30. 
The average ratio is 0.0535, but there is significant variation across names. It is conceivable that this list ratio is revealing about how often people are talked about alone instead of in company. Formal Text While Reddit provides a very large corpus of informal text, McGuire and McGuire make a distinct separation between informal and formal text BIBREF28. As such, we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications. Both data sets follow the same basic principles outlined above. Formal Text ::: Wine Wine reviews are a highly stylized form of text. In this case, reviews are often just a few sentences, and they use a specialized vocabulary meant for wine tasting. While one might hypothesize that such stylized text exhibits more frozen binomials, this is not the case (Table TABREF28). There is some evidence of an additional freezing effect in binomials such as (`aromas', `flavors') and (`scents', `flavors'), which are both frozen in the wine reviews, but are not frozen on Reddit. However, this does not seem to have a more general effect. Additionally, there are a number of binomials which appear frozen on Reddit, but have low asymmetry in the wine reviews, such as [`lemon', `lime']. Formal Text ::: News We focused our analysis on NYT, Buzzfeed, Reuters, CNN, the Washington Post, NPR, Breitbart, and the Atlantic. Much like in political subreddits, one might expect to see a split between various publications based upon ideology. However, this is not obviously the case. While there are certainly examples of binomials that seem to differ significantly for one publication or for a group of publications (Buzzfeed, in particular, frequently goes against the grain), there does not seem to be a sharp divide. Individual examples are difficult to draw conclusions from, but can suggest trends. (`China', `Russia') is a particularly controversial binomial. While the publications vary quite a bit, only Breitbart has an ordinality of above 0.5. In fact, country pairs are among the most controversial binomials within the publications (e.g. (`iraq', `syria'), (`afghanistan', `iraq')), while most other highly controversial binomials reflect other political structures, such as (`house', `senate'), (`migrants', `refugees'), and (`left', `right'). That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications. Additionally, the close similarity between Breitbart and more mainstream publications could be due to a similar effect we saw with r/The_Donald: mainly large amounts of quoted text. Global Structure We can discover new structure in binomial orderings by taking a more global view. We do this by building directed graphs based on ordinality. In these graphs, nodes are words and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order [A,B] at least 50% of the time. For our visualizations, the size of the node indicates how many distinct lists the word appears in, and color indicates how many list instances contain the word in total. If we examine the global structure for r/nba, we can pinpoint a number of patterns (Fig. FIGREF31). First, most nodes within the purple circle correspond to names, while most nodes outside of it are not names.
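A sketch of this graph construction, and of scanning for the asymmetry threshold at which cycles first appear, is below; it assumes networkx is available, and the variable names are illustrative rather than taken from the authors' code.

```python
import networkx as nx

def ordering_graph(pair_counts, min_count=30, min_asymmetry=0.0):
    """Directed graph with an edge A -> B when {A, B} appears at least
    min_count times, [A, B] is the majority order, and the pair's asymmetry
    is at least min_asymmetry."""
    g = nx.DiGraph()
    seen = set()
    for (a, b) in list(pair_counts):
        key = frozenset((a, b))
        if key in seen:
            continue
        seen.add(key)
        n_ab, n_ba = pair_counts.get((a, b), 0), pair_counts.get((b, a), 0)
        total = n_ab + n_ba
        if total < min_count:
            continue
        first, second = (a, b) if n_ab >= n_ba else (b, a)
        if 2 * abs(n_ab / total - 0.5) >= min_asymmetry:
            g.add_edge(first, second)
    return g

def first_cycle_threshold(pair_counts, step=0.01):
    """Largest asymmetry threshold (on a grid) at which the graph still has a cycle."""
    t = 1.0
    while t >= 0:
        if not nx.is_directed_acyclic_graph(ordering_graph(pair_counts, min_asymmetry=t)):
            return round(t, 2)
        t -= step
    return None
```

Node sizes and colors in the figures (distinct lists versus total instances per word) are visualization details layered on top of this same structure.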
null model
41fd359b8c1402b31b6f5efd660143d1414783a0
41fd359b8c1402b31b6f5efd660143d1414783a0_0
Q: How was performance of previously proposed rules at very large scale? Text: Introduction Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture. The orderings of lists in text and speech is a subject that has been repeatedly touched upon for more than a century. By far the most frequently studied aspect of list ordering is the binomial, a list of two words usually separated by a conjunction such as `and' or `or', which is the focus of our paper. The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0, who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words. In the case of a binomial consisting of a monosyllable and a disyllable, the prediction was that the monosyllable would appear first followed by the conjunction `and'. The idea was that this would give a much more standard and familiar syllable stress to the overall phrase, e.g., the binomial `bread and butter' would have the preferable rhythm compared to `butter and bread.' This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering. Binomials like this that appear in strictly one order (perhaps within the confines of some text corpus), are commonly termed frozen binomials BIBREF1, BIBREF2. Examples of frozen binomials include `salt and pepper' and `pros and cons', and explanations for their ordering in English and other languages have become increasingly complex. Early work focused almost exclusively on common frozen binomials, often drawn from everyday speech. More recent work has expanded this view to include nearly frozen binomials, binomials from large data sets such as books, and binomials of particular types such as food, names, and descriptors BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. Additionally, explanations have increasingly focused on meaning rather than just sound, implying value systems inherent to the speaker or the culture of the language's speakers (one such example is that men are usually listed before women in English BIBREF9). The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics, and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3. Thus, it is possible that understanding these orderings could reveal biases or values held by the speaker. Overall, this prior research has largely been confined to pristine examples, often relying on small samples of lists to form conclusions. Many early studies simply drew a small sample of what the author(s) considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10, BIBREF1, BIBREF11, BIBREF0, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF3. Other researchers have used books or news articles BIBREF2, BIBREF4, or small samples from the Web (web search results and Google books) BIBREF5. 
Many of these have lacked a large-scale text corpus and have relied on a focused set of statistics about word orderings. Thus, despite the long history of this line of inquiry, there is an opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people. Such an analysis could produce at least two types of benefits. First, such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time. Second, such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context. The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means. We begin our analysis by introducing several new key measures for the study of binomials, including a quantity we call asymmetry that measures how frequently a given binomial appears in some ordering. By looking at the distribution of asymmetries across a wide range of binomials, we find that most binomials are not frozen, barring a few strong exceptions. At the same time, there may still be an ordering preference. For example, `10 and 20' is not a frozen binomial; instead, the binomial ordering `10 and 20' appears 60% of the time and `20 and 10' appears 40% of time. We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time. While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable. 
Given this stability, one might expect that the dominant ordinality of a given binomial is still predictable, even if the binomial is not frozen. For example, one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases. However, we find that these simple predictors are quite poor at determining binomial ordering. On the other hand, we find that a notion of `proximity' is robust at predicting ordering in some cases. Here, the idea is that the person producing the text will list the word that is conceptually “closer” to them first — a phenomenon related to a “Me First” principle of binomial orderings suggested by Cooper and Ross BIBREF3. One way in which we study this notion of proximity is through sports team subreddits. For example, we find that when two NBA team names form a binomial on a specific team's subreddit, the team that is the subject of the subreddit tends to appear first. The other source of improved predictions comes from using word embeddings BIBREF16: we find that a model based on the positions of words in a standard pre-trained word embedding can be a remarkably reliable predictor of binomial orderings. While not applicable to all words, such as names, this type of model is strongly predictive in most cases. Since binomial orderings are in general difficult to predict individually, we explore a new way of representing the global binomial ordering structure: we form a directed graph where an edge from $i$ to $j$ means that $i$ tends to come before $j$ in binomials. These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities. For instance, in a graph formed from the binomials in a sports community, the names of sports teams and cities are closely clustered, showing that they are often used together in binomials. Similarly, we identify clusters of names, numbers, and years. The presence of cycles in these graphs is also informative. For example, cycles are rare in graphs formed from proper names in politics, suggesting a possible hierarchy of names, and at the same time very common for other binomials. This suggests that no such hierarchy exists for most of the English language, further complicating attempts to predict binomial order. Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are more likely than binomials to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale. Introduction ::: Related Work Interest in list orderings spans the last century BIBREF10, BIBREF1, with a focus almost exclusively on binomials.
This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11, although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11. One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules. Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17. Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3. Frequency. More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering. Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5. Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20. Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. 
There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22. Data We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments. Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits. We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics). The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists. No metadata is captured for these lists beyond the month and year of posting. The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL player active during 1980–2019 from basketball-reference.com and pro-football-reference.com. For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or' and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5). 
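To make the All Words extraction concrete, here is a minimal sketch of the kind of pipeline described above, written in Python. It is not the authors' code: the stop-word set, the regular expression, and the example comments are illustrative assumptions, and only the 30-occurrence threshold comes from the text.

```python
import re
from collections import Counter

# Illustrative stop-word set; the real list described above is curated by hand.
STOP_WORDS = {"the", "a", "an", "i", "you", "it", "this", "that", "so", "then"}

# A word is a run of characters delimited by spaces/punctuation; two words
# joined by `and' or `or' form a candidate binomial.
BINOMIAL_RE = re.compile(r"\b([a-z']+)\s+(?:and|or)\s+([a-z']+)\b")

def extract_binomials(comments, min_count=30):
    """Count ordered pairs [A, B] and keep pairs whose unordered total is at least min_count."""
    ordered_counts = Counter()
    for text in comments:
        for a, b in BINOMIAL_RE.findall(text.lower()):
            if a in STOP_WORDS or b in STOP_WORDS or a == b:
                continue
            ordered_counts[(a, b)] += 1

    # Apply the frequency threshold to the unordered pair {A, B}.
    kept = {}
    for (a, b), n_ab in ordered_counts.items():
        total = n_ab + ordered_counts.get((b, a), 0)
        if total >= min_count:
            kept[(a, b)] = n_ab
    return kept

# Hypothetical usage on two toy comments:
counts = extract_binomials(["bread and butter on toast", "butter and bread, always"])
```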
Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information. We additionally used wine reviews and a variety of newspaper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017. There are 20 different reviewers included, but the number of reviews each has ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017 with a few from 2015. The articles are scraped from home-page headline and RSS feeds BIBREF26. Metadata was limited for both of these data sets. Dimensions of Binomials In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions. For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}. Dimensions of Binomials ::: Definitions Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how to describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8. Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. [Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\lbrace x,y\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise. Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the list appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0. [Asymmetry] The asymmetry of an unordered list $\lbrace x,y\rbrace $ is $A_{x,y} = 2 \cdot \vert o_{x,y} - 0.5 \vert $.
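As a concrete reference for the two definitions above, the following is a minimal sketch of ordinality and asymmetry computed from ordered-pair counts; the function and variable names are illustrative rather than taken from the paper.

```python
def ordinality(n_xy, n_yx, x, y):
    """Ordinality o_{x,y}: fraction of instances in the reference (alphabetical)
    order. n_xy counts occurrences of [x, y]; n_yx counts occurrences of [y, x]."""
    f_xy = n_xy / (n_xy + n_yx)
    return f_xy if x < y else 1.0 - f_xy

def asymmetry(o_xy):
    """Asymmetry A_{x,y} = 2 * |o_{x,y} - 0.5|: 0 for a 50/50 list, 1 for a frozen list."""
    return 2.0 * abs(o_xy - 0.5)

# The running example from the text: [`cat', `dog'] appears 40 times, [`dog', `cat'] 10 times.
o = ordinality(40, 10, "cat", "dog")   # 0.8
a = asymmetry(o)                       # 0.6
```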
The Reddit data described above gives us access to new dimensions of binomials not previously addressed. We define movement as how the ordinality of a list changes over time. [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in year $t \in T$. The movement of $\lbrace x,y\rbrace $ is $M_{x,y} = \max _{t \in T} o_{x,y,t} - \min _{t \in T} o_{x,y,t}$. And agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in community (subreddit) $c \in C$. The agreement of $\lbrace x,y\rbrace $ is $G_{x,y} = 1 - (\max _{c \in C} o_{x,y,c} - \min _{c \in C} o_{x,y,c})$. Dimensions of Binomials ::: Dimensions Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\lbrace x,y\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points whose coordinates are entirely made up of 0s or 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement and is clearly visible as an isolated point in Fig. FIGREF8. Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials with an additional increase of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also function for other binomials is a core piece of our analysis. Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba, a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other. In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, [`dems', `obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile, the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes.
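Continuing the sketch, the movement and agreement measures just defined are max-minus-min ranges of per-year and per-community ordinalities. The helper functions below are illustrative, and the numbers in the usage example are hypothetical values loosely based on examples discussed in this section.

```python
def movement(ordinality_by_year):
    """M_{x,y}: range of the list's ordinality across years.
    `ordinality_by_year` maps year -> o_{x,y,t}."""
    values = list(ordinality_by_year.values())
    return max(values) - min(values)

def agreement(ordinality_by_community):
    """G_{x,y}: 1 minus the range of the list's ordinality across communities.
    `ordinality_by_community` maps subreddit -> o_{x,y,c}."""
    values = list(ordinality_by_community.values())
    return 1.0 - (max(values) - min(values))

# Hypothetical per-year and per-community ordinalities for a single binomial:
m = movement({2015: 0.49, 2016: 0.40, 2017: 0.25, 2018: 0.17})       # 0.32
g = agreement({"r/nba": 0.44, "r/nfl": 0.44, "r/politics": 0.27})    # 0.83
```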
Note also that the covariance for movement across subreddits is quite low TABREF10, and movement in one subreddit is not necessarily reflected by movement in another. Agreement. Most binomials have high agreement (Table TABREF11) but again the counterexamples are informative. For instance, [`score', `kick'] has ordinality of 0.921 in r/nba and 0.204 in r/nfl. This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27. There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/poltics, 0.55 in r/The_Donald, and 0.73 in r/Conservative. Models And Predictions In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. With this, we would expect to see variation in binomial asymmetry. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success. Models And Predictions ::: Stability of Asymmetry Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal centered around $0.5$ (Fig. FIGREF9). One way of seeing this type of distribution would be if binomials are ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\lbrace x,y\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \sim \text{Bernoulli}(0.5)$. If $\lbrace x,y\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \sim \text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5. One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We first omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \in L$ in community $c \in C$ by $\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $C$. Next, for all $l \in L$ let $p^*_{l} = \max _{c \in C}(\hat{p}_{l, c}) - \min _{ c \in C}(\hat{p}_{l, c})$. The distribution of $p^*_{l}$ over $l \in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\hat{p}_{l, y} = o_{l,y}$ for $y \in Y$ our estimates. The distribution of $p^{\prime }_{l} = \max _{y \in Y}(\hat{p}_{l, y}) - \min _{y \in Y}(\hat{p}_{l, y})$ over $l \in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed. We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years. 
Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$. We re-sample data by randomizing the order of each binomial instance, sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeated this process to get samples estimates $\lbrace \bar{s}_1, \ldots , \bar{s}_{k}\rbrace $, where $k$ is the size of the set of seasons and communities. These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13). Models And Predictions ::: Prediction Results Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted. To formalize these notions, first consider any unordered list $\lbrace x,y\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\lbrace x,y\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy. If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$. Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials. Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0. 
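To make the token- and type-level scoring above concrete, here is a minimal sketch of how a rule of this kind (for example, a fewer-syllables-first rule) could be evaluated under the four measures; the function names, the toy syllable counts, and the pair counts in the example are illustrative assumptions rather than the authors' code or data.

```python
def score_predictions(pairs, predict):
    """Compute the four scores defined above.
    `pairs` maps an unordered pair (x, y) to its counts (n_xy, n_yx);
    `predict(x, y)` returns the guessed ordered pair."""
    per_list_acc, weights = [], []
    for (x, y), (n_xy, n_yx) in pairs.items():
        guessed_first, _ = predict(x, y)
        correct = n_xy if guessed_first == x else n_yx
        total = n_xy + n_yx
        per_list_acc.append(correct / total)
        weights.append(total)

    uo = sum(per_list_acc) / len(per_list_acc)                              # unweighted token (macro)
    wo = sum(a * w for a, w in zip(per_list_acc, weights)) / sum(weights)   # weighted token (micro)
    in_s = [a >= 0.5 for a in per_list_acc]                                 # lists where the guess wins
    ut = sum(in_s) / len(in_s)                                              # unweighted type
    wt = sum(w for s, w in zip(in_s, weights) if s) / sum(weights)          # weighted type
    return uo, wo, ut, wt

# Hypothetical example: score a `fewer syllables first' rule on one pair.
syllables = {"bread": 1, "butter": 2}
rule = lambda x, y: (x, y) if syllables[x] <= syllables[y] else (y, x)
scores = score_predictions({("bread", "butter"): (50, 2)}, rule)
```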
Within r/politics, we gathered an estimate of number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 (Tables TABREF16 and TABREF17). In a weak sense, Jespersen was correct that monosyllabic words come before disyllabic words more often than not; and more generally, shorter words come before longer words more often than not. However, as predictors, these principles are close to random guessing. Paired Predictions. Another measure of predictive power is predicting which of two binomials has higher asymmetry. In this case, we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures (we use the top-1000 and bottom-1000 binomials in terms of asymmetry for these tasks). For instance, we may predict that [`red', `turquoise'] is more asymmetric than [`red', `blue'] because the differences in lengths is more extreme. Overall, the basic predictors from the literature are not very successful (Table TABREF18). Word Embeddings. If we turn to more modern approaches to text analysis, one of the most common is word embeddings BIBREF16. Word embeddings assign a vector $x_i$ to each word $i$ in the corpus, such that the relative position of these vectors in space encode information lingustically relevant relationships among the words. Using the Google News word embeddings, via a simple logistic model, we produce a vector $v^*$ and predict the ordering of a binomial on words $i$ and $j$ from $v^* \cdot (x_i - x_j)$. In this sense, $v^*$ can be thought of as a “sweep-line” direction through the space containing the word vectors, such that the ordering along this sweep-line is the predicted ordering of all binomials in the corpus. This yields surprisingly accurate results, with accuracy ranging from 70% to 85% across various subreddits (Table TABREF20), and 80-100% accuracy on frozen binomials. This is by far the best prediction method we tested. It is important to note that not all words in our binomials could be associated with an embedding, so it was necessary to remove binomials containing words such as names or slang. However, retesting our basic features on this data set did not show any improvement, implying that the drastic change in predictive power is not due to the changed data set. Proper Nouns and the Proximity Principle Proper nouns, and names in particular, have been a focus within the literature on frozen binomials BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20, BIBREF28, but these studies have largely concentrated on the effect of gender in ordering BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20. With Reddit data, however, we have many conversations about large numbers of celebrities, with significant background information on each. As such, we can investigate proper nouns in three subreddits: r/nba, r/nfl, and r/politics. The names we used are from NBA and NFL players (1970–2019) and congresspeople (pre-1800 and 2000–2019) respectively. We also investigated names of entities for which users might feel a strong sense of identification, such as a team or political group they support, or a subreddit to which they subscribe. We hypothesized that the group with which the user identifies the most would come first in binomial orderings. Inspired by the `Me First Principle', we call this the Proximity Principle. Proper Nouns and the Proximity Principle ::: NBA Names First, we examined names in r/nba. 
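As an aside on the embedding-based predictor described above, the following is a minimal sketch of such a sweep-line model, assuming pre-trained Google News vectors loadable through gensim and a hypothetical set of dominant orderings for training; it illustrates the idea and is not the authors' implementation.

```python
import numpy as np
from gensim.models import KeyedVectors
from sklearn.linear_model import LogisticRegression

# Assumes a local copy of the pre-trained Google News vectors (path is illustrative).
vectors = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def difference_features(pairs):
    """pairs: list of (first_word, second_word) in their dominant observed order.
    Each pair contributes x_i - x_j with label 1 and x_j - x_i with label 0."""
    X, y = [], []
    for i, j in pairs:
        if i in vectors and j in vectors:
            d = vectors[i] - vectors[j]
            X.extend([d, -d])
            y.extend([1, 0])
    return np.array(X), np.array(y)

# `train_pairs` would hold dominant orderings mined from the corpus (hypothetical here).
train_pairs = [("bread", "butter"), ("salt", "pepper"), ("pros", "cons")]
X, y = difference_features(train_pairs)
model = LogisticRegression(max_iter=1000).fit(X, y)

def predict_order(i, j):
    """The learned weight vector plays the role of the sweep-line direction v*."""
    score = model.decision_function([vectors[i] - vectors[j]])[0]
    return (i, j) if score > 0 else (j, i)
```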
One advantage of using NBA players is that we have detailed statistics for ever player in every year. We tested a number of these statistics, and while all of them predicted statistically significant numbers ($p <$ 1e-6) of binomials, they were still not very predictive in a practical sense (Table TABREF23). The best predictor was actually how often the player's team was mentioned. Interestingly, the unigram frequency (number of times the player's name was mentioned overall) was not a good predictor. It is relevant to these observations that some team subreddits (and thus, presumably, fanbases) are significantly larger than others. Proper Nouns and the Proximity Principle ::: Subreddit and team names Additionally, we also investigated lists of names of sports teams and subreddits as proper nouns. In this case we exploit an interesting structure of the r/nba subreddit which is not evident at scale in other subreddits we examined. In addition to r/nba, there exists a number of subreddits that are affiliated with a particular NBA team, with the purpose of allowing discussion between fans of that team. This implies that most users in a team subreddit are fans of that team. We are then able to look for lists of NBA teams by name, city, and abbreviation. We found 2520 instances of the subreddit team coming first, and 1894 instances of the subreddit team coming second. While this is not a particularly strong predictor, correctly predicting 57% of lists, it is one of the strongest we found, and a clear illustration of the Proximity Principle. We can do a similar calculation with subreddit names, by looking between subreddits. While the team subreddits are not large enough for this calculation, many of the other subreddits are. We find that lists of subreddits in r/nba that include `r/nba' often start with `r/nba', and a similar result holds for r/nfl (Table TABREF25). While NBA team subreddits show a fairly strong preference to name themselves first, this preference is slightly less strong among sport subreddits, and even less strong among politics subreddits. One potential factor here is that r/politics is a more general subreddit, while the rest are more specific — perhaps akin to r/nba and the team subreddits. Proper Nouns and the Proximity Principle ::: Political Names In our case, political names are drawn from every congressperson (and their nicknames) in both houses of the US Congress through the 2018 election. It is worth noting that one of these people is Philadelph Van Trump. It is presumed that most references to `trump' refer to Donald Trump. There may be additional instances of mistaken identities. We restrict the names to only congresspeople that served before 1801 or after 1999, also including `trump'. One might guess that political subreddits refer to politicians of their preferred party first. However, this was not the case, as Republicans are mentioned first only about 43%–46% of the time in all subreddits (Table TABREF27). On the other hand, the Proximity Principle does seem to come into play when discussing ideology. For instance, r/politics — a left-leaning subreddit — is more likely to say `democrats and republicans' while the other political subreddits in our study — which are right-leaning — are more likely to say `republicans and democrats'. Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name. 
We restrict our investigation to names that are not also English words, and only names that have a unigram frequency of at least 30. The average ratio is 0.0535, but there is significant variation across names. It is conceivable that this list ratio is revealing about how often people are talked about alone instead of in company. Formal Text While Reddit provides a very large corpus of informal text, McGuire and McGuire make a distinct separation between informal and formal text BIBREF28. As such, we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications. Both data sets follow the same basic principles outlined above. Formal Text ::: Wine Wine reviews are a highly stylized form of text. In this case reviews are often just a few sentences, and they use a specialized vocabulary meant for wine tasting. While one might hypothesize that such stylized text exhibits more frozen binomials, this is not the case (Tab TABREF28). There is some evidence of an additional freezing effect in binomials such as ('aromas', 'flavors') and ('scents', 'flavors') which both are frozen in the wine reviews, but are not frozen on Reddit. However, this does not seem to have a more general effect. Additionally, there are a number of binomials which appear frozen on Reddit, but have low asymmetry in the wine reviews, such as ['lemon', 'lime']. Formal Text ::: News We focused our analysis on NYT, Buzzfeed, Reuters, CNN, the Washington Post, NPR, Breitbart, and the Atlantic. Much like in political subreddits, one might expect to see a split between various publications based upon ideology. However, this is not obviously the case. While there are certainly examples of binomials that seem to differ significantly for one publication or for a group of publications (Buzzfeed, in particular, frequently goes against the grain), there does not seem to be a sharp divide. Individual examples are difficult to draw conclusions from, but can suggest trends. (`China', `Russia') is a particularly controversial binomial. While the publications vary quite a bit, only Breitbart has an ordinality of above 0.5. In fact, country pairs are among the most controversial binomials within the publications (e.g. (`iraq', `syria'), (`afghanisatan', `iraq')), while most other highly controversial binomials reflect other political structures, such as (`house', `senate'), (`migrants', 'refugees'), and (`left', `right'). That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications. Additionally, the close similarity between Breitbart and more mainstream publications could be due to a similar effect we saw with r/The_Donald - mainly large amounts of quoted text. Global Structure We can discover new structure in binomial orderings by taking a more global view. We do this by building directed graphs based on ordinality. In these graphs, nodes are words and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order [A,B] at least 50% of the time. For our visualizations, the size of the node indicates how many distinct lists the word appears in,and color indicates how many list instances contain the word in total. If we examine the global structure for r/nba, we can pinpoint a number of patterns (Fig. FIGREF31). First, most nodes within the purple circle correspond to names, while most nodes outside of it are not names. 
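Here is a minimal sketch of the graph construction just described, using networkx; the pair-count data structure is an illustrative assumption, while the 30-list threshold and the majority-order rule come from the text.

```python
import networkx as nx

def ordering_graph(pair_counts, min_count=30, min_asymmetry=0.0):
    """Build the directed ordering graph described above.
    `pair_counts` maps an ordered pair (a, b) to the number of times [a, b] occurs.
    An edge a -> b is added when the unordered pair appears at least `min_count`
    times, [a, b] is the majority order, and the pair's asymmetry is at least
    `min_asymmetry` (used when scanning thresholds for cycles, discussed below)."""
    g = nx.DiGraph()
    done = set()
    for (a, b), n_ab in pair_counts.items():
        if (b, a) in done:
            continue
        done.add((a, b))
        n_ba = pair_counts.get((b, a), 0)
        total = n_ab + n_ba
        if total < min_count:
            continue
        frac = n_ab / total
        if 2 * abs(frac - 0.5) < min_asymmetry:
            continue
        if frac >= 0.5:
            g.add_edge(a, b)
        else:
            g.add_edge(b, a)
    return g

# Raising `min_asymmetry` until this returns True recovers the acyclicity analysis below:
# nx.is_directed_acyclic_graph(ordering_graph(pair_counts, min_asymmetry=0.97))
```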
The cluster of circles in the lower left are a combination of numbers and years, where dark green corresponds to numbers, purple corresponds to years, and pink corresponds years represented as two-digit numbers (e.g., `96'). On the right, the brown circle contains adjectives, while above the blue circle contains heights (e.g., 6'5"), and in the two circles in the lower middle, the left contains cities while the right contains team names. The darkest red node in the center of the graph corresponds to `lebron'. Constructing a similar graph for our wines dataset, we can see clusters of words. In Fig FIGREF32, the colors represent clusters as formed through modularity. These clusters are quite distinct. Green nodes mostly refer to the structure or body of a wine, red are adjectives describing taste, teal and purple are fruits, dark green is wine varietals, gold is senses, and light blue is time (e.g. `year', `decade', etc.) We can also consider the graph as we change the threshold of asymmetry for which an edge is included. If the asymmetry is large enough, the graph is acyclic, and we can consider how small the ordinality threshold must be in order to introduce a cycle. These cycles reveal the non-global ordering of binomials. The graph for r/nba begins to show cycles with a threshold asymmetry of 0.97. Three cycles exist at this threshold: [`ball', `catch', `shooter'], [`court', `pass', `set', `athleticism'], and [`court', `plays', `set', `athleticism']. Restricting the nodes to be names is also revealing. Acyclic graphs in this context suggest a global partial hierarchy of individuals. For r/nba, the graph is no longer acyclic at an asymmetry threshold of 0.76, with the cycle [`blake', `jordan', `bryant', `kobe']. Similarly, the graph for r/nfl (only including names) is acyclic until the threshold reaches 0.73 with cycles [`tannehill', `miller', `jj watt', `aaron rodgers', `brady'], and [`hoyer', `savage', `watson', `hopkins', `miller', `jj watt', `aaron rodgers', `brady']. Figure FIGREF33 shows these graphs for the three political subreddits, where the nodes are the 30 most common politician names. The graph visualizations immediately show that these communities view politicians differently. We can also consider cycles in these graphs and find that the graph is completely acyclic when the asymmetry threshold is at least 0.9. Again, this suggests that, at least among frozen binomials, there is in fact a global partial order of names that might signal hierarchy. (Including non-names, though, causes the r/politics graph to never be acyclic for any asymmetry threshold, since the cycle [`furious', `benghazi', `fast'] consists of completely frozen binomials.) We find similar results for r/Conservative and r/Libertarian, which are acyclic with thresholds of 0.58 and 0.66, respectively. Some of these cycles at high asymmetry might be due to English words that are also names (e.g. `law'), but one particularly notable cycle from r/Conservative is [`rubio', `bush', `obama', `trump', `cruz']. Multinomials Binomials are the most studied type of list, but trinomials — lists of three — are also common enough in our dataset to analyze. Studying trinomials adds new aspects to the set of questions: for example, while binomials have only two possible orderings, trinomials have six possible orderings. However, very few trinomials show up in all six orderings. 
In fact, many trinomials show up in exactly one ordering: about 36% of trinomials appearing at least 30 times in the data are completely frozen. To get a baseline comparison, we found an equal number of the most common binomials, and then subsampled instances of those binomials to equate the number of instances with the trinomials. In this case, only 21% of binomials are frozen. For trinomials that show up in at least two orderings, it is most common for the last word to keep the same position (e.g., [a, b, c] and [b, a, c]). For example, in our data, [`fraud', `waste', `abuse'] appears 34 times, and [`waste', `fraud', `abuse'] appears 20 times. This may partially be explained by many lists that contain words such as `other', `whatever', or `more'; for instance, [`smarter', `better', `more'] and [`better', `smarter', `more'] are the only two orderings we observe for this set of three words. Additionally, each trinomial [a, b, c] contains three binomials within it: [a, b], [b, c], and [a, c]. It is natural to compare orderings of {a, b} in general with orderings of occurrences of {a, b} that lie inside trinomials. We use this comparison to define the compatibility of {a, b}, as follows. Compatibility Let {a, b} be a binomial with dominant ordering [a, b]; that is, [a, b] is at least as frequent as [b, a]. We define the compatibility of {a, b} to be the fraction of instances of {a, b} occurring inside trinomials that have the order [a,b]. There are only a few cases where binomials have compatibility less than 0.5, and for most binomials, the asymmetry is remarkably consistent between binomials and trinomials (Fig. FIGREF37). In general, asymmetry is larger than compatibility — this occurs for 4569 binomials, compared to 3575 where compatibility was greater and 690 where the two values are the same. An extreme example is the binomial {`fairness', `accuracy'}, which has asymmetry 0.77 and compatibility 0.22. It would be natural to consider these questions for tetranomials and longer lists, but these are rarer in our data and correspondingly harder to draw conclusions from. Discussion Analyzing binomial orderings on a large scale has led to surprising results. Although most binomials are not frozen in the traditional sense, there is little movement in their ordinality across time or communities. A list that appears in the order [A, B] 60% of the time in one subreddit in one year is likely to show up as [A, B] very close to 60% of the time in all subreddits in all years. This suggests that binomial order should be predictable, but there is evidence that this is difficult: the most common theories on frozen binomial ordering were largely ineffective at predicting binomial ordering in general. Given the challenge in predicting orderings, we searched for methods or principles that could yield better performance, and identified two promising approaches. First, models built on standard word embeddings produce predictions of binomial orders that are much more effective than simpler existing theories. Second, we established the Proximity Principle: the proper noun with which a speaker identifies more will tend to come first. This is evidenced when commenters refer to their sports team first, or politicians refer to their party first. Further analysis of the global structure of binomials reveals interesting patterns and a surprising acyclic nature in names. Analysis of longer lists in the form of multinomials suggests that the rules governing their orders may be different.
We have also found promising results in some special cases. We expect that more domain-specific studies will offer rich structure. It is a challenge to adapt the long history of work on the question of frozen binomials to the large, messy environment of online text and social media. However, such data sources offer a unique opportunity to re-explore and redefine these questions. It seems that binomial orderings offer new insights into language, culture, and human cognition. Understanding what changes in these highly stable conventions mean — and whether or not they can be predicted — is an interesting avenue for future research. Acknowledgements The authors thank members of the Cornell AI, Policy, and Practice Group, and (alphabetically by first name) Cristian Danescu-Niculescu-Mizil, Ian Lomeli, Justine Zhang, and Kate Donahue for aid in accessing data and their thoughtful insight. This research was supported by NSF Award DMS-1830274, ARO Award W911NF19-1-0057, a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, and ARO MURI.
close to random
d216d715ec27ee2d4949f9e908895a18fb3238e2
d216d715ec27ee2d4949f9e908895a18fb3238e2_0
Q: What previously proposed rules for predicting binoial ordering are used? Text: Introduction Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture. The orderings of lists in text and speech is a subject that has been repeatedly touched upon for more than a century. By far the most frequently studied aspect of list ordering is the binomial, a list of two words usually separated by a conjunction such as `and' or `or', which is the focus of our paper. The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0, who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words. In the case of a binomial consisting of a monosyllable and a disyllable, the prediction was that the monosyllable would appear first followed by the conjunction `and'. The idea was that this would give a much more standard and familiar syllable stress to the overall phrase, e.g., the binomial `bread and butter' would have the preferable rhythm compared to `butter and bread.' This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering. Binomials like this that appear in strictly one order (perhaps within the confines of some text corpus), are commonly termed frozen binomials BIBREF1, BIBREF2. Examples of frozen binomials include `salt and pepper' and `pros and cons', and explanations for their ordering in English and other languages have become increasingly complex. Early work focused almost exclusively on common frozen binomials, often drawn from everyday speech. More recent work has expanded this view to include nearly frozen binomials, binomials from large data sets such as books, and binomials of particular types such as food, names, and descriptors BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. Additionally, explanations have increasingly focused on meaning rather than just sound, implying value systems inherent to the speaker or the culture of the language's speakers (one such example is that men are usually listed before women in English BIBREF9). The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics, and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3. Thus, it is possible that understanding these orderings could reveal biases or values held by the speaker. Overall, this prior research has largely been confined to pristine examples, often relying on small samples of lists to form conclusions. Many early studies simply drew a small sample of what the author(s) considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10, BIBREF1, BIBREF11, BIBREF0, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF3. Other researchers have used books or news articles BIBREF2, BIBREF4, or small samples from the Web (web search results and Google books) BIBREF5. 
Many of these have lacked a large-scale text corpus and have relied on a focused set of statistics about word orderings. Thus, despite the long history of this line of inquiry, there is an opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people. Such an analysis could produce at least two types of benefits. First, such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time. Second, such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context. The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means. We begin our analysis by introducing several new key measures for the study of binomials, including a quantity we call asymmetry that measures how frequently a given binomial appears in some ordering. By looking at the distribution of asymmetries across a wide range of binomials, we find that most binomials are not frozen, barring a few strong exceptions. At the same time, there may still be an ordering preference. For example, `10 and 20' is not a frozen binomial; instead, the binomial ordering `10 and 20' appears 60% of the time and `20 and 10' appears 40% of time. We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time. While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable. 
Given this stability, one might expect that the dominant ordinality of a given binomial is still predictable, even if the binomial is not frozen. For example, one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases. However, we find that these simple predictors are quite poor at determining binomial ordering. On the other hand, we find that a notion of `proximity' is robust at predicting ordering in some cases. Here, the idea is that the person producing the text will list the word that is conceptually “closer” to them first — a phenomenon related to a “Me First” principle of binomial orderings suggested by Cooper and Ross BIBREF3. One way in which we study this notion of proximity is through sports team subreddits. For example, we find that when two NBA team names form a binomial on a specific team's subreddit, the team that is the subject of the subreddit tends to appear first. The other source of improved predictions comes from using word embeddings BIBREF16: we find that a model based on the positions of words in a standard pre-trained word embedding can be a remarkably reliable predictor of binomial orderings. While not applicable to all words, such as names, this type of model is strongly predictive in most cases. Since binomial orderings are in general difficult to predict individually, we explore a new way of representing the global binomial ordering structure, we form a directed graph where an edge from $i$ to $j$ means that $i$ tends to come before $j$ in binomials. These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities. For instance, in a graph formed from the binomials in a sports community, the names of sports teams and cities are closely clustered, showing that they are often used together in binomials. Similarly, we identify clusters of names, numbers, and years. The presence of cycles in these graphs are also informative. For example, cycles are rare in graphs formed from proper names in politics, suggesting a possible hierarchy of names, and at the same time very common for other binomials. This suggests that no such hierarchy exists for most of the English language, further complicating attempts to predict binomial order. Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale. Introduction ::: Related Work Interest in list orderings spans the last century BIBREF10, BIBREF1, with a focus almost exclusively on binomials. 
This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11, although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11. One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules. Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17. Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3. Frequency. More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering. Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5. Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20. Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. 
There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22. Data We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments. Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits. We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics). The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists. No metadata is captured for these lists beyond the month and year of posting. The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL player active during 1980–2019 from basketball-reference.com and pro-football-reference.com. For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or' and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5). 
Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information. We also used wine reviews and a variety of newspaper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017. There are 20 different reviewers included, but the number of reviews per reviewer ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017, with a few from 2015. The articles are scraped from home-page headline and RSS feeds BIBREF26. Metadata was limited for both of these data sets.

Dimensions of Binomials

In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions. For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}.

Dimensions of Binomials ::: Definitions

Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how to describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8. Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. [Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\lbrace x,y\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise. Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the list appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0. [Asymmetry] The asymmetry of an unordered list $\lbrace x,y\rbrace $ is $A_{x,y} = 2 \cdot \vert o_{x,y} - 0.5 \vert $.
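As a small illustration of the two definitions above, the following sketch computes ordinality and asymmetry directly from orientation counts; the function names are our own, and the example counts reuse the `cat'/`dog' numbers from the text.

```python
def ordinality(counts, x, y):
    """Ordinality o_{x,y}: fraction of instances in the reference (alphabetical) order."""
    first, second = sorted([x, y])          # alphabetical reference order
    n_fs = counts.get((first, second), 0)   # instances in the reference order
    n_sf = counts.get((second, first), 0)   # instances in the reverse order
    total = n_fs + n_sf
    if total == 0:
        raise ValueError("pair never observed")
    return n_fs / total

def asymmetry(counts, x, y):
    """Asymmetry A_{x,y} = 2 * |o_{x,y} - 0.5|; 1 means frozen, 0 means perfectly balanced."""
    return 2 * abs(ordinality(counts, x, y) - 0.5)

# Example from the text: [`cat', `dog'] appears 40 times, [`dog', `cat'] 10 times.
counts = {("cat", "dog"): 40, ("dog", "cat"): 10}
print(ordinality(counts, "dog", "cat"))  # 0.8
print(asymmetry(counts, "dog", "cat"))   # 0.6
```

Note that ordinality is defined relative to the alphabetical reference order, so ordinalities of 0.8 and 0.2 describe equally one-sided lists; asymmetry collapses the two into a single measure of how frozen the list is.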
The Reddit data described above gives us access to new dimensions of binomials not previously addressed. We define movement as how the ordinality of a list changes over time. [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in year $t \in T$. The movement of $\lbrace x,y\rbrace $ is $M_{x,y} = \max _{t \in T} o_{x,y,t} - \min _{t \in T} o_{x,y,t}$. And agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in community (subreddit) $c \in C$. The agreement of $\lbrace x,y\rbrace $ is $G_{x,y} = 1 - (\max _{c \in C} o_{x,y,c} - \min _{c \in C} o_{x,y,c})$.

Dimensions of Binomials ::: Dimensions

Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\lbrace x,y\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points whose coordinates are made up entirely of 0s and 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement, and is clearly visible as an isolated point in Fig. FIGREF8. Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials, with an additional increase of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also function for other binomials is a core piece of our analysis. Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other. In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, [`dems', `obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile, the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes.
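A corresponding sketch for the movement and agreement measures defined at the start of this section, assuming per-year and per-community ordinalities have already been computed; the specific numbers below are invented for illustration.

```python
def movement(ordinality_by_year):
    """Movement of a list: the spread of its ordinality across years."""
    values = list(ordinality_by_year.values())
    return max(values) - min(values)

def agreement(ordinality_by_community):
    """Agreement of a list: 1 minus the spread of its ordinality across communities."""
    values = list(ordinality_by_community.values())
    return 1 - (max(values) - min(values))

# Hypothetical ordinalities for one unordered list.
by_year = {2013: 0.62, 2015: 0.58, 2018: 0.71}
by_sub = {"r/nba": 0.64, "r/nfl": 0.66, "r/politics": 0.60}
print(movement(by_year))   # ~0.13
print(agreement(by_sub))   # ~0.94
```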
Note also that the covariance for movement across subreddits is quite low (Table TABREF10), and movement in one subreddit is not necessarily reflected by movement in another. Agreement. Most binomials have high agreement (Table TABREF11), but again the counterexamples are informative. For instance, [`score', `kick'] has an ordinality of 0.921 in r/nba and 0.204 in r/nfl. This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27. There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/politics, 0.55 in r/The_Donald, and 0.73 in r/Conservative.

Models And Predictions

In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. Under this model, we would still expect to see some variation in observed binomial asymmetry across communities and time. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success.

Models And Predictions ::: Stability of Asymmetry

Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal, centered around $0.5$ (Fig. FIGREF9). One way of seeing this type of distribution would be if binomials are ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\lbrace x,y\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \sim \text{Bernoulli}(0.5)$. If $\lbrace x,y\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \sim \text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5. One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We first omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \in L$ in community $c \in C$ by $\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $c$. Next, for all $l \in L$ let $p^*_{l} = \max _{c \in C}(\hat{p}_{l, c}) - \min _{c \in C}(\hat{p}_{l, c})$. The distribution of $p^*_{l}$ over $l \in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\hat{p}_{l, y} = o_{l,y}$ for $y \in Y$ our estimates. The distribution of $p^{\prime }_{l} = \max _{y \in Y}(\hat{p}_{l, y}) - \min _{y \in Y}(\hat{p}_{l, y})$ over $l \in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed. We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years.
Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$. We re-sample data by randomizing the order of each binomial instance, sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeated this process to get sample estimates $\lbrace \bar{s}_1, \ldots , \bar{s}_{k}\rbrace $, where $k$ is the size of the set of seasons and communities. These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than for the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13).

Models And Predictions ::: Prediction Results

Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted. To formalize these notions, first consider any unordered list $\lbrace x,y\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\lbrace x,y\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy. If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$. Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials. Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0.
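To make the token- and type-level scores concrete, the sketch below scores one of the basic-feature rules (shorter word first) under UO, WO, UT, and WT as described above; the rule's tie-breaking and the toy counts are our own illustrative choices.

```python
def shorter_first(x, y):
    """Basic-feature rule: predict that the shorter word comes first (ties broken alphabetically)."""
    return (x, y) if (len(x), x) <= (len(y), y) else (y, x)

def score(counts, predict):
    """Token scores (UO, WO): macro/micro accuracy over instances.
    Type scores (UT, WT): macro/micro averages of whether a list is
    predicted correctly at least half of the time."""
    per_list_acc, per_list_n, hits, total = [], [], 0, 0
    pairs = {tuple(sorted(p)) for p in counts}
    for x, y in pairs:
        n_xy, n_yx = counts.get((x, y), 0), counts.get((y, x), 0)
        n = n_xy + n_yx
        correct = n_xy if predict(x, y) == (x, y) else n_yx
        per_list_acc.append(correct / n)
        per_list_n.append(n)
        hits += correct
        total += n
    uo = sum(per_list_acc) / len(per_list_acc)                  # unweighted token score
    wo = hits / total                                           # weighted token score
    wins = [a >= 0.5 for a in per_list_acc]
    ut = sum(wins) / len(wins)                                  # unweighted type score
    wt = sum(n for w, n in zip(wins, per_list_n) if w) / total  # weighted type score
    return uo, wo, ut, wt

# Toy counts: ordered pair -> number of instances.
counts = {("salt", "pepper"): 90, ("pepper", "salt"): 10,
          ("research", "law"): 20, ("law", "research"): 25}
print(score(counts, shorter_first))
```

The unweighted (macro) scores treat every list equally regardless of how often it appears, while the weighted (micro) scores give common binomials more influence, so the two can diverge substantially on heavy-tailed data.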
Within r/politics, we gathered an estimate of the number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 (Tables TABREF16 and TABREF17). In a weak sense, Jespersen was correct that monosyllabic words come before disyllabic words more often than not; and more generally, shorter words come before longer words more often than not. However, as predictors, these principles are close to random guessing. Paired Predictions. Another measure of predictive power is predicting which of two binomials has higher asymmetry. In this case, we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures (we use the top-1000 and bottom-1000 binomials in terms of asymmetry for these tasks). For instance, we may predict that [`red', `turquoise'] is more asymmetric than [`red', `blue'] because the difference in lengths is more extreme. Overall, the basic predictors from the literature are not very successful (Table TABREF18). Word Embeddings. If we turn to more modern approaches to text analysis, one of the most common is word embeddings BIBREF16. Word embeddings assign a vector $x_i$ to each word $i$ in the corpus, such that the relative positions of these vectors in space encode linguistically relevant relationships among the words. Using the Google News word embeddings, via a simple logistic model, we produce a vector $v^*$ and predict the ordering of a binomial on words $i$ and $j$ from $v^* \cdot (x_i - x_j)$. In this sense, $v^*$ can be thought of as a “sweep-line” direction through the space containing the word vectors, such that the ordering along this sweep-line is the predicted ordering of all binomials in the corpus. This yields surprisingly accurate results, with accuracy ranging from 70% to 85% across various subreddits (Table TABREF20), and 80–100% accuracy on frozen binomials. This is by far the best prediction method we tested. It is important to note that not all words in our binomials could be associated with an embedding, so it was necessary to remove binomials containing words such as names or slang. However, retesting our basic features on this data set did not show any improvement, implying that the drastic change in predictive power is not due to the changed data set.

Proper Nouns and the Proximity Principle

Proper nouns, and names in particular, have been a focus within the literature on frozen binomials BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20, BIBREF28, but these studies have largely concentrated on the effect of gender in ordering BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20. With Reddit data, however, we have many conversations about large numbers of celebrities, with significant background information on each. As such, we can investigate proper nouns in three subreddits: r/nba, r/nfl, and r/politics. The names we used are from NBA and NFL players (1970–2019) and congresspeople (pre-1800 and 2000–2019) respectively. We also investigated names of entities for which users might feel a strong sense of identification, such as a team or political group they support, or a subreddit to which they subscribe. We hypothesized that the group with which the user identifies the most would come first in binomial orderings. Inspired by the `Me First Principle', we call this the Proximity Principle.

Proper Nouns and the Proximity Principle ::: NBA Names

First, we examined names in r/nba.
One advantage of using NBA players is that we have detailed statistics for every player in every year. We tested a number of these statistics, and while all of them predicted a statistically significant number of binomials ($p <$ 1e-6), they were still not very predictive in a practical sense (Table TABREF23). The best predictor was actually how often the player's team was mentioned. Interestingly, the unigram frequency (number of times the player's name was mentioned overall) was not a good predictor. It is relevant to these observations that some team subreddits (and thus, presumably, fanbases) are significantly larger than others.

Proper Nouns and the Proximity Principle ::: Subreddit and team names

Additionally, we investigated lists of names of sports teams and subreddits as proper nouns. In this case we exploit an interesting structure of the r/nba subreddit which is not evident at scale in other subreddits we examined. In addition to r/nba, there exist a number of subreddits that are affiliated with a particular NBA team, with the purpose of allowing discussion between fans of that team. This implies that most users in a team subreddit are fans of that team. We are then able to look for lists of NBA teams by name, city, and abbreviation. We found 2520 instances of the subreddit team coming first, and 1894 instances of the subreddit team coming second. While this is not a particularly strong predictor, correctly predicting 57% of lists, it is one of the strongest we found, and a clear illustration of the Proximity Principle. We can do a similar calculation with subreddit names, by looking between subreddits. While the team subreddits are not large enough for this calculation, many of the other subreddits are. We find that lists of subreddits in r/nba that include `r/nba' often start with `r/nba', and a similar result holds for r/nfl (Table TABREF25). While NBA team subreddits show a fairly strong preference to name themselves first, this preference is slightly less strong among sport subreddits, and even less strong among politics subreddits. One potential factor here is that r/politics is a more general subreddit, while the rest are more specific — perhaps akin to r/nba and the team subreddits.

Proper Nouns and the Proximity Principle ::: Political Names

In our case, political names are drawn from every congressperson (and their nicknames) in both houses of the US Congress through the 2018 election. It is worth noting that one of these people is Philadelph Van Trump. It is presumed that most references to `trump' refer to Donald Trump. There may be additional instances of mistaken identity. We restrict the names to only congresspeople that served before 1801 or after 1999, also including `trump'. One might guess that political subreddits refer to politicians of their preferred party first. However, this was not the case, as Republicans are mentioned first only about 43%–46% of the time in all subreddits (Table TABREF27). On the other hand, the Proximity Principle does seem to come into play when discussing ideology. For instance, r/politics — a left-leaning subreddit — is more likely to say `democrats and republicans', while the other political subreddits in our study — which are right-leaning — are more likely to say `republicans and democrats'. Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name.
We restrict our investigation to names that are not also English words, and only names that have a unigram frequency of at least 30. The average ratio is 0.0535, but there is significant variation across names. It is conceivable that this list ratio reveals how often people are talked about alone rather than in the company of others.

Formal Text

While Reddit provides a very large corpus of informal text, McGuire and McGuire make a distinct separation between informal and formal text BIBREF28. As such, we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications. Both data sets follow the same basic principles outlined above.

Formal Text ::: Wine

Wine reviews are a highly stylized form of text. In this case, reviews are often just a few sentences, and they use a specialized vocabulary meant for wine tasting. While one might hypothesize that such stylized text exhibits more frozen binomials, this is not the case (Table TABREF28). There is some evidence of an additional freezing effect in binomials such as (`aromas', `flavors') and (`scents', `flavors'), both of which are frozen in the wine reviews but not on Reddit. However, this does not seem to have a more general effect. Additionally, there are a number of binomials which appear frozen on Reddit, but have low asymmetry in the wine reviews, such as [`lemon', `lime'].

Formal Text ::: News

We focused our analysis on NYT, Buzzfeed, Reuters, CNN, the Washington Post, NPR, Breitbart, and the Atlantic. Much like in political subreddits, one might expect to see a split between various publications based upon ideology. However, this is not obviously the case. While there are certainly examples of binomials that seem to differ significantly for one publication or for a group of publications (Buzzfeed, in particular, frequently goes against the grain), there does not seem to be a sharp divide. Individual examples are difficult to draw conclusions from, but can suggest trends. (`China', `Russia') is a particularly controversial binomial. While the publications vary quite a bit, only Breitbart has an ordinality of above 0.5. In fact, country pairs are among the most controversial binomials within the publications (e.g., (`iraq', `syria'), (`afghanistan', `iraq')), while most other highly controversial binomials reflect other political structures, such as (`house', `senate'), (`migrants', `refugees'), and (`left', `right'). That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications. Additionally, the close similarity between Breitbart and more mainstream publications could be due to an effect similar to the one we saw with r/The_Donald: mainly large amounts of quoted text.

Global Structure

We can discover new structure in binomial orderings by taking a more global view. We do this by building directed graphs based on ordinality. In these graphs, nodes are words and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order [A,B] at least 50% of the time. For our visualizations, the size of the node indicates how many distinct lists the word appears in, and color indicates how many list instances contain the word in total. If we examine the global structure for r/nba, we can pinpoint a number of patterns (Fig. FIGREF31). First, most nodes within the purple circle correspond to names, while most nodes outside of it are not names.
The cluster of circles in the lower left is a combination of numbers and years, where dark green corresponds to numbers, purple corresponds to years, and pink corresponds to years represented as two-digit numbers (e.g., `96'). On the right, the brown circle contains adjectives, while the blue circle above contains heights (e.g., 6'5"), and in the two circles in the lower middle, the left contains cities while the right contains team names. The darkest red node in the center of the graph corresponds to `lebron'. Constructing a similar graph for our wines dataset, we can see clusters of words. In Fig. FIGREF32, the colors represent clusters as formed through modularity. These clusters are quite distinct. Green nodes mostly refer to the structure or body of a wine, red are adjectives describing taste, teal and purple are fruits, dark green is wine varietals, gold is senses, and light blue is time (e.g., `year', `decade', etc.). We can also consider the graph as we change the threshold of asymmetry for which an edge is included. If the asymmetry is large enough, the graph is acyclic, and we can consider how small the asymmetry threshold must be in order to introduce a cycle. These cycles reveal the non-global ordering of binomials. The graph for r/nba begins to show cycles with a threshold asymmetry of 0.97. Three cycles exist at this threshold: [`ball', `catch', `shooter'], [`court', `pass', `set', `athleticism'], and [`court', `plays', `set', `athleticism']. Restricting the nodes to be names is also revealing. Acyclic graphs in this context suggest a global partial hierarchy of individuals. For r/nba, the graph is no longer acyclic at an asymmetry threshold of 0.76, with the cycle [`blake', `jordan', `bryant', `kobe']. Similarly, the graph for r/nfl (only including names) is acyclic until the threshold reaches 0.73, with the cycles [`tannehill', `miller', `jj watt', `aaron rodgers', `brady'] and [`hoyer', `savage', `watson', `hopkins', `miller', `jj watt', `aaron rodgers', `brady']. Figure FIGREF33 shows these graphs for the three political subreddits, where the nodes are the 30 most common politician names. The graph visualizations immediately show that these communities view politicians differently. We can also consider cycles in these graphs and find that the graph is completely acyclic when the asymmetry threshold is at least 0.9. Again, this suggests that, at least among frozen binomials, there is in fact a global partial order of names that might signal hierarchy. (Including non-names, though, causes the r/politics graph to never be acyclic for any asymmetry threshold, since the cycle [`furious', `benghazi', `fast'] consists of completely frozen binomials.) We find similar results for r/Conservative and r/Libertarian, which are acyclic with thresholds of 0.58 and 0.66, respectively. Some of these cycles at high asymmetry might be due to English words that are also names (e.g., `law'), but one particularly notable cycle from r/Conservative is [`rubio', `bush', `obama', `trump', `cruz'].

Multinomials

Binomials are the most studied type of list, but trinomials — lists of three — are also common enough in our dataset to analyze. Studying trinomials adds new aspects to the set of questions: for example, while binomials have only two possible orderings, trinomials have six possible orderings. However, very few trinomials show up in all six orderings.
In fact, many trinomials show up in exactly one ordering: about 36% of trinomials that appear at least 30 times in the data are completely frozen. To get a baseline comparison, we found an equal number of the most common binomials, and then subsampled instances of those binomials to equate the number of instances with the trinomials. In this case, only 21% of binomials are frozen. For trinomials that show up in at least two orderings, it is most common for the last word to keep the same position (e.g., [a, b, c] and [b, a, c]). For example, in our data, [`fraud', `waste', `abuse'] appears 34 times, and [`waste', `fraud', `abuse'] appears 20 times. This may partially be explained by many lists that contain words such as `other', `whatever', or `more'; for instance, [`smarter', `better', `more'] and [`better', `smarter', `more'] are the only two orderings we observe for this set of three words. Additionally, each trinomial [a, b, c] contains three binomials within it: [a, b], [b, c], and [a, c]. It is natural to compare orderings of {a, b} in general with orderings of occurrences of {a, b} that lie inside trinomials. We use this comparison to define the compatibility of {a, b}, as follows. [Compatibility] Let {a, b} be a binomial with dominant ordering [a, b]; that is, [a, b] is at least as frequent as [b, a]. We define the compatibility of {a, b} to be the fraction of instances of {a, b} occurring inside trinomials that have the order [a, b]. There are only a few cases where binomials have compatibility less than 0.5, and for most binomials, the asymmetry is remarkably consistent between binomials and trinomials (Fig. FIGREF37). In general, asymmetry is larger than compatibility — this occurs for 4569 binomials, compared to 3575 where compatibility was greater and 690 where the two values are the same. An extreme example is the binomial {`fairness', `accuracy'}, which has asymmetry 0.77 and compatibility 0.22. It would be natural to consider these questions for tetranomials and longer lists, but these are rarer in our data and correspondingly harder to draw conclusions from.

Discussion

Analyzing binomial orderings on a large scale has led to surprising results. Although most binomials are not frozen in the traditional sense, there is little movement in their ordinality across time or communities. A list that appears in the order [A, B] 60% of the time in one subreddit in one year is likely to show up as [A, B] very close to 60% of the time in all subreddits in all years. This suggests that binomial order should be predictable, but there is evidence that this is difficult: the most common theories on frozen binomial ordering were largely ineffective at predicting binomial ordering in general. Given the challenge in predicting orderings, we searched for methods or principles that could yield better performance, and identified two promising approaches. First, models built on standard word embeddings produce predictions of binomial orders that are much more effective than simpler existing theories. Second, we established the Proximity Principle: the proper noun with which a speaker identifies more will tend to come first. This is evidenced when commenters refer to their sports team first, or politicians refer to their party first. Further analysis of the global structure of binomials reveals interesting patterns and a surprising acyclic nature in names. Analysis of longer lists in the form of multinomials suggests that the rules governing their orders may be different.
We have also found promising results in some special cases. We expect that more domain-specific studies will offer rich structure. It is a challenge to adapt the long history of work on the question of frozen binomials to the large, messy environment of online text and social media. However, such data sources offer a unique opportunity to re-explore and redefine these questions. It seems that binomial orderings offer new insights into language, culture, and human cognition. Understanding what changes in these highly stable conventions mean — and whether or not they can be predicted — is an interesting avenue for future research.

Acknowledgements

The authors thank members of the Cornell AI, Policy, and Practice Group, and (alphabetically by first name) Cristian Danescu-Niculescu-Mizil, Ian Lomeli, Justine Zhang, and Kate Donahue for aid in accessing data and their thoughtful insight. This research was supported by NSF Award DMS-1830274, ARO Award W911NF19-1-0057, a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, and ARO MURI.
This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11, although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11. One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules. Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17. Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3. Frequency. More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering. Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5. Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20. Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. 
There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22. Data We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments. Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits. We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics). The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists. No metadata is captured for these lists beyond the month and year of posting. The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL player active during 1980–2019 from basketball-reference.com and pro-football-reference.com. For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or' and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5). 
Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information. We additionally used wine reviews and a variety of news paper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017. There are 20 different reviewers included, but the amount of reviews each has ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017 with a few from 2015. The articles are scraped from home-page headline and RSS feeds BIBREF26. Metadata was limited for both of these data sets. Dimensions of Binomials In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions. For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}. Dimensions of Binomials ::: Definitions Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8. Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. [Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\lbrace x,y\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise. Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the word appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0. [Asymmetry] The asymmetry of an unordered list $\lbrace x,y\rbrace $ is $A_{x,y} = 2 \cdot \vert o_{x,y} - 0.5 \vert $. 
The Reddit data described above gives us access to new dimensions of binomials not previously addressed. We define movement as how the ordinality of a list changes over time [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in year $t \in T$. The movement of $\lbrace x,y\rbrace $ is $M_{x,y} = \max _{t \in T} o_{x,y,t} - \min _{t \in T} o_{x,y,t}$. And agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list ${x,y}$ for data in community (subreddit) $c \in C$. The agreement of $\lbrace x,y\rbrace $ is $A_{x,y} = 1 - (\max _{c \in C} o_{x,y,c} - \min _{c \in C} o_{x,y,c})$. Dimensions of Binomials ::: Dimensions Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\lbrace x,y\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points with coordinates are entirely made up of 0s or 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement and is clearly visible as an isolated point in Fig. FIGREF8. Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials with an additional increase of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also functions for other binomials is a core piece of our analysis. Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other. In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, ['dems', 'obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes. 
Note also that the covariance for movement across subreddits is quite low TABREF10, and movement in one subreddit is not necessarily reflected by movement in another. Agreement. Most binomials have high agreement (Table TABREF11) but again the counterexamples are informative. For instance, [`score', `kick'] has ordinality of 0.921 in r/nba and 0.204 in r/nfl. This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27. There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/poltics, 0.55 in r/The_Donald, and 0.73 in r/Conservative. Models And Predictions In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. With this, we would expect to see variation in binomial asymmetry. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success. Models And Predictions ::: Stability of Asymmetry Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal centered around $0.5$ (Fig. FIGREF9). One way of seeing this type of distribution would be if binomials are ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\lbrace x,y\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \sim \text{Bernoulli}(0.5)$. If $\lbrace x,y\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \sim \text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5. One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We first omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \in L$ in community $c \in C$ by $\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $C$. Next, for all $l \in L$ let $p^*_{l} = \max _{c \in C}(\hat{p}_{l, c}) - \min _{ c \in C}(\hat{p}_{l, c})$. The distribution of $p^*_{l}$ over $l \in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\hat{p}_{l, y} = o_{l,y}$ for $y \in Y$ our estimates. The distribution of $p^{\prime }_{l} = \max _{y \in Y}(\hat{p}_{l, y}) - \min _{y \in Y}(\hat{p}_{l, y})$ over $l \in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed. We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years. 
Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$. We re-sample data by randomizing the order of each binomial instance, sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeated this process to get samples estimates $\lbrace \bar{s}_1, \ldots , \bar{s}_{k}\rbrace $, where $k$ is the size of the set of seasons and communities. These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13). Models And Predictions ::: Prediction Results Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted. To formalize these notions, first consider any unordered list $\lbrace x,y\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\lbrace x,y\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy. If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$. Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials. Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0. 
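A minimal sketch of the bootstrap described above is given below. It assumes counts have been aggregated per (list, community, year) cell into (number of reference-order instances, total instances); whether the population or sample standard deviation matches the paper's computation is an assumption, as are all names.

```python
import random
import statistics

def mean_ordinality_std(counts):
    """counts: {list_id: {(community, year): (n_ref_order, n_total)}}.
    s_l is the std of o_{l,c,y} over the (c, y) cells; returns s-bar, the mean s_l."""
    s_vals = []
    for cells in counts.values():
        ords = [a / n for a, n in cells.values()]
        s_vals.append(statistics.pstdev(ords))
    return statistics.mean(s_vals)

def bootstrap_replicate(counts, rng):
    """Re-sample every cell's orderings as Binomial(n, p_l), where p_l is the
    list's pooled ordinality over all communities and years, then recompute s-bar."""
    resampled = {}
    for lid, cells in counts.items():
        tot_ref = sum(a for a, _ in cells.values())
        tot_all = sum(n for _, n in cells.values())
        p_l = tot_ref / tot_all
        resampled[lid] = {
            key: (sum(rng.random() < p_l for _ in range(n)), n)
            for key, (_, n) in cells.items()
        }
    return mean_ordinality_std(resampled)

# Usage: rng = random.Random(0); replicates = [bootstrap_replicate(counts, rng) for _ in range(k)]
```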
Within r/politics, we gathered an estimate of the number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 (Tables TABREF16 and TABREF17). In a weak sense, Jespersen was correct that monosyllabic words come before disyllabic words more often than not; and more generally, shorter words come before longer words more often than not. However, as predictors, these principles are close to random guessing. Paired Predictions. Another measure of predictive power is predicting which of two binomials has higher asymmetry. In this case, we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures (we use the top-1000 and bottom-1000 binomials in terms of asymmetry for these tasks). For instance, we may predict that [`red', `turquoise'] is more asymmetric than [`red', `blue'] because the difference in lengths is more extreme. Overall, the basic predictors from the literature are not very successful (Table TABREF18). Word Embeddings. If we turn to more modern approaches to text analysis, one of the most common is word embeddings BIBREF16. Word embeddings assign a vector $x_i$ to each word $i$ in the corpus, such that the relative positions of these vectors in space encode linguistically relevant relationships among the words. Using the Google News word embeddings, via a simple logistic model, we produce a vector $v^*$ and predict the ordering of a binomial on words $i$ and $j$ from $v^* \cdot (x_i - x_j)$. In this sense, $v^*$ can be thought of as a “sweep-line” direction through the space containing the word vectors, such that the ordering along this sweep-line is the predicted ordering of all binomials in the corpus. This yields surprisingly accurate results, with accuracy ranging from 70% to 85% across various subreddits (Table TABREF20), and 80-100% accuracy on frozen binomials. This is by far the best prediction method we tested. It is important to note that not all words in our binomials could be associated with an embedding, so it was necessary to remove binomials containing words such as names or slang. However, retesting our basic features on this data set did not show any improvement, implying that the drastic change in predictive power is not due to the changed data set. Proper Nouns and the Proximity Principle Proper nouns, and names in particular, have been a focus within the literature on frozen binomials BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20, BIBREF28, but these studies have largely concentrated on the effect of gender in ordering BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20. With Reddit data, however, we have many conversations about large numbers of celebrities, with significant background information on each. As such, we can investigate proper nouns in three subreddits: r/nba, r/nfl, and r/politics. The names we used are from NBA and NFL players (1970–2019) and congresspeople (pre-1800 and 2000–2019) respectively. We also investigated names of entities for which users might feel a strong sense of identification, such as a team or political group they support, or a subreddit to which they subscribe. We hypothesized that the group with which the user identifies the most would come first in binomial orderings. Inspired by the `Me First Principle', we call this the Proximity Principle. Proper Nouns and the Proximity Principle ::: NBA Names First, we examined names in r/nba. 
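Before turning to names, here is a minimal sketch of the word-embedding `sweep-line' predictor described above. It assumes pretrained vectors are available as a plain word-to-vector mapping (e.g. loaded from the Google News embeddings) and uses scikit-learn's logistic regression; the learned coefficient vector plays the role of $v^*$. All function names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_sweep_line(pairs, labels, emb):
    """pairs: list of (word_i, word_j); labels[k] = 1 if the dominant observed
    order of the k-th pair is [word_i, word_j], else 0; emb: {word: np.ndarray}.
    Fits a logistic model on the difference vectors x_i - x_j."""
    X = np.stack([emb[i] - emb[j] for i, j in pairs])
    y = np.asarray(labels)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v_star = clf.coef_.ravel()   # the "sweep-line" direction
    return v_star, clf

def predict_order(clf, emb, i, j):
    """1 means the model predicts the ordering [i, j]; 0 means [j, i]."""
    return int(clf.predict((emb[i] - emb[j]).reshape(1, -1))[0])
```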
One advantage of using NBA players is that we have detailed statistics for every player in every year. We tested a number of these statistics, and while all of them correctly predicted a statistically significant number of binomials ($p <$ 1e-6), they were still not very predictive in a practical sense (Table TABREF23). The best predictor was actually how often the player's team was mentioned. Interestingly, the unigram frequency (number of times the player's name was mentioned overall) was not a good predictor. It is relevant to these observations that some team subreddits (and thus, presumably, fanbases) are significantly larger than others. Proper Nouns and the Proximity Principle ::: Subreddit and team names Additionally, we investigated lists of names of sports teams and subreddits as proper nouns. In this case, we exploit an interesting structure of the r/nba subreddit which is not evident at scale in other subreddits we examined. In addition to r/nba, there exist a number of subreddits that are affiliated with a particular NBA team, with the purpose of allowing discussion between fans of that team. This implies that most users in a team subreddit are fans of that team. We are then able to look for lists of NBA teams by name, city, and abbreviation. We found 2520 instances of the subreddit team coming first, and 1894 instances of the subreddit team coming second. While this is not a particularly strong predictor, correctly predicting 57% of lists, it is one of the strongest we found, and a clear illustration of the Proximity Principle. We can do a similar calculation with subreddit names, by looking between subreddits. While the team subreddits are not large enough for this calculation, many of the other subreddits are. We find that lists of subreddits in r/nba that include `r/nba' often start with `r/nba', and a similar result holds for r/nfl (Table TABREF25). While NBA team subreddits show a fairly strong preference to name themselves first, this preference is slightly less strong among sports subreddits, and even less strong among politics subreddits. One potential factor here is that r/politics is a more general subreddit, while the rest are more specific — perhaps akin to r/nba and the team subreddits. Proper Nouns and the Proximity Principle ::: Political Names In our case, political names are drawn from every congressperson (and their nicknames) in both houses of the US Congress through the 2018 election. It is worth noting that one of these people is Philadelph Van Trump. It is presumed that most references to `trump' refer to Donald Trump. There may be additional instances of mistaken identities. We restrict the names to only congresspeople that served before 1801 or after 1999, also including `trump'. One might guess that political subreddits refer to politicians of their preferred party first. However, this was not the case, as Republicans are mentioned first only about 43%–46% of the time in all subreddits (Table TABREF27). On the other hand, the Proximity Principle does seem to come into play when discussing ideology. For instance, r/politics — a left-leaning subreddit — is more likely to say `democrats and republicans' while the other political subreddits in our study — which are right-leaning — are more likely to say `republicans and democrats'. Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name. 
We restrict our investigation to names that are not also English words, and only names that have a unigram frequency of at least 30. The average ratio is 0.0535, but there is significant variation across names. It is conceivable that this list ratio is revealing about how often people are talked about alone instead of in company. Formal Text While Reddit provides a very large corpus of informal text, McGuire and McGuire make a distinct separation between informal and formal text BIBREF28. As such, we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications. Both data sets follow the same basic principles outlined above. Formal Text ::: Wine Wine reviews are a highly stylized form of text. In this case reviews are often just a few sentences, and they use a specialized vocabulary meant for wine tasting. While one might hypothesize that such stylized text exhibits more frozen binomials, this is not the case (Tab TABREF28). There is some evidence of an additional freezing effect in binomials such as ('aromas', 'flavors') and ('scents', 'flavors') which both are frozen in the wine reviews, but are not frozen on Reddit. However, this does not seem to have a more general effect. Additionally, there are a number of binomials which appear frozen on Reddit, but have low asymmetry in the wine reviews, such as ['lemon', 'lime']. Formal Text ::: News We focused our analysis on NYT, Buzzfeed, Reuters, CNN, the Washington Post, NPR, Breitbart, and the Atlantic. Much like in political subreddits, one might expect to see a split between various publications based upon ideology. However, this is not obviously the case. While there are certainly examples of binomials that seem to differ significantly for one publication or for a group of publications (Buzzfeed, in particular, frequently goes against the grain), there does not seem to be a sharp divide. Individual examples are difficult to draw conclusions from, but can suggest trends. (`China', `Russia') is a particularly controversial binomial. While the publications vary quite a bit, only Breitbart has an ordinality of above 0.5. In fact, country pairs are among the most controversial binomials within the publications (e.g. (`iraq', `syria'), (`afghanisatan', `iraq')), while most other highly controversial binomials reflect other political structures, such as (`house', `senate'), (`migrants', 'refugees'), and (`left', `right'). That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications. Additionally, the close similarity between Breitbart and more mainstream publications could be due to a similar effect we saw with r/The_Donald - mainly large amounts of quoted text. Global Structure We can discover new structure in binomial orderings by taking a more global view. We do this by building directed graphs based on ordinality. In these graphs, nodes are words and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order [A,B] at least 50% of the time. For our visualizations, the size of the node indicates how many distinct lists the word appears in,and color indicates how many list instances contain the word in total. If we examine the global structure for r/nba, we can pinpoint a number of patterns (Fig. FIGREF31). First, most nodes within the purple circle correspond to names, while most nodes outside of it are not names. 
The cluster of circles in the lower left are a combination of numbers and years, where dark green corresponds to numbers, purple corresponds to years, and pink corresponds years represented as two-digit numbers (e.g., `96'). On the right, the brown circle contains adjectives, while above the blue circle contains heights (e.g., 6'5"), and in the two circles in the lower middle, the left contains cities while the right contains team names. The darkest red node in the center of the graph corresponds to `lebron'. Constructing a similar graph for our wines dataset, we can see clusters of words. In Fig FIGREF32, the colors represent clusters as formed through modularity. These clusters are quite distinct. Green nodes mostly refer to the structure or body of a wine, red are adjectives describing taste, teal and purple are fruits, dark green is wine varietals, gold is senses, and light blue is time (e.g. `year', `decade', etc.) We can also consider the graph as we change the threshold of asymmetry for which an edge is included. If the asymmetry is large enough, the graph is acyclic, and we can consider how small the ordinality threshold must be in order to introduce a cycle. These cycles reveal the non-global ordering of binomials. The graph for r/nba begins to show cycles with a threshold asymmetry of 0.97. Three cycles exist at this threshold: [`ball', `catch', `shooter'], [`court', `pass', `set', `athleticism'], and [`court', `plays', `set', `athleticism']. Restricting the nodes to be names is also revealing. Acyclic graphs in this context suggest a global partial hierarchy of individuals. For r/nba, the graph is no longer acyclic at an asymmetry threshold of 0.76, with the cycle [`blake', `jordan', `bryant', `kobe']. Similarly, the graph for r/nfl (only including names) is acyclic until the threshold reaches 0.73 with cycles [`tannehill', `miller', `jj watt', `aaron rodgers', `brady'], and [`hoyer', `savage', `watson', `hopkins', `miller', `jj watt', `aaron rodgers', `brady']. Figure FIGREF33 shows these graphs for the three political subreddits, where the nodes are the 30 most common politician names. The graph visualizations immediately show that these communities view politicians differently. We can also consider cycles in these graphs and find that the graph is completely acyclic when the asymmetry threshold is at least 0.9. Again, this suggests that, at least among frozen binomials, there is in fact a global partial order of names that might signal hierarchy. (Including non-names, though, causes the r/politics graph to never be acyclic for any asymmetry threshold, since the cycle [`furious', `benghazi', `fast'] consists of completely frozen binomials.) We find similar results for r/Conservative and r/Libertarian, which are acyclic with thresholds of 0.58 and 0.66, respectively. Some of these cycles at high asymmetry might be due to English words that are also names (e.g. `law'), but one particularly notable cycle from r/Conservative is [`rubio', `bush', `obama', `trump', `cruz']. Multinomials Binomials are the most studied type of list, but trinomials — lists of three — are also common enough in our dataset to analyze. Studying trinomials adds new aspects to the set of questions: for example, while binomials have only two possible orderings, trinomials have six possible orderings. However, very few trinomials show up in all six orderings. 
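Returning to the ordinality graphs described above, a minimal sketch of the construction and of the search for the smallest asymmetry threshold that makes the graph acyclic is given below. It assumes directed pair counts have already been extracted, uses networkx, and takes asymmetry to be $2|o - 0.5|$, which is an assumption about how the paper's asymmetry measure relates to ordinality; all names are illustrative.

```python
import networkx as nx

def build_order_graph(pair_counts, min_lists=30):
    """pair_counts: {(a, b): n} counting list instances observed in the order [a, b].
    Adds an edge a -> b when {a, b} appears in at least `min_lists` lists and at
    least half of them are ordered [a, b]; the edge attribute `asym` is 2*|o - 0.5|."""
    G = nx.DiGraph()
    done = set()
    for (a, b) in list(pair_counts):
        key = frozenset((a, b))
        if key in done:
            continue
        done.add(key)
        n_ab = pair_counts.get((a, b), 0)
        n_ba = pair_counts.get((b, a), 0)
        total = n_ab + n_ba
        if total < min_lists:
            continue
        frac = n_ab / total
        first, second = (a, b) if frac >= 0.5 else (b, a)
        G.add_edge(first, second, asym=abs(2 * frac - 1))
    return G

def smallest_acyclic_threshold(G, step=0.01):
    """Smallest asymmetry threshold at which keeping only edges with
    asym >= threshold yields an acyclic graph."""
    t = 0.0
    while t <= 1.0 + 1e-9:
        H = nx.DiGraph((u, v) for u, v, d in G.edges(data=True) if d["asym"] >= t)
        if nx.is_directed_acyclic_graph(H):
            return round(t, 2)
        t += step
    return None
```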
In fact, many trinomials show up in exactly one ordering: about 36% of trinomials being completely frozen amongst trinomials appearing at least 30 times in the data. To get a baseline comparison, we found an equal number of the most common binomials, and then subsampled instances of those binomials to equate the number of instances with the trinomials. In this case, only 21% of binomials are frozen. For trinomials that show up in at least two orderings, it is most common for the last word to keep the same position (e.g., [a, b, c] and [b, a, c]). For example, in our data, [`fraud', `waste', `abuse'] appears 34 times, and [`waste', `fraud', `abuse'] appears 20 times. This may partially be explained by many lists that contain words such as `other', `whatever', or `more'; for instance, [`smarter', `better', `more'] and [`better', `smarter', `more'] are the only two orderings we observe for this set of three words. Additionally, each trinomial [a, b, c] contains three binomials within it: [a, b], [b, c], and [a, c]. It is natural to compare orderings of {a, b} in general with orderings of occurrences of {a, b} that lie inside trinomials. We use this comparison to define the compatibility of {a, b}, as follows. Compatibility Let {a, b} be a binomial with dominant ordering [a, b]; that is, [a, b] is at least as frequent as [b, a]. We define the compatibility of {a, b} to be the fraction of instances of {a, b} occurring inside trinomials that have the order [a,b]. There are only a few cases where binomials have compatibility less than 0.5, and for most binomials, the asymmetry is remarkably consistent between binomials and trinomials (Fig. FIGREF37). In general, asymmetry is larger than compatibility — this occurs for 4569 binomials, compared to 3575 where compatibility was greater and 690 where the two values are the same. An extreme example is the binomial {`fairness', `accuracy'}, which has asymmetry 0.77 and compatibility 0.22. It would be natural to consider these questions for tetranomials and longer lists, but these are rarer in our data and correspondingly harder to draw conclusions from. Discussion Analyzing binomial orderings on a large scale has led to surprising results. Although most binomials are not frozen in the traditional sense, there is little movement in their ordinality across time or communities. A list that appears in the order [A, B] 60% of the time in one subreddit in one year is likely to show up as [A, B] very close to 60% of the time in all subreddits in all years. This suggests that binomial order should be predictable, but there is evidence that this is difficult: the most common theories on frozen binomial ordering were largely ineffective at predicting binomial ordering in general. Given the challenge in predicting orderings, we searched for methods or principles that could yield better performance, and identified two promising approaches. First, models built on standard word embeddings produce predictions of binomial orders that are much more effective than simpler existing theories. Second, we established the Proximity Principle: the proper noun with which a speaker identifies more will tend to come first. This is evidenced when commenters refer to their sports team first, or politicians refer to their party first. Further analysis of the global structure of binomials reveals interesting patterns and a surprising acyclic nature in names. Analysis of longer lists in the form of multinomials suggests that the rules governing their orders may be different. 
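A minimal sketch of the compatibility measure defined above, assuming trinomial instances are available as ordered 3-tuples; the helper names are illustrative, and the toy check reuses the [`fraud', `waste', `abuse'] counts quoted in the text.

```python
def compatibility(dominant_pair, trinomial_instances):
    """dominant_pair: (a, b), with [a, b] the dominant ordering of {a, b}.
    trinomial_instances: iterable of observed ordered 3-tuples.
    Returns the fraction of trinomial occurrences containing both a and b
    in which a precedes b."""
    a, b = dominant_pair
    agree = total = 0
    for tri in trinomial_instances:
        if a in tri and b in tri:
            total += 1
            if tri.index(a) < tri.index(b):
                agree += 1
    return agree / total if total else None

# Toy check using the counts quoted above for {'fraud', 'waste'}:
tris = [("fraud", "waste", "abuse")] * 34 + [("waste", "fraud", "abuse")] * 20
print(round(compatibility(("fraud", "waste"), tris), 2))  # 0.63
```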
We have also found promising results in some special cases. We expect that more domain-specific studies will offer rich structure. It is a challenge to adapt the long history of work on the question of frozen binomials to the large, messy environment of online text and social media. However, such data sources offer a unique opportunity to re-explore and redefine these questions. It seems that binomial orderings offer new insights into language, culture, and human cognition. Understanding what changes in these highly stable conventions mean — and whether or not they can be predicted — is an interesting avenue for future research. Acknowledgements The authors thank members of the Cornell AI, Policy, and Practice Group, and (alphabetically by first name) Cristian Danescu-Niculescu-Mizil, Ian Lomeli, Justine Zhang, and Kate Donahue for aid in accessing data and their thoughtful insight. This research was supported by NSF Award DMS-1830274, ARO Award W911NF19-1-0057, a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, and ARO MURI.
news publications, wine reviews, and Reddit
508580af51483b5fb0df2630e8ea726ff08d537b
508580af51483b5fb0df2630e8ea726ff08d537b_0
Q: How do they model a city description using embeddings? Text: Introduction Literary critics form interpretations of meaning in works of literature. Building computational models that can help form and test these interpretations is a fundamental goal of digital humanities research BIBREF0 . Within natural language processing, most previous work that engages with literature relies on “distant reading” BIBREF1 , which involves discovering high-level patterns from large collections of stories BIBREF2 , BIBREF3 . We depart from this trend by showing that computational techniques can also engage with literary criticism at a closer distance: concretely, we use recent advances in text representation learning to test a single literary theory about the novel Invisible Cities by Italo Calvino. Framed as a dialogue between the traveler Marco Polo and the emperor Kublai Khan, Invisible Cities consists of 55 prose poems, each of which describes an imaginary city. Calvino categorizes these cities into eleven thematic groups that deal with human emotions (e.g., desires, memories), general objects (eyes, sky, signs), and unusual properties (continuous, hidden, thin). Many critics argue that Calvino's labels are not meaningful, while others believe that there is a distinct thematic separation between the groups, including the author himself BIBREF4 . The unique structure of this novel — each city's description is short and self-contained (Figure FIGREF1 ) — allows us to computationally examine this debate. As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects. While prior work has computationally analyzed a single book BIBREF7 , our work goes beyond simple word frequency or n-gram counts by leveraging the power of pretrained language models to engage with literary criticism. Admittedly, our approach and evaluations are specific to Invisible Cities, but we believe that similar analyses of more conventionally-structured novels could become possible as text representation methods improve. We also highlight two challenges of applying computational methods to literary criticisms: (1) text representation methods are imperfect, especially when given writing as complex as Calvino's; and (2) evaluation is difficult because there is no consensus among literary critics on a single “correct” interpretation. Literary analyses of Invisible Cities Before describing our method and results, we first review critical opinions on both sides of whether Calvino's thematic groups meaningfully characterize his city descriptions. A Computational Analysis We focus on measuring to what extent computers can recover Calvino's thematic groupings when given just raw text of the city descriptions. At a high level, our approach (Figure FIGREF4 ) involves (1) computing a vector representation for every city and (2) performing unsupervised clustering of these representations. The rest of this section describes both of these steps in more detail. 
Embedding city descriptions While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm. Clustering city representations Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4 Evaluating clustering assignments While the results from the above section allow us to compare our three computational methods against each other, we additionally collect human judgments to further ground our results. In this section, we first describe our human experiment before quantitatively analyzing our results. Quantitative comparison We compare clusters computed on different representations using community purity; additionally, we compare these computational methods to humans by their accuracy on the odd-one-out task. City representations computed using language model-based representation (ELMo and BERT) achieve significantly higher purity than a clustering induced from random representations, indicating that there is at least some meaningful coherence to Calvino's thematic groups (first row of Table TABREF11 ). ELMo representations yield the highest purity among the three methods, which is surprising as BERT is a bigger model trained on data from books (among other domains). Both ELMo and BERT outperform GloVe, which intuitively makes sense because the latter do not model the order or structure of the words in each description. While the purity of our methods is higher than that of a random clustering, it is still far below 1. To provide additional context to these results, we now switch to our “odd-one-out” task and compare directly to human performance. 
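A minimal sketch of the city-embedding step described above. Contextual models such as ELMo or BERT produce per-occurrence vectors, so treating the embeddings as a single word-to-vector mapping is a simplification; the TF-IDF weighting, the PCA reduction to 40 dimensions, and the scikit-learn calls follow the text, while the normalization by the total TF-IDF weight and all names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

def city_embeddings(descriptions, token_vectors, dim=40):
    """descriptions: the 55 city descriptions (strings).
    token_vectors: {lowercased token: np.ndarray}, a stand-in for ELMo/BERT/GloVe.
    Returns an (n_cities, dim) array: TF-IDF weighted mean of token vectors,
    reduced with PCA."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(descriptions)
    vocab = vectorizer.get_feature_names_out()
    d = len(next(iter(token_vectors.values())))
    city_vecs = np.zeros((len(descriptions), d))
    for i in range(len(descriptions)):
        row = tfidf.getrow(i).tocoo()
        weight_sum = 0.0
        for j, w in zip(row.col, row.data):
            tok = vocab[j]
            if tok in token_vectors:
                city_vecs[i] += w * token_vectors[tok]
                weight_sum += w
        if weight_sum > 0:
            city_vecs[i] /= weight_sum
    return PCA(n_components=dim).fit_transform(city_vecs)
```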
For each triplet of cities, we identify the intruder as the city with the maximum Euclidean distance from the other two. Interestingly, crowd workers achieve only slightly higher accuracy than ELMo city representations; their interannotator agreement is also low, which indicates that close reading to analyze literary coherence between multiple texts is a difficult task, even for human annotators. Overall, results from both computational and human approaches suggests that the author-assigned labels are not entirely arbitrary, as we can reliably recover some of the thematic groups. Examining the learned clusters Our quantitative results suggest that while vector-based city representations capture some thematic similarities, there is much room for improvement. In this section, we first investigate whether the learned clusters provide evidence for any arguments put forth by literary critics on the novel. Then, we explore possible reasons that the learned clusters deviate from Calvino's. Related work Most previous work within the NLP community applies distant reading BIBREF1 to large collections of books, focusing on modeling different aspects of narratives such as plots and event sequences BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , characters BIBREF2 , BIBREF26 , BIBREF27 , BIBREF28 , and narrative similarity BIBREF3 . In the same vein, researchers in computational literary analysis have combined statistical techniques and linguistics theories to perform quantitative analysis on large narrative texts BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , but these attempts largely rely on techniques such as word counting, topic modeling, and naive Bayes classifiers and are therefore not able to capture the meaning of sentences or paragraphs BIBREF34 . While these works discover general patterns from multiple literary works, we are the first to use cutting-edge NLP techniques to engage with specific literary criticism about a single narrative. There has been other computational work that focuses on just a single book or a small number of books, much of it focused on network analysis: BIBREF35 extract character social networks from Alice in Wonderland, while BIBREF36 recover social networks from 19th century British novels. BIBREF37 disentangles multiple narrative threads within the novel Infinite Jest, while BIBREF7 provides several automated statistical methods for close reading and test them on the award-winning novel Cloud Atlas (2004). Compared to this work, we push further on modeling the content of the narrative by leveraging pretrained language models. Conclusion Our work takes a first step towards computationally engaging with literary criticism on a single book using state-of-the-art text representation methods. While we demonstrate that NLP techniques can be used to support literary analyses and obtain new insights, they also have clear limitations (e.g., in understanding abstract themes). As text representation methods become more powerful, we hope that (1) computational tools will become useful for analyzing novels with more conventional structures, and (2) literary criticism will be used as a testbed for evaluating representations. Acknowledgement We thank the anonymous reviewers for their insightful comments. Additionally, we thank Nader Akoury, Garrett Bernstein, Chenghao Lv, Ari Kobren, Kalpesh Krishna, Saumya Lal, Tu Vu, Zhichao Yang, Mengxue Zhang and the UMass NLP group for suggestions that improved the paper's clarity, coverage of related work, and analysis experiments.
We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm.
89d1687270654979c53d0d0e6a845cdc89414c67
89d1687270654979c53d0d0e6a845cdc89414c67_0
Q: How do they obtain human judgements? Text: Introduction Literary critics form interpretations of meaning in works of literature. Building computational models that can help form and test these interpretations is a fundamental goal of digital humanities research BIBREF0 . Within natural language processing, most previous work that engages with literature relies on “distant reading” BIBREF1 , which involves discovering high-level patterns from large collections of stories BIBREF2 , BIBREF3 . We depart from this trend by showing that computational techniques can also engage with literary criticism at a closer distance: concretely, we use recent advances in text representation learning to test a single literary theory about the novel Invisible Cities by Italo Calvino. Framed as a dialogue between the traveler Marco Polo and the emperor Kublai Khan, Invisible Cities consists of 55 prose poems, each of which describes an imaginary city. Calvino categorizes these cities into eleven thematic groups that deal with human emotions (e.g., desires, memories), general objects (eyes, sky, signs), and unusual properties (continuous, hidden, thin). Many critics argue that Calvino's labels are not meaningful, while others believe that there is a distinct thematic separation between the groups, including the author himself BIBREF4 . The unique structure of this novel — each city's description is short and self-contained (Figure FIGREF1 ) — allows us to computationally examine this debate. As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects. While prior work has computationally analyzed a single book BIBREF7 , our work goes beyond simple word frequency or n-gram counts by leveraging the power of pretrained language models to engage with literary criticism. Admittedly, our approach and evaluations are specific to Invisible Cities, but we believe that similar analyses of more conventionally-structured novels could become possible as text representation methods improve. We also highlight two challenges of applying computational methods to literary criticisms: (1) text representation methods are imperfect, especially when given writing as complex as Calvino's; and (2) evaluation is difficult because there is no consensus among literary critics on a single “correct” interpretation. Literary analyses of Invisible Cities Before describing our method and results, we first review critical opinions on both sides of whether Calvino's thematic groups meaningfully characterize his city descriptions. A Computational Analysis We focus on measuring to what extent computers can recover Calvino's thematic groupings when given just raw text of the city descriptions. At a high level, our approach (Figure FIGREF4 ) involves (1) computing a vector representation for every city and (2) performing unsupervised clustering of these representations. The rest of this section describes both of these steps in more detail. 
Embedding city descriptions While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm. Clustering city representations Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4 Evaluating clustering assignments While the results from the above section allow us to compare our three computational methods against each other, we additionally collect human judgments to further ground our results. In this section, we first describe our human experiment before quantitatively analyzing our results. Quantitative comparison We compare clusters computed on different representations using community purity; additionally, we compare these computational methods to humans by their accuracy on the odd-one-out task. City representations computed using language model-based representation (ELMo and BERT) achieve significantly higher purity than a clustering induced from random representations, indicating that there is at least some meaningful coherence to Calvino's thematic groups (first row of Table TABREF11 ). ELMo representations yield the highest purity among the three methods, which is surprising as BERT is a bigger model trained on data from books (among other domains). Both ELMo and BERT outperform GloVe, which intuitively makes sense because the latter do not model the order or structure of the words in each description. While the purity of our methods is higher than that of a random clustering, it is still far below 1. To provide additional context to these results, we now switch to our “odd-one-out” task and compare directly to human performance. 
For each triplet of cities, we identify the intruder as the city with the maximum Euclidean distance from the other two. Interestingly, crowd workers achieve only slightly higher accuracy than ELMo city representations; their interannotator agreement is also low, which indicates that close reading to analyze literary coherence between multiple texts is a difficult task, even for human annotators. Overall, results from both computational and human approaches suggests that the author-assigned labels are not entirely arbitrary, as we can reliably recover some of the thematic groups. Examining the learned clusters Our quantitative results suggest that while vector-based city representations capture some thematic similarities, there is much room for improvement. In this section, we first investigate whether the learned clusters provide evidence for any arguments put forth by literary critics on the novel. Then, we explore possible reasons that the learned clusters deviate from Calvino's. Related work Most previous work within the NLP community applies distant reading BIBREF1 to large collections of books, focusing on modeling different aspects of narratives such as plots and event sequences BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , characters BIBREF2 , BIBREF26 , BIBREF27 , BIBREF28 , and narrative similarity BIBREF3 . In the same vein, researchers in computational literary analysis have combined statistical techniques and linguistics theories to perform quantitative analysis on large narrative texts BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , but these attempts largely rely on techniques such as word counting, topic modeling, and naive Bayes classifiers and are therefore not able to capture the meaning of sentences or paragraphs BIBREF34 . While these works discover general patterns from multiple literary works, we are the first to use cutting-edge NLP techniques to engage with specific literary criticism about a single narrative. There has been other computational work that focuses on just a single book or a small number of books, much of it focused on network analysis: BIBREF35 extract character social networks from Alice in Wonderland, while BIBREF36 recover social networks from 19th century British novels. BIBREF37 disentangles multiple narrative threads within the novel Infinite Jest, while BIBREF7 provides several automated statistical methods for close reading and test them on the award-winning novel Cloud Atlas (2004). Compared to this work, we push further on modeling the content of the narrative by leveraging pretrained language models. Conclusion Our work takes a first step towards computationally engaging with literary criticism on a single book using state-of-the-art text representation methods. While we demonstrate that NLP techniques can be used to support literary analyses and obtain new insights, they also have clear limitations (e.g., in understanding abstract themes). As text representation methods become more powerful, we hope that (1) computational tools will become useful for analyzing novels with more conventional structures, and (2) literary criticism will be used as a testbed for evaluating representations. Acknowledgement We thank the anonymous reviewers for their insightful comments. Additionally, we thank Nader Akoury, Garrett Bernstein, Chenghao Lv, Ari Kobren, Kalpesh Krishna, Saumya Lal, Tu Vu, Zhichao Yang, Mengxue Zhang and the UMass NLP group for suggestions that improved the paper's clarity, coverage of related work, and analysis experiments.
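The `odd-one-out' selection used above can be sketched as follows; summing each city's Euclidean distances to the other two is one reading of "maximum Euclidean distance from the other two", and the names are illustrative.

```python
import numpy as np

def odd_one_out(triplet_vecs):
    """triplet_vecs: array of shape (3, d), one embedding per city in the triplet.
    Returns the index of the predicted intruder: the city whose summed Euclidean
    distance to the other two is largest."""
    diffs = triplet_vecs[:, None, :] - triplet_vecs[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)        # (3, 3) pairwise distances
    return int(np.argmax(dists.sum(axis=1)))      # self-distance is 0

# Example with random vectors standing in for city embeddings:
print(odd_one_out(np.random.rand(3, 40)))
```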
Using crowdsourcing
fc6cfac99636adda28654e1e19931c7394d76c7c
fc6cfac99636adda28654e1e19931c7394d76c7c_0
Q: Which clustering method do they use to cluster city description embeddings? Text: Introduction Literary critics form interpretations of meaning in works of literature. Building computational models that can help form and test these interpretations is a fundamental goal of digital humanities research BIBREF0 . Within natural language processing, most previous work that engages with literature relies on “distant reading” BIBREF1 , which involves discovering high-level patterns from large collections of stories BIBREF2 , BIBREF3 . We depart from this trend by showing that computational techniques can also engage with literary criticism at a closer distance: concretely, we use recent advances in text representation learning to test a single literary theory about the novel Invisible Cities by Italo Calvino. Framed as a dialogue between the traveler Marco Polo and the emperor Kublai Khan, Invisible Cities consists of 55 prose poems, each of which describes an imaginary city. Calvino categorizes these cities into eleven thematic groups that deal with human emotions (e.g., desires, memories), general objects (eyes, sky, signs), and unusual properties (continuous, hidden, thin). Many critics argue that Calvino's labels are not meaningful, while others believe that there is a distinct thematic separation between the groups, including the author himself BIBREF4 . The unique structure of this novel — each city's description is short and self-contained (Figure FIGREF1 ) — allows us to computationally examine this debate. As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects. While prior work has computationally analyzed a single book BIBREF7 , our work goes beyond simple word frequency or n-gram counts by leveraging the power of pretrained language models to engage with literary criticism. Admittedly, our approach and evaluations are specific to Invisible Cities, but we believe that similar analyses of more conventionally-structured novels could become possible as text representation methods improve. We also highlight two challenges of applying computational methods to literary criticisms: (1) text representation methods are imperfect, especially when given writing as complex as Calvino's; and (2) evaluation is difficult because there is no consensus among literary critics on a single “correct” interpretation. Literary analyses of Invisible Cities Before describing our method and results, we first review critical opinions on both sides of whether Calvino's thematic groups meaningfully characterize his city descriptions. A Computational Analysis We focus on measuring to what extent computers can recover Calvino's thematic groupings when given just raw text of the city descriptions. At a high level, our approach (Figure FIGREF4 ) involves (1) computing a vector representation for every city and (2) performing unsupervised clustering of these representations. The rest of this section describes both of these steps in more detail. 
Embedding city descriptions While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm. Clustering city representations Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4 Evaluating clustering assignments While the results from the above section allow us to compare our three computational methods against each other, we additionally collect human judgments to further ground our results. In this section, we first describe our human experiment before quantitatively analyzing our results. Quantitative comparison We compare clusters computed on different representations using community purity; additionally, we compare these computational methods to humans by their accuracy on the odd-one-out task. City representations computed using language model-based representation (ELMo and BERT) achieve significantly higher purity than a clustering induced from random representations, indicating that there is at least some meaningful coherence to Calvino's thematic groups (first row of Table TABREF11 ). ELMo representations yield the highest purity among the three methods, which is surprising as BERT is a bigger model trained on data from books (among other domains). Both ELMo and BERT outperform GloVe, which intuitively makes sense because the latter do not model the order or structure of the words in each description. While the purity of our methods is higher than that of a random clustering, it is still far below 1. To provide additional context to these results, we now switch to our “odd-one-out” task and compare directly to human performance. 
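A minimal sketch of the equal-size swap clustering and the purity measure described above. The cluster-strength formula below, (mean inter-group distance minus mean intra-group distance) divided by the mean inter-group distance, is one reading of "relative difference"; the iteration budget and all names are assumptions, and the strength is recomputed from scratch at every step, so this sketch is unoptimized.

```python
import numpy as np

def cluster_strength(X, labels):
    """One reading of the measure: relative gap between mean inter-group and
    mean intra-group Euclidean distance (larger is better)."""
    intra, inter = [], []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            (intra if labels[i] == labels[j] else inter).append(d)
    return (np.mean(inter) - np.mean(intra)) / np.mean(inter)

def swap_cluster(X, n_clusters=11, cluster_size=5, iters=5000, seed=0):
    """Equal-size clustering: start from a random assignment and accept random
    pairwise membership exchanges only when the cluster strength increases."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(n_clusters), cluster_size)
    rng.shuffle(labels)
    best = cluster_strength(X, labels)
    for _ in range(iters):
        i, j = rng.choice(len(X), size=2, replace=False)
        if labels[i] == labels[j]:
            continue
        labels[i], labels[j] = labels[j], labels[i]
        strength = cluster_strength(X, labels)
        if strength > best:
            best = strength
        else:
            labels[i], labels[j] = labels[j], labels[i]   # revert the swap
    return labels

def purity(pred, truth):
    """Sum, over predicted clusters, of the largest overlap with any
    ground-truth cluster, divided by the number of points."""
    total = 0
    for c in set(pred):
        members = [truth[i] for i in range(len(pred)) if pred[i] == c]
        total += max(members.count(g) for g in set(members))
    return total / len(pred)
```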
For each triplet of cities, we identify the intruder as the city with the maximum Euclidean distance from the other two. Interestingly, crowd workers achieve only slightly higher accuracy than ELMo city representations; their interannotator agreement is also low, which indicates that close reading to analyze literary coherence between multiple texts is a difficult task, even for human annotators. Overall, results from both computational and human approaches suggests that the author-assigned labels are not entirely arbitrary, as we can reliably recover some of the thematic groups. Examining the learned clusters Our quantitative results suggest that while vector-based city representations capture some thematic similarities, there is much room for improvement. In this section, we first investigate whether the learned clusters provide evidence for any arguments put forth by literary critics on the novel. Then, we explore possible reasons that the learned clusters deviate from Calvino's. Related work Most previous work within the NLP community applies distant reading BIBREF1 to large collections of books, focusing on modeling different aspects of narratives such as plots and event sequences BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , characters BIBREF2 , BIBREF26 , BIBREF27 , BIBREF28 , and narrative similarity BIBREF3 . In the same vein, researchers in computational literary analysis have combined statistical techniques and linguistics theories to perform quantitative analysis on large narrative texts BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , but these attempts largely rely on techniques such as word counting, topic modeling, and naive Bayes classifiers and are therefore not able to capture the meaning of sentences or paragraphs BIBREF34 . While these works discover general patterns from multiple literary works, we are the first to use cutting-edge NLP techniques to engage with specific literary criticism about a single narrative. There has been other computational work that focuses on just a single book or a small number of books, much of it focused on network analysis: BIBREF35 extract character social networks from Alice in Wonderland, while BIBREF36 recover social networks from 19th century British novels. BIBREF37 disentangles multiple narrative threads within the novel Infinite Jest, while BIBREF7 provides several automated statistical methods for close reading and test them on the award-winning novel Cloud Atlas (2004). Compared to this work, we push further on modeling the content of the narrative by leveraging pretrained language models. Conclusion Our work takes a first step towards computationally engaging with literary criticism on a single book using state-of-the-art text representation methods. While we demonstrate that NLP techniques can be used to support literary analyses and obtain new insights, they also have clear limitations (e.g., in understanding abstract themes). As text representation methods become more powerful, we hope that (1) computational tools will become useful for analyzing novels with more conventional structures, and (2) literary criticism will be used as a testbed for evaluating representations. Acknowledgement We thank the anonymous reviewers for their insightful comments. Additionally, we thank Nader Akoury, Garrett Bernstein, Chenghao Lv, Ari Kobren, Kalpesh Krishna, Saumya Lal, Tu Vu, Zhichao Yang, Mengxue Zhang and the UMass NLP group for suggestions that improved the paper's clarity, coverage of related work, and analysis experiments.
We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence.
ed7a3e7fc1672f85a768613e7d1b419475950ab4
ed7a3e7fc1672f85a768613e7d1b419475950ab4_0
Q: Does this approach perform better in the multi-domain or single-domain setting? Text: Introduction A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system BIBREF7 . For each dialogue turn, a DST module takes a user utterance and the dialogue history as input, and outputs a belief estimate of the dialogue state. Then a machine action is decided based on the dialogue state according to a dialogue policy module, after which a machine response is generated. Traditionally, a dialogue state consists of a set of requests and joint goals, both of which are represented by a set of slot-value pairs (e.g. (request, phone), (area, north), (food, Japanese)) BIBREF8 . In a recently proposed multi-domain dialogue state tracking dataset, MultiWoZ BIBREF9 , a representation of dialogue state consists of a hierarchical structure of domain, slot, and value is proposed. This is a more practical scenario since dialogues often include multiple domains simultaneously. Many recently proposed DSTs BIBREF2 , BIBREF10 are based on pre-defined ontology lists that specify all possible slot values in advance. To generate a distribution over the candidate set, previous works often take each of the slot-value pairs as input for scoring. However, in real-world scenarios, it is often not practical to enumerate all possible slot value pairs and perform scoring from a large dynamically changing knowledge base BIBREF11 . To tackle this problem, a popular direction is to build a fixed-length candidate set that is dynamically updated throughout the dialogue development. cpt briefly summaries the inference time complexity of multiple state-of-the-art DST models following this direction. Since the inference complexity of all of previous model is at least proportional to the number of the slots, these models will struggle to scale to multi-domain datasets with much larger numbers of pre-defined slots. In this work, we formulate the dialogue state tracking task as a sequence generation problem, instead of formulating the task as a pair-wise prediction problem as in existing work. We propose the COnditional MEmory Relation Network (COMER), a scalable and accurate dialogue state tracker that has a constant inference time complexity. Specifically, our model consists of an encoder-decoder network with a hierarchically stacked decoder to first generate the slot sequences in the belief state and then for each slot generate the corresponding value sequences. The parameters are shared among all of our decoders for the scalability of the depth of the hierarchical structure of the belief states. COMER applies BERT contextualized word embeddings BIBREF12 and BPE BIBREF13 for sequence encoding to ensure the uniqueness of the representations of the unseen words. The word embeddings for sequence generation are initialized and fixed with the static word embeddings generated from BERT to have the potential of generating unseen words. Motivation f1 shows a multi-domain dialogue in which the user wants the system to first help book a train and then reserve a hotel. For each turn, the DST will need to track the slot-value pairs (e.g. (arrive by, 20:45)) representing the user goals as well as the domain that the slot-value pairs belongs to (e.g. train, hotel). Instead of representing the belief state via a hierarchical structure, one can also combine the domain and slot together to form a combined slot-value pair (e.g. 
(train; arrive by, 20:45) where the combined slot is “train; arrive by"), which ignores the subordination relationship between the domain and the slots. A typical fallacy in dialogue state tracking datasets is that they make an assumption that the slot in a belief state can only be mapped to a single value in a dialogue turn. We call this the single value assumption. Figure 2 shows an example of this fallacy from the WoZ2.0 dataset: Based on the belief state label (food, seafood), it will be impossible for the downstream module in the dialogue system to generate sample responses that return information about Chinese restaurants. A correct representation of the belief state could be (food, seafood $>$ chinese). This would tell the system to first search the database for information about seafood and then Chinese restaurants. The logical operator “ $>$ " indicates which retrieved information should have a higher priority to be returned to the user. Thus we are interested in building DST modules capable of generating structured sequences, since this kind of sequence representation of the value is critical for accurately capturing the belief states of a dialogue. Hierarchical Sequence Generation for DST Given a dialogue $D$ which consists of $T$ turns of user utterances and system actions, our target is to predict the state at each turn. Different from previous methods which formulate multi-label state prediction as a collection of binary prediction problems, COMER adapts the task into a sequence generation problem via a Seq2Seq framework. As shown in f3, COMER consists of three encoders and three hierarchically stacked decoders. We propose a novel Conditional Memory Relation Decoder (CMRD) for sequence decoding. Each encoder includes an embedding layer and a BiLSTM. The encoders take in the user utterance, the previous system actions, and the previous belief states at the current turn, and encodes them into the embedding space. The user encoder and the system encoder use the fixed BERT model as the embedding layer. Since the slot value pairs are un-ordered set elements of a domain in the belief states, we first order the sequence of domain according to their frequencies as they appear in the training set BIBREF14 , and then order the slot value pairs in the domain according to the slot's frequencies of as they appear in a domain. After the sorting of the state elements, We represent the belief states following the paradigm: (Domain1- Slot1, Value1; Slot2, Value2; ... Domain2- Slot1, Value1; ...) for a more concise representation compared with the nested tuple representation. All the CMRDs take the same representations from the system encoder, user encoder and the belief encoder as part of the input. In the procedure of hierarchical sequence generation, the first CMRD takes a zero vector for its condition input $\mathbf {c}$ , and generates a sequence of the domains, $D$ , as well as the hidden representation of domains $H_D$ . For each $d$ in $D$ , the second CMRD then takes the corresponding $h_d$ as the condition input and generates the slot sequence $S_d$ , and representations, $H_{S,d}$ . Then for each $s$ in $S$ , the third CMRD generates the value sequence $D$0 based on the corresponding $D$1 . We update the belief state with the new $D$2 pairs and perform the procedure iteratively until a dialogue is completed. All the CMR decoders share all of their parameters. 
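The hierarchical generation procedure above can be summarized with the following schematic sketch, which assumes a single shared decoder callable cmrd(condition, H_B, H_a, H_u) returning a token sequence together with its hidden vectors; the name and signature are illustrative, not the authors' API.

```python
import torch

def generate_belief_state(cmrd, H_B, H_a, H_u, d_m=512):
    zero_cond = torch.zeros(d_m)                    # condition for the domain level
    domains, H_D = cmrd(zero_cond, H_B, H_a, H_u)   # level 1: domain sequence
    state = {}
    for d, h_d in zip(domains, H_D):
        slots, H_S = cmrd(h_d, H_B, H_a, H_u)       # level 2: slots, conditioned on the domain
        state[d] = {}
        for s, h_s in zip(slots, H_S):
            values, _ = cmrd(h_s, H_B, H_a, H_u)    # level 3: value sequence, conditioned on the slot
            state[d][s] = values
    return state
```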
Since our model generates domains and slots instead of taking pre-defined slots as inputs, and the number of domains and slots generated each turn is only related to the complexity of the contents covered in a specific dialogue, the inference time complexity of COMER is $O(1)$ with respect to the number of pre-defined slots and values. Encoding Module Let $X$ represent a user utterance or system transcript consisting of a sequence of words $\lbrace w_1,\ldots ,w_T\rbrace $ . The encoder first passes the sequence $\lbrace \mathit {[CLS]},w_1,\ldots ,w_T,\mathit {[SEP]}\rbrace $ into a pre-trained BERT model and obtains its contextual embeddings $E_{X}$ . Specifically, we leverage the output of all layers of BERT and take the average to obtain the contextual embeddings. For each domain/slot appeared in the training set, if it has more than one word, such as `price range', `leave at', etc., we feed it into BERT and take the average of the word vectors to form the extra slot embedding $E_{s}$ . In this way, we map each domain/slot to a fixed embedding, which allows us to generate a domain/slot as a whole instead of a token at each time step of domain/slot sequence decoding. We also construct a static vocabulary embedding $E_{v}$ by feeding each token in the BERT vocabulary into BERT. The final static word embedding $E$ is the concatenation of the $E_{v}$ and $E_{s}$ . After we obtain the contextual embeddings for the user utterance, system action, and the static embeddings for the previous belief state, we feed each of them into a Bidirectional LSTM BIBREF15 . $$\begin{aligned} \mathbf {h}_{a_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{a_t}}, \mathbf {h}_{a_{t-1}}) \\ \mathbf {h}_{u_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{u_t}}, \mathbf {h}_{u_{t-1}}) \\ \mathbf {h}_{b_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{b_t}}, \mathbf {h}_{b_{t-1}}) \\ \mathbf {h}_{a_0} & = \mathbf {h}_{u_0} = \mathbf {h}_{b_0} = c_{0}, \\ \end{aligned}$$ (Eq. 7) where $c_{0}$ is the zero-initialized hidden state for the BiLSTM. The hidden size of the BiLSTM is $d_m/2$ . We concatenate the forward and the backward hidden representations of each token from the BiLSTM to obtain the token representation $\mathbf {h}_{k_t}\in R^{d_m}$ , $k\in \lbrace a,u,b\rbrace $ at each time step $t$ . The hidden states of all time steps are concatenated to obtain the final representation of $H_{k}\in R^{T \times d_m}, k \in \lbrace a,u,B\rbrace $ . The parameters are shared between all of the BiLSTMs. Conditional Memory Relation Decoder Inspired by Residual Dense Networks BIBREF16 , End-to-End Memory Networks BIBREF17 and Relation Networks BIBREF18 , we here propose the Conditional Memory Relation Decoder (CMRD). Given a token embedding, $\mathbf {e}_x$ , CMRD outputs the next token, $s$ , and the hidden representation, $h_s$ , with the hierarchical memory access of different encoded information sources, $H_B$ , $H_a$ , $H_u$ , and the relation reasoning under a certain given condition $\mathbf {c}$ , $ \mathbf {s}, \mathbf {h}_s= \textrm {CMRD}(\mathbf {e}_x, \mathbf {c}, H_B, H_a, H_u), $ the final output matrices $S,H_s \in R^{l_s\times d_m}$ are concatenations of all generated $\mathbf {s}$ and $\mathbf {h}_s$ (respectively) along the sequence length dimension, where $d_m$ is the model size, and $l_s$ is the generated sequence length. The general structure of the CMR decoder is shown in Figure 4 . 
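A brief sketch of the encoding step from the Encoding Module description above, using random tensors as stand-ins for the BERT layer outputs; the dimensions follow the stated setup (embedding size 1024, model size 512), and the snippet is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

d_e, d_m, T = 1024, 512, 12                       # embedding size, model size, sequence length
bert_layers = torch.randn(25, T, d_e)             # stand-in for all BERT-large layer outputs
e_x = bert_layers.mean(dim=0)                     # average over all layers -> (T, d_e)

bilstm = nn.LSTM(input_size=d_e, hidden_size=d_m // 2,
                 batch_first=True, bidirectional=True)
h, _ = bilstm(e_x.unsqueeze(0))                   # (1, T, d_m): forward/backward states concatenated
```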
Note that the CMR decoder can support additional memory sources by adding the residual connection and the attention block, but here we only show the structure with three sources: belief state representation ( $H_B$ ), system transcript representation ( $H_a$ ), and user utterance representation ( $H_u$ ), corresponding to a dialogue state tracking scenario. Since we share the parameters between all of the decoders, thus CMRD is actually a 2-dimensional auto-regressive model with respect to both the condition generation and the sequence generation task. At each time step $t$ , the CMR decoder first embeds the token $x_t$ with a fixed token embedding $E\in R^{d_e\times d_v}$ , where $d_e$ is the embedding size and $d_v$ is the vocabulary size. The initial token $x_0$ is “[CLS]". The embedded vector $\textbf {e}_{x_t}$ is then encoded with an LSTM, which emits a hidden representation $\textbf {h}_0 \in R^{d_m}$ , $ \textbf {h}_0= \textrm {LSTM}(\textbf {e}_{x_t},\textbf {q}_{t-1}). $ where $\textbf {q}_t$ is the hidden state of the LSTM. $\textbf {q}_0$ is initialized with an average of the hidden states of the belief encoder, the system encoder and the user encoder which produces $H_B$ , $H_a$ , $H_u$ respectively. $\mathbf {h}_0$ is then summed (element-wise) with the condition representation $\mathbf {c}\in R^{d_m}$ to produce $\mathbf {h}_1$ , which is (1) fed into the attention module; (2) used for residual connection; and (3) concatenated with other $\mathbf {h}_i$ , ( $i>1$ ) to produce the concatenated working memory, $\mathbf {r_0}$ , for relation reasoning, $ \mathbf {h}_1 & =\mathbf {h}_0+\mathbf {c},\\ \mathbf {h}_2 & =\mathbf {h}_1+\text{Attn}_{\text{belief}}(\mathbf {h}_1,H_e),\\ \mathbf {h}_3 & = \mathbf {h}_2+\text{Attn}_{\text{sys}}(\mathbf {h}_2,H_a),\\ \mathbf {h}_4 & = \mathbf {h}_3+\text{Attn}_{\text{usr}}(\mathbf {h}_3,H_u),\\ \mathbf {r} & = \mathbf {h}_1\oplus \mathbf {h}_2\oplus \mathbf {h}_3\oplus \mathbf {h}_4 \in R^{4d_m}, $ where $\text{Attn}_k$ ( $k\in \lbrace \text{belief}, \text{sys},\text{usr}\rbrace $ ) are the attention modules applied respectively to $H_B$ , $H_a$ , $H_u$ , and $\oplus $ means the concatenation operator. The gradients are blocked for $ \mathbf {h}_1,\mathbf {h}_2,\mathbf {h}_3$ during the back-propagation stage, since we only need them to work as the supplementary memories for the relation reasoning followed. The attention module takes a vector, $\mathbf {h}\in R^{d_m}$ , and a matrix, $H\in R^{d_m\times l}$ as input, where $l$ is the sequence length of the representation, and outputs $\mathbf {h}_a$ , a weighted sum of the column vectors in $H$ . $ \mathbf {a} & =W_1^T\mathbf {h}+\mathbf {b}_1& &\in R^{d_m},\\ \mathbf {c} &=\text{softmax}(H^Ta)& &\in R^l,\\ \mathbf {h} &=H\mathbf {c}& &\in R^{d_m},\\ \mathbf {h}_a &=W_2^T\mathbf {h}+\mathbf {b}_2& &\in R^{d_m}, $ where the weights $W_1\in R^{d_m \times d_m}$ , $W_2\in R^{d_m \times d_m}$ and the bias $b_1\in R^{d_m}$ , $b_2\in R^{d_m}$ are the learnable parameters. The order of the attention modules, i.e., first attend to the system and the user and then the belief, is decided empirically. We can interpret this hierarchical structure as the internal order for the memory processing, since from the daily life experience, people tend to attend to the most contemporary memories (system/user utterance) first and then attend to the older history (belief states). All of the parameters are shared between the attention modules. 
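The attention module and the hierarchical residual stack described above can be sketched as follows, with random tensors standing in for the encoder outputs H_B, H_a, H_u and parameter shapes following the equations; this is an illustrative sketch, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_m, l = 512, 20

class Attn(nn.Module):
    def __init__(self, d_m):
        super().__init__()
        self.lin1 = nn.Linear(d_m, d_m)   # W_1, b_1
        self.lin2 = nn.Linear(d_m, d_m)   # W_2, b_2

    def forward(self, h, H):              # h: (d_m,), H: (d_m, l)
        a = self.lin1(h)                  # (d_m,)
        c = F.softmax(H.t() @ a, dim=0)   # (l,) attention weights
        h_att = H @ c                     # (d_m,) weighted sum of the columns of H
        return self.lin2(h_att)

attn = Attn(d_m)                          # parameters shared across memory sources
H_B, H_a, H_u = (torch.randn(d_m, l) for _ in range(3))
h0, c = torch.randn(d_m), torch.randn(d_m)

h1 = h0 + c                               # add the condition
h2 = h1 + attn(h1, H_B)                   # attend to the belief memory
h3 = h2 + attn(h2, H_a)                   # attend to the system memory
h4 = h3 + attn(h3, H_u)                   # attend to the user memory
r = torch.cat([h1, h2, h3, h4])           # (4*d_m,) working memory for relation reasoning
```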
The concatenated working memory, $\mathbf {r}_0$ , is then fed into a Multi-Layer Perceptron (MLP) with four layers, $ \mathbf {r}_1 & =\sigma (W_1^T\mathbf {r}_0+\mathbf {b}_1),\\ \mathbf {r}_2 & =\sigma (W_2^T\mathbf {r}_1+\mathbf {b}_2),\\ \mathbf {r}_3 & = \sigma (W_3^T\mathbf {r}_2+\mathbf {b}_3),\\ \mathbf {h}_s & = \sigma (W_4^T\mathbf {r}_3+\mathbf {b}_4), $ where $\sigma $ is a non-linear activation, and the weights $W_1 \in R^{4d_m \times d_m}$ , $W_i \in R^{d_m \times d_m}$ and the bias $b_1 \in R^{d_m}$ , $b_i \in R^{d_m}$ are learnable parameters, and $2\le i\le 4$ . The number of layers for the MLP is decided by the grid search. The hidden representation of the next token, $\mathbf {h}_s$ , is then (1) emitted out of the decoder as a representation; and (2) fed into a dropout layer with drop rate $p$ , and a linear layer to generate the next token, $ \mathbf {h}_k & =\text{dropout}(\mathbf {h}_s)& &\in R^{d_m},\\ \mathbf {h}_o & =W_k^T\mathbf {h}_k+\mathbf {b}_k& &\in R^{d_e},\\ \mathbf {p}_s & =\text{softmax}(E^T\mathbf {h}_o)& &\in R^{d_v},\\ s & =\text{argmax}(\mathbf {p}_s)& &\in R, $ where the weight $W_k\in R^{d_m \times d_e}$ and the bias $b_k\in R^{d_e}$ are learnable parameters. Since $d_e$ is the embedding size and the model parameters are independent of the vocabulary size, the CMR decoder can make predictions on a dynamic vocabulary and implicitly supports the generation of unseen words. When training the model, we minimize the cross-entropy loss between the output probabilities, $\mathbf {p}_s$ , and the given labels. Experimental Setting We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. The statistics of these two datsets are shown in Table 2 . Based on the statistics from these two datasets, we can calculate the theoretical Inference Time Multiplier (ITM), $K$ , as a metric of scalability. Given the inference time complexity, ITM measures how many times a model will be slower when being transferred from the WoZ2.0 dataset, $d_1$ , to the MultiWoZ dataset, $d_2$ , $ K= h(t)h(s)h(n)h(m)\\ $ $ h(x)=\left\lbrace \begin{array}{lcl} 1 & &O(x)=O(1),\\ \frac{x_{d_2}}{x_{d_1}}& & \text{otherwise},\\ \end{array}\right. $ where $O(x)$ means the Inference Time Complexity (ITC) of the variable $x$ . For a model having an ITC of $O(1)$ with respect to the number of slots $n$ , and values $m$ , the ITM will be a multiplier of 2.15x, while for an ITC of $O(n)$ , it will be a multiplier of 25.1, and 1,143 for $O(mn)$ . As a convention, the metric of joint goal accuracy is used to compare our model to previous work. The joint goal accuracy only regards the model making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided. This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system. 
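The relation-reasoning MLP and the output projection described at the start of this passage can be sketched as below; E is a random stand-in for the fixed static embedding matrix, the shapes follow the equations, and dropout is shown only for completeness.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_m, d_e, d_v, p = 512, 1024, 30000, 0.5
mlp = nn.Sequential(
    nn.Linear(4 * d_m, d_m), nn.ReLU(),
    nn.Linear(d_m, d_m), nn.ReLU(),
    nn.Linear(d_m, d_m), nn.ReLU(),
    nn.Linear(d_m, d_m), nn.ReLU(),       # yields h_s, the next-token representation
)
out_proj = nn.Linear(d_m, d_e)            # W_k, b_k
E = torch.randn(d_e, d_v)                 # stand-in for the fixed token embedding matrix

r0 = torch.randn(4 * d_m)                 # concatenated working memory
h_s = mlp(r0)
h_o = out_proj(F.dropout(h_s, p=p))       # dropout shown in training mode for shape illustration
p_s = F.softmax(E.t() @ h_o, dim=0)       # distribution over a dynamic vocabulary
s = int(p_s.argmax())                     # next token id
```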
In this work, the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement. Implementation Details We use the $\text{BERT}_\text{large}$ model for both contextual and static embedding generation. All LSTMs in the model are stacked with 2 layers, and only the output of the last layer is taken as a hidden representation. ReLU non-linearity is used for the activation function, $\sigma $ . The hyper-parameters of our model are identical for both the WoZ2.0 and the MultiwoZ datasets: dropout rate $p=0.5$ , model size $d_m=512$ , embedding size $d_e=1024$ . For training on WoZ2.0, the model is trained with a batch size of 32 and the ADAM optimizer BIBREF21 for 150 epochs, while for MultiWoZ, the AMSGrad optimizer BIBREF22 and a batch size of 16 is adopted for 15 epochs of training. For both optimizers, we use a learning rate of 0.0005 with a gradient clip of 2.0. We initialize all weights in our model with Kaiming initialization BIBREF23 and adopt zero initialization for the bias. All experiments are conducted on a single NVIDIA GTX 1080Ti GPU. Results To measure the actual inference time multiplier of our model, we evaluate the runtime of the best-performing models on the validation sets of both the WoZ2.0 and MultiWoZ datasets. During evaluation, we set the batch size to 1 to avoid the influence of data parallelism and sequence padding. On the validation set of WoZ2.0, we obtain a runtime of 65.6 seconds, while on MultiWoZ, the runtime is 835.2 seconds. Results are averaged across 5 runs. Considering that the validation set of MultiWoZ is 5 times larger than that of WoZ2.0, the actual inference time multiplier is 2.54 for our model. Since the actual inference time multiplier roughly of the same magnitude as the theoretical value of 2.15, we can confirm empirically that we have the $O(1)$ inference time complexity and thus obtain full scalability to the number of slots and values pre-defined in an ontology. c compares our model with the previous state-of-the-art on both the WoZ2.0 test set and the MultiWoZ test set. For the WoZ2.0 dataset, we maintain performance at the level of the state-of-the-art, with a marginal drop of 0.3% compared with previous work. Considering the fact that WoZ2.0 is a relatively small dataset, this small difference does not represent a significant big performance drop. On the muli-domain dataset, MultiWoZ, our model achieves a joint goal accuracy of 45.72%, which is significant better than most of the previous models other than TRADE which applies the copy mechanism and gains better generalization ability on named entity coping. Ablation Study To prove the effectiveness of our structure of the Conditional Memory Relation Decoder (CMRD), we conduct ablation experiments on the WoZ2.0 dataset. We observe an accuracy drop of 1.95% after removing residual connections and the hierarchical stack of our attention modules. This proves the effectiveness of our hierarchical attention design. After the MLP is replaced with a linear layer of hidden size 512 and the ReLU activation function, the accuracy further drops by 3.45%. This drop is partly due to the reduction of the number of the model parameters, but it also proves that stacking more layers in an MLP can improve the relational reasoning performance given a concatenation of multiple representations from different sources. We also conduct the ablation study on the MultiWoZ dataset for a more precise analysis on the hierarchical generation process. 
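As a quick sanity check on the runtime figures reported above, assuming the actual inference time multiplier is the per-example runtime ratio between the two validation sets and using the rounded 5x size ratio quoted in the text:

```python
runtime_woz, runtime_multiwoz = 65.6, 835.2   # seconds on the validation sets, batch size 1
size_ratio = 5                                # MultiWoZ vs. WoZ2.0 validation size (rounded, per the text)
actual_itm = (runtime_multiwoz / size_ratio) / runtime_woz
print(round(actual_itm, 2))                   # ~2.55, consistent with the reported 2.54
```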
For joint domain accuracy, we calculate the probability that all domains generated in each turn are exactly matched with the labels provided. The joint domain-slot accuracy further calculate the probability that all domains and slots generated are correct, while the joint goal accuracy requires all the domains, slots and values generated are exactly matched with the labels. From abm, We can further calculate that given the correct slot prediction COMER has 83.52% chance to make the correct value prediction. While COMER has done great job on domain prediction (95.53%) and value prediction (83.52%), the accuracy of the slot prediction given the correct domain is only 57.30%. We suspect that this is because we only use the previous belief state to represent the dialogue history, and the inter-turn reasoning ability on the slot prediction suffers from the limited context and the accuracy is harmed due to the multi-turn mapping problem BIBREF4 . We can also see that the JDS Acc. has an absolute boost of 5.48% when we switch from the combined slot representation to the nested tuple representation. This is because the subordinate relationship between the domains and the slots can be captured by the hierarchical sequence generation, while this relationship is missed when generating the domain and slot together via the combined slot representation. Qualitative Analysis f5 shows an example of the belief state prediction result in one turn of a dialogue on the MultiWoZ test set. The visualization includes the CMRD attention scores over the belief states, system transcript and user utterance during the decoding stage of the slot sequence. From the system attention (top right), since it is the first attention module and no previous context information is given, it can only find the information indicating the slot “departure” from the system utterance under the domain condition, and attend to the evidence “leaving” correctly during the generation step of “departure”. From the user attention, we can see that it captures the most helpful keywords that are necessary for correct prediction, such as “after" for “day" and “leave at”, “to" for “destination". Moreover, during the generation step of “departure”, the user attention successfully discerns that, based on the context, the word “leave” is not the evidence that need to be accumulated and choose to attend nothing in this step. For the belief attention, we can see that the belief attention module correctly attends to a previous slot for each generation step of a slot that has been presented in the previous state. For the generation step of the new slot “destination", since the previous state does not have the “destination" slot, the belief attention module only attends to the `-' mark after the `train' domain to indicate that the generated word should belong to this domain. Related Work Semi-scalable Belief Tracker BIBREF1 proposed an approach that can generate fixed-length candidate sets for each of the slots from the dialogue history. Although they only need to perform inference for a fixed number of values, they still need to iterate over all slots defined in the ontology to make a prediction for a given dialogue turn. In addition, their method needs an external language understanding module to extract the exact entities from a dialogue to form candidates, which will not work if the label value is an abstraction and does not have the exact match with the words in the dialogue. 
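The conditional figures quoted in the ablation discussion above can be related to the joint accuracies as follows, assuming each conditional accuracy is the ratio of successive joint accuracies; the joint domain-slot value is reconstructed here for illustration rather than quoted from the paper.

```python
joint_goal = 45.72         # % joint goal accuracy on MultiWoZ (reported)
joint_domain = 95.53       # % joint domain accuracy (reported)
value_given_slot = 83.52   # % value accuracy given correct slots (reported)

joint_domain_slot = joint_goal / (value_given_slot / 100)      # ~54.7% (reconstructed)
slot_given_domain = joint_domain_slot / joint_domain * 100     # ~57.3%, matching the quoted figure
print(round(joint_domain_slot, 2), round(slot_given_domain, 2))
```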
StateNet BIBREF3 achieves state-of-the-art performance with the property that its parameters are independent of the number of slot values in the candidate set, and it also supports online training or inference with dynamically changing slots and values. Given a slot that needs tracking, it only needs to perform inference once to make the prediction for a turn, but this also means that its inference time complexity is proportional to the number of slots. TRADE BIBREF4 achieves state-of-the-art performance on the MultiWoZ dataset by applying the copy mechanism for the value sequence generation. Since TRADE takes $n$ combinations of the domains and slots as the input, the inference time complexity of TRADE is $O(n)$ . The performance improvement achieved by TRADE is mainly due to the fact that it incorporates the copy mechanism that can boost the accuracy on the ‘name’ slot, which mainly needs the ability in copying names from the dialogue history. However, TRADE does not report its performance on the WoZ2.0 dataset which does not have the ‘name’ slot. DSTRead BIBREF6 formulate the dialogue state tracking task as a reading comprehension problem by asking slot specified questions to the BERT model and find the answer span in the dialogue history for each of the pre-defined combined slot. Thus its inference time complexity is still $O(n)$ . This method suffers from the fact that its generation vocabulary is limited to the words occurred in the dialogue history, and it has to do a manual combination strategy with another joint state tracking model on the development set to achieve better performance. Contextualized Word Embedding (CWE) was first proposed by BIBREF25 . Based on the intuition that the meaning of a word is highly correlated with its context, CWE takes the complete context (sentences, passages, etc.) as the input, and outputs the corresponding word vectors that are unique under the given context. Recently, with the success of language models (e.g. BIBREF12 ) that are trained on large scale data, contextualizeds word embedding have been further improved and can achieve the same performance compared to (less flexible) finely-tuned pipelines. Sequence Generation Models. Recently, sequence generation models have been successfully applied in the realm of multi-label classification (MLC) BIBREF14 . Different from traditional binary relevance methods, they proposed a sequence generation model for MLC tasks which takes into consideration the correlations between labels. Specifically, the model follows the encoder-decoder structure with an attention mechanism BIBREF26 , where the decoder generates a sequence of labels. Similar to language modeling tasks, the decoder output at each time step will be conditioned on the previous predictions during generation. Therefore the correlation between generated labels is captured by the decoder. Conclusion In this work, we proposed the Conditional Memory Relation Network (COMER), the first dialogue state tracking model that has a constant inference time complexity with respect to the number of domains, slots and values pre-defined in an ontology. Besides its scalability, the joint goal accuracy of our model also achieve the similar performance compared with the state-of-the-arts on both the MultiWoZ dataset and the WoZ dataset. 
Due to the flexibility of our hierarchical encoder-decoder framework and the CMR decoder, abundant future research directions remain, such as applying the transformer structure, incorporating an open vocabulary and a copy mechanism for explicit generation of unseen words, and inventing a better dialogue history access mechanism to accommodate efficient inter-turn reasoning. Acknowledgements. This work is partly supported by NSF #1750063. We thank all the reviewers for their constructive suggestions. We also want to thank Zhuowen Tu and Shengnan Zhang for the early discussions of the project.
single-domain setting
72ceeb58e783e3981055c70a3483ea706511fac3
72ceeb58e783e3981055c70a3483ea706511fac3_0
Q: What are the performance metrics used? Text: Introduction A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system BIBREF7 . For each dialogue turn, a DST module takes a user utterance and the dialogue history as input, and outputs a belief estimate of the dialogue state. Then a machine action is decided based on the dialogue state according to a dialogue policy module, after which a machine response is generated. Traditionally, a dialogue state consists of a set of requests and joint goals, both of which are represented by a set of slot-value pairs (e.g. (request, phone), (area, north), (food, Japanese)) BIBREF8 . In a recently proposed multi-domain dialogue state tracking dataset, MultiWoZ BIBREF9 , a representation of dialogue state consists of a hierarchical structure of domain, slot, and value is proposed. This is a more practical scenario since dialogues often include multiple domains simultaneously. Many recently proposed DSTs BIBREF2 , BIBREF10 are based on pre-defined ontology lists that specify all possible slot values in advance. To generate a distribution over the candidate set, previous works often take each of the slot-value pairs as input for scoring. However, in real-world scenarios, it is often not practical to enumerate all possible slot value pairs and perform scoring from a large dynamically changing knowledge base BIBREF11 . To tackle this problem, a popular direction is to build a fixed-length candidate set that is dynamically updated throughout the dialogue development. cpt briefly summaries the inference time complexity of multiple state-of-the-art DST models following this direction. Since the inference complexity of all of previous model is at least proportional to the number of the slots, these models will struggle to scale to multi-domain datasets with much larger numbers of pre-defined slots. In this work, we formulate the dialogue state tracking task as a sequence generation problem, instead of formulating the task as a pair-wise prediction problem as in existing work. We propose the COnditional MEmory Relation Network (COMER), a scalable and accurate dialogue state tracker that has a constant inference time complexity. Specifically, our model consists of an encoder-decoder network with a hierarchically stacked decoder to first generate the slot sequences in the belief state and then for each slot generate the corresponding value sequences. The parameters are shared among all of our decoders for the scalability of the depth of the hierarchical structure of the belief states. COMER applies BERT contextualized word embeddings BIBREF12 and BPE BIBREF13 for sequence encoding to ensure the uniqueness of the representations of the unseen words. The word embeddings for sequence generation are initialized and fixed with the static word embeddings generated from BERT to have the potential of generating unseen words. Motivation f1 shows a multi-domain dialogue in which the user wants the system to first help book a train and then reserve a hotel. For each turn, the DST will need to track the slot-value pairs (e.g. (arrive by, 20:45)) representing the user goals as well as the domain that the slot-value pairs belongs to (e.g. train, hotel). Instead of representing the belief state via a hierarchical structure, one can also combine the domain and slot together to form a combined slot-value pair (e.g. 
(train; arrive by, 20:45) where the combined slot is “train; arrive by"), which ignores the subordination relationship between the domain and the slots. A typical fallacy in dialogue state tracking datasets is that they make an assumption that the slot in a belief state can only be mapped to a single value in a dialogue turn. We call this the single value assumption. Figure 2 shows an example of this fallacy from the WoZ2.0 dataset: Based on the belief state label (food, seafood), it will be impossible for the downstream module in the dialogue system to generate sample responses that return information about Chinese restaurants. A correct representation of the belief state could be (food, seafood $>$ chinese). This would tell the system to first search the database for information about seafood and then Chinese restaurants. The logical operator “ $>$ " indicates which retrieved information should have a higher priority to be returned to the user. Thus we are interested in building DST modules capable of generating structured sequences, since this kind of sequence representation of the value is critical for accurately capturing the belief states of a dialogue. Hierarchical Sequence Generation for DST Given a dialogue $D$ which consists of $T$ turns of user utterances and system actions, our target is to predict the state at each turn. Different from previous methods which formulate multi-label state prediction as a collection of binary prediction problems, COMER adapts the task into a sequence generation problem via a Seq2Seq framework. As shown in f3, COMER consists of three encoders and three hierarchically stacked decoders. We propose a novel Conditional Memory Relation Decoder (CMRD) for sequence decoding. Each encoder includes an embedding layer and a BiLSTM. The encoders take in the user utterance, the previous system actions, and the previous belief states at the current turn, and encodes them into the embedding space. The user encoder and the system encoder use the fixed BERT model as the embedding layer. Since the slot value pairs are un-ordered set elements of a domain in the belief states, we first order the sequence of domain according to their frequencies as they appear in the training set BIBREF14 , and then order the slot value pairs in the domain according to the slot's frequencies of as they appear in a domain. After the sorting of the state elements, We represent the belief states following the paradigm: (Domain1- Slot1, Value1; Slot2, Value2; ... Domain2- Slot1, Value1; ...) for a more concise representation compared with the nested tuple representation. All the CMRDs take the same representations from the system encoder, user encoder and the belief encoder as part of the input. In the procedure of hierarchical sequence generation, the first CMRD takes a zero vector for its condition input $\mathbf {c}$ , and generates a sequence of the domains, $D$ , as well as the hidden representation of domains $H_D$ . For each $d$ in $D$ , the second CMRD then takes the corresponding $h_d$ as the condition input and generates the slot sequence $S_d$ , and representations, $H_{S,d}$ . Then for each $s$ in $S$ , the third CMRD generates the value sequence $D$0 based on the corresponding $D$1 . We update the belief state with the new $D$2 pairs and perform the procedure iteratively until a dialogue is completed. All the CMR decoders share all of their parameters. 
Since our model generates domains and slots instead of taking pre-defined slots as inputs, and the number of domains and slots generated each turn is only related to the complexity of the contents covered in a specific dialogue, the inference time complexity of COMER is $O(1)$ with respect to the number of pre-defined slots and values. Encoding Module Let $X$ represent a user utterance or system transcript consisting of a sequence of words $\lbrace w_1,\ldots ,w_T\rbrace $ . The encoder first passes the sequence $\lbrace \mathit {[CLS]},w_1,\ldots ,w_T,\mathit {[SEP]}\rbrace $ into a pre-trained BERT model and obtains its contextual embeddings $E_{X}$ . Specifically, we leverage the output of all layers of BERT and take the average to obtain the contextual embeddings. For each domain/slot appeared in the training set, if it has more than one word, such as `price range', `leave at', etc., we feed it into BERT and take the average of the word vectors to form the extra slot embedding $E_{s}$ . In this way, we map each domain/slot to a fixed embedding, which allows us to generate a domain/slot as a whole instead of a token at each time step of domain/slot sequence decoding. We also construct a static vocabulary embedding $E_{v}$ by feeding each token in the BERT vocabulary into BERT. The final static word embedding $E$ is the concatenation of the $E_{v}$ and $E_{s}$ . After we obtain the contextual embeddings for the user utterance, system action, and the static embeddings for the previous belief state, we feed each of them into a Bidirectional LSTM BIBREF15 . $$\begin{aligned} \mathbf {h}_{a_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{a_t}}, \mathbf {h}_{a_{t-1}}) \\ \mathbf {h}_{u_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{u_t}}, \mathbf {h}_{u_{t-1}}) \\ \mathbf {h}_{b_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{b_t}}, \mathbf {h}_{b_{t-1}}) \\ \mathbf {h}_{a_0} & = \mathbf {h}_{u_0} = \mathbf {h}_{b_0} = c_{0}, \\ \end{aligned}$$ (Eq. 7) where $c_{0}$ is the zero-initialized hidden state for the BiLSTM. The hidden size of the BiLSTM is $d_m/2$ . We concatenate the forward and the backward hidden representations of each token from the BiLSTM to obtain the token representation $\mathbf {h}_{k_t}\in R^{d_m}$ , $k\in \lbrace a,u,b\rbrace $ at each time step $t$ . The hidden states of all time steps are concatenated to obtain the final representation of $H_{k}\in R^{T \times d_m}, k \in \lbrace a,u,B\rbrace $ . The parameters are shared between all of the BiLSTMs. Conditional Memory Relation Decoder Inspired by Residual Dense Networks BIBREF16 , End-to-End Memory Networks BIBREF17 and Relation Networks BIBREF18 , we here propose the Conditional Memory Relation Decoder (CMRD). Given a token embedding, $\mathbf {e}_x$ , CMRD outputs the next token, $s$ , and the hidden representation, $h_s$ , with the hierarchical memory access of different encoded information sources, $H_B$ , $H_a$ , $H_u$ , and the relation reasoning under a certain given condition $\mathbf {c}$ , $ \mathbf {s}, \mathbf {h}_s= \textrm {CMRD}(\mathbf {e}_x, \mathbf {c}, H_B, H_a, H_u), $ the final output matrices $S,H_s \in R^{l_s\times d_m}$ are concatenations of all generated $\mathbf {s}$ and $\mathbf {h}_s$ (respectively) along the sequence length dimension, where $d_m$ is the model size, and $l_s$ is the generated sequence length. The general structure of the CMR decoder is shown in Figure 4 . 
Note that the CMR decoder can support additional memory sources by adding the residual connection and the attention block, but here we only show the structure with three sources: belief state representation ( $H_B$ ), system transcript representation ( $H_a$ ), and user utterance representation ( $H_u$ ), corresponding to a dialogue state tracking scenario. Since we share the parameters between all of the decoders, thus CMRD is actually a 2-dimensional auto-regressive model with respect to both the condition generation and the sequence generation task. At each time step $t$ , the CMR decoder first embeds the token $x_t$ with a fixed token embedding $E\in R^{d_e\times d_v}$ , where $d_e$ is the embedding size and $d_v$ is the vocabulary size. The initial token $x_0$ is “[CLS]". The embedded vector $\textbf {e}_{x_t}$ is then encoded with an LSTM, which emits a hidden representation $\textbf {h}_0 \in R^{d_m}$ , $ \textbf {h}_0= \textrm {LSTM}(\textbf {e}_{x_t},\textbf {q}_{t-1}). $ where $\textbf {q}_t$ is the hidden state of the LSTM. $\textbf {q}_0$ is initialized with an average of the hidden states of the belief encoder, the system encoder and the user encoder which produces $H_B$ , $H_a$ , $H_u$ respectively. $\mathbf {h}_0$ is then summed (element-wise) with the condition representation $\mathbf {c}\in R^{d_m}$ to produce $\mathbf {h}_1$ , which is (1) fed into the attention module; (2) used for residual connection; and (3) concatenated with other $\mathbf {h}_i$ , ( $i>1$ ) to produce the concatenated working memory, $\mathbf {r_0}$ , for relation reasoning, $ \mathbf {h}_1 & =\mathbf {h}_0+\mathbf {c},\\ \mathbf {h}_2 & =\mathbf {h}_1+\text{Attn}_{\text{belief}}(\mathbf {h}_1,H_e),\\ \mathbf {h}_3 & = \mathbf {h}_2+\text{Attn}_{\text{sys}}(\mathbf {h}_2,H_a),\\ \mathbf {h}_4 & = \mathbf {h}_3+\text{Attn}_{\text{usr}}(\mathbf {h}_3,H_u),\\ \mathbf {r} & = \mathbf {h}_1\oplus \mathbf {h}_2\oplus \mathbf {h}_3\oplus \mathbf {h}_4 \in R^{4d_m}, $ where $\text{Attn}_k$ ( $k\in \lbrace \text{belief}, \text{sys},\text{usr}\rbrace $ ) are the attention modules applied respectively to $H_B$ , $H_a$ , $H_u$ , and $\oplus $ means the concatenation operator. The gradients are blocked for $ \mathbf {h}_1,\mathbf {h}_2,\mathbf {h}_3$ during the back-propagation stage, since we only need them to work as the supplementary memories for the relation reasoning followed. The attention module takes a vector, $\mathbf {h}\in R^{d_m}$ , and a matrix, $H\in R^{d_m\times l}$ as input, where $l$ is the sequence length of the representation, and outputs $\mathbf {h}_a$ , a weighted sum of the column vectors in $H$ . $ \mathbf {a} & =W_1^T\mathbf {h}+\mathbf {b}_1& &\in R^{d_m},\\ \mathbf {c} &=\text{softmax}(H^Ta)& &\in R^l,\\ \mathbf {h} &=H\mathbf {c}& &\in R^{d_m},\\ \mathbf {h}_a &=W_2^T\mathbf {h}+\mathbf {b}_2& &\in R^{d_m}, $ where the weights $W_1\in R^{d_m \times d_m}$ , $W_2\in R^{d_m \times d_m}$ and the bias $b_1\in R^{d_m}$ , $b_2\in R^{d_m}$ are the learnable parameters. The order of the attention modules, i.e., first attend to the system and the user and then the belief, is decided empirically. We can interpret this hierarchical structure as the internal order for the memory processing, since from the daily life experience, people tend to attend to the most contemporary memories (system/user utterance) first and then attend to the older history (belief states). All of the parameters are shared between the attention modules. 
The concatenated working memory, $\mathbf {r}_0$ , is then fed into a Multi-Layer Perceptron (MLP) with four layers, $ \mathbf {r}_1 & =\sigma (W_1^T\mathbf {r}_0+\mathbf {b}_1),\\ \mathbf {r}_2 & =\sigma (W_2^T\mathbf {r}_1+\mathbf {b}_2),\\ \mathbf {r}_3 & = \sigma (W_3^T\mathbf {r}_2+\mathbf {b}_3),\\ \mathbf {h}_s & = \sigma (W_4^T\mathbf {r}_3+\mathbf {b}_4), $ where $\sigma $ is a non-linear activation, and the weights $W_1 \in R^{4d_m \times d_m}$ , $W_i \in R^{d_m \times d_m}$ and the bias $b_1 \in R^{d_m}$ , $b_i \in R^{d_m}$ are learnable parameters, and $2\le i\le 4$ . The number of layers for the MLP is decided by the grid search. The hidden representation of the next token, $\mathbf {h}_s$ , is then (1) emitted out of the decoder as a representation; and (2) fed into a dropout layer with drop rate $p$ , and a linear layer to generate the next token, $ \mathbf {h}_k & =\text{dropout}(\mathbf {h}_s)& &\in R^{d_m},\\ \mathbf {h}_o & =W_k^T\mathbf {h}_k+\mathbf {b}_k& &\in R^{d_e},\\ \mathbf {p}_s & =\text{softmax}(E^T\mathbf {h}_o)& &\in R^{d_v},\\ s & =\text{argmax}(\mathbf {p}_s)& &\in R, $ where the weight $W_k\in R^{d_m \times d_e}$ and the bias $b_k\in R^{d_e}$ are learnable parameters. Since $d_e$ is the embedding size and the model parameters are independent of the vocabulary size, the CMR decoder can make predictions on a dynamic vocabulary and implicitly supports the generation of unseen words. When training the model, we minimize the cross-entropy loss between the output probabilities, $\mathbf {p}_s$ , and the given labels. Experimental Setting We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. The statistics of these two datsets are shown in Table 2 . Based on the statistics from these two datasets, we can calculate the theoretical Inference Time Multiplier (ITM), $K$ , as a metric of scalability. Given the inference time complexity, ITM measures how many times a model will be slower when being transferred from the WoZ2.0 dataset, $d_1$ , to the MultiWoZ dataset, $d_2$ , $ K= h(t)h(s)h(n)h(m)\\ $ $ h(x)=\left\lbrace \begin{array}{lcl} 1 & &O(x)=O(1),\\ \frac{x_{d_2}}{x_{d_1}}& & \text{otherwise},\\ \end{array}\right. $ where $O(x)$ means the Inference Time Complexity (ITC) of the variable $x$ . For a model having an ITC of $O(1)$ with respect to the number of slots $n$ , and values $m$ , the ITM will be a multiplier of 2.15x, while for an ITC of $O(n)$ , it will be a multiplier of 25.1, and 1,143 for $O(mn)$ . As a convention, the metric of joint goal accuracy is used to compare our model to previous work. The joint goal accuracy only regards the model making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided. This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system. 
In this work, the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement. Implementation Details We use the $\text{BERT}_\text{large}$ model for both contextual and static embedding generation. All LSTMs in the model are stacked with 2 layers, and only the output of the last layer is taken as a hidden representation. ReLU non-linearity is used for the activation function, $\sigma $ . The hyper-parameters of our model are identical for both the WoZ2.0 and the MultiwoZ datasets: dropout rate $p=0.5$ , model size $d_m=512$ , embedding size $d_e=1024$ . For training on WoZ2.0, the model is trained with a batch size of 32 and the ADAM optimizer BIBREF21 for 150 epochs, while for MultiWoZ, the AMSGrad optimizer BIBREF22 and a batch size of 16 is adopted for 15 epochs of training. For both optimizers, we use a learning rate of 0.0005 with a gradient clip of 2.0. We initialize all weights in our model with Kaiming initialization BIBREF23 and adopt zero initialization for the bias. All experiments are conducted on a single NVIDIA GTX 1080Ti GPU. Results To measure the actual inference time multiplier of our model, we evaluate the runtime of the best-performing models on the validation sets of both the WoZ2.0 and MultiWoZ datasets. During evaluation, we set the batch size to 1 to avoid the influence of data parallelism and sequence padding. On the validation set of WoZ2.0, we obtain a runtime of 65.6 seconds, while on MultiWoZ, the runtime is 835.2 seconds. Results are averaged across 5 runs. Considering that the validation set of MultiWoZ is 5 times larger than that of WoZ2.0, the actual inference time multiplier is 2.54 for our model. Since the actual inference time multiplier roughly of the same magnitude as the theoretical value of 2.15, we can confirm empirically that we have the $O(1)$ inference time complexity and thus obtain full scalability to the number of slots and values pre-defined in an ontology. c compares our model with the previous state-of-the-art on both the WoZ2.0 test set and the MultiWoZ test set. For the WoZ2.0 dataset, we maintain performance at the level of the state-of-the-art, with a marginal drop of 0.3% compared with previous work. Considering the fact that WoZ2.0 is a relatively small dataset, this small difference does not represent a significant big performance drop. On the muli-domain dataset, MultiWoZ, our model achieves a joint goal accuracy of 45.72%, which is significant better than most of the previous models other than TRADE which applies the copy mechanism and gains better generalization ability on named entity coping. Ablation Study To prove the effectiveness of our structure of the Conditional Memory Relation Decoder (CMRD), we conduct ablation experiments on the WoZ2.0 dataset. We observe an accuracy drop of 1.95% after removing residual connections and the hierarchical stack of our attention modules. This proves the effectiveness of our hierarchical attention design. After the MLP is replaced with a linear layer of hidden size 512 and the ReLU activation function, the accuracy further drops by 3.45%. This drop is partly due to the reduction of the number of the model parameters, but it also proves that stacking more layers in an MLP can improve the relational reasoning performance given a concatenation of multiple representations from different sources. We also conduct the ablation study on the MultiWoZ dataset for a more precise analysis on the hierarchical generation process. 
For joint domain accuracy, we calculate the probability that all domains generated in each turn are exactly matched with the labels provided. The joint domain-slot accuracy further calculate the probability that all domains and slots generated are correct, while the joint goal accuracy requires all the domains, slots and values generated are exactly matched with the labels. From abm, We can further calculate that given the correct slot prediction COMER has 83.52% chance to make the correct value prediction. While COMER has done great job on domain prediction (95.53%) and value prediction (83.52%), the accuracy of the slot prediction given the correct domain is only 57.30%. We suspect that this is because we only use the previous belief state to represent the dialogue history, and the inter-turn reasoning ability on the slot prediction suffers from the limited context and the accuracy is harmed due to the multi-turn mapping problem BIBREF4 . We can also see that the JDS Acc. has an absolute boost of 5.48% when we switch from the combined slot representation to the nested tuple representation. This is because the subordinate relationship between the domains and the slots can be captured by the hierarchical sequence generation, while this relationship is missed when generating the domain and slot together via the combined slot representation. Qualitative Analysis f5 shows an example of the belief state prediction result in one turn of a dialogue on the MultiWoZ test set. The visualization includes the CMRD attention scores over the belief states, system transcript and user utterance during the decoding stage of the slot sequence. From the system attention (top right), since it is the first attention module and no previous context information is given, it can only find the information indicating the slot “departure” from the system utterance under the domain condition, and attend to the evidence “leaving” correctly during the generation step of “departure”. From the user attention, we can see that it captures the most helpful keywords that are necessary for correct prediction, such as “after" for “day" and “leave at”, “to" for “destination". Moreover, during the generation step of “departure”, the user attention successfully discerns that, based on the context, the word “leave” is not the evidence that need to be accumulated and choose to attend nothing in this step. For the belief attention, we can see that the belief attention module correctly attends to a previous slot for each generation step of a slot that has been presented in the previous state. For the generation step of the new slot “destination", since the previous state does not have the “destination" slot, the belief attention module only attends to the `-' mark after the `train' domain to indicate that the generated word should belong to this domain. Related Work Semi-scalable Belief Tracker BIBREF1 proposed an approach that can generate fixed-length candidate sets for each of the slots from the dialogue history. Although they only need to perform inference for a fixed number of values, they still need to iterate over all slots defined in the ontology to make a prediction for a given dialogue turn. In addition, their method needs an external language understanding module to extract the exact entities from a dialogue to form candidates, which will not work if the label value is an abstraction and does not have the exact match with the words in the dialogue. 
StateNet BIBREF3 achieves state-of-the-art performance with the property that its parameters are independent of the number of slot values in the candidate set, and it also supports online training or inference with dynamically changing slots and values. Given a slot that needs tracking, it only needs to perform inference once to make the prediction for a turn, but this also means that its inference time complexity is proportional to the number of slots. TRADE BIBREF4 achieves state-of-the-art performance on the MultiWoZ dataset by applying the copy mechanism for the value sequence generation. Since TRADE takes $n$ combinations of the domains and slots as the input, the inference time complexity of TRADE is $O(n)$ . The performance improvement achieved by TRADE is mainly due to the fact that it incorporates the copy mechanism that can boost the accuracy on the ‘name’ slot, which mainly needs the ability in copying names from the dialogue history. However, TRADE does not report its performance on the WoZ2.0 dataset which does not have the ‘name’ slot. DSTRead BIBREF6 formulate the dialogue state tracking task as a reading comprehension problem by asking slot specified questions to the BERT model and find the answer span in the dialogue history for each of the pre-defined combined slot. Thus its inference time complexity is still $O(n)$ . This method suffers from the fact that its generation vocabulary is limited to the words occurred in the dialogue history, and it has to do a manual combination strategy with another joint state tracking model on the development set to achieve better performance. Contextualized Word Embedding (CWE) was first proposed by BIBREF25 . Based on the intuition that the meaning of a word is highly correlated with its context, CWE takes the complete context (sentences, passages, etc.) as the input, and outputs the corresponding word vectors that are unique under the given context. Recently, with the success of language models (e.g. BIBREF12 ) that are trained on large scale data, contextualizeds word embedding have been further improved and can achieve the same performance compared to (less flexible) finely-tuned pipelines. Sequence Generation Models. Recently, sequence generation models have been successfully applied in the realm of multi-label classification (MLC) BIBREF14 . Different from traditional binary relevance methods, they proposed a sequence generation model for MLC tasks which takes into consideration the correlations between labels. Specifically, the model follows the encoder-decoder structure with an attention mechanism BIBREF26 , where the decoder generates a sequence of labels. Similar to language modeling tasks, the decoder output at each time step will be conditioned on the previous predictions during generation. Therefore the correlation between generated labels is captured by the decoder. Conclusion In this work, we proposed the Conditional Memory Relation Network (COMER), the first dialogue state tracking model that has a constant inference time complexity with respect to the number of domains, slots and values pre-defined in an ontology. Besides its scalability, the joint goal accuracy of our model also achieve the similar performance compared with the state-of-the-arts on both the MultiWoZ dataset and the WoZ dataset. 
Due to the flexibility of our hierarchical encoder-decoder framework and the CMR decoder, abundant future research directions remain, such as applying the transformer structure, incorporating an open vocabulary and a copy mechanism for explicit generation of unseen words, and inventing a better dialogue history access mechanism to accommodate efficient inter-turn reasoning. Acknowledgements. This work is partly supported by NSF #1750063. We thank all the reviewers for their constructive suggestions. We also want to thank Zhuowen Tu and Shengnan Zhang for the early discussions of the project.
joint goal accuracy
9bfa46ad55136f2a365e090ce585fc012495393c
9bfa46ad55136f2a365e090ce585fc012495393c_0
Q: Which datasets are used to evaluate performance? Text: Introduction A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system BIBREF7 . For each dialogue turn, a DST module takes a user utterance and the dialogue history as input, and outputs a belief estimate of the dialogue state. Then a machine action is decided based on the dialogue state according to a dialogue policy module, after which a machine response is generated. Traditionally, a dialogue state consists of a set of requests and joint goals, both of which are represented by a set of slot-value pairs (e.g. (request, phone), (area, north), (food, Japanese)) BIBREF8 . In a recently proposed multi-domain dialogue state tracking dataset, MultiWoZ BIBREF9 , a representation of dialogue state consists of a hierarchical structure of domain, slot, and value is proposed. This is a more practical scenario since dialogues often include multiple domains simultaneously. Many recently proposed DSTs BIBREF2 , BIBREF10 are based on pre-defined ontology lists that specify all possible slot values in advance. To generate a distribution over the candidate set, previous works often take each of the slot-value pairs as input for scoring. However, in real-world scenarios, it is often not practical to enumerate all possible slot value pairs and perform scoring from a large dynamically changing knowledge base BIBREF11 . To tackle this problem, a popular direction is to build a fixed-length candidate set that is dynamically updated throughout the dialogue development. cpt briefly summaries the inference time complexity of multiple state-of-the-art DST models following this direction. Since the inference complexity of all of previous model is at least proportional to the number of the slots, these models will struggle to scale to multi-domain datasets with much larger numbers of pre-defined slots. In this work, we formulate the dialogue state tracking task as a sequence generation problem, instead of formulating the task as a pair-wise prediction problem as in existing work. We propose the COnditional MEmory Relation Network (COMER), a scalable and accurate dialogue state tracker that has a constant inference time complexity. Specifically, our model consists of an encoder-decoder network with a hierarchically stacked decoder to first generate the slot sequences in the belief state and then for each slot generate the corresponding value sequences. The parameters are shared among all of our decoders for the scalability of the depth of the hierarchical structure of the belief states. COMER applies BERT contextualized word embeddings BIBREF12 and BPE BIBREF13 for sequence encoding to ensure the uniqueness of the representations of the unseen words. The word embeddings for sequence generation are initialized and fixed with the static word embeddings generated from BERT to have the potential of generating unseen words. Motivation f1 shows a multi-domain dialogue in which the user wants the system to first help book a train and then reserve a hotel. For each turn, the DST will need to track the slot-value pairs (e.g. (arrive by, 20:45)) representing the user goals as well as the domain that the slot-value pairs belongs to (e.g. train, hotel). Instead of representing the belief state via a hierarchical structure, one can also combine the domain and slot together to form a combined slot-value pair (e.g. 
(train; arrive by, 20:45) where the combined slot is “train; arrive by"), which ignores the subordination relationship between the domain and the slots. A typical fallacy in dialogue state tracking datasets is that they make an assumption that the slot in a belief state can only be mapped to a single value in a dialogue turn. We call this the single value assumption. Figure 2 shows an example of this fallacy from the WoZ2.0 dataset: Based on the belief state label (food, seafood), it will be impossible for the downstream module in the dialogue system to generate sample responses that return information about Chinese restaurants. A correct representation of the belief state could be (food, seafood $>$ chinese). This would tell the system to first search the database for information about seafood and then Chinese restaurants. The logical operator “ $>$ " indicates which retrieved information should have a higher priority to be returned to the user. Thus we are interested in building DST modules capable of generating structured sequences, since this kind of sequence representation of the value is critical for accurately capturing the belief states of a dialogue. Hierarchical Sequence Generation for DST Given a dialogue $D$ which consists of $T$ turns of user utterances and system actions, our target is to predict the state at each turn. Different from previous methods which formulate multi-label state prediction as a collection of binary prediction problems, COMER adapts the task into a sequence generation problem via a Seq2Seq framework. As shown in f3, COMER consists of three encoders and three hierarchically stacked decoders. We propose a novel Conditional Memory Relation Decoder (CMRD) for sequence decoding. Each encoder includes an embedding layer and a BiLSTM. The encoders take in the user utterance, the previous system actions, and the previous belief states at the current turn, and encodes them into the embedding space. The user encoder and the system encoder use the fixed BERT model as the embedding layer. Since the slot value pairs are un-ordered set elements of a domain in the belief states, we first order the sequence of domain according to their frequencies as they appear in the training set BIBREF14 , and then order the slot value pairs in the domain according to the slot's frequencies of as they appear in a domain. After the sorting of the state elements, We represent the belief states following the paradigm: (Domain1- Slot1, Value1; Slot2, Value2; ... Domain2- Slot1, Value1; ...) for a more concise representation compared with the nested tuple representation. All the CMRDs take the same representations from the system encoder, user encoder and the belief encoder as part of the input. In the procedure of hierarchical sequence generation, the first CMRD takes a zero vector for its condition input $\mathbf {c}$ , and generates a sequence of the domains, $D$ , as well as the hidden representation of domains $H_D$ . For each $d$ in $D$ , the second CMRD then takes the corresponding $h_d$ as the condition input and generates the slot sequence $S_d$ , and representations, $H_{S,d}$ . Then for each $s$ in $S$ , the third CMRD generates the value sequence $D$0 based on the corresponding $D$1 . We update the belief state with the new $D$2 pairs and perform the procedure iteratively until a dialogue is completed. All the CMR decoders share all of their parameters. 
Since our model generates domains and slots instead of taking pre-defined slots as inputs, and the number of domains and slots generated each turn is only related to the complexity of the contents covered in a specific dialogue, the inference time complexity of COMER is $O(1)$ with respect to the number of pre-defined slots and values. Encoding Module Let $X$ represent a user utterance or system transcript consisting of a sequence of words $\lbrace w_1,\ldots ,w_T\rbrace $ . The encoder first passes the sequence $\lbrace \mathit {[CLS]},w_1,\ldots ,w_T,\mathit {[SEP]}\rbrace $ into a pre-trained BERT model and obtains its contextual embeddings $E_{X}$ . Specifically, we leverage the output of all layers of BERT and take the average to obtain the contextual embeddings. For each domain/slot that appears in the training set, if it has more than one word, such as `price range', `leave at', etc., we feed it into BERT and take the average of the word vectors to form the extra slot embedding $E_{s}$ . In this way, we map each domain/slot to a fixed embedding, which allows us to generate a domain/slot as a whole instead of a token at each time step of domain/slot sequence decoding. We also construct a static vocabulary embedding $E_{v}$ by feeding each token in the BERT vocabulary into BERT. The final static word embedding $E$ is the concatenation of $E_{v}$ and $E_{s}$ . After we obtain the contextual embeddings for the user utterance and system action, and the static embeddings for the previous belief state, we feed each of them into a Bidirectional LSTM BIBREF15 . $$\begin{aligned} \mathbf {h}_{a_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{a_t}}, \mathbf {h}_{a_{t-1}}) \\ \mathbf {h}_{u_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{u_t}}, \mathbf {h}_{u_{t-1}}) \\ \mathbf {h}_{b_t} & = \textrm {BiLSTM}(\mathbf {e}_{X_{b_t}}, \mathbf {h}_{b_{t-1}}) \\ \mathbf {h}_{a_0} & = \mathbf {h}_{u_0} = \mathbf {h}_{b_0} = c_{0}, \\ \end{aligned}$$ (Eq. 7) where $c_{0}$ is the zero-initialized hidden state for the BiLSTM. The hidden size of the BiLSTM is $d_m/2$ . We concatenate the forward and backward hidden representations of each token from the BiLSTM to obtain the token representation $\mathbf {h}_{k_t}\in R^{d_m}$ , $k\in \lbrace a,u,b\rbrace $ , at each time step $t$ . The hidden states of all time steps are concatenated to obtain the final representation $H_{k}\in R^{T \times d_m}$ , $k \in \lbrace a,u,b\rbrace $ . The parameters are shared between all of the BiLSTMs. Conditional Memory Relation Decoder Inspired by Residual Dense Networks BIBREF16 , End-to-End Memory Networks BIBREF17 and Relation Networks BIBREF18 , we propose the Conditional Memory Relation Decoder (CMRD). Given a token embedding, $\mathbf {e}_x$ , CMRD outputs the next token, $s$ , and the hidden representation, $h_s$ , with hierarchical memory access over the different encoded information sources, $H_B$ , $H_a$ , $H_u$ , and relational reasoning under a given condition $\mathbf {c}$ , $ \mathbf {s}, \mathbf {h}_s= \textrm {CMRD}(\mathbf {e}_x, \mathbf {c}, H_B, H_a, H_u). $ The final output matrices $S,H_s \in R^{l_s\times d_m}$ are concatenations of all generated $\mathbf {s}$ and $\mathbf {h}_s$ (respectively) along the sequence length dimension, where $d_m$ is the model size, and $l_s$ is the generated sequence length. The general structure of the CMR decoder is shown in Figure 4 .
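Before turning to the decoder details, here is a minimal sketch of the encoding module described above: a single BiLSTM with hidden size $d_m/2$, shared across the user, system and belief encoders, run over pre-computed (frozen) BERT embeddings. The module name, the random placeholder inputs and any dimension not stated in the paper are assumptions.

```python
# Minimal sketch of the shared BiLSTM encoder over frozen BERT embeddings.
# Sizes follow the paper (d_m = 512, embedding size 1024); names are illustrative.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, d_emb=1024, d_model=512, num_layers=2):
        super().__init__()
        # One BiLSTM shared by the user, system, and belief encoders.
        self.bilstm = nn.LSTM(d_emb, d_model // 2, num_layers=num_layers,
                              bidirectional=True, batch_first=True)

    def forward(self, emb):               # emb: (batch, T, d_emb) BERT embeddings
        H, _ = self.bilstm(emb)           # H:   (batch, T, d_model), fwd/bwd concatenated
        return H

encoder = SharedEncoder()
H_u = encoder(torch.randn(1, 12, 1024))   # user-utterance representation H_u
H_a = encoder(torch.randn(1, 9, 1024))    # system-action representation H_a
H_B = encoder(torch.randn(1, 15, 1024))   # previous-belief-state representation H_B
```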
Note that the CMR decoder can support additional memory sources by adding a residual connection and an attention block, but here we only show the structure with three sources: the belief state representation ( $H_B$ ), the system transcript representation ( $H_a$ ), and the user utterance representation ( $H_u$ ), corresponding to the dialogue state tracking scenario. Since we share the parameters between all of the decoders, CMRD is actually a 2-dimensional auto-regressive model with respect to both the condition generation and the sequence generation task. At each time step $t$ , the CMR decoder first embeds the token $x_t$ with a fixed token embedding $E\in R^{d_e\times d_v}$ , where $d_e$ is the embedding size and $d_v$ is the vocabulary size. The initial token $x_0$ is “[CLS]". The embedded vector $\textbf {e}_{x_t}$ is then encoded with an LSTM, which emits a hidden representation $\textbf {h}_0 \in R^{d_m}$ , $$\textbf {h}_0= \textrm {LSTM}(\textbf {e}_{x_t},\textbf {q}_{t-1}),$$ where $\textbf {q}_t$ is the hidden state of the LSTM. $\textbf {q}_0$ is initialized with an average of the hidden states of the belief encoder, the system encoder and the user encoder, which produce $H_B$ , $H_a$ and $H_u$ respectively. $\mathbf {h}_0$ is then summed (element-wise) with the condition representation $\mathbf {c}\in R^{d_m}$ to produce $\mathbf {h}_1$ , which is (1) fed into the attention module; (2) used for the residual connection; and (3) concatenated with the other $\mathbf {h}_i$ ( $i>1$ ) to produce the concatenated working memory, $\mathbf {r}_0$ , for relational reasoning, $$\begin{aligned} \mathbf {h}_1 & =\mathbf {h}_0+\mathbf {c},\\ \mathbf {h}_2 & =\mathbf {h}_1+\text{Attn}_{\text{belief}}(\mathbf {h}_1,H_B),\\ \mathbf {h}_3 & = \mathbf {h}_2+\text{Attn}_{\text{sys}}(\mathbf {h}_2,H_a),\\ \mathbf {h}_4 & = \mathbf {h}_3+\text{Attn}_{\text{usr}}(\mathbf {h}_3,H_u),\\ \mathbf {r}_0 & = \mathbf {h}_1\oplus \mathbf {h}_2\oplus \mathbf {h}_3\oplus \mathbf {h}_4 \in R^{4d_m}, \end{aligned}$$ where $\text{Attn}_k$ ( $k\in \lbrace \text{belief}, \text{sys},\text{usr}\rbrace $ ) are the attention modules applied respectively to $H_B$ , $H_a$ , $H_u$ , and $\oplus $ denotes the concatenation operator. The gradients are blocked for $ \mathbf {h}_1,\mathbf {h}_2,\mathbf {h}_3$ during the back-propagation stage, since we only need them to work as supplementary memories for the relational reasoning that follows. The attention module takes a vector, $\mathbf {h}\in R^{d_m}$ , and a matrix, $H\in R^{d_m\times l}$ , as input, where $l$ is the sequence length of the representation, and outputs $\mathbf {h}_a$ , a weighted sum of the column vectors in $H$ : $$\begin{aligned} \mathbf {a} & =W_1^T\mathbf {h}+\mathbf {b}_1& &\in R^{d_m},\\ \mathbf {c} &=\text{softmax}(H^T\mathbf {a})& &\in R^l,\\ \mathbf {h} &=H\mathbf {c}& &\in R^{d_m},\\ \mathbf {h}_a &=W_2^T\mathbf {h}+\mathbf {b}_2& &\in R^{d_m}, \end{aligned}$$ where the weights $W_1\in R^{d_m \times d_m}$ , $W_2\in R^{d_m \times d_m}$ and the biases $b_1\in R^{d_m}$ , $b_2\in R^{d_m}$ are the learnable parameters. The order of the attention modules, i.e., first attend to the system and the user and then the belief, is decided empirically. We can interpret this hierarchical structure as the internal order for memory processing: from daily life experience, people tend to attend to the most contemporary memories (system/user utterance) first and then attend to the older history (belief states). All of the parameters are shared between the attention modules.
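A compact sketch of one CMRD step, with a shared attention module, residual accumulation, and gradient blocking on the copies of $\mathbf{h}_1$–$\mathbf{h}_3$ that enter the working memory. The attention order follows the displayed equations; class and function names are illustrative assumptions, and the encoder matrices are stored row-per-position rather than column-per-position as in the equations.

```python
# Minimal sketch of one CMRD decoding step (hierarchical attention + working memory).
# Shapes follow the paper (d_m = 512); .detach() stands in for the gradient blocking.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Attn(nn.Module):
    def __init__(self, d_m=512):
        super().__init__()
        self.W1, self.W2 = nn.Linear(d_m, d_m), nn.Linear(d_m, d_m)

    def forward(self, h, H):                  # h: (d_m,), H: (L, d_m), rows = positions
        a = self.W1(h)                        # query projection a = W1^T h + b1
        w = F.softmax(H @ a, dim=0)           # attention weights over the L positions
        return self.W2(H.t() @ w)             # weighted sum, re-projected to d_m

def cmrd_step(h0, c, H_B, H_a, H_u, attn):
    h1 = h0 + c                               # add the condition vector
    h2 = h1 + attn(h1, H_B)                   # attend to the previous belief state
    h3 = h2 + attn(h2, H_a)                   # attend to the system transcript
    h4 = h3 + attn(h3, H_u)                   # attend to the user utterance
    # Copies of h1..h3 enter the working memory with blocked gradients.
    return torch.cat([h1.detach(), h2.detach(), h3.detach(), h4])   # r_0, fed to the MLP next
```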
The concatenated working memory, $\mathbf {r}_0$ , is then fed into a Multi-Layer Perceptron (MLP) with four layers, $$\begin{aligned} \mathbf {r}_1 & =\sigma (W_1^T\mathbf {r}_0+\mathbf {b}_1),\\ \mathbf {r}_2 & =\sigma (W_2^T\mathbf {r}_1+\mathbf {b}_2),\\ \mathbf {r}_3 & = \sigma (W_3^T\mathbf {r}_2+\mathbf {b}_3),\\ \mathbf {h}_s & = \sigma (W_4^T\mathbf {r}_3+\mathbf {b}_4), \end{aligned}$$ where $\sigma $ is a non-linear activation, and the weights $W_1 \in R^{4d_m \times d_m}$ , $W_i \in R^{d_m \times d_m}$ and the biases $b_1 \in R^{d_m}$ , $b_i \in R^{d_m}$ are learnable parameters, with $2\le i\le 4$ . The number of layers for the MLP is decided by grid search. The hidden representation of the next token, $\mathbf {h}_s$ , is then (1) emitted out of the decoder as a representation; and (2) fed into a dropout layer with drop rate $p$ and a linear layer to generate the next token, $$\begin{aligned} \mathbf {h}_k & =\text{dropout}(\mathbf {h}_s)& &\in R^{d_m},\\ \mathbf {h}_o & =W_k^T\mathbf {h}_k+\mathbf {b}_k& &\in R^{d_e},\\ \mathbf {p}_s & =\text{softmax}(E^T\mathbf {h}_o)& &\in R^{d_v},\\ s & =\text{argmax}(\mathbf {p}_s), \end{aligned}$$ where the weight $W_k\in R^{d_m \times d_e}$ and the bias $b_k\in R^{d_e}$ are learnable parameters. Since $d_e$ is the embedding size and the model parameters are independent of the vocabulary size, the CMR decoder can make predictions over a dynamic vocabulary and implicitly supports the generation of unseen words. When training the model, we minimize the cross-entropy loss between the output probabilities, $\mathbf {p}_s$ , and the given labels. Experimental Setting We first test our model on the single-domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pair representation of the belief states has to be applied for models with $O(n)$ ITC, the total number of combined slots is 35. The statistics of these two datasets are shown in Table 2 . Based on the statistics from these two datasets, we can calculate the theoretical Inference Time Multiplier (ITM), $K$ , as a metric of scalability. Given the inference time complexity, ITM measures how many times slower a model will be when transferred from the WoZ2.0 dataset, $d_1$ , to the MultiWoZ dataset, $d_2$ : $$K= h(t)h(s)h(n)h(m),$$ $$h(x)=\left\lbrace \begin{array}{lcl} 1 & &O(x)=O(1),\\ \frac{x_{d_2}}{x_{d_1}}& & \text{otherwise},\\ \end{array}\right.$$ where $O(x)$ denotes the Inference Time Complexity (ITC) of the variable $x$ . For a model having an ITC of $O(1)$ with respect to the number of slots $n$ and values $m$ , the ITM will be a multiplier of 2.15x, while for an ITC of $O(n)$ , it will be a multiplier of 25.1, and 1,143 for $O(mn)$ . As a convention, the metric of joint goal accuracy is used to compare our model to previous work. The joint goal accuracy only regards the model as making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided. This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system.
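As a worked example of the ITM formula above: the slot ratio 35/3 comes from the dataset statistics quoted in the text, while the combined turn/sequence-length factor $h(t)h(s)\approx 2.15$ is taken from the paper's $O(1)$ figure rather than recomputed from Table 2 (the value ratio needed for the $O(mn)$ case is likewise omitted here, so that case is not shown).

```python
# Worked example of the Inference Time Multiplier (ITM) when moving from
# WoZ2.0 (3 slots) to MultiWoZ (35 combined slots). The factor h(t)*h(s) = 2.15
# is taken from the paper's O(1) case; slot counts come from the text above.

def h(ratio, complexity_is_constant):
    return 1.0 if complexity_is_constant else ratio

def itm(slot_ratio, value_ratio, itc):          # itc in {"O(1)", "O(n)", "O(mn)"}
    turn_and_length_factor = 2.15               # h(t) * h(s), per the paper
    return (turn_and_length_factor
            * h(slot_ratio, itc == "O(1)")
            * h(value_ratio, itc in ("O(1)", "O(n)")))

slot_ratio = 35 / 3                             # n_{d2} / n_{d1}
print(itm(slot_ratio, 1.0, "O(1)"))             # 2.15  (COMER)
print(itm(slot_ratio, 1.0, "O(n)"))             # ~25.1 (slot-dependent models)
```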
In this work, the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement. Implementation Details We use the $\text{BERT}_\text{large}$ model for both contextual and static embedding generation. All LSTMs in the model are stacked with 2 layers, and only the output of the last layer is taken as a hidden representation. ReLU non-linearity is used for the activation function, $\sigma $ . The hyper-parameters of our model are identical for both the WoZ2.0 and the MultiWoZ datasets: dropout rate $p=0.5$ , model size $d_m=512$ , embedding size $d_e=1024$ . For training on WoZ2.0, the model is trained with a batch size of 32 and the ADAM optimizer BIBREF21 for 150 epochs, while for MultiWoZ, the AMSGrad optimizer BIBREF22 and a batch size of 16 are adopted for 15 epochs of training. For both optimizers, we use a learning rate of 0.0005 with a gradient clip of 2.0. We initialize all weights in our model with Kaiming initialization BIBREF23 and adopt zero initialization for the bias. All experiments are conducted on a single NVIDIA GTX 1080Ti GPU. Results To measure the actual inference time multiplier of our model, we evaluate the runtime of the best-performing models on the validation sets of both the WoZ2.0 and MultiWoZ datasets. During evaluation, we set the batch size to 1 to avoid the influence of data parallelism and sequence padding. On the validation set of WoZ2.0, we obtain a runtime of 65.6 seconds, while on MultiWoZ, the runtime is 835.2 seconds. Results are averaged across 5 runs. Considering that the validation set of MultiWoZ is 5 times larger than that of WoZ2.0, the actual inference time multiplier is 2.54 for our model. Since the actual inference time multiplier is roughly of the same magnitude as the theoretical value of 2.15, we can confirm empirically that we have $O(1)$ inference time complexity and thus obtain full scalability with respect to the number of slots and values pre-defined in an ontology. c compares our model with the previous state-of-the-art on both the WoZ2.0 test set and the MultiWoZ test set. For the WoZ2.0 dataset, we maintain performance at the level of the state-of-the-art, with a marginal drop of 0.3% compared with previous work. Considering the fact that WoZ2.0 is a relatively small dataset, this small difference does not represent a significant performance drop. On the multi-domain dataset, MultiWoZ, our model achieves a joint goal accuracy of 45.72%, which is significantly better than most of the previous models other than TRADE, which applies the copy mechanism and gains better generalization ability on named entity copying. Ablation Study To prove the effectiveness of the structure of our Conditional Memory Relation Decoder (CMRD), we conduct ablation experiments on the WoZ2.0 dataset. We observe an accuracy drop of 1.95% after removing residual connections and the hierarchical stack of our attention modules. This proves the effectiveness of our hierarchical attention design. After the MLP is replaced with a linear layer of hidden size 512 and the ReLU activation function, the accuracy further drops by 3.45%. This drop is partly due to the reduction of the number of model parameters, but it also proves that stacking more layers in an MLP can improve the relational reasoning performance given a concatenation of multiple representations from different sources. We also conduct the ablation study on the MultiWoZ dataset for a more precise analysis of the hierarchical generation process.
For joint domain accuracy, we calculate the probability that all domains generated in each turn are exactly matched with the labels provided. The joint domain-slot accuracy further calculates the probability that all domains and slots generated are correct, while the joint goal accuracy requires that all the domains, slots and values generated are exactly matched with the labels. From abm, we can further calculate that, given a correct slot prediction, COMER has an 83.52% chance of making the correct value prediction (consistent with the chain 95.53% $\times $ 57.30% $\times $ 83.52% $\approx $ 45.72%, the reported joint goal accuracy). While COMER has done a great job on domain prediction (95.53%) and value prediction (83.52%), the accuracy of the slot prediction given the correct domain is only 57.30%. We suspect that this is because we only use the previous belief state to represent the dialogue history, so the inter-turn reasoning ability needed for slot prediction suffers from the limited context, and the accuracy is further harmed by the multi-turn mapping problem BIBREF4 . We can also see that the JDS Acc. has an absolute boost of 5.48% when we switch from the combined slot representation to the nested tuple representation. This is because the subordinate relationship between the domains and the slots can be captured by the hierarchical sequence generation, while this relationship is missed when generating the domain and slot together via the combined slot representation. Qualitative Analysis f5 shows an example of the belief state prediction result in one turn of a dialogue on the MultiWoZ test set. The visualization includes the CMRD attention scores over the belief states, system transcript and user utterance during the decoding stage of the slot sequence. From the system attention (top right), since it is the first attention module and no previous context information is given, it can only find the information indicating the slot “departure” from the system utterance under the domain condition, and it correctly attends to the evidence “leaving” during the generation step of “departure”. From the user attention, we can see that it captures the most helpful keywords that are necessary for correct prediction, such as “after" for “day" and “leave at”, and “to" for “destination". Moreover, during the generation step of “departure”, the user attention successfully discerns that, based on the context, the word “leave” is not evidence that needs to be accumulated, and it chooses to attend to nothing in this step. For the belief attention, we can see that the belief attention module correctly attends to a previous slot for each generation step of a slot that is already present in the previous state. For the generation step of the new slot “destination", since the previous state does not have the “destination" slot, the belief attention module only attends to the `-' mark after the `train' domain to indicate that the generated word should belong to this domain. Related Work Semi-scalable Belief Tracker BIBREF1 proposed an approach that can generate fixed-length candidate sets for each of the slots from the dialogue history. Although they only need to perform inference for a fixed number of values, they still need to iterate over all slots defined in the ontology to make a prediction for a given dialogue turn. In addition, their method needs an external language understanding module to extract the exact entities from a dialogue to form candidates, which will not work if the label value is an abstraction and does not have an exact match with the words in the dialogue.
StateNet BIBREF3 achieves state-of-the-art performance with the property that its parameters are independent of the number of slot values in the candidate set, and it also supports online training or inference with dynamically changing slots and values. Given a slot that needs tracking, it only needs to perform inference once to make the prediction for a turn, but this also means that its inference time complexity is proportional to the number of slots. TRADE BIBREF4 achieves state-of-the-art performance on the MultiWoZ dataset by applying the copy mechanism for value sequence generation. Since TRADE takes $n$ combinations of the domains and slots as the input, the inference time complexity of TRADE is $O(n)$ . The performance improvement achieved by TRADE is mainly due to the fact that it incorporates the copy mechanism, which can boost the accuracy on the ‘name’ slot, a slot that mainly requires the ability to copy names from the dialogue history. However, TRADE does not report its performance on the WoZ2.0 dataset, which does not have the ‘name’ slot. DSTRead BIBREF6 formulates the dialogue state tracking task as a reading comprehension problem by asking slot-specific questions to the BERT model and finding the answer span in the dialogue history for each of the pre-defined combined slots. Thus its inference time complexity is still $O(n)$ . This method suffers from the fact that its generation vocabulary is limited to the words that occur in the dialogue history, and it has to apply a manual combination strategy with another joint state tracking model on the development set to achieve better performance. Contextualized Word Embedding (CWE) was first proposed by BIBREF25 . Based on the intuition that the meaning of a word is highly correlated with its context, CWE takes the complete context (sentences, passages, etc.) as the input, and outputs the corresponding word vectors that are unique under the given context. Recently, with the success of language models (e.g. BIBREF12 ) that are trained on large-scale data, contextualized word embeddings have been further improved and can achieve the same performance compared to (less flexible) finely-tuned pipelines. Sequence Generation Models. Recently, sequence generation models have been successfully applied in the realm of multi-label classification (MLC) BIBREF14 . Different from traditional binary relevance methods, they proposed a sequence generation model for MLC tasks which takes into consideration the correlations between labels. Specifically, the model follows the encoder-decoder structure with an attention mechanism BIBREF26 , where the decoder generates a sequence of labels. Similar to language modeling tasks, the decoder output at each time step will be conditioned on the previous predictions during generation. Therefore the correlation between generated labels is captured by the decoder. Conclusion In this work, we proposed the Conditional Memory Relation Network (COMER), the first dialogue state tracking model that has a constant inference time complexity with respect to the number of domains, slots and values pre-defined in an ontology. Besides its scalability, the joint goal accuracy of our model is also comparable with the state of the art on both the MultiWoZ dataset and the WoZ dataset.
Due to the flexibility of our hierarchical encoder-decoder framework and the CMR decoder, abundant future research directions remain, such as applying the transformer structure, incorporating an open vocabulary and a copy mechanism for the explicit generation of unseen words, and inventing better dialogue history access mechanisms to support efficient inter-turn reasoning. Acknowledgements. This work is partly supported by NSF #1750063. We thank all the reviewers for their constructive suggestions. We also want to thank Zhuowen Tu and Shengnan Zhang for the early discussions of the project.
the single-domain dataset WoZ2.0 and the multi-domain dataset MultiWoZ
42812113ec720b560eb9463ff5e74df8764d1bff
42812113ec720b560eb9463ff5e74df8764d1bff_0
Q: How does the automatic theorem prover infer the relation? Text: Introduction & related work State-of-the-art models for almost all popular natural language processing tasks are based on deep neural networks, trained on massive amounts of data. A key question that has been raised in many different forms is to what extent these models have learned the compositional generalizations that characterize language, and to what extent they rely on storing massive amounts of exemplars and only make `local' generalizations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . This question has led to (sometimes heated) debates between deep learning enthusiasts that are convinced neural networks can do almost anything, and skeptics that are convinced some types of generalization are fundamentally beyond reach for deep learning systems, pointing out that crucial tests distinguishing between generalization and memorization have not been applied. In this paper, we take a pragmatic perspective on these issues. As the target for learning we use entailment relations in an artificial language, defined using first order logic (FOL), that is unambiguously compositional. We ask whether popular deep learning methods are capable in principle of acquiring the compositional rules that characterize it, and focus in particular on recurrent neural networks that are unambiguously `connectionist': trained recurrent nets do not rely on symbolic data and control structures such as trees and global variable binding, and can straightforwardly be implemented in biological networks BIBREF8 or neuromorphic hardware BIBREF9 . We report positive results on this challenge, and in the process develop a series of tests for compositional generalization that address the concerns of deep learning skeptics. The paper makes three main contributions. First, we develop a protocol for automatically generating data that can be used in entailment recognition tasks. Second, we demonstrate that several deep learning architectures succeed at one such task. Third, we present and apply a number of experiments to test whether models are capable of compositional generalization. Task definition & data generation The data generation process is inspired by BIBREF13 : an artificial language is defined, sentences are generated according to its grammar and the entailment relation between pairs of such sentences is established according to a fixed background logic. However, our language is significantly more complex, and instead of natural logic we use FOL. Learning models Our main model is a recurrent network, sketched in Figure 4 . It is a so-called `Siamese' network because it uses the same parameters to process the left and the right sentence. The upper part of the model is identical to BIBREF13 's recursive networks. It consists of a comparison layer and a classification layer, after which a softmax function is applied to determine the most probable target class. The comparison layer takes the concatenation of two sentence vectors as input. The number of cells equals the number of words, so it differs per sentence. Our set-up resembles the Siamese architecture for learning sentence similarity of BIBREF25 and the LSTM classifier described in BIBREF18 . In the diagram, the dashed box indicates the location of an arbitrary recurrent unit. We consider SRN BIBREF26 , GRU BIBREF27 and LSTM BIBREF28 . 
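A minimal sketch of such a Siamese recurrent classifier (GRU variant): both sentences share one embedding layer and one GRU, and the concatenated final states pass through a comparison and a classification layer. The 50-dimensional embeddings and the seven output classes follow details mentioned later in the text; layer sizes and names are otherwise assumptions.

```python
# Minimal sketch of the Siamese recurrent entailment classifier described above.
# Softmax is left to the loss function; sizes are illustrative.

import torch
import torch.nn as nn

class SiameseGRU(nn.Module):
    def __init__(self, vocab_size, d_emb=50, d_hid=128, n_classes=7):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)
        self.gru = nn.GRU(d_emb, d_hid, batch_first=True)   # shared between both sentences
        self.compare = nn.Sequential(nn.Linear(2 * d_hid, d_hid), nn.Tanh())
        self.classify = nn.Linear(d_hid, n_classes)

    def encode(self, tokens):                 # tokens: (batch, T) word indices
        _, h = self.gru(self.emb(tokens))     # h: (1, batch, d_hid) final hidden state
        return h.squeeze(0)

    def forward(self, left, right):
        pair = torch.cat([self.encode(left), self.encode(right)], dim=-1)
        return self.classify(self.compare(pair))   # logits over entailment relations

model = SiameseGRU(vocab_size=60)
logits = model(torch.randint(0, 60, (4, 9)), torch.randint(0, 60, (4, 9)))
```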
Results Training and testing accuracies after 50 training epochs, averaged over five different model runs, are shown in Table UID18 . All recurrent models outperform the summing baseline. Even the simplest recurrent network, the SRN, achieves higher training and testing accuracy scores than the tree-shaped matrix model. The GRU and LSTM even beat the tensor model. The LSTM obtains slightly lower scores than the GRU, which is unexpected given its more complex design, but perhaps the current challenge does not require separate forget and input gates. For more insight into the types of errors made by the best-performing (GRU-based) model, we refer to the confusion matrices in Appendix "Error statistics" . The consistently higher testing accuracy provides evidence that the recurrent networks are not only capable of recognizing FOL entailment relations between unseen sentences. They can also outperform the tree-shaped models on this task, although they do not use any of the symbolic structure that seemed to explain the success of their recursive predecessors. The recurrent classifiers have learned to apply their own strategies, which we will investigate in the remainder of this paper. Zero-shot, compositional generalization Compositionality is the ability to interpret and generate a possibly infinite number of constructions from known constituents, and is commonly understood as one of the fundamental aspects of human learning and reasoning ( BIBREF30 , BIBREF31 ). It has often been claimed that neural networks operate on a merely associative basis, lacking the compositional capacities to develop systematicity without an abundance of training data. See e.g. BIBREF1 , BIBREF2 , BIBREF32 . Especially recurrent models have recently been regarded quite sceptically in this respect, following the negative results established by BIBREF3 and BIBREF4 . Their research suggests that recurrent networks only perform well provided that there are no systematic discrepancies between train and test data, whereas human learning is robust with respect to such differences thanks to compositionality. In this section, we report more positive results on compositional reasoning of our Siamese networks. We focus on zero-shot generalization: correct classification of examples of a type that has not been observed before. Provided that atomic constituents and production rules are understood, compositionality does not require that abundantly many instances embodying a semantic category are observed. We will consider in turn what set-up is required to demonstrate zero-shot generalization to unseen lengths, and to generalization to sentences composed of novel words. Unseen lengths We test if our recurrent models are capable of generalization to unseen lengths. Neural models are often considered incapable of such generalization, allegedly because they are limited to the training space BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . We want to test if this is the case for the recurrent models studied in this paper. The language $\mathcal {L}$ licenses a heavily constrained set of grammatical configurations, but it does allow the sentence length to vary according to the number of included negations. A perfectly compositional model should be able to interpret statements containing any number of negations, on condition that it has seen an instantiation at least once at each position where this is allowed. 
In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table UID19 . All recurrent models obtain (near-)perfect training accuracy scores. What happens on the test set is interesting. It turns out that the GRU and LSTM can generalize from lengths 5, 7 and 8 to 6 and 9 very well, while the SRN faces serious difficulties. It seems that training on lengths 5, 7 and 8, and thereby skipping length 6, enables the GRU and LSTM to generalize to unseen sentence lengths 6 and 9. Training on lengths 5-7 and testing on lengths 8-9 yields low test scores for all models. The GRU and LSTM gates appear to play a crucial role, because the results show that the SRN does not have this capacity at all. Unseen words In the next experiment, we assess whether our GRU-based model, which performed best in the preceding experiments, is capable of zero-shot generalization to sentences with novel words. The current set-up cannot deal with unknown words, so instead of randomly initializing an embedding matrix that is updated during training, we use pretrained, 50-dimensional GloVe embeddings BIBREF37 that are kept constant. Using GloVe embeddings, the GRU model obtains a mean training accuracy of 100.0% and a testing accuracy of 95.9% (averaged over five runs). The best-performing model (with 100.0% training and 97.1% testing accuracy) is used in the following zero-shot experiments. One of the most basic relations on the level of lexical semantics is synonymy, which holds between words with equivalent meanings. In the language $\mathcal {L}$ , a word can be substituted with one of its synonyms without altering the entailment relation assigned to the sentence pairs that contain it. If the GRU manages to perform well on such a modified data set after receiving the pretrained GloVe embedding of the unseen word, this is a first piece of evidence for its zero-shot generalization skills. We test this for several pairs of synonymous words. The best-performing GRU is first evaluated with respect to the fragment of the test data containing the original word $w$ , and consequently with respect to that same fragment after replacing the original word with its synonym $s(w)$ . The pairs of words, the cosine distance $cos\_dist(w,s(w))$ between their GloVe embeddings and the obtained results are listed in Table 6 . For the first three examples in Table 6 , substitution only decreases testing accuracy by a few percentage points. Apparently, the word embeddings of the synonyms encode the lexical properties that the GRU needs to recognize that the same entailment relations apply to the sentence pairs. This does not prove that the model has distilled essential information about hyponymy from the GloVe embeddings. It could also be that the word embeddings of the replacement words are geometrically very similar to the originals, so that it is an algebraic necessity that the same results arise. However, this suspicion is inconsistent with the result of changing `hate' into `detest'. The cosine distance between these words is 0.56, so according to this measure their vectors are more similar than those representing `love' and `adore' (which have a cosine distance of 0.57). Nonetheless, replacing `hate' with `detest' confuses the model, whereas substitution of `love' into `adore' only decreases testing accuracy by 4.5 percentage points. 
This illustrates that robustness of the GRU in this respect is not a matter of simple vector similarity. In those cases where substitution into synonyms does not confuse the model it must have recognized a non-trivial property of the new word embedding that licenses particular inferences. In our next experiment, we replace a word not by its synonym, but by a word that has the same semantics in the context of artificial language $\mathcal {L}$ . We thus consider pairs of words that can be substituted with each other without affecting the entailment relation between any pair of sentences in which they feature. We call such terms `ontological twins'. Technically, if $\odot $ is an arbitrary lexical entailment relation and $\mathcal {O}$ is an ontology, then $w$ and $v$ are ontological twins if and only if $w, v \in \mathcal {O}$ and for all $u \in \mathcal {O}$ , if $u \notin \lbrace w,v \rbrace $ then $w \odot u \Leftrightarrow v \odot u$ . This trivially applies to self-identical terms or synonyms, but in the strictly defined hierarchy of $\mathcal {L}$ it is also the case for pairs of terms $\odot $0 that maintain the same lexical entailment relations to all other terms in the taxonomy. Examples of ontological twins in the taxonomy of nouns $\mathcal {N}^{\mathcal {L}}$ are `Romans' and `Venetians' . This can easily be verified in the Venn diagram of Figure 1 by replacing `Romans' with `Venetians' and observing that the same hierarchy applies. The same holds for e.g. `Germans' and `Polish' or for `children' and `students'. For several such word-twin pairs the GRU is evaluated with respect to the fragment of the test data containing the original word $w$ , and with respect to that same fragment after replacing the original word with ontological twin $t(w)$ . Results are shown in Table 7 . The examples in Table 7 suggest that the best-performing GRU is largely robust with respect to substitution into ontological twins. Replacing `Romans' with other urban Italian demonyms hardly affects model accuracy on the modified fragment of the test data. As before, there appears to be no correlation with vector similarity because the cosine distance between the different twin pairs has a much higher variation than the corresponding accuracy scores. `Germans' can be changed into `Polish' without significant deterioration, but substitution with `Dutch' greatly decreases testing accuracy. The situation is even worse for `Spanish'. Again, cosine similarity provides no explanation - `Spanish' is still closer to `Germans' than `Neapolitans' to `Romans'. Rather, the accuracy appears to be negatively correlated with the geographical distance between the national demonyms. After replacing `children' with `students', `women' or `linguists', testing scores are still decent. So far, we replaced individual words in order to assess whether the GRU can generalize from the vocabulary to new notions that have comparable semantics in the context of this entailment recognition task. The examples have illustrated that the model tends to do this quite well. In the last zero-shot learning experiment, we replace sets of nouns instead of single words, in order to assess the flexibility of the relational semantics that our networks have learned. Formally, the replacement can be regarded as a function $r$ , mapping words $w$ to substitutes $r(w)$ . Not all items have to be replaced. 
For an ontology $\mathcal {O}$ , the function $r$ must be such that for any $w, v \in \mathcal {O}$ and lexical entailment relation $\odot $ , $w \odot v \Leftrightarrow r(w) \odot r(v)$ . The result of applying $r$ can be called an `alternative hierarchy'. An example of an alternative hierarchy is the result of the replacement function $r_1$ that maps `Romans' to `Parisians' and `Italians' to `French'. Performing this substitution in the Venn diagram of Figure 1 shows that the taxonomy remains structurally intact. The best-performing GRU is evaluated on the fragment of the test data containing `Romans' or `Italians', and consequently on the same fragment after implementing replacement $r_1$ and providing the model with the GloVe embeddings of the unseen words. Replacement $r_1$ is incrementally modified up until replacement $r_4$ , which substitutes all nouns in $\mathcal {N}^{\mathcal {L}}$ . The results of applying $r_1$ to $r_4$ are shown in Table 8 . The results are positive: the GRU obtains 86.7% accuracy even after applying $r_4$ , which substitutes the entire ontology $\mathcal {N}^{\mathcal {L}}$ so that no previously encountered nouns are present in the test set anymore, although the sentences remain thematically somewhat similar to the original sentences. Testing scores are above 87% for the intermediate substitutions $r_1$ to $r_3$ . This outcome clearly shows that the classifier does not depend on a strongly customized word vector distribution in order to recognize higher-level entailment relations. Even if all nouns are replaced by alternatives with embeddings that have not been witnessed or optimized beforehand, the model obtains a high testing accuracy. This establishes obvious compositional capacities, because familiarity with structure and information about lexical semantics in the form of word embeddings are enough for the model to accommodate configurations of unseen words. What happens when we consider ontologies that have the same structure, but are thematically very different from the original ontology? Three such alternative hierarchies are considered: $r_{animals}$ , $r_{religion}$ and $r_{America}$ . Each of these functions relocalizes the noun ontology in a totally different domain of discourse, as indicated by their names. Table 9 specifies the functions and their effect. Testing accuracy decreases drastically, which indicates that the model is sensitive to the changing topic. Variation between the scores obtained after the three transformations is limited. Although they are much lower than before, they are still far above chance level for a seven-class problem. This suggests that the model is not at a complete loss as to the alternative noun hierarchies. Possibly, including a few relevant instances during training could already improve the results. Discussion & Conclusions We established that our Siamese recurrent networks (with SRN, GRU or LSTM cells) are able to recognize logical entailment relations without any a priori cues about syntax or semantics of the input expressions. Indeed, some of the recurrent set-ups even outperform tree-shaped networks, whose topology is specifically designed to deal with such tasks. This indicates that recurrent networks can develop representations that can adequately process a formal language with a nontrivial hierarchical structure. 
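A small sketch of the check implied by the definition of an alternative hierarchy: every lexical entailment relation between ontology members must be preserved under the replacement function $r$. The toy relation table and symbols below are illustrative, not the paper's actual taxonomy.

```python
# Illustrative check that a replacement function r yields an `alternative hierarchy':
# every stored lexical entailment relation must be preserved after substitution.

def preserves_relations(relations, r):
    """relations: dict mapping ordered (w, v) pairs to a lexical entailment relation."""
    return all(relations.get((r.get(w, w), r.get(v, v))) == rel
               for (w, v), rel in relations.items())

# Toy fragment mirroring Figure 1: Romans are contained in Italians ('<'),
# Parisians in French, so mapping Romans->Parisians, Italians->French is valid.
relations = {("Romans", "Italians"): "<", ("Parisians", "French"): "<"}
r1 = {"Romans": "Parisians", "Italians": "French"}
print(preserves_relations(relations, r1))   # True
```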
The formal language we defined did not exploit the full expressive power of first-order predicate logic; nevertheless by using standard first-order predicate logic, a standard theorem prover, and a set-up where the training set only covers a tiny fraction of the space of possible logical expressions, our experiments avoid the problems observed in earlier attempts to demonstrate logical reasoning in recurrent networks. The experiments performed in the last few sections moreover show that the GRU and LSTM architectures exhibit at least basic forms of compositional generalization. In particular, the results of the zero-shot generalization experiments with novel lengths and novel words cannot be explained with a `memorize-and-interpolate' account, i.e. an account of the working of deep neural networks that assumes all they do is store enormous training sets and generalize only locally. These results are relevant pieces of evidence in the decades-long debate on whether or not connectionist networks are fundamentally able to learn compositional solutions. Although we do not have the illusion that our work will put this debate to an end, we hope that it will help bring deep learning enthusiasts and skeptics a small step closer.
Unanswerable
4f4892f753b1d9c5e5e74c7c94d8c9b6ef523e7b
4f4892f753b1d9c5e5e74c7c94d8c9b6ef523e7b_0
Q: If these model can learn the first-order logic on artificial language, why can't it lear for natural language? Text: Introduction & related work State-of-the-art models for almost all popular natural language processing tasks are based on deep neural networks, trained on massive amounts of data. A key question that has been raised in many different forms is to what extent these models have learned the compositional generalizations that characterize language, and to what extent they rely on storing massive amounts of exemplars and only make `local' generalizations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . This question has led to (sometimes heated) debates between deep learning enthusiasts that are convinced neural networks can do almost anything, and skeptics that are convinced some types of generalization are fundamentally beyond reach for deep learning systems, pointing out that crucial tests distinguishing between generalization and memorization have not been applied. In this paper, we take a pragmatic perspective on these issues. As the target for learning we use entailment relations in an artificial language, defined using first order logic (FOL), that is unambiguously compositional. We ask whether popular deep learning methods are capable in principle of acquiring the compositional rules that characterize it, and focus in particular on recurrent neural networks that are unambiguously `connectionist': trained recurrent nets do not rely on symbolic data and control structures such as trees and global variable binding, and can straightforwardly be implemented in biological networks BIBREF8 or neuromorphic hardware BIBREF9 . We report positive results on this challenge, and in the process develop a series of tests for compositional generalization that address the concerns of deep learning skeptics. The paper makes three main contributions. First, we develop a protocol for automatically generating data that can be used in entailment recognition tasks. Second, we demonstrate that several deep learning architectures succeed at one such task. Third, we present and apply a number of experiments to test whether models are capable of compositional generalization. Task definition & data generation The data generation process is inspired by BIBREF13 : an artificial language is defined, sentences are generated according to its grammar and the entailment relation between pairs of such sentences is established according to a fixed background logic. However, our language is significantly more complex, and instead of natural logic we use FOL. Learning models Our main model is a recurrent network, sketched in Figure 4 . It is a so-called `Siamese' network because it uses the same parameters to process the left and the right sentence. The upper part of the model is identical to BIBREF13 's recursive networks. It consists of a comparison layer and a classification layer, after which a softmax function is applied to determine the most probable target class. The comparison layer takes the concatenation of two sentence vectors as input. The number of cells equals the number of words, so it differs per sentence. Our set-up resembles the Siamese architecture for learning sentence similarity of BIBREF25 and the LSTM classifier described in BIBREF18 . In the diagram, the dashed box indicates the location of an arbitrary recurrent unit. We consider SRN BIBREF26 , GRU BIBREF27 and LSTM BIBREF28 . 
Results Training and testing accuracies after 50 training epochs, averaged over five different model runs, are shown in Table UID18 . All recurrent models outperform the summing baseline. Even the simplest recurrent network, the SRN, achieves higher training and testing accuracy scores than the tree-shaped matrix model. The GRU and LSTM even beat the tensor model. The LSTM obtains slightly lower scores than the GRU, which is unexpected given its more complex design, but perhaps the current challenge does not require separate forget and input gates. For more insight into the types of errors made by the best-performing (GRU-based) model, we refer to the confusion matrices in Appendix "Error statistics" . The consistently higher testing accuracy provides evidence that the recurrent networks are not only capable of recognizing FOL entailment relations between unseen sentences. They can also outperform the tree-shaped models on this task, although they do not use any of the symbolic structure that seemed to explain the success of their recursive predecessors. The recurrent classifiers have learned to apply their own strategies, which we will investigate in the remainder of this paper. Zero-shot, compositional generalization Compositionality is the ability to interpret and generate a possibly infinite number of constructions from known constituents, and is commonly understood as one of the fundamental aspects of human learning and reasoning ( BIBREF30 , BIBREF31 ). It has often been claimed that neural networks operate on a merely associative basis, lacking the compositional capacities to develop systematicity without an abundance of training data. See e.g. BIBREF1 , BIBREF2 , BIBREF32 . Especially recurrent models have recently been regarded quite sceptically in this respect, following the negative results established by BIBREF3 and BIBREF4 . Their research suggests that recurrent networks only perform well provided that there are no systematic discrepancies between train and test data, whereas human learning is robust with respect to such differences thanks to compositionality. In this section, we report more positive results on compositional reasoning of our Siamese networks. We focus on zero-shot generalization: correct classification of examples of a type that has not been observed before. Provided that atomic constituents and production rules are understood, compositionality does not require that abundantly many instances embodying a semantic category are observed. We will consider in turn what set-up is required to demonstrate zero-shot generalization to unseen lengths, and to generalization to sentences composed of novel words. Unseen lengths We test if our recurrent models are capable of generalization to unseen lengths. Neural models are often considered incapable of such generalization, allegedly because they are limited to the training space BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . We want to test if this is the case for the recurrent models studied in this paper. The language $\mathcal {L}$ licenses a heavily constrained set of grammatical configurations, but it does allow the sentence length to vary according to the number of included negations. A perfectly compositional model should be able to interpret statements containing any number of negations, on condition that it has seen an instantiation at least once at each position where this is allowed. 
In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table UID19 . All recurrent models obtain (near-)perfect training accuracy scores. What happens on the test set is interesting. It turns out that the GRU and LSTM can generalize from lengths 5, 7 and 8 to 6 and 9 very well, while the SRN faces serious difficulties. It seems that training on lengths 5, 7 and 8, and thereby skipping length 6, enables the GRU and LSTM to generalize to unseen sentence lengths 6 and 9. Training on lengths 5-7 and testing on lengths 8-9 yields low test scores for all models. The GRU and LSTM gates appear to play a crucial role, because the results show that the SRN does not have this capacity at all. Unseen words In the next experiment, we assess whether our GRU-based model, which performed best in the preceding experiments, is capable of zero-shot generalization to sentences with novel words. The current set-up cannot deal with unknown words, so instead of randomly initializing an embedding matrix that is updated during training, we use pretrained, 50-dimensional GloVe embeddings BIBREF37 that are kept constant. Using GloVe embeddings, the GRU model obtains a mean training accuracy of 100.0% and a testing accuracy of 95.9% (averaged over five runs). The best-performing model (with 100.0% training and 97.1% testing accuracy) is used in the following zero-shot experiments. One of the most basic relations on the level of lexical semantics is synonymy, which holds between words with equivalent meanings. In the language $\mathcal {L}$ , a word can be substituted with one of its synonyms without altering the entailment relation assigned to the sentence pairs that contain it. If the GRU manages to perform well on such a modified data set after receiving the pretrained GloVe embedding of the unseen word, this is a first piece of evidence for its zero-shot generalization skills. We test this for several pairs of synonymous words. The best-performing GRU is first evaluated with respect to the fragment of the test data containing the original word $w$ , and consequently with respect to that same fragment after replacing the original word with its synonym $s(w)$ . The pairs of words, the cosine distance $cos\_dist(w,s(w))$ between their GloVe embeddings and the obtained results are listed in Table 6 . For the first three examples in Table 6 , substitution only decreases testing accuracy by a few percentage points. Apparently, the word embeddings of the synonyms encode the lexical properties that the GRU needs to recognize that the same entailment relations apply to the sentence pairs. This does not prove that the model has distilled essential information about hyponymy from the GloVe embeddings. It could also be that the word embeddings of the replacement words are geometrically very similar to the originals, so that it is an algebraic necessity that the same results arise. However, this suspicion is inconsistent with the result of changing `hate' into `detest'. The cosine distance between these words is 0.56, so according to this measure their vectors are more similar than those representing `love' and `adore' (which have a cosine distance of 0.57). Nonetheless, replacing `hate' with `detest' confuses the model, whereas substitution of `love' into `adore' only decreases testing accuracy by 4.5 percentage points. 
This illustrates that robustness of the GRU in this respect is not a matter of simple vector similarity. In those cases where substitution into synonyms does not confuse the model it must have recognized a non-trivial property of the new word embedding that licenses particular inferences. In our next experiment, we replace a word not by its synonym, but by a word that has the same semantics in the context of artificial language $\mathcal {L}$ . We thus consider pairs of words that can be substituted with each other without affecting the entailment relation between any pair of sentences in which they feature. We call such terms `ontological twins'. Technically, if $\odot $ is an arbitrary lexical entailment relation and $\mathcal {O}$ is an ontology, then $w$ and $v$ are ontological twins if and only if $w, v \in \mathcal {O}$ and for all $u \in \mathcal {O}$ , if $u \notin \lbrace w,v \rbrace $ then $w \odot u \Leftrightarrow v \odot u$ . This trivially applies to self-identical terms or synonyms, but in the strictly defined hierarchy of $\mathcal {L}$ it is also the case for pairs of terms $\odot $0 that maintain the same lexical entailment relations to all other terms in the taxonomy. Examples of ontological twins in the taxonomy of nouns $\mathcal {N}^{\mathcal {L}}$ are `Romans' and `Venetians' . This can easily be verified in the Venn diagram of Figure 1 by replacing `Romans' with `Venetians' and observing that the same hierarchy applies. The same holds for e.g. `Germans' and `Polish' or for `children' and `students'. For several such word-twin pairs the GRU is evaluated with respect to the fragment of the test data containing the original word $w$ , and with respect to that same fragment after replacing the original word with ontological twin $t(w)$ . Results are shown in Table 7 . The examples in Table 7 suggest that the best-performing GRU is largely robust with respect to substitution into ontological twins. Replacing `Romans' with other urban Italian demonyms hardly affects model accuracy on the modified fragment of the test data. As before, there appears to be no correlation with vector similarity because the cosine distance between the different twin pairs has a much higher variation than the corresponding accuracy scores. `Germans' can be changed into `Polish' without significant deterioration, but substitution with `Dutch' greatly decreases testing accuracy. The situation is even worse for `Spanish'. Again, cosine similarity provides no explanation - `Spanish' is still closer to `Germans' than `Neapolitans' to `Romans'. Rather, the accuracy appears to be negatively correlated with the geographical distance between the national demonyms. After replacing `children' with `students', `women' or `linguists', testing scores are still decent. So far, we replaced individual words in order to assess whether the GRU can generalize from the vocabulary to new notions that have comparable semantics in the context of this entailment recognition task. The examples have illustrated that the model tends to do this quite well. In the last zero-shot learning experiment, we replace sets of nouns instead of single words, in order to assess the flexibility of the relational semantics that our networks have learned. Formally, the replacement can be regarded as a function $r$ , mapping words $w$ to substitutes $r(w)$ . Not all items have to be replaced. 
For an ontology $\mathcal {O}$ , the function $r$ must be such that for any $w, v \in \mathcal {O}$ and lexical entailment relation $\odot $ , $w \odot v \Leftrightarrow r(w) \odot r(v)$ . The result of applying $r$ can be called an `alternative hierarchy'. An example of an alternative hierarchy is the result of the replacement function $r_1$ that maps `Romans' to `Parisians' and `Italians' to `French'. Performing this substitution in the Venn diagram of Figure 1 shows that the taxonomy remains structurally intact. The best-performing GRU is evaluated on the fragment of the test data containing `Romans' or `Italians', and consequently on the same fragment after implementing replacement $r_1$ and providing the model with the GloVe embeddings of the unseen words. Replacement $r_1$ is incrementally modified up until replacement $r_4$ , which substitutes all nouns in $\mathcal {N}^{\mathcal {L}}$ . The results of applying $r_1$ to $r_4$ are shown in Table 8 . The results are positive: the GRU obtains 86.7% accuracy even after applying $r_4$ , which substitutes the entire ontology $\mathcal {N}^{\mathcal {L}}$ so that no previously encountered nouns are present in the test set anymore, although the sentences remain thematically somewhat similar to the original sentences. Testing scores are above 87% for the intermediate substitutions $r_1$ to $r_3$ . This outcome clearly shows that the classifier does not depend on a strongly customized word vector distribution in order to recognize higher-level entailment relations. Even if all nouns are replaced by alternatives with embeddings that have not been witnessed or optimized beforehand, the model obtains a high testing accuracy. This establishes obvious compositional capacities, because familiarity with structure and information about lexical semantics in the form of word embeddings are enough for the model to accommodate configurations of unseen words. What happens when we consider ontologies that have the same structure, but are thematically very different from the original ontology? Three such alternative hierarchies are considered: $r_{animals}$ , $r_{religion}$ and $r_{America}$ . Each of these functions relocalizes the noun ontology in a totally different domain of discourse, as indicated by their names. Table 9 specifies the functions and their effect. Testing accuracy decreases drastically, which indicates that the model is sensitive to the changing topic. Variation between the scores obtained after the three transformations is limited. Although they are much lower than before, they are still far above chance level for a seven-class problem. This suggests that the model is not at a complete loss as to the alternative noun hierarchies. Possibly, including a few relevant instances during training could already improve the results. Discussion & Conclusions We established that our Siamese recurrent networks (with SRN, GRU or LSTM cells) are able to recognize logical entailment relations without any a priori cues about syntax or semantics of the input expressions. Indeed, some of the recurrent set-ups even outperform tree-shaped networks, whose topology is specifically designed to deal with such tasks. This indicates that recurrent networks can develop representations that can adequately process a formal language with a nontrivial hierarchical structure. 
The formal language we defined did not exploit the full expressive power of first-order predicate logic; nevertheless by using standard first-order predicate logic, a standard theorem prover, and a set-up where the training set only covers a tiny fraction of the space of possible logical expressions, our experiments avoid the problems observed in earlier attempts to demonstrate logical reasoning in recurrent networks. The experiments performed in the last few sections moreover show that the GRU and LSTM architectures exhibit at least basic forms of compositional generalization. In particular, the results of the zero-shot generalization experiments with novel lengths and novel words cannot be explained with a `memorize-and-interpolate' account, i.e. an account of the working of deep neural networks that assumes all they do is store enormous training sets and generalize only locally. These results are relevant pieces of evidence in the decades-long debate on whether or not connectionist networks are fundamentally able to learn compositional solutions. Although we do not have the illusion that our work will put this debate to an end, we hope that it will help bring deep learning enthusiasts and skeptics a small step closer.
Unanswerable
f258ada8577bb71873581820a94695f4a2c223b3
f258ada8577bb71873581820a94695f4a2c223b3_0
Q: How many samples did they generate for the artificial language? Text: Introduction & related work State-of-the-art models for almost all popular natural language processing tasks are based on deep neural networks, trained on massive amounts of data. A key question that has been raised in many different forms is to what extent these models have learned the compositional generalizations that characterize language, and to what extent they rely on storing massive amounts of exemplars and only make `local' generalizations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . This question has led to (sometimes heated) debates between deep learning enthusiasts that are convinced neural networks can do almost anything, and skeptics that are convinced some types of generalization are fundamentally beyond reach for deep learning systems, pointing out that crucial tests distinguishing between generalization and memorization have not been applied. In this paper, we take a pragmatic perspective on these issues. As the target for learning we use entailment relations in an artificial language, defined using first order logic (FOL), that is unambiguously compositional. We ask whether popular deep learning methods are capable in principle of acquiring the compositional rules that characterize it, and focus in particular on recurrent neural networks that are unambiguously `connectionist': trained recurrent nets do not rely on symbolic data and control structures such as trees and global variable binding, and can straightforwardly be implemented in biological networks BIBREF8 or neuromorphic hardware BIBREF9 . We report positive results on this challenge, and in the process develop a series of tests for compositional generalization that address the concerns of deep learning skeptics. The paper makes three main contributions. First, we develop a protocol for automatically generating data that can be used in entailment recognition tasks. Second, we demonstrate that several deep learning architectures succeed at one such task. Third, we present and apply a number of experiments to test whether models are capable of compositional generalization. Task definition & data generation The data generation process is inspired by BIBREF13 : an artificial language is defined, sentences are generated according to its grammar and the entailment relation between pairs of such sentences is established according to a fixed background logic. However, our language is significantly more complex, and instead of natural logic we use FOL. Learning models Our main model is a recurrent network, sketched in Figure 4 . It is a so-called `Siamese' network because it uses the same parameters to process the left and the right sentence. The upper part of the model is identical to BIBREF13 's recursive networks. It consists of a comparison layer and a classification layer, after which a softmax function is applied to determine the most probable target class. The comparison layer takes the concatenation of two sentence vectors as input. The number of cells equals the number of words, so it differs per sentence. Our set-up resembles the Siamese architecture for learning sentence similarity of BIBREF25 and the LSTM classifier described in BIBREF18 . In the diagram, the dashed box indicates the location of an arbitrary recurrent unit. We consider SRN BIBREF26 , GRU BIBREF27 and LSTM BIBREF28 . 
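As a rough approximation of the set-up just described, a minimal PyTorch sketch of the Siamese recurrent classifier with a GRU cell is given below; the hidden sizes, layer names and the use of the final hidden state as the sentence vector are assumptions for illustration, not the authors' exact configuration:

import torch
import torch.nn as nn

class SiameseGRUEntailment(nn.Module):
    """One shared GRU encodes both sentences; a comparison layer acts on the
    concatenated sentence vectors and a classification layer (softmax over the
    entailment relations) produces the prediction."""
    def __init__(self, vocab_size, emb_dim=25, hid_dim=128, cmp_dim=75, n_relations=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)  # shared ('Siamese') weights
        self.compare = nn.Sequential(nn.Linear(2 * hid_dim, cmp_dim), nn.Tanh())
        self.classify = nn.Linear(cmp_dim, n_relations)

    def encode(self, tokens):                    # tokens: (batch, seq_len) of word ids
        _, h = self.encoder(self.embed(tokens))
        return h[-1]                             # final hidden state as sentence vector

    def forward(self, left, right):
        pair = torch.cat([self.encode(left), self.encode(right)], dim=-1)
        return self.classify(self.compare(pair))  # logits; train with cross-entropy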
Results Training and testing accuracies after 50 training epochs, averaged over five different model runs, are shown in Table UID18 . All recurrent models outperform the summing baseline. Even the simplest recurrent network, the SRN, achieves higher training and testing accuracy scores than the tree-shaped matrix model. The GRU and LSTM even beat the tensor model. The LSTM obtains slightly lower scores than the GRU, which is unexpected given its more complex design, but perhaps the current challenge does not require separate forget and input gates. For more insight into the types of errors made by the best-performing (GRU-based) model, we refer to the confusion matrices in Appendix "Error statistics" . The consistently higher testing accuracy provides evidence that the recurrent networks are not only capable of recognizing FOL entailment relations between unseen sentences. They can also outperform the tree-shaped models on this task, although they do not use any of the symbolic structure that seemed to explain the success of their recursive predecessors. The recurrent classifiers have learned to apply their own strategies, which we will investigate in the remainder of this paper. Zero-shot, compositional generalization Compositionality is the ability to interpret and generate a possibly infinite number of constructions from known constituents, and is commonly understood as one of the fundamental aspects of human learning and reasoning ( BIBREF30 , BIBREF31 ). It has often been claimed that neural networks operate on a merely associative basis, lacking the compositional capacities to develop systematicity without an abundance of training data. See e.g. BIBREF1 , BIBREF2 , BIBREF32 . Especially recurrent models have recently been regarded quite sceptically in this respect, following the negative results established by BIBREF3 and BIBREF4 . Their research suggests that recurrent networks only perform well provided that there are no systematic discrepancies between train and test data, whereas human learning is robust with respect to such differences thanks to compositionality. In this section, we report more positive results on compositional reasoning of our Siamese networks. We focus on zero-shot generalization: correct classification of examples of a type that has not been observed before. Provided that atomic constituents and production rules are understood, compositionality does not require that abundantly many instances embodying a semantic category are observed. We will consider in turn what set-up is required to demonstrate zero-shot generalization to unseen lengths, and to generalization to sentences composed of novel words. Unseen lengths We test if our recurrent models are capable of generalization to unseen lengths. Neural models are often considered incapable of such generalization, allegedly because they are limited to the training space BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . We want to test if this is the case for the recurrent models studied in this paper. The language $\mathcal {L}$ licenses a heavily constrained set of grammatical configurations, but it does allow the sentence length to vary according to the number of included negations. A perfectly compositional model should be able to interpret statements containing any number of negations, on condition that it has seen an instantiation at least once at each position where this is allowed. 
In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table UID19 . All recurrent models obtain (near-)perfect training accuracy scores. What happens on the test set is interesting. It turns out that the GRU and LSTM can generalize from lengths 5, 7 and 8 to 6 and 9 very well, while the SRN faces serious difficulties. It seems that training on lengths 5, 7 and 8, and thereby skipping length 6, enables the GRU and LSTM to generalize to unseen sentence lengths 6 and 9. Training on lengths 5-7 and testing on lengths 8-9 yields low test scores for all models. The GRU and LSTM gates appear to play a crucial role, because the results show that the SRN does not have this capacity at all. Unseen words In the next experiment, we assess whether our GRU-based model, which performed best in the preceding experiments, is capable of zero-shot generalization to sentences with novel words. The current set-up cannot deal with unknown words, so instead of randomly initializing an embedding matrix that is updated during training, we use pretrained, 50-dimensional GloVe embeddings BIBREF37 that are kept constant. Using GloVe embeddings, the GRU model obtains a mean training accuracy of 100.0% and a testing accuracy of 95.9% (averaged over five runs). The best-performing model (with 100.0% training and 97.1% testing accuracy) is used in the following zero-shot experiments. One of the most basic relations on the level of lexical semantics is synonymy, which holds between words with equivalent meanings. In the language $\mathcal {L}$ , a word can be substituted with one of its synonyms without altering the entailment relation assigned to the sentence pairs that contain it. If the GRU manages to perform well on such a modified data set after receiving the pretrained GloVe embedding of the unseen word, this is a first piece of evidence for its zero-shot generalization skills. We test this for several pairs of synonymous words. The best-performing GRU is first evaluated with respect to the fragment of the test data containing the original word $w$ , and consequently with respect to that same fragment after replacing the original word with its synonym $s(w)$ . The pairs of words, the cosine distance $cos\_dist(w,s(w))$ between their GloVe embeddings and the obtained results are listed in Table 6 . For the first three examples in Table 6 , substitution only decreases testing accuracy by a few percentage points. Apparently, the word embeddings of the synonyms encode the lexical properties that the GRU needs to recognize that the same entailment relations apply to the sentence pairs. This does not prove that the model has distilled essential information about hyponymy from the GloVe embeddings. It could also be that the word embeddings of the replacement words are geometrically very similar to the originals, so that it is an algebraic necessity that the same results arise. However, this suspicion is inconsistent with the result of changing `hate' into `detest'. The cosine distance between these words is 0.56, so according to this measure their vectors are more similar than those representing `love' and `adore' (which have a cosine distance of 0.57). Nonetheless, replacing `hate' with `detest' confuses the model, whereas substitution of `love' into `adore' only decreases testing accuracy by 4.5 percentage points. 
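The synonym probe just described can be sketched as follows, assuming a trained classifier with a predict method and a dictionary of frozen GloVe vectors (both hypothetical names):

import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def synonym_probe(model, test_pairs, glove, w, s_w):
    """Evaluate on the test fragment containing w, then on the same fragment
    after replacing w by its synonym s_w (embedded with frozen GloVe vectors)."""
    fragment = [(l, r, y) for l, r, y in test_pairs if w in l or w in r]
    swap = lambda sent: [s_w if tok == w else tok for tok in sent]

    def accuracy(pairs, rename=False):
        hits = 0
        for l, r, y in pairs:
            if rename:
                l, r = swap(l), swap(r)
            hits += int(model.predict([glove[t] for t in l],
                                      [glove[t] for t in r]) == y)
        return hits / len(pairs)

    return {"cos_dist": cosine_distance(glove[w], glove[s_w]),
            "acc_original": accuracy(fragment),
            "acc_synonym": accuracy(fragment, rename=True)}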
This illustrates that robustness of the GRU in this respect is not a matter of simple vector similarity. In those cases where substitution into synonyms does not confuse the model, it must have recognized a non-trivial property of the new word embedding that licenses particular inferences. In our next experiment, we replace a word not by its synonym, but by a word that has the same semantics in the context of artificial language $\mathcal {L}$ . We thus consider pairs of words that can be substituted with each other without affecting the entailment relation between any pair of sentences in which they feature. We call such terms `ontological twins'. Technically, if $\odot $ is an arbitrary lexical entailment relation and $\mathcal {O}$ is an ontology, then $w$ and $v$ are ontological twins if and only if $w, v \in \mathcal {O}$ and for all $u \in \mathcal {O}$ , if $u \notin \lbrace w,v \rbrace $ then $w \odot u \Leftrightarrow v \odot u$ . This trivially applies to self-identical terms or synonyms, but in the strictly defined hierarchy of $\mathcal {L}$ it is also the case for pairs of terms $w, v$ that maintain the same lexical entailment relations to all other terms in the taxonomy. Examples of ontological twins in the taxonomy of nouns $\mathcal {N}^{\mathcal {L}}$ are `Romans' and `Venetians'. This can easily be verified in the Venn diagram of Figure 1 by replacing `Romans' with `Venetians' and observing that the same hierarchy applies. The same holds for e.g. `Germans' and `Polish' or for `children' and `students'. For several such word-twin pairs the GRU is evaluated with respect to the fragment of the test data containing the original word $w$ , and with respect to that same fragment after replacing the original word with ontological twin $t(w)$ . Results are shown in Table 7 . The examples in Table 7 suggest that the best-performing GRU is largely robust with respect to substitution into ontological twins. Replacing `Romans' with other urban Italian demonyms hardly affects model accuracy on the modified fragment of the test data. As before, there appears to be no correlation with vector similarity because the cosine distance between the different twin pairs has a much higher variation than the corresponding accuracy scores. `Germans' can be changed into `Polish' without significant deterioration, but substitution with `Dutch' greatly decreases testing accuracy. The situation is even worse for `Spanish'. Again, cosine similarity provides no explanation: `Spanish' is still closer to `Germans' than `Neapolitans' to `Romans'. Rather, the accuracy appears to be negatively correlated with the geographical distance between the national demonyms. After replacing `children' with `students', `women' or `linguists', testing scores are still decent. So far, we replaced individual words in order to assess whether the GRU can generalize from the vocabulary to new notions that have comparable semantics in the context of this entailment recognition task. The examples have illustrated that the model tends to do this quite well. In the last zero-shot learning experiment, we replace sets of nouns instead of single words, in order to assess the flexibility of the relational semantics that our networks have learned. Formally, the replacement can be regarded as a function $r$ , mapping words $w$ to substitutes $r(w)$ . Not all items have to be replaced.
For an ontology $\mathcal {O}$ , the function $r$ must be such that for any $w, v \in \mathcal {O}$ and lexical entailment relation $\odot $ , $w \odot v \Leftrightarrow r(w) \odot r(v)$ . The result of applying $r$ can be called an `alternative hierarchy'. An example of an alternative hierarchy is the result of the replacement function $r_1$ that maps `Romans' to `Parisians' and `Italians' to `French'. Performing this substitution in the Venn diagram of Figure 1 shows that the taxonomy remains structurally intact. The best-performing GRU is evaluated on the fragment of the test data containing `Romans' or `Italians', and consequently on the same fragment after implementing replacement $r_1$ and providing the model with the GloVe embeddings of the unseen words. Replacement $r_1$ is incrementally modified up until replacement $r_4$ , which substitutes all nouns in $\mathcal {N}^{\mathcal {L}}$ . The results of applying $r_1$ to $r_4$ are shown in Table 8 . The results are positive: the GRU obtains 86.7% accuracy even after applying $r_4$ , which substitutes the entire ontology $\mathcal {N}^{\mathcal {L}}$ so that no previously encountered nouns are present in the test set anymore, although the sentences remain thematically somewhat similar to the original sentences. Testing scores are above 87% for the intermediate substitutions $r_1$ to $r_3$ . This outcome clearly shows that the classifier does not depend on a strongly customized word vector distribution in order to recognize higher-level entailment relations. Even if all nouns are replaced by alternatives with embeddings that have not been witnessed or optimized beforehand, the model obtains a high testing accuracy. This establishes obvious compositional capacities, because familiarity with structure and information about lexical semantics in the form of word embeddings are enough for the model to accommodate configurations of unseen words. What happens when we consider ontologies that have the same structure, but are thematically very different from the original ontology? Three such alternative hierarchies are considered: $r_{animals}$ , $r_{religion}$ and $r_{America}$ . Each of these functions relocalizes the noun ontology in a totally different domain of discourse, as indicated by their names. Table 9 specifies the functions and their effect. Testing accuracy decreases drastically, which indicates that the model is sensitive to the changing topic. Variation between the scores obtained after the three transformations is limited. Although they are much lower than before, they are still far above chance level for a seven-class problem. This suggests that the model is not at a complete loss as to the alternative noun hierarchies. Possibly, including a few relevant instances during training could already improve the results. Discussion & Conclusions We established that our Siamese recurrent networks (with SRN, GRU or LSTM cells) are able to recognize logical entailment relations without any a priori cues about syntax or semantics of the input expressions. Indeed, some of the recurrent set-ups even outperform tree-shaped networks, whose topology is specifically designed to deal with such tasks. This indicates that recurrent networks can develop representations that can adequately process a formal language with a nontrivial hierarchical structure. 
The formal language we defined did not exploit the full expressive power of first-order predicate logic; nevertheless by using standard first-order predicate logic, a standard theorem prover, and a set-up where the training set only covers a tiny fraction of the space of possible logical expressions, our experiments avoid the problems observed in earlier attempts to demonstrate logical reasoning in recurrent networks. The experiments performed in the last few sections moreover show that the GRU and LSTM architectures exhibit at least basic forms of compositional generalization. In particular, the results of the zero-shot generalization experiments with novel lengths and novel words cannot be explained with a `memorize-and-interpolate' account, i.e. an account of the working of deep neural networks that assumes all they do is store enormous training sets and generalize only locally. These results are relevant pieces of evidence in the decades-long debate on whether or not connectionist networks are fundamentally able to learn compositional solutions. Although we do not have the illusion that our work will put this debate to an end, we hope that it will help bring deep learning enthusiasts and skeptics a small step closer.
70,000
05bb75a1e1202850efa9191d6901de0a34744af0
05bb75a1e1202850efa9191d6901de0a34744af0_0
Q: Which of their training domains improves performance the most? Text: Introduction Although deep neural networks have achieved remarkable successes (e.g., BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ), their dependence on supervised learning has been challenged as a significant weakness. This dependence prevents deep neural networks from being applied to problems where labeled data is scarce. An example of such problems is common sense reasoning, such as the Winograd Schema Challenge BIBREF0 , where the labeled set is typically very small, on the order of hundreds of examples. Below is an example question from this dataset: Although it is straightforward for us to choose the answer to be "the trophy" according to our common sense, answering this type of question is a great challenge for machines because there is no training data, or very little of it. In this paper, we present a surprisingly simple method for common sense reasoning with Winograd schema multiple choice questions. Key to our method is the use of language models (LMs), trained on a large amount of unlabeled data, to score multiple choice questions posed by the challenge and similar datasets. More concretely, in the above example, we will first substitute the pronoun ("it") with the candidates ("the trophy" and "the suitcase"), and then use LMs to compute the probability of the two resulting sentences ("The trophy doesn’t fit in the suitcase because the trophy is too big." and "The trophy doesn’t fit in the suitcase because the suitcase is too big."). The substitution that results in a more probable sentence will be the correct answer. A unique feature of Winograd Schema questions is the presence of a special word that decides the correct reference choice. In the above example, "big" is this special word. When "big" is replaced by "small", the correct answer switches to "the suitcase". Although detecting this feature is not part of the challenge, further analysis shows that our system successfully discovers this special word to make its decisions in many cases, indicating a good grasp of commonsense knowledge. Related Work Unsupervised learning has been used to discover simple commonsense relationships. For example, Mikolov et al. BIBREF15 , BIBREF16 show that by learning to predict adjacent words in a sentence, word vectors can be used to answer analogy questions such as: Man:King::Woman:?. Our work uses a similar intuition that language modeling can naturally capture common sense knowledge. The difference is that Winograd Schema questions require more contextual information, hence our use of LMs instead of just word vectors. Neural LMs have also been applied successfully to improve downstream applications BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , researchers have shown that pre-trained LMs can be used as feature representations for a sentence or a paragraph to improve NLP applications such as document classification, machine translation, question answering, etc. The combined evidence suggests that LMs trained on a massive amount of unlabeled data can capture many aspects of natural language and the world's knowledge, especially commonsense information. Previous attempts at solving the Winograd Schema Challenge usually involve heavy utilization of annotated knowledge bases, rule-based reasoning, or hand-crafted features BIBREF21 , BIBREF22 , BIBREF23 . 
In particular, Rahman and Ng BIBREF24 employ human annotators to build more supervised training data. Their model utilizes nearly 70K hand-crafted features, including querying data from Google Search API. Sharma et al. BIBREF25 rely on a semantic parser to understand the question, query texts through Google Search, and reason on the graph produced by the parser. Similarly, Schüller BIBREF23 formalizes the knowledge-graph data structure and a reasoning process based on cognitive linguistics theories. Bailey et al. BIBREF22 introduce a framework for reasoning, using expensive annotated knowledge bases as axioms. The current best approach makes use of the skip-gram model to learn word representations BIBREF26 . The model incorporates several knowledge bases to regularize its training process, resulting in Knowledge Enhanced Embeddings (KEE). A semantic similarity scorer and a deep neural network classifier are then combined on top of KEE to predict the answers. The final system, therefore, includes both supervised and unsupervised models, besides three different knowledge bases. In contrast, our unsupervised method is simpler while having significantly higher accuracy. Unsupervised training is done on text corpora which can be cheaply curated. Using language models in reading comprehension tests also produced many great successes. Namely, Chu et al. BIBREF27 used bi-directional RNNs to predict the last word of a passage in the LAMBADA challenge. Similarly, LMs are also used to produce features for a classifier in the Store Close Test 2017, giving the best accuracy against other methods BIBREF28 . In a broader context, LMs are used to produce good word embeddings, significantly improving a wide variety of downstream tasks, including the general problem of question answering BIBREF19 , BIBREF29 . Methods We first substitute the pronoun in the original sentence with each of the candidate choices. The problem of coreference resolution then reduces to identifying which substitution results in a more probable sentence. By reframing the problem this way, language modeling becomes a natural solution by its definition. Namely, LMs are trained on text corpora, which encode human knowledge in the form of natural language. During inference, LMs are able to assign probability to any given text based on what they have learned from training data. An overview of our method is shown in Figure 1 . Suppose the sentence $S$ of $n$ consecutive words has its pronoun to be resolved specified at the $k^{th}$ position: $S = \lbrace w_1, .., w_{k-1}, w_{k} \equiv p, w_{k+1}, .., w_{n}\rbrace $ . We make use of a trained language model $P_\theta (w_t | w_{1}, w_2, .., w_{t-1})$ , which defines the probability of word $w_t$ conditioned on the previous words $w_1, ..., w_{t-1}$ . The substitution of a candidate reference $c$ into the pronoun position $k$ results in a new sentence $S_{w_k\leftarrow c}$ (we use this notation to mean that word $w_k$ is substituted by candidate $c$ ). We consider two different ways of scoring the substitution: full scoring, $Score_{full}$ , which scores how probable the resulting full sentence is, and partial scoring, $Score_{partial}$ , which scores how probable the part of the resulting sentence following $c$ is, given its antecedent. In other words, partial scoring only scores a part of $S_{w_k\leftarrow c}$ conditioned on the rest of the substituted sentence. An example of these two scores is shown in Table 1 . In our experiments, we find that the partial scoring strategy is generally better than the naive full scoring strategy. 
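A minimal sketch of the two scoring strategies, assuming a hypothetical lm_logprob(word, prefix) function that returns the log-probability assigned by a trained LM to a word given its prefix:

def score_substitution(lm_logprob, words, k, candidate, partial=False):
    """Score sentence S with the pronoun at position k replaced by `candidate`
    (possibly several tokens). Full scoring sums log-probabilities over the
    whole substituted sentence; partial scoring only over the words following
    the candidate, conditioned on everything before them."""
    cand_tokens = candidate.split()
    substituted = words[:k] + cand_tokens + words[k + 1:]
    start = k + len(cand_tokens) if partial else 0
    return sum(lm_logprob(substituted[t], substituted[:t])
               for t in range(start, len(substituted)))

def resolve(lm_logprob, words, k, candidates, partial=True):
    """Return the candidate whose substitution yields the more probable text."""
    return max(candidates,
               key=lambda c: score_substitution(lm_logprob, words, k, c, partial))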
Experimental settings In this section we describe tests for commonsense reasoning and the LMs used to solve these tasks. We also detail training text corpora used in our experiments. Main results Our experiments start with testing LMs trained on all text corpora with PDP-60 and WSC-273. Next, we show that it is possible to customize training data to obtain even better results. The first challenge in 2016: PDP-60 We first examine unsupervised single-model resolvers on PDP-60 by training one character-level and one word-level LM on the Gutenberg corpus. In Table 2 , these two resolvers outperform previous results by a large margin. For this task, we found full scoring gives better results than partial scoring. In Section "Partial scoring is better than full scoring." , we provide evidence that this is an atypical case due to the very small size of PDP-60. Next, we allow systems to take in necessary components to maximize their test performance. This includes making use of supervised training data that maps commonsense reasoning questions to their correct answer. Here we simply train another three variants of LMs on LM-1-Billion, CommonCrawl, and SQuAD and ensemble all of them. As reported in Table 3 , this ensemble of five unsupervised models outperforms the best system in the 2016 competition (58.3%) by a large margin. Specifically, we achieve 70.0% accuracy, better than the more recently reported results from Quan Liu et al (66.7%) BIBREF26 , who make use of three knowledge bases and a supervised deep neural network. Winograd Schema Challenge On the harder task WSC-273, our single-model resolvers also outperform the current state-of-the-art by a large margin, as shown in Table 4 . Namely, our word-level resolver achieves an accuracy of 56.4%. By training another 4 LMs, each on one of the 4 text corpora LM-1-Billion, CommonCrawl, SQuAD, and Gutenberg Books, and adding them to the previous ensemble, we are able to reach 61.5%, nearly 10 percentage points of accuracy above the previous best result. This is a drastic improvement considering this previous best system outperforms random guess by only 3% in accuracy. This task is more difficult than PDP-60. First, the overall performance of all competing systems is much lower than that of PDP-60. Second, incorporating supervised learning and expensive annotated knowledge bases into USSM provides insignificant gain this time (+3%), compared to the large gain on PDP-60 (+19%). Customized training data for Winograd Schema Challenge As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 , BIBREF25 , we also explore using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. In particular, we aggregate documents from the CommonCrawl dataset that have the most overlapping n-grams with the questions. The score for each document is a weighted sum of $F_1(n)$ scores when counting overlapping n-grams: $Similarity\_Score_{document} = \frac{\sum _{n=1}^4nF_1(n)}{\sum _{n=1}^4n}$ The top 0.1% of highest ranked documents is chosen as our new training corpus. Details of the ranking are shown in Figure 2 . This procedure resulted in nearly 1,000,000 documents, with the highest ranking document having a score of $8\times 10^{-2}$ , still relatively small compared to a perfect score of $1.0$ . 
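The document ranking formula above can be sketched directly; the tokenisation and the treatment of the question set as a single token sequence are simplifying assumptions:

from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap_f1(doc_tokens, question_tokens, n):
    doc, q = ngrams(doc_tokens, n), ngrams(question_tokens, n)
    overlap = sum((doc & q).values())          # clipped n-gram overlap
    if overlap == 0:
        return 0.0
    precision = overlap / sum(doc.values())
    recall = overlap / sum(q.values())
    return 2 * precision * recall / (precision + recall)

def similarity_score(doc_tokens, question_tokens):
    """Weighted sum of n-gram overlap F1 scores: sum_n n*F1(n) / sum_n n, n = 1..4."""
    weights = range(1, 5)
    return (sum(n * overlap_f1(doc_tokens, question_tokens, n) for n in weights)
            / sum(weights))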
We name this dataset STORIES since most of the constituent documents take the form of a story with a long chain of coherent events. We train four different LMs on STORIES and add them to the previous ensemble of 10 LMs, resulting in a gain of 2% accuracy in the final system as shown in Table 5 . Remarkably, single models trained on this corpus are already extremely strong, with a word-level LM achieving 62.6% accuracy, even better than the ensemble of 10 models previously trained on 4 other text corpora (61.5%). Discovery of special words in Winograd Schema We introduce a method to potentially detect keywords at which our proposed resolvers make a decision between the two candidates $c_{correct}$ and $c_{incorrect}$ . Namely, we look at the following ratio: $q_t = \frac{P_\theta (w_t | w_1, w_2, ..., w_{t-1}; w_k \leftarrow c_{correct})}{P_\theta (w_t | w_1, w_2, ..., w_{t-1}; w_k \leftarrow c_{incorrect})}$ where $1 \le t \le n$ for full scoring, and $k +1 \le t \le n$ for partial scoring. It follows that the choice between $c_{correct}$ or $c_{incorrect}$ is made by the value of $Q = \prod _tq_t$ being bigger than $1.0$ or not. By looking at the value of each individual $q_t$ , it is possible to retrieve the words with the largest values of $q_t$ and hence most responsible for the final value of $Q$ . We visualize the probability ratios $q_t$ to gain more insight into the decisions of our resolvers. Figure 3 displays a sample of incorrect decisions made by full scoring and corrected by partial scoring. Interestingly, we found that large values of $q_t$ coincide with the special keyword of each Winograd Schema in several cases. Intuitively, this means the LMs assigned a very low probability to the keyword after observing the wrong substitution. It follows that we can predict the keyword in each Winograd Schema question by selecting the top word positions with the highest value of $q_t$ . For questions with the keyword appearing before the reference, we detect them by backward-scoring models. Namely, we ensemble 6 LMs, each trained on one text corpus with word order reversed. This ensemble also outperforms the previous best system on WSC-273 with a remarkable accuracy of 58.2%. Overall, we are able to discover a significant number of special keywords (115 out of 178 correctly answered questions) as shown in Table 6 . This strongly indicates a correct understanding of the context and a good grasp of commonsense knowledge in the resolver's decision process. Partial scoring is better than full scoring. In this set of experiments, we look at wrong predictions from a word-level LM. With the full scoring strategy, we observe that $q_t$ at the pronoun position is most responsible for a very large percentage of incorrect decisions as shown in Figure 3 and Table 7 . For example, with the test "The trophy cannot fit in the suitcase because it is too big.", the system might return $c_{incorrect} = $ "suitcase" simply because $c_{correct} = $ "trophy" is a very rare word in its training corpus and is therefore assigned a very low probability, overpowering subsequent $q_t$ values. Following this reasoning, we apply a simple fix to full scoring by normalizing its score with the unigram count of $c$ : $Score_{full~normalized} = Score_{full} / Count(c)$ . Partial scoring, on the other hand, disregards $c$ altogether. As shown in Figure 4 , this normalization fixes full scoring in 9 out of 10 tested LMs on PDP-122. 
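A sketch of the count-normalisation fix, working in log space (the logarithm of the formula above); score_full is assumed to return the log-probability of the whole substituted sentence, as in the scoring sketch given earlier:

import math

def normalized_full_score(full_logprob, candidate, unigram_counts):
    """log(Score_full / Count(c)) = log Score_full - log Count(c); rare
    candidates are no longer penalised merely for being rare words."""
    return full_logprob - math.log(unigram_counts[candidate])

def resolve_normalized(score_full, words, k, candidates, unigram_counts):
    """Pick the candidate with the highest count-normalised full score."""
    return max(candidates,
               key=lambda c: normalized_full_score(score_full(words, k, c),
                                                   c, unigram_counts))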
On WSC-273, the result is very decisive as partial scoring strongly outperforms the other two scoring strategies in all cases. Since PDP-122 is a larger superset of PDP-60, we attribute the different behaviour observed on PDP-60 to an atypical case due to its very small size. Importance of training corpus In this set of experiments, we examine the effect of training data on commonsense reasoning test performance. Namely, we train both word-level and character-level LMs on each of the five corpora: LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and STORIES. A held-out dataset from each text corpus is used for early stopping on the corresponding training data. To speed up training on these large corpora, we first train the models on the LM-1-Billion text corpus. Each trained model is then divided into three groups of parameters: Embedding, Recurrent Cell, and Softmax. Each of the three is optionally transferred to train the same architectures on CommonCrawl, SQuAD and Gutenberg Books. The best transferring combination is chosen by cross-validation. Figure 5 (left and middle) shows that STORIES always yields the highest accuracy for both types of input processing. We next rank the text corpora based on ensemble performance for more reliable results. Namely, we compare the previous ensemble of 10 models against the same set of models trained on each single text corpus. This time, the original ensemble trained on a diverse set of text corpora outperforms all other single-corpus ensembles including STORIES. This highlights the important role of diversity in training data for commonsense reasoning accuracy of the final system. Conclusion We introduce a simple unsupervised method for Commonsense Reasoning tasks. Key to our proposal are large language models, trained on a number of massive and diverse text corpora. The resulting systems outperform previous best systems on both Pronoun Disambiguation Problems and the Winograd Schema Challenge. Remarkably, on the latter benchmark, we are able to achieve 63.7% accuracy, compared to the 52.8% accuracy of the previous state-of-the-art, which utilizes supervised learning and expensively annotated knowledge bases. We analyze our system's answers and observe that it discovers key features of the question that decide the correct answer, indicating good understanding of the context and commonsense knowledge. We also demonstrated that ensembles of models benefit the most when trained on a diverse set of text corpora. We anticipate that this simple technique will be a strong building block for future systems that utilize reasoning ability on commonsense knowledge. Recurrent language models The base model consists of two layers of Long-Short Term Memory (LSTM) BIBREF31 with 8192 hidden units. The output gate of each LSTM uses peepholes and a projection layer to reduce its output dimensionality to 1024. We perform dropout on the LSTM's outputs with probability 0.25. For word inputs, we use an embedding lookup of 800,000 words, each with dimension 1024. For character inputs, we use an embedding lookup of 256 characters, each with dimension 16. We concatenate all characters in each word into a tensor of shape (word length, 16) and add to its two ends the <begin of word> and <end of word> tokens. The resulting concatenation is zero-padded to produce a fixed-size tensor of shape (50, 16). This tensor is then processed by eight different 1-D convolution (Conv) kernels of different sizes and numbers of output channels, listed in Table 8 , each followed by a ReLU activation. 
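A rough PyTorch sketch of the character-level input pipeline up to the convolutions described above; the kernel widths and channel counts are placeholders rather than the values of Table 8, and the concatenation, highway layers and projection described in the next passage are sketched separately after it:

import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    """Embed each character (dim 16), pad every word to 50 characters, run
    several 1-D convolutions of different widths with ReLU, max-pool over the
    character axis and return the per-kernel feature maps side by side."""
    def __init__(self, n_chars=256, char_dim=16,
                 kernels=((1, 32), (2, 64), (3, 128), (4, 256),
                          (5, 256), (6, 512), (7, 512), (8, 512))):  # placeholder widths/channels
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(char_dim, out_ch, kernel_size=width) for width, out_ch in kernels])

    def forward(self, char_ids):                    # (batch, 50) padded character ids
        x = self.embed(char_ids).transpose(1, 2)    # (batch, char_dim, 50)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)              # one fixed-size vector per word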
The outputs of all CNNs are then concatenated and processed by two other fully-connected layers with highway connections that preserve the input dimensionality. The resulting tensor is projected down to a 1024-feature vector. For both word input and character input, we perform dropout on the tensors that go into the LSTM layers with probability 0.25. We use a single fully-connected layer followed by a $Softmax$ operator to process the LSTM's output and produce a distribution over a word vocabulary of size 800K. During training, the LM loss is evaluated using importance sampling with a negative sample size of 8192. This loss is minimized using the AdaGrad BIBREF37 algorithm with a learning rate of 0.2. All gradients on LSTM parameters and Character Embedding parameters are clipped by their global norm at 1.0. To avoid storing large matrices in memory, we shard them into 32 equal-sized smaller pieces. In our experiments, we used 8 different variants of this base model as listed in Table 9 . In Table 10 , we list all LMs and their training text corpora used in each of the experiments in Section "Main results" . Data contamination in CommonCrawl Using the similarity scoring technique in section "Customized training data for Winograd Schema Challenge" , we observe a large amount of low-quality training text on the lower end of the ranking. Namely, these are documents whose content is mostly unintelligible or unrecognized by our vocabulary. Training LMs for commonsense reasoning tasks on the full CommonCrawl, therefore, might not be ideal. On the other hand, we detected and removed a portion of the PDP-122 questions, which appeared as an extremely highly ranked document.
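Complementing the previous sketch, an illustrative (not the authors') implementation of the highway layers and the projection down to a 1024-dimensional word representation described above; the softmax layer and the importance-sampled loss are omitted:

import torch
import torch.nn as nn

class Highway(nn.Module):
    """y = g * relu(W_h x) + (1 - g) * x with gate g = sigmoid(W_t x);
    input and output dimensionality are identical."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))
        return g * torch.relu(self.transform(x)) + (1.0 - g) * x

class CharWordProjection(nn.Module):
    """Two highway layers over the concatenated CNN features, followed by a
    linear projection down to the 1024-dimensional word representation."""
    def __init__(self, cnn_dim, out_dim=1024):
        super().__init__()
        self.highways = nn.Sequential(Highway(cnn_dim), Highway(cnn_dim))
        self.project = nn.Linear(cnn_dim, out_dim)

    def forward(self, cnn_features):                # (batch, cnn_dim)
        return self.project(self.highways(cnn_features))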
documents from the CommonCrawl dataset that has the most overlapping n-grams with the question
770aeff30846cd3d0d5963f527691f3685e8af02
770aeff30846cd3d0d5963f527691f3685e8af02_0
Q: Do they fine-tune their model on the end task? Text: Introduction Although deep neural networks have achieved remarkable successes (e.g., BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ), their dependence on supervised learning has been challenged as a significant weakness. This dependence prevents deep neural networks from being applied to problems where labeled data is scarce. An example of such problems is common sense reasoning, such as the Winograd Schema Challenge BIBREF0 , where the labeled set is typically very small, on the order of hundreds of examples. Below is an example question from this dataset: Although it is straightforward for us to choose the answer to be "the trophy" according to our common sense, answering this type of question is a great challenge for machines because there is no training data, or very little of it. In this paper, we present a surprisingly simple method for common sense reasoning with Winograd schema multiple choice questions. Key to our method is the use of language models (LMs), trained on a large amount of unlabeled data, to score multiple choice questions posed by the challenge and similar datasets. More concretely, in the above example, we will first substitute the pronoun ("it") with the candidates ("the trophy" and "the suitcase"), and then use LMs to compute the probability of the two resulting sentences ("The trophy doesn’t fit in the suitcase because the trophy is too big." and "The trophy doesn’t fit in the suitcase because the suitcase is too big."). The substitution that results in a more probable sentence will be the correct answer. A unique feature of Winograd Schema questions is the presence of a special word that decides the correct reference choice. In the above example, "big" is this special word. When "big" is replaced by "small", the correct answer switches to "the suitcase". Although detecting this feature is not part of the challenge, further analysis shows that our system successfully discovers this special word to make its decisions in many cases, indicating a good grasp of commonsense knowledge. Related Work Unsupervised learning has been used to discover simple commonsense relationships. For example, Mikolov et al. BIBREF15 , BIBREF16 show that by learning to predict adjacent words in a sentence, word vectors can be used to answer analogy questions such as: Man:King::Woman:?. Our work uses a similar intuition that language modeling can naturally capture common sense knowledge. The difference is that Winograd Schema questions require more contextual information, hence our use of LMs instead of just word vectors. Neural LMs have also been applied successfully to improve downstream applications BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , researchers have shown that pre-trained LMs can be used as feature representations for a sentence or a paragraph to improve NLP applications such as document classification, machine translation, question answering, etc. The combined evidence suggests that LMs trained on a massive amount of unlabeled data can capture many aspects of natural language and the world's knowledge, especially commonsense information. Previous attempts at solving the Winograd Schema Challenge usually involve heavy utilization of annotated knowledge bases, rule-based reasoning, or hand-crafted features BIBREF21 , BIBREF22 , BIBREF23 . 
In particular, Rahman and Ng BIBREF24 employ human annotators to build more supervised training data. Their model utilizes nearly 70K hand-crafted features, including querying data from Google Search API. Sharma et al. BIBREF25 rely on a semantic parser to understand the question, query texts through Google Search, and reason on the graph produced by the parser. Similarly, Schüller BIBREF23 formalizes the knowledge-graph data structure and a reasoning process based on cognitive linguistics theories. Bailey et al. BIBREF22 introduce a framework for reasoning, using expensive annotated knowledge bases as axioms. The current best approach makes use of the skip-gram model to learn word representations BIBREF26 . The model incorporates several knowledge bases to regularize its training process, resulting in Knowledge Enhanced Embeddings (KEE). A semantic similarity scorer and a deep neural network classifier are then combined on top of KEE to predict the answers. The final system, therefore, includes both supervised and unsupervised models, besides three different knowledge bases. In contrast, our unsupervised method is simpler while having significantly higher accuracy. Unsupervised training is done on text corpora which can be cheaply curated. Using language models in reading comprehension tests also produced many great successes. Namely, Chu et al. BIBREF27 used bi-directional RNNs to predict the last word of a passage in the LAMBADA challenge. Similarly, LMs are also used to produce features for a classifier in the Store Close Test 2017, giving the best accuracy against other methods BIBREF28 . In a broader context, LMs are used to produce good word embeddings, significantly improving a wide variety of downstream tasks, including the general problem of question answering BIBREF19 , BIBREF29 . Methods We first substitute the pronoun in the original sentence with each of the candidate choices. The problem of coreference resolution then reduces to identifying which substitution results in a more probable sentence. By reframing the problem this way, language modeling becomes a natural solution by its definition. Namely, LMs are trained on text corpora, which encode human knowledge in the form of natural language. During inference, LMs are able to assign probability to any given text based on what they have learned from training data. An overview of our method is shown in Figure 1 . Suppose the sentence $S$ of $n$ consecutive words has its pronoun to be resolved specified at the $k^{th}$ position: $S = \lbrace w_1, .., w_{k-1}, w_{k} \equiv p, w_{k+1}, .., w_{n}\rbrace $ . We make use of a trained language model $P_\theta (w_t | w_{1}, w_2, .., w_{t-1})$ , which defines the probability of word $w_t$ conditioned on the previous words $w_1, ..., w_{t-1}$ . The substitution of a candidate reference $c$ into the pronoun position $k$ results in a new sentence $S_{w_k\leftarrow c}$ (we use this notation to mean that word $w_k$ is substituted by candidate $c$ ). We consider two different ways of scoring the substitution: full scoring, $Score_{full}$ , which scores how probable the resulting full sentence is, and partial scoring, $Score_{partial}$ , which scores how probable the part of the resulting sentence following $c$ is, given its antecedent. In other words, partial scoring only scores a part of $S_{w_k\leftarrow c}$ conditioned on the rest of the substituted sentence. An example of these two scores is shown in Table 1 . In our experiments, we find that the partial scoring strategy is generally better than the naive full scoring strategy. 
Experimental settings In this section we describe tests for commonsense reasoning and the LMs used to solve these tasks. We also detail training text corpora used in our experiments. Main results Our experiments start with testing LMs trained on all text corpora with PDP-60 and WSC-273. Next, we show that it is possible to customize training data to obtain even better results. The first challenge in 2016: PDP-60 We first examine unsupervised single-model resolvers on PDP-60 by training one character-level and one word-level LM on the Gutenberg corpus. In Table 2 , these two resolvers outperform previous results by a large margin. For this task, we found full scoring gives better results than partial scoring. In Section "Partial scoring is better than full scoring." , we provide evidences that this is an atypical case due to the very small size of PDP-60. Next, we allow systems to take in necessary components to maximize their test performance. This includes making use of supervised training data that maps commonsense reasoning questions to their correct answer. Here we simply train another three variants of LMs on LM-1-Billion, CommonCrawl, and SQuAD and ensemble all of them. As reported in Table 3 , this ensemble of five unsupervised models outperform the best system in the 2016 competition (58.3%) by a large margin. Specifically, we achieve 70.0% accuracy, better than the more recent reported results from Quan Liu et al (66.7%) BIBREF26 , who makes use of three knowledge bases and a supervised deep neural network. Winograd Schema Challenge On the harder task WSC-273, our single-model resolvers also outperform the current state-of-the-art by a large margin, as shown in Table 4 . Namely, our word-level resolver achieves an accuracy of 56.4%. By training another 4 LMs, each on one of the 4 text corpora LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and add to the previous ensemble, we are able to reach 61.5%, nearly 10% of accuracy above the previous best result. This is a drastic improvement considering this previous best system outperforms random guess by only 3% in accuracy. This task is more difficult than PDP-60. First, the overall performance of all competing systems are much lower than that of PDP-60. Second, incorporating supervised learning and expensive annotated knowledge bases to USSM provides insignificant gain this time (+3%), comparing to the large gain on PDP-60 (+19%). Customized training data for Winograd Schema Challenge As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 , BIBREF25 , we also explore using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions. The score for each document is a weighted sum of $F_1(n)$ scores when counting overlapping n-grams: $Similarity\_Score_{document} = \frac{\sum _{n=1}^4nF_1(n)}{\sum _{n=1}^4n}$ The top 0.1% of highest ranked documents is chosen as our new training corpus. Details of the ranking is shown in Figure 2 . This procedure resulted in nearly 1,000,000 documents, with the highest ranking document having a score of $8\times 10^{-2}$ , still relatively small to a perfect score of $1.0$ . 
We name this dataset STORIES since most of the constituent documents take the form of a story with long chain of coherent events. We train four different LMs on STORIES and add them to the previous ensemble of 10 LMs, resulting in a gain of 2% accuracy in the final system as shown in Table 5 . Remarkably, single models trained on this corpus are already extremely strong, with a word-level LM achieving 62.6% accuracy, even better than the ensemble of 10 models previously trained on 4 other text corpora (61.5%). Discovery of special words in Winograd Schema We introduce a method to potentially detect keywords at which our proposed resolvers make decision between the two candidates $c_{correct}$ and $c_{incorrect}$ . Namely, we look at the following ratio: $q_t = \frac{P_\theta (w_t | w_1, w_2, ..., w_{t-1}; w_k \leftarrow c_{correct})}{P_\theta (w_t | w_1, w_2, ..., w_{t-1}; w_k \leftarrow c_{incorrect})}$ Where $1 \le t \le n$ for full scoring, and $k +1 \le t \le n$ for partial scoring. It follows that the choice between $c_{correct}$ or $c_{incorrect}$ is made by the value of $Q = \prod _tq_t$ being bigger than $1.0$ or not. By looking at the value of each individual $q_t$ , it is possible to retrieve words with the largest values of $q_t$ and hence most responsible for the final value of $Q$ . We visualize the probability ratios $q_t$ to have more insights into the decisions of our resolvers. Figure 3 displays a sample of incorrect decisions made by full scoring and is corrected by partial scoring. Interestingly, we found $q_t$ with large values coincides with the special keyword of each Winograd Schema in several cases. Intuitively, this means the LMs assigned very low probability for the keyword after observing the wrong substitution. It follows that we can predict the keyword in each the Winograd Schema question by selecting top word positions with the highest value of $q_t$ . For questions with keyword appearing before the reference, we detect them by backward-scoring models. Namely, we ensemble 6 LMs, each trained on one text corpora with word order reversed. This ensemble also outperforms the previous best system on WSC-273 with a remarkable accuracy of 58.2%. Overall, we are able to discover a significant amount of special keywords (115 out of 178 correctly answered questions) as shown in Table 6 . This strongly indicates a correct understanding of the context and a good grasp of commonsense knowledge in the resolver's decision process. Partial scoring is better than full scoring. In this set of experiments, we look at wrong predictions from a word-level LM. With full scoring strategy, we observe that $q_t$ at the pronoun position is most responsible for a very large percentage of incorrect decisions as shown in Figfure 3 and Table 7 . For example, with the test "The trophy cannot fit in the suitcase because it is too big.", the system might return $c_{incorrect} = $ "suitcase" simply because $c_{correct} = $ "trophy" is a very rare word in its training corpus and therefore, is assigned a very low probability, overpowering subsequent $q_t$ values. Following this reasoning, we apply a simple fix to full scoring by normalizing its score with the unigram count of $c$ : $Score_{full~normalized} = Score_{full} / Count(c)$ . Partial scoring, on the other hand, disregards $c$ altogether. As shown in Figure 4 , this normalization fixes full scoring in 9 out of 10 tested LMs on PDP-122. 
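Returning to the keyword-discovery diagnostic defined above, a sketch of how the per-position ratios q_t could be computed and the most influential positions retrieved; lm_logprob is again a hypothetical LM interface, and the candidates are assumed to be single tokens for simplicity:

import math

def probability_ratios(lm_logprob, words, k, c_correct, c_incorrect, partial=True):
    """q_t = P(w_t | prefix with the correct substitution at position k) /
             P(w_t | prefix with the incorrect substitution).
    The two candidates are assumed to be single tokens here."""
    good = words[:k] + [c_correct] + words[k + 1:]
    bad = words[:k] + [c_incorrect] + words[k + 1:]
    start = k + 1 if partial else 0
    ratios = []
    for t in range(start, len(good)):
        log_q = lm_logprob(good[t], good[:t]) - lm_logprob(bad[t], bad[:t])
        ratios.append((good[t], math.exp(log_q)))
    return ratios

def predicted_keywords(ratios, top=1):
    """Positions most responsible for the final decision Q = prod_t q_t."""
    return sorted(ratios, key=lambda wq: wq[1], reverse=True)[:top]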
On WSC-273, the result is very decisive as partial scoring strongly outperforms the other two scoring in all cases. Since PDP-122 is a larger superset of PDP-60, we attribute the different behaviour observed on PDP-60 as an atypical case due to its very small size. Importance of training corpus In this set of experiments, we examine the effect of training data on commonsense reasoning test performance. Namely, we train both word-level and character-level LMs on each of the five corpora: LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and STORIES. A held-out dataset from each text corpus is used for early stopping on the corresponding training data. To speed up training on these large corpora, we first train the models on the LM-1-Billion text corpus. Each trained model is then divided into three groups of parameters: Embedding, Recurrent Cell, and Softmax. Each of the three is optionally transferred to train the same architectures on CommonCrawl, SQuAD and Gutenberg Books. The best transferring combination is chosen by cross-validation. Figure 5 -left and middle show that STORIES always yield the highest accuracy for both types of input processing. We next rank the text corpora based on ensemble performance for more reliable results. Namely, we compare the previous ensemble of 10 models against the same set of models trained on each single text corpus. This time, the original ensemble trained on a diverse set of text corpora outperforms all other single-corpus ensembles including STORIES. This highlights the important role of diversity in training data for commonsense reasoning accuracy of the final system. Conclusion We introduce a simple unsupervised method for Commonsense Reasoning tasks. Key to our proposal are large language models, trained on a number of massive and diverse text corpora. The resulting systems outperform previous best systems on both Pronoun Disambiguation Problems and Winograd Schema Challenge. Remarkably on the later benchmark, we are able to achieve 63.7% accuracy, comparing to 52.8% accuracy of the previous state-of-the-art, who utilizes supervised learning and expensively annotated knowledge bases. We analyze our system's answers and observe that it discovers key features of the question that decides the correct answer, indicating good understanding of the context and commonsense knowledge. We also demonstrated that ensembles of models benefit the most when trained on a diverse set of text corpora. We anticipate that this simple technique will be a strong building block for future systems that utilize reasoning ability on commonsense knowledge. Recurrent language models The base model consists of two layers of Long-Short Term Memory (LSTM) BIBREF31 with 8192 hidden units. The output gate of each LSTM uses peepholes and a projection layer to reduce its output dimensionality to 1024. We perform drop-out on LSTM's outputs with probability 0.25. For word inputs, we use an embedding lookup of 800000 words, each with dimension 1024. For character inputs, we use an embedding lookup of 256 characters, each with dimension 16. We concatenate all characters in each word into a tensor of shape (word length, 16) and add to its two ends the <begin of word> and <end of word> tokens. The resulting concatenation is zero-padded to produce a fixed size tensor of shape (50, 16). This tensor is then processed by eight different 1-D convolution (Conv) kernels of different sizes and number of output channels, listed in Table 8 , each followed by a ReLU acitvation. 
The output of all CNNs are then concatenated and processed by two other fully-connected layers with highway connection that persist the input dimensionality. The resulting tensor is projected down to a 1024-feature vector. For both word input and character input, we perform dropout on the tensors that go into LSTM layers with probability 0.25. We use a single fully-connected layer followed by a $Softmax$ operator to process the LSTM's output and produce a distribution over word vocabulary of size 800K. During training, LM loss is evaluated using importance sampling with negative sample size of 8192. This loss is minimized using the AdaGrad BIBREF37 algorithm with a learning rate of 0.2. All gradients on LSTM parameters and Character Embedding parameters are clipped by their global norm at 1.0. To avoid storing large matrices in memory, we shard them into 32 equal-sized smaller pieces. In our experiments, we used 8 different variants of this base model as listed in Table 9 . In Table 10 , we listed all LMs and their training text corpora used in each of the experiments in Section "Main results" . Data contamination in CommonCrawl Using the similarity scoring technique in section "Customized training data for Winograd Schema Challenge" , we observe a large amount of low quality training text on the lower end of the ranking. Namely, these are documents whose content are mostly unintelligible or unrecognized by our vocabulary. Training LMs for commonsense reasoning tasks on full CommonCrawl, therefore, might not be ideal. On the other hand, we detected and removed a portion of PDP-122 questions presented as an extremely high ranked document.
Yes
f7817b949605fb04b1e4fec9dd9ca8804fb92ae9
f7817b949605fb04b1e4fec9dd9ca8804fb92ae9_0
Q: Why does not the approach from English work on other languages? Text: Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases. This is because NLP systems depend on language corpora, which are inherently “not objective; they are creations of human design” BIBREF0 . One type of societal bias that has received considerable attention from the NLP community is gender stereotyping BIBREF1 , BIBREF2 , BIBREF3 . Gender stereotypes can manifest in language in overt ways. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women BIBREF4 . To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must also be replaced with la and experto with experta. In this paper, we present a new approach to counterfactual data augmentation BIBREF10 for mitigating gender stereotypes associated with animate nouns (i.e., nouns that represent people) for morphologically rich languages. We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns. We use this model as part of a four-step process, depicted in fig:pipeline, to reinflect entire sentences following an intervention on the grammatical gender of one word. We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level INLINEFORM0 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively. We also conduct an extrinsic evaluation using four languages. Following DBLP:journals/corr/abs-1807-11714, we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality. Gender Stereotypes in Text Men and women are mentioned at different rates in text BIBREF11 . This problem is exacerbated in certain contexts. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system. 
Gender stereotypes of this sort have been observed in word embeddings BIBREF5 , BIBREF3 , contextual word embeddings BIBREF12 , and co-reference resolution systems BIBREF13 , BIBREF9 inter alia. A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field BIBREF17 for morpho-syntactic agreement. This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags. Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. A dependency tree for a sentence (see fig:tree for an example) is a set of ordered triples INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are positions in the sentence (or a distinguished root symbol) and INLINEFORM3 is the label of the edge INLINEFORM4 in the tree; each position occurs exactly once as the first element in a triple. Each dependency tree INLINEFORM5 is associated with a sequence of morpho-syntactic tags INLINEFORM6 and a sequence of part-of-speech (POS) tags INLINEFORM7 . For example, the tags INLINEFORM8 and INLINEFORM9 for ingeniero are INLINEFORM10 and INLINEFORM11 , respectively, because ingeniero is a masculine, singular noun. For notational simplicity, we define INLINEFORM12 to be the set of all length- INLINEFORM13 sequences of morpho-syntactic tags. We define the probability of INLINEFORM0 given INLINEFORM1 and INLINEFORM2 as DISPLAYFORM0 where the binary factor INLINEFORM0 scores how well the morpho-syntactic tags INLINEFORM1 and INLINEFORM2 agree given the POS tags INLINEFORM3 and INLINEFORM4 and the label INLINEFORM5 . For example, consider the INLINEFORM6 (adjectival modifier) edge from experto to ingeniero in fig:tree. The factor INLINEFORM7 returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., INLINEFORM8 and INLINEFORM9 ) and a low score if they do not (e.g., INLINEFORM10 and INLINEFORM11 ). The unary factor INLINEFORM12 scores a morpho-syntactic tag INLINEFORM13 outside the context of the dependency tree. As we explain in sec:constraint, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them. eq:dist is normalized by the following partition function: INLINEFORM14 INLINEFORM0 can be calculated using belief propagation; we provide the update equations that we use in sec:bp. Our model is depicted in fig:fg. It is noteworthy that this model is delexicalized—i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves. Parameterization We consider a linear parameterization and a neural parameterization of the binary factor INLINEFORM0 . We define a matrix INLINEFORM0 for each triple INLINEFORM1 , where INLINEFORM2 is the number of morpho-syntactic subtags. For example, INLINEFORM3 has two subtags INLINEFORM4 and INLINEFORM5 . We then define INLINEFORM6 as follows: INLINEFORM7 where INLINEFORM0 is a multi-hot encoding of INLINEFORM1 . As an alternative, we also define a neural parameterization of INLINEFORM0 to allow parameter sharing among edges with different parts of speech and labels: INLINEFORM1 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 and INLINEFORM3 define the structure of the neural parameterization and each INLINEFORM4 is an embedding function. We use the unary factors only to force or disallow particular tags when performing an intervention. 
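Before turning to the unary factors and inference, here is a small NumPy sketch of the linear parameterization of the binary factor just described: one weight matrix per (POS, POS, label) triple, scoring a pair of tags through their multi-hot subtag encodings. The toy subtag inventory and the names subtag_index, multi_hot, and binary_factor are mine, not the paper's.

```python
import numpy as np

SUBTAGS = ["GENDER=MSC", "GENDER=FEM", "NUMBER=SG", "NUMBER=PL"]  # toy inventory
subtag_index = {s: i for i, s in enumerate(SUBTAGS)}

def multi_hot(tag):
    """Multi-hot encoding enc(m) of a morpho-syntactic tag (a set of subtags)."""
    v = np.zeros(len(SUBTAGS))
    for s in tag:
        v[subtag_index[s]] = 1.0
    return v

# One weight matrix per (POS_i, POS_j, edge label) triple, e.g. (ADJ, NOUN, amod).
weights = {}
def W(pos_i, pos_j, label):
    return weights.setdefault((pos_i, pos_j, label),
                              np.zeros((len(SUBTAGS), len(SUBTAGS))))

def binary_factor(m_i, m_j, pos_i, pos_j, label):
    """psi(m_i, m_j | p_i, p_j, l) = exp(enc(m_i)^T W enc(m_j))."""
    return np.exp(multi_hot(m_i) @ W(pos_i, pos_j, label) @ multi_hot(m_j))

# After training, W(ADJ, NOUN, "amod") should score gender- and number-agreeing
# tag pairs highly and disagreeing pairs poorly.
score = binary_factor({"GENDER=MSC", "NUMBER=SG"}, {"GENDER=MSC", "NUMBER=SG"},
                      "ADJ", "NOUN", "amod")
```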
Specifically, we define DISPLAYFORM0 where INLINEFORM0 is a strength parameter that determines the extent to which INLINEFORM1 should remain unchanged following an intervention. In the limit as INLINEFORM2 , all tags will remain unchanged except for the tag directly involved in the intervention. Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation BIBREF18 to perform exact inference. The algorithm is a generalization of the forward-backward algorithm for hidden Markov models BIBREF19 . Specifically, we pass messages from the leaves to the root and vice versa. The marginal distribution of a node is the point-wise product of all its incoming messages; the partition function INLINEFORM0 is the sum of any node's marginal distribution. Computing INLINEFORM1 takes polynomial time BIBREF18 —specifically, INLINEFORM2 where INLINEFORM3 is the number of morpho-syntactic tags. Finally, inferring the highest-probability morpho-syntactic tag sequence INLINEFORM4 given INLINEFORM5 and INLINEFORM6 can be performed using the max-product modification to belief propagation. Parameter Estimation We use gradient-based optimization. We treat the negative log-likelihood INLINEFORM0 as the loss function for tree INLINEFORM1 and compute its gradient using automatic differentiation BIBREF20 . We learn the parameters of sec:param by optimizing the negative log-likelihood using gradient descent. Intervention As explained in sec:gender, our goal is to transform sentences like sent:msc to sent:fem by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. For example, if we change the morpho-syntactic tag for ingeniero from [msc;sg] to [fem;sg], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [in; pr; sg]. If we intervene on the INLINEFORM0 word in a sentence, changing its tag from INLINEFORM1 to INLINEFORM2 , then using our model to infer the manner in which the remaining tags must be updated means using INLINEFORM3 to identify high-probability tags for the remaining words. Crucially, we wish to change as little as possible when intervening on a gendered word. The unary factors INLINEFORM0 enable us to do exactly this. As described in the previous section, the strength parameter INLINEFORM1 determines the extent to which INLINEFORM2 should remain unchanged following an intervention—the larger the value, the less likely it is that INLINEFORM3 will be changed. Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms. This task has received considerable attention from the NLP community BIBREF21 , BIBREF22 . We use the inflection model of D18-1473. This model conditions on the lemma INLINEFORM0 and morpho-syntactic tag INLINEFORM1 to form a distribution over possible inflections. For example, given experto and INLINEFORM2 , the trained inflection model will assign a high probability to expertas. We provide accuracies for the trained inflection model in tab:reinflect. Experiments We used the Adam optimizer BIBREF23 to train both parameterizations of our model until the change in dev-loss was less than INLINEFORM0 bits. We set INLINEFORM1 without tuning, and chose a learning rate of INLINEFORM2 and weight decay factor of INLINEFORM3 after tuning. We tuned INLINEFORM4 in the set INLINEFORM5 and chose INLINEFORM6 . 
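Returning to the intervention mechanism described above, the sketch below gives one plausible reading of the unary factors and the strength parameter (the displayed equation itself does not survive in this dump): the intervened position is forced to its new tag, while every other position prefers its original tag with strength alpha, so a large alpha keeps incidental changes to a minimum. The helper name unary_factors is hypothetical.

```python
def unary_factors(original_tags, intervened_pos, new_tag, alpha=10.0):
    """Build the unary factor phi_i for every position i after an intervention.

    One plausible reading of the construction in the text: the intervened
    position only allows its new tag, and every other position multiplies
    its original tag by alpha, so larger alpha means fewer incidental changes.
    """
    factors = []
    for i, orig in enumerate(original_tags):
        if i == intervened_pos:
            factors.append(lambda m, new=new_tag: 1.0 if m == new else 0.0)
        else:
            factors.append(lambda m, o=orig, a=alpha: a if m == o else 1.0)
    return factors

# Example for "el ingeniero es experto": force position 1 (the noun) from
# [MSC;SG] to [FEM;SG]; belief propagation with the binary agreement factors
# then decides whether the determiner and adjective tags should flip as well.
phis = unary_factors(["MSC;SG", "MSC;SG", "IN;PR;SG", "MSC;SG"],
                     intervened_pos=1, new_tag="FEM;SG", alpha=10.0)
```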
For the neural parameterization, we set INLINEFORM7 and INLINEFORM8 without any tuning. Finally, we trained the inflection model using only gendered words. We evaluate our approach both intrinsically and extrinsically. For the intrinsic evaluation, we focus on whether our approach yields the correct morpho-syntactic tags and the correct reinflections. For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models. Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language. Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank. The average length of these extracted sentences was 37 words. We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly. We chose Spanish and Hebrew because gender agreement operates differently in each language. We provide corpus statistics for both languages in the top two rows of tab:data. We created a hard-coded INLINEFORM0 to serve as a baseline for each language. For Spanish, we only activated, i.e. set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns. We created two separate baselines because gender agreement operates differently in each language. To evaluate our approach, we held all morpho-syntactic subtags fixed except for gender. For each annotated sentence, we intervened on the gender of the animate noun. We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata. Finally, we used the annotations to compute the tag-level INLINEFORM0 score and the form-level accuracy, excluding the animate nouns on which we intervened. We present the results in tab:intrinsic. Recall is consistently significantly lower than precision. As expected, the baselines have the highest precision (though not by much). This is because they reflect well-known rules for each language. That said, they have lower recall than our approach because they fail to capture more subtle relationships. For both languages, our approach struggles with conjunctions. For example, consider the phrase él es un ingeniero y escritor (he is an engineer and a writer). Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora. This is because two nouns do not normally need to have the same gender when they are conjoined. Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person. Note that including co-reference information in our MRF would create cycles and inference would no longer be exact. Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs. Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization. 
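For concreteness, here is one way the intrinsic metrics just described could be computed from the annotations, under my reading of the setup: tag-level precision, recall, and F1 over the decision of whether each word's gender subtag should flip, plus exact-match accuracy over the reinflected forms, both excluding the intervened animate noun. The function names and the exact definition of a correct tag decision are assumptions, not the paper's code.

```python
def tag_level_f1(gold_changed, pred_changed, intervened_pos):
    """Precision/recall/F1 over 'should this word's gender subtag flip?'
    decisions, excluding the intervened noun; inputs are boolean lists."""
    tp = fp = fn = 0
    for i, (g, p) in enumerate(zip(gold_changed, pred_changed)):
        if i == intervened_pos:
            continue
        tp += int(g and p)
        fp += int(p and not g)
        fn += int(g and not p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

def form_accuracy(gold_forms, pred_forms, intervened_pos):
    """Fraction of non-intervened tokens whose reinflected form matches gold."""
    pairs = [(g, p) for i, (g, p) in enumerate(zip(gold_forms, pred_forms))
             if i != intervened_pos]
    return sum(g == p for g, p in pairs) / len(pairs)
```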
We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient. Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping. Following DBLP:journals/corr/abs-1807-11714, focus on neural language models. We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages. As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model INLINEFORM0 for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful. The translations we use for these adjectives are given in sec:translation. We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes. For example, consider DISPLAYFORM0 If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer). If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive. In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive. If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero. Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): DISPLAYFORM0 We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see tab:data). For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using BIBREF24 's parser and extracted taggings and lemmata using the method of BIBREF25 . We automatically extracted an animacy gazetteer from WordNet BIBREF26 and then manually filtered the output for correctness. We provide the size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in tab:anim. For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on the noun, and then used our approach to transform the sentence. For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders. Choosing which sentences to duplicate is a difficult task. For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations. Multilingual animacy detection BIBREF27 might help with this challenge; co-reference information might additionally help. For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of BIBREF28 using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. 
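The two log ratios used in the extrinsic evaluation reduce to a few lines of glue code around a trained language model. In the sketch below, the lm.score call is a placeholder for whatever prefix-scoring interface the BPE-RNNLM exposes (not a real API), and the example phrases are the Spanish ones from the text.

```python
def stereotyping(lm, masc_phrase, fem_phrase):
    """log p(masc) - log p(fem) under the LM: near 0 means no gender
    preference, a positive value means the masculine phrase is preferred."""
    return lm.score(masc_phrase) - lm.score(fem_phrase)   # lm.score: placeholder

def grammaticality(lm, grammatical_phrase, ungrammatical_phrase):
    """log p(grammatical) - log p(ungrammatical); higher means the LM
    prefers the agreeing form."""
    return lm.score(grammatical_phrase) - lm.score(ungrammatical_phrase)

# Example queries with the Spanish phrases from the text:
#   stereotyping(lm, "el ingeniero bueno", "la ingeniera buena")
#   grammaticality(lm, "la ingeniera buena", "la ingeniera bueno")
```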
We then computed gender stereotyping and grammaticality as described above. We provide example phrases in tab:lm; we provide a more extensive list of phrases in app:queries. fig:bias depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. It is immediately apparent that our approach reduces gender stereotyping. On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively). We expected that naïve swapping of gendered words would also reduce gender stereotyping. Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages. For Spanish, we also examine specific words that are stereotyped toward men or women. We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender. fig:espbias suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women. The grammaticality of the corpora following CDA differs between languages. That said, with the exception of Hebrew, our approach sacrifices less grammaticality than naïve swapping of gendered words and sometimes even increases grammaticality over the original corpus. Given that the model did not perform as accurately for Hebrew (see tab:intrinsic), this finding is not surprising. Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology—specifically languages that exhibit gender agreement. To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English. For example, BIBREF5 proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; BIBREF10 studied gender stereotypes in language models; and BIBREF13 introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution. The most closely related work is that of BIBREF9, who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages. Our approach is specifically intended to yield grammatical sentences when applied to such languages. BIBREF29 also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation. Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information. Acknowledgments The last author acknowledges a Facebook Fellowship. 
Belief Propagation Update Equations Our belief propagation update equations are DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 returns the set of neighbouring nodes of node INLINEFORM1 . The belief at any node is given by DISPLAYFORM0 Adjective Translations tab:fem and tab:masc contain the feminine and masculine translations of the four adjectives that we used. Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases. Consider the noun engineer as an example. We created four phrases—one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer. These phrases, as well as their prefix log-likelihoods are provided below in tab:query.
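Since the displayed update equations do not survive in this dump, here is a generic sum-product sketch on a tree that matches the prose: messages are passed between neighbouring nodes, the unnormalized belief at a node is its unary factor times all incoming messages, and the partition function is the sum of any node's unnormalized belief. Data-structure choices and names are mine.

```python
import numpy as np

def sum_product(adj, unary, pairwise):
    """Sum-product belief propagation on a tree of tag variables.

    adj:      {node: [neighbour, ...]} for an undirected tree
    unary:    {node: 1-D array phi_i over the tag set}
    pairwise: {(i, j): 2-D array psi, rows indexed by i's tag, cols by j's},
              stored once per edge in either orientation
    Returns (beliefs, Z): normalized per-node marginals and the partition function.
    """
    msgs = {}

    def psi(i, j):
        return pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T

    def message(i, j):
        # message i -> j: sum over i's tags of phi_i * psi_ij * incoming messages
        if (i, j) not in msgs:
            prod = unary[i].copy()
            for k in adj[i]:
                if k != j:
                    prod = prod * message(k, i)
            msgs[(i, j)] = psi(i, j).T @ prod
        return msgs[(i, j)]

    beliefs, Z = {}, None
    for i in adj:
        b = unary[i].copy()
        for k in adj[i]:
            b = b * message(k, i)
        Z = b.sum()                      # identical at every node on a tree
        beliefs[i] = b / Z
    return beliefs, Z
```

The max-product decoding mentioned in the text is the same recursion with the sums replaced by maxima (plus back-pointers to recover the best tag sequence).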
Because, unlike other languages, English does not mark grammatical genders
8255f74cae1352e5acb2144fb857758dda69be02
8255f74cae1352e5acb2144fb857758dda69be02_0
Q: How do they measure grammaticality? Text: Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases. This is because NLP systems depend on language corpora, which are inherently “not objective; they are creations of human design” BIBREF0 . One type of societal bias that has received considerable attention from the NLP community is gender stereotyping BIBREF1 , BIBREF2 , BIBREF3 . Gender stereotypes can manifest in language in overt ways. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women BIBREF4 . To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must also be replaced with la and experto with experta. In this paper, we present a new approach to counterfactual data augmentation BIBREF10 for mitigating gender stereotypes associated with animate nouns (i.e., nouns that represent people) for morphologically rich languages. We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns. We use this model as part of a four-step process, depicted in fig:pipeline, to reinflect entire sentences following an intervention on the grammatical gender of one word. We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level INLINEFORM0 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively. We also conduct an extrinsic evaluation using four languages. Following DBLP:journals/corr/abs-1807-11714, we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality. Gender Stereotypes in Text Men and women are mentioned at different rates in text BIBREF11 . This problem is exacerbated in certain contexts. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system. 
Gender stereotypes of this sort have been observed in word embeddings BIBREF5 , BIBREF3 , contextual word embeddings BIBREF12 , and co-reference resolution systems BIBREF13 , BIBREF9 inter alia. A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field BIBREF17 for morpho-syntactic agreement. This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags. Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. A dependency tree for a sentence (see fig:tree for an example) is a set of ordered triples INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are positions in the sentence (or a distinguished root symbol) and INLINEFORM3 is the label of the edge INLINEFORM4 in the tree; each position occurs exactly once as the first element in a triple. Each dependency tree INLINEFORM5 is associated with a sequence of morpho-syntactic tags INLINEFORM6 and a sequence of part-of-speech (POS) tags INLINEFORM7 . For example, the tags INLINEFORM8 and INLINEFORM9 for ingeniero are INLINEFORM10 and INLINEFORM11 , respectively, because ingeniero is a masculine, singular noun. For notational simplicity, we define INLINEFORM12 to be the set of all length- INLINEFORM13 sequences of morpho-syntactic tags. We define the probability of INLINEFORM0 given INLINEFORM1 and INLINEFORM2 as DISPLAYFORM0 where the binary factor INLINEFORM0 scores how well the morpho-syntactic tags INLINEFORM1 and INLINEFORM2 agree given the POS tags INLINEFORM3 and INLINEFORM4 and the label INLINEFORM5 . For example, consider the INLINEFORM6 (adjectival modifier) edge from experto to ingeniero in fig:tree. The factor INLINEFORM7 returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., INLINEFORM8 and INLINEFORM9 ) and a low score if they do not (e.g., INLINEFORM10 and INLINEFORM11 ). The unary factor INLINEFORM12 scores a morpho-syntactic tag INLINEFORM13 outside the context of the dependency tree. As we explain in sec:constraint, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them. eq:dist is normalized by the following partition function: INLINEFORM14 INLINEFORM0 can be calculated using belief propagation; we provide the update equations that we use in sec:bp. Our model is depicted in fig:fg. It is noteworthy that this model is delexicalized—i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves. Parameterization We consider a linear parameterization and a neural parameterization of the binary factor INLINEFORM0 . We define a matrix INLINEFORM0 for each triple INLINEFORM1 , where INLINEFORM2 is the number of morpho-syntactic subtags. For example, INLINEFORM3 has two subtags INLINEFORM4 and INLINEFORM5 . We then define INLINEFORM6 as follows: INLINEFORM7 where INLINEFORM0 is a multi-hot encoding of INLINEFORM1 . As an alternative, we also define a neural parameterization of INLINEFORM0 to allow parameter sharing among edges with different parts of speech and labels: INLINEFORM1 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 and INLINEFORM3 define the structure of the neural parameterization and each INLINEFORM4 is an embedding function. We use the unary factors only to force or disallow particular tags when performing an intervention. 
Specifically, we define DISPLAYFORM0 where INLINEFORM0 is a strength parameter that determines the extent to which INLINEFORM1 should remain unchanged following an intervention. In the limit as INLINEFORM2 , all tags will remain unchanged except for the tag directly involved in the intervention. Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation BIBREF18 to perform exact inference. The algorithm is a generalization of the forward-backward algorithm for hidden Markov models BIBREF19 . Specifically, we pass messages from the leaves to the root and vice versa. The marginal distribution of a node is the point-wise product of all its incoming messages; the partition function INLINEFORM0 is the sum of any node's marginal distribution. Computing INLINEFORM1 takes polynomial time BIBREF18 —specifically, INLINEFORM2 where INLINEFORM3 is the number of morpho-syntactic tags. Finally, inferring the highest-probability morpho-syntactic tag sequence INLINEFORM4 given INLINEFORM5 and INLINEFORM6 can be performed using the max-product modification to belief propagation. Parameter Estimation We use gradient-based optimization. We treat the negative log-likelihood INLINEFORM0 as the loss function for tree INLINEFORM1 and compute its gradient using automatic differentiation BIBREF20 . We learn the parameters of sec:param by optimizing the negative log-likelihood using gradient descent. Intervention As explained in sec:gender, our goal is to transform sentences like sent:msc to sent:fem by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. For example, if we change the morpho-syntactic tag for ingeniero from [msc;sg] to [fem;sg], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [in; pr; sg]. If we intervene on the INLINEFORM0 word in a sentence, changing its tag from INLINEFORM1 to INLINEFORM2 , then using our model to infer the manner in which the remaining tags must be updated means using INLINEFORM3 to identify high-probability tags for the remaining words. Crucially, we wish to change as little as possible when intervening on a gendered word. The unary factors INLINEFORM0 enable us to do exactly this. As described in the previous section, the strength parameter INLINEFORM1 determines the extent to which INLINEFORM2 should remain unchanged following an intervention—the larger the value, the less likely it is that INLINEFORM3 will be changed. Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms. This task has received considerable attention from the NLP community BIBREF21 , BIBREF22 . We use the inflection model of D18-1473. This model conditions on the lemma INLINEFORM0 and morpho-syntactic tag INLINEFORM1 to form a distribution over possible inflections. For example, given experto and INLINEFORM2 , the trained inflection model will assign a high probability to expertas. We provide accuracies for the trained inflection model in tab:reinflect. Experiments We used the Adam optimizer BIBREF23 to train both parameterizations of our model until the change in dev-loss was less than INLINEFORM0 bits. We set INLINEFORM1 without tuning, and chose a learning rate of INLINEFORM2 and weight decay factor of INLINEFORM3 after tuning. We tuned INLINEFORM4 in the set INLINEFORM5 and chose INLINEFORM6 . 
For the neural parameterization, we set INLINEFORM7 and INLINEFORM8 without any tuning. Finally, we trained the inflection model using only gendered words. We evaluate our approach both intrinsically and extrinsically. For the intrinsic evaluation, we focus on whether our approach yields the correct morpho-syntactic tags and the correct reinflections. For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models. Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language. Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank. The average length of these extracted sentences was 37 words. We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly. We chose Spanish and Hebrew because gender agreement operates differently in each language. We provide corpus statistics for both languages in the top two rows of tab:data. We created a hard-coded INLINEFORM0 to serve as a baseline for each language. For Spanish, we only activated, i.e. set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns. We created two separate baselines because gender agreement operates differently in each language. To evaluate our approach, we held all morpho-syntactic subtags fixed except for gender. For each annotated sentence, we intervened on the gender of the animate noun. We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata. Finally, we used the annotations to compute the tag-level INLINEFORM0 score and the form-level accuracy, excluding the animate nouns on which we intervened. We present the results in tab:intrinsic. Recall is consistently significantly lower than precision. As expected, the baselines have the highest precision (though not by much). This is because they reflect well-known rules for each language. That said, they have lower recall than our approach because they fail to capture more subtle relationships. For both languages, our approach struggles with conjunctions. For example, consider the phrase él es un ingeniero y escritor (he is an engineer and a writer). Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora. This is because two nouns do not normally need to have the same gender when they are conjoined. Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person. Note that including co-reference information in our MRF would create cycles and inference would no longer be exact. Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs. Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization. 
We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient. Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping. Following DBLP:journals/corr/abs-1807-11714, focus on neural language models. We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages. As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model INLINEFORM0 for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful. The translations we use for these adjectives are given in sec:translation. We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes. For example, consider DISPLAYFORM0 If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer). If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive. In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive. If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero. Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): DISPLAYFORM0 We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see tab:data). For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using BIBREF24 's parser and extracted taggings and lemmata using the method of BIBREF25 . We automatically extracted an animacy gazetteer from WordNet BIBREF26 and then manually filtered the output for correctness. We provide the size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in tab:anim. For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on the noun, and then used our approach to transform the sentence. For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders. Choosing which sentences to duplicate is a difficult task. For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations. Multilingual animacy detection BIBREF27 might help with this challenge; co-reference information might additionally help. For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of BIBREF28 using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. 
We then computed gender stereotyping and grammaticality as described above. We provide example phrases in tab:lm; we provide a more extensive list of phrases in app:queries. fig:bias depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. It is immediately apparent that our approach reduces gender stereotyping. On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively). We expected that naïve swapping of gendered words would also reduce gender stereotyping. Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages. For Spanish, we also examine specific words that are stereotyped toward men or women. We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender. fig:espbias suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women. The grammaticality of the corpora following CDA differs between languages. That said, with the exception of Hebrew, our approach sacrifices less grammaticality than naïve swapping of gendered words and sometimes even increases grammaticality over the original corpus. Given that the model did not perform as accurately for Hebrew (see tab:intrinsic), this finding is not surprising. Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology—specifically languages that exhibit gender agreement. To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English. For example, BIBREF5 proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; BIBREF10 studied gender stereotypes in language models; and BIBREF13 introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution. The most closely related work is that of BIBREF9, who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages. Our approach is specifically intended to yield grammatical sentences when applied to such languages. BIBREF29 also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation. Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information. Acknowledgments The last author acknowledges a Facebook Fellowship. 
Belief Propagation Update Equations Our belief propagation update equations are DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 returns the set of neighbouring nodes of node INLINEFORM1 . The belief at any node is given by DISPLAYFORM0 Adjective Translations tab:fem and tab:masc contain the feminine and masculine translations of the four adjectives that we used. Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases. Consider the noun engineer as an example. We created four phrases—one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer. These phrases, as well as their prefix log-likelihoods are provided below in tab:query.
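The appendix states that sixteen phrases are generated per animate noun. One factorization consistent with the two log ratios used in the evaluation is 4 adjectives x 2 noun genders x 2 agreement conditions; the sketch below builds those combinations for the Spanish example. Both this factorization and the adjective translations are my assumptions, since the paper's tables are not reproduced here.

```python
from itertools import product

# Spanish example pieces; adjective forms are (masculine, feminine) and are
# illustrative translations, not the paper's official list.
DETS  = {"MSC": "el", "FEM": "la"}
NOUNS = {"MSC": "ingeniero", "FEM": "ingeniera"}
ADJS  = {"good": ("bueno", "buena"), "bad": ("malo", "mala"),
         "smart": ("listo", "lista"), "beautiful": ("bonito", "bonita")}

def sixteen_phrases():
    """4 adjectives x 2 noun genders x 2 agreement conditions = 16 phrases."""
    phrases = []
    for adj, noun_gender, agrees in product(ADJS, ("MSC", "FEM"), (True, False)):
        masc_adj, fem_adj = ADJS[adj]
        wanted = noun_gender if agrees else ("FEM" if noun_gender == "MSC" else "MSC")
        adj_form = masc_adj if wanted == "MSC" else fem_adj
        phrases.append(f"{DETS[noun_gender]} {NOUNS[noun_gender]} {adj_form}")
    return phrases

assert len(sixteen_phrases()) == 16
```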
by calculating log ratio of grammatical phrase over ungrammatical phrase
db62d5d83ec187063b57425affe73fef8733dd28
db62d5d83ec187063b57425affe73fef8733dd28_0
Q: Which model do they use to convert between masculine-inflected and feminine-inflected sentences? Text: Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases. This is because NLP systems depend on language corpora, which are inherently “not objective; they are creations of human design” BIBREF0 . One type of societal bias that has received considerable attention from the NLP community is gender stereotyping BIBREF1 , BIBREF2 , BIBREF3 . Gender stereotypes can manifest in language in overt ways. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women BIBREF4 . To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must also be replaced with la and experto with experta. In this paper, we present a new approach to counterfactual data augmentation BIBREF10 for mitigating gender stereotypes associated with animate nouns (i.e., nouns that represent people) for morphologically rich languages. We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns. We use this model as part of a four-step process, depicted in fig:pipeline, to reinflect entire sentences following an intervention on the grammatical gender of one word. We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level INLINEFORM0 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively. We also conduct an extrinsic evaluation using four languages. Following DBLP:journals/corr/abs-1807-11714, we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality. Gender Stereotypes in Text Men and women are mentioned at different rates in text BIBREF11 . This problem is exacerbated in certain contexts. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system. 
Gender stereotypes of this sort have been observed in word embeddings BIBREF5 , BIBREF3 , contextual word embeddings BIBREF12 , and co-reference resolution systems BIBREF13 , BIBREF9 inter alia. A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field BIBREF17 for morpho-syntactic agreement. This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags. Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. A dependency tree for a sentence (see fig:tree for an example) is a set of ordered triples INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are positions in the sentence (or a distinguished root symbol) and INLINEFORM3 is the label of the edge INLINEFORM4 in the tree; each position occurs exactly once as the first element in a triple. Each dependency tree INLINEFORM5 is associated with a sequence of morpho-syntactic tags INLINEFORM6 and a sequence of part-of-speech (POS) tags INLINEFORM7 . For example, the tags INLINEFORM8 and INLINEFORM9 for ingeniero are INLINEFORM10 and INLINEFORM11 , respectively, because ingeniero is a masculine, singular noun. For notational simplicity, we define INLINEFORM12 to be the set of all length- INLINEFORM13 sequences of morpho-syntactic tags. We define the probability of INLINEFORM0 given INLINEFORM1 and INLINEFORM2 as DISPLAYFORM0 where the binary factor INLINEFORM0 scores how well the morpho-syntactic tags INLINEFORM1 and INLINEFORM2 agree given the POS tags INLINEFORM3 and INLINEFORM4 and the label INLINEFORM5 . For example, consider the INLINEFORM6 (adjectival modifier) edge from experto to ingeniero in fig:tree. The factor INLINEFORM7 returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., INLINEFORM8 and INLINEFORM9 ) and a low score if they do not (e.g., INLINEFORM10 and INLINEFORM11 ). The unary factor INLINEFORM12 scores a morpho-syntactic tag INLINEFORM13 outside the context of the dependency tree. As we explain in sec:constraint, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them. eq:dist is normalized by the following partition function: INLINEFORM14 INLINEFORM0 can be calculated using belief propagation; we provide the update equations that we use in sec:bp. Our model is depicted in fig:fg. It is noteworthy that this model is delexicalized—i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves. Parameterization We consider a linear parameterization and a neural parameterization of the binary factor INLINEFORM0 . We define a matrix INLINEFORM0 for each triple INLINEFORM1 , where INLINEFORM2 is the number of morpho-syntactic subtags. For example, INLINEFORM3 has two subtags INLINEFORM4 and INLINEFORM5 . We then define INLINEFORM6 as follows: INLINEFORM7 where INLINEFORM0 is a multi-hot encoding of INLINEFORM1 . As an alternative, we also define a neural parameterization of INLINEFORM0 to allow parameter sharing among edges with different parts of speech and labels: INLINEFORM1 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 and INLINEFORM3 define the structure of the neural parameterization and each INLINEFORM4 is an embedding function. We use the unary factors only to force or disallow particular tags when performing an intervention. 
Specifically, we define DISPLAYFORM0 where INLINEFORM0 is a strength parameter that determines the extent to which INLINEFORM1 should remain unchanged following an intervention. In the limit as INLINEFORM2 , all tags will remain unchanged except for the tag directly involved in the intervention. Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation BIBREF18 to perform exact inference. The algorithm is a generalization of the forward-backward algorithm for hidden Markov models BIBREF19 . Specifically, we pass messages from the leaves to the root and vice versa. The marginal distribution of a node is the point-wise product of all its incoming messages; the partition function INLINEFORM0 is the sum of any node's marginal distribution. Computing INLINEFORM1 takes polynomial time BIBREF18 —specifically, INLINEFORM2 where INLINEFORM3 is the number of morpho-syntactic tags. Finally, inferring the highest-probability morpho-syntactic tag sequence INLINEFORM4 given INLINEFORM5 and INLINEFORM6 can be performed using the max-product modification to belief propagation. Parameter Estimation We use gradient-based optimization. We treat the negative log-likelihood INLINEFORM0 as the loss function for tree INLINEFORM1 and compute its gradient using automatic differentiation BIBREF20 . We learn the parameters of sec:param by optimizing the negative log-likelihood using gradient descent. Intervention As explained in sec:gender, our goal is to transform sentences like sent:msc to sent:fem by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. For example, if we change the morpho-syntactic tag for ingeniero from [msc;sg] to [fem;sg], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [in; pr; sg]. If we intervene on the INLINEFORM0 word in a sentence, changing its tag from INLINEFORM1 to INLINEFORM2 , then using our model to infer the manner in which the remaining tags must be updated means using INLINEFORM3 to identify high-probability tags for the remaining words. Crucially, we wish to change as little as possible when intervening on a gendered word. The unary factors INLINEFORM0 enable us to do exactly this. As described in the previous section, the strength parameter INLINEFORM1 determines the extent to which INLINEFORM2 should remain unchanged following an intervention—the larger the value, the less likely it is that INLINEFORM3 will be changed. Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms. This task has received considerable attention from the NLP community BIBREF21 , BIBREF22 . We use the inflection model of D18-1473. This model conditions on the lemma INLINEFORM0 and morpho-syntactic tag INLINEFORM1 to form a distribution over possible inflections. For example, given experto and INLINEFORM2 , the trained inflection model will assign a high probability to expertas. We provide accuracies for the trained inflection model in tab:reinflect. Experiments We used the Adam optimizer BIBREF23 to train both parameterizations of our model until the change in dev-loss was less than INLINEFORM0 bits. We set INLINEFORM1 without tuning, and chose a learning rate of INLINEFORM2 and weight decay factor of INLINEFORM3 after tuning. We tuned INLINEFORM4 in the set INLINEFORM5 and chose INLINEFORM6 . 
For the neural parameterization, we set INLINEFORM7 and INLINEFORM8 without any tuning. Finally, we trained the inflection model using only gendered words. We evaluate our approach both intrinsically and extrinsically. For the intrinsic evaluation, we focus on whether our approach yields the correct morpho-syntactic tags and the correct reinflections. For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models. Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language. Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank. The average length of these extracted sentences was 37 words. We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly. We chose Spanish and Hebrew because gender agreement operates differently in each language. We provide corpus statistics for both languages in the top two rows of tab:data. We created a hard-coded INLINEFORM0 to serve as a baseline for each language. For Spanish, we only activated, i.e. set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns. We created two separate baselines because gender agreement operates differently in each language. To evaluate our approach, we held all morpho-syntactic subtags fixed except for gender. For each annotated sentence, we intervened on the gender of the animate noun. We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata. Finally, we used the annotations to compute the tag-level INLINEFORM0 score and the form-level accuracy, excluding the animate nouns on which we intervened. We present the results in tab:intrinsic. Recall is consistently significantly lower than precision. As expected, the baselines have the highest precision (though not by much). This is because they reflect well-known rules for each language. That said, they have lower recall than our approach because they fail to capture more subtle relationships. For both languages, our approach struggles with conjunctions. For example, consider the phrase él es un ingeniero y escritor (he is an engineer and a writer). Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora. This is because two nouns do not normally need to have the same gender when they are conjoined. Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person. Note that including co-reference information in our MRF would create cycles and inference would no longer be exact. Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs. Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization. 
We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient. Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping. Following DBLP:journals/corr/abs-1807-11714, we focus on neural language models. We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages. As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model INLINEFORM0 for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful. The translations we use for these adjectives are given in sec:translation. We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes. For example, consider DISPLAYFORM0 If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera buena (the good female engineer). If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive. In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive. If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero. Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): DISPLAYFORM0 We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see tab:data). For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using BIBREF24 's parser and extracted taggings and lemmata using the method of BIBREF25 . We automatically extracted an animacy gazetteer from WordNet BIBREF26 and then manually filtered the output for correctness. We provide the size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in tab:anim. For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on the noun, and then used our approach to transform the sentence. For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders. Choosing which sentences to duplicate is a difficult task. For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations. Multilingual animacy detection BIBREF27 might help with this challenge; co-reference information might additionally help. For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of BIBREF28 using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.
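As a concrete illustration of the two measures just described, the following is a minimal sketch in Python; the prefix_logprob scoring interface and the specific token sequences are assumptions made for illustration and do not reconstruct the exact equations above.

# Illustrative sketch of the gender-stereotyping and grammaticality log ratios.
# `prefix_logprob` is an assumed interface: any function mapping a token sequence to the
# language model's log-probability of that prefix (e.g. a sum of per-token log-probabilities).

def log_ratio(prefix_logprob, numerator_tokens, denominator_tokens):
    """log p(numerator prefix) - log p(denominator prefix) under the language model."""
    return prefix_logprob(numerator_tokens) - prefix_logprob(denominator_tokens)

def stereotype_score(prefix_logprob):
    # Gender stereotyping: grammatical masculine phrase vs. grammatical feminine phrase.
    # A value near 0 means the model is roughly as likely to produce either phrase.
    return log_ratio(prefix_logprob, ["el", "ingeniero", "bueno"], ["la", "ingeniera", "buena"])

def grammaticality_score(prefix_logprob):
    # Grammaticality: grammatical phrase vs. its agreement-violating counterpart.
    # Larger values mean the model prefers the grammatical phrase.
    return log_ratio(prefix_logprob, ["la", "ingeniera", "buena"], ["la", "ingeniera", "bueno"])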
We then computed gender stereotyping and grammaticality as described above. We provide example phrases in tab:lm; we provide a more extensive list of phrases in app:queries. fig:bias depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. It is immediately apparent that our approach reduces gender stereotyping. On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively). We expected that naïve swapping of gendered words would also reduce gender stereotyping. Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages. For Spanish, we also examine specific words that are stereotyped toward men or women. We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender. fig:espbias suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women. The grammaticality of the corpora following CDA differs between languages. That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words or even increases grammaticality over the original corpus. Given that we know the model did not perform as accurately for Hebrew (see tab:intrinsic), this finding is not surprising. Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology—specifically languages that exhibit gender agreement. To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English. For example, BIBREF5 proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; BIBREF10 studied gender stereotypes in language models; and BIBREF13 introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution. The most closely related work is that of BIBREF9 , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages. Our approach is specifically intended to yield grammatical sentences when applied to such languages. BIBREF29 also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation. Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information. Acknowledgments The last author acknowledges a Facebook Fellowship.
Belief Propagation Update Equations Our belief propagation update equations are DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 returns the set of neighbouring nodes of node INLINEFORM1 . The belief at any node is given by DISPLAYFORM0 Adjective Translations tab:fem and tab:masc contain the feminine and masculine translations of the four adjectives that we used. Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases. Consider the noun engineer as an example. We created four phrases—one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer. These phrases, as well as their prefix log-likelihoods are provided below in tab:query.
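To make the inference procedure concrete, below is a minimal sketch of sum-product belief propagation on a tree-shaped tag MRF, mirroring the update equations given above; it is not the authors' implementation, and the data structures (potential tables as NumPy arrays keyed by node and edge) are assumptions made for illustration. Replacing the sums with maxima gives the max-product variant used for decoding.

import numpy as np

def belief_propagation(edges, unary, pairwise, root=0):
    """Exact sum-product inference on a tree-shaped MRF.
    edges: list of (i, j) pairs forming a tree over nodes 0..n-1.
    unary: dict node -> length-K array of nonnegative unary potentials psi_i(m).
    pairwise: dict (i, j) -> K x K array of pairwise potentials phi(m_i, m_j).
    Returns (marginals, Z): per-node marginal distributions and the partition function."""
    n = len(unary)
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def pair(i, j):
        # Look up the pairwise table in either orientation.
        return pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T

    msgs = {}  # (src, dst) -> length-K message vector

    def send(src, dst):
        # mu_{src->dst}(m_dst) = sum_{m_src} psi_src(m_src) * phi(m_src, m_dst) * product of other incoming messages
        incoming = unary[src].copy()
        for k in nbrs[src]:
            if k != dst:
                incoming *= msgs[(k, src)]
        msgs[(src, dst)] = pair(src, dst).T @ incoming

    # A DFS order in which every child appears after its parent.
    order, stack, seen = [], [root], {root}
    while stack:
        v = stack.pop()
        order.append(v)
        for k in nbrs[v]:
            if k not in seen:
                seen.add(k)
                stack.append(k)
    # Upward pass (leaves to root): send a message once all other neighbours have reported in.
    for v in reversed(order):
        for k in nbrs[v]:
            if (v, k) not in msgs and all((c, v) in msgs for c in nbrs[v] if c != k):
                send(v, k)
    # Downward pass (root to leaves): send the remaining messages.
    for v in order:
        for k in nbrs[v]:
            if (v, k) not in msgs:
                send(v, k)

    beliefs = {}
    for v in range(n):
        b = unary[v].copy()
        for k in nbrs[v]:
            b *= msgs[(k, v)]
        beliefs[v] = b
    Z = beliefs[root].sum()  # the same value is obtained at every node
    return {v: b / b.sum() for v, b in beliefs.items()}, Z

# Example: a three-node chain (determiner - noun - adjective) with K = 2 tags,
# where the middle node carries a strong unary preference (an intervened tag).
K = 2
unary = {0: np.ones(K), 1: np.array([10.0, 1.0]), 2: np.ones(K)}
pairwise = {(0, 1): np.array([[5.0, 1.0], [1.0, 5.0]]),
            (1, 2): np.array([[5.0, 1.0], [1.0, 5.0]])}
marginals, Z = belief_propagation([(0, 1), (1, 2)], unary, pairwise)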
Markov random field with an optional neural parameterization
946676f1a836ea2d6fe98cb4cfc26b9f4f81984d
946676f1a836ea2d6fe98cb4cfc26b9f4f81984d_0
Q: What is the performance achieved by the model described in the paper? Text: Introduction Humans deploy structure-sensitive expectations to guide processing during natural language comprehension BIBREF0. While it has been shown that neural language models show similar structure-sensitivity in their predictions about upcoming material BIBREF1, BIBREF2, previous work has focused on dependencies that are conditioned by features attached to a single word, such as subject number BIBREF3, BIBREF4 or wh-question words BIBREF5. There has been no systematic investigation into models' ability to compute phrase-level features—features that are attached to a set of words—and whether models can deploy these more abstract properties to drive downstream expectations. In this work, we assess whether state-of-the-art neural models can compute and employ phrase-level gender and number features of coordinated subject Noun Phrases (CoordNPs) with two nouns. Typical syntactic phrases are endocentric: they are headed by a single child, whose features determine the agreement requirements for the entire phrase. In Figure FIGREF1, for example, the word star heads the subject NP The star; since star is singular, the verb must be singular. CoordNPs lack endocentricity: neither conjunct NP solely determines the features of the NP as a whole. Instead, these feature values are determined by compositional rules sensitive to the features of the conjuncts and the identity of the coordinator. In Figure FIGREF1, because the coordinator is and, the subject NP number is plural even though both conjuncts (the star and the moon) are singular. As this case demonstrates, the agreement behavior for CoordNPs must be driven by more abstract, constituent-level representations, and cannot be reduced to features hosted on a single lexical item. We use four suites of experiments to assess whether neural models are able to build up phrase-level representations of CoordNPs on the fly and deploy them to drive humanlike behavior. First, we present a simple control experiment to show that models can represent number and gender features of non-coordinate NPs (Non-coordination Agreement). Second, we show that models modulate their expectations for downstream verb number based on the CoordNP's coordinating conjunction combined with the features of the coordinated nouns (Simple Coordination). We rule out the possibility that models are using simple heuristics by designing a set of stimuli where a simple heuristic would fail due to structural ambiguity (Complex Coordination). The striking success for all models in this experiment indicates that even neural models with no explicit hierarchical bias, trained on a relatively small amount of text are able to learn fine-grained and robust generalizations about the interaction between CoordNPs and local syntactic context. Finally, we use subject–auxiliary inversion to test whether an upstream lexical item modulates model expectation for the phrasal-level features of a downstream CoordNP (Inverted Coordination). Here, we find that all models are insensitive to the fine-grained features of this particular syntactic context. Overall, our results indicate that neural models can learn fine-grained information about the interaction of Coordinated NPs and local syntactic context, but their behavior remains unhumanlike in many key respects. 
Methods ::: Psycholinguistics Paradigm To determine whether state-of-the-art neural architectures are capable of learning humanlike CoordNP/verb agreement properties, we adopt the psycholinguistics paradigm for model assessment. In this paradigm, the models are tested using hand-crafted sentences designed to test underlying network knowledge. The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training, its expectations for upcoming words should qualitatively match human expectations in novel contexts. For example, BIBREF1 and BIBREF6 assessed how well neural models had learned the subject/verb number agreement by feeding them with the prefix The keys to the cabinet .... If the models predicted the grammatical continuation are over the ungrammatical continuation is, they can be said to have learned the number agreement insofar as the number of the head noun and not the number of the distractor noun, cabinet, drives expectations about the number of the matrix verb. If models are able to robustly modulate their expectations based on the internal components of the CoordNP, this will provide evidence that the networks are building up a context-sensitive phrase-level representation. We quantify model expectations as surprisal values. Surprisal is the negative log-conditional probability $S(x_i) = -\log _2 p(x_i|x_1 \dots x_{i-1})$ of a sentence's $i^{th}$ word $x_i$ given the previous words. Surprisal tells us how strongly $x_i$ is expected in context and is known to correlate with human processing difficulty BIBREF7, BIBREF0, BIBREF8. In the CoordNP/Verb agreement studies presented here, in cases where the preceding context sets up a high expectation for a number-inflected verb form $w_i$ (e.g. singular `is'), we would expect $S(w_i)$ to be lower than that of its number-mismatched counterpart (e.g. plural `are'). Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11, each on a relatively small corpus. The first model, referred to as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred to as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of the input word embeddings and LSTM hidden layers of both models to 256. We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred to as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred to as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred to as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset. Methods ::: Models Tested ::: ActionLSTM models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. 
To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1})$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17. Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG) jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18. The annotation schemes used to train the syntactically-supervised models differ slightly between French and English. In the PTB (English), CoordNPs are flat structures bearing an `NP' label. In FTB (French), CoordNPs are binary-branching, labeled as NPs, except for the phrasal node dominating the coordinating conjunction, which is labeled `COORD'. We examine the effects of annotation schemes on model performance in Appendix SECREF8. Experiment 1: Non-coordination Agreement In order to provide a baseline for the following experiments, here we assess whether the models tested have learned basic representations of number and gender features for non-coordinated Noun Phrases. We test number agreement in English and French as well as gender agreement in French. Both English and French have two grammatical number features: singular (sg) and plural (pl). French has two grammatical gender features: masculine (m) and feminine (f). The experimental materials include sentences where the subject NPs contain a single noun whose features can be matched either by the matrix verb (in the case of number agreement) or by a following predicative adjective (in the case of gender agreement). Conditions are given in Table TABREF9 and Table TABREF10. We measure model behavior by computing the plural expectation, that is, the surprisal of the singular continuation minus the surprisal of the plural continuation, and take the average across items for each condition. We expect a positive plural expectation in the Npl conditions and a negative plural expectation in the Nsg conditions. For gender agreement, we compute a gender expectation, which is S(feminine continuation) $-$ S(masculine continuation). We measure surprisal at the verbs and predicative adjectives themselves. The results for this experiment are in Figure FIGREF11, with the plural expectation and gender expectation on the y-axis and conditions on the x-axis. For this and subsequent experiments, error bars represent 95% confidence intervals for across-item means. For number agreement, all the models in English and French show positive plural expectation when the head noun is plural and negative plural expectation when it is singular. For gender agreement, however, only the LSTM (frWaC) shows modulation of gender expectation based on the gender of the head noun. This is most likely due to the lower frequency of predicative adjectives compared to matrix verbs in the corpus. Experiment 2: Simple Coordination In this section, we test whether neural language models can use grammatical features hosted on multiple components of a coordination phrase—the coordinated nouns as well as the coordinating conjunction—to drive downstream expectations. We test number agreement in both English and French and gender agreement in French. Experiment 2: Simple Coordination ::: Number Agreement In simple subject/verb number agreement, the number features of the CoordNP are determined by the coordinating conjunction and the number features of the two coordinated NPs. 
CoordNPs formed by and are plural and thus require plural verbs; CoordNPs formed by or allow either plural or singular verbs, often with the number features of the noun linearly closest to the verb playing a more important role, although this varies cross-linguistically BIBREF20. Forced-choice preference experiments in BIBREF21 reveal that English native speakers prefer singular agreement when the closest conjunct in an or-CoordNP is singular and plural agreement when the closest conjunct is plural. In French, both singular and plural verbs are possible when two singular NPs are joined via disjunction BIBREF22. In order to assess whether the neural models learn the basic CoordNP licensing for English, we adapted 37 items from BIBREF21, following the 16 conditions outlined in Table TABREF14. Test items consist of the sentence preamble, followed by either the singular or plural BE verb, half the time in present tense (is/are) and half the time in past tense (was/were). We measured the plural expectation, following the procedure in Section SECREF3. We created 24 items using the same conditions as the English experiment to test the models trained in French, using the 3rd person singular and plural forms of the verb aller, `to go' (va, vont). Within each item, nouns match in gender; across all conditions half the nouns are masculine, half feminine. The results for this experiment can be seen in Figure FIGREF12, with the results for English on the left and French on the right. The results for and are on the top row, and the results for or on the bottom row. For all figures, the y-axis shows the plural expectation, or the difference in surprisal between the singular condition and the plural condition. Turning first to English-and (Figure FIGREF12), all models show plural expectation (the bars are significantly greater than zero) in the pl_and_pl and sg_and_pl conditions, as expected. For the pl_and_sg condition, only the LSTM (enWiki) and ActionLSTM are greater than zero, indicating humanlike behavior. For the sg_and_sg condition, only the LSTM (enWiki) model shows the correct plural expectation. For the French-and (Figure FIGREF12), all models show positive plural expectation in all conditions, as expected, except for the LSTM (FTB) in the sg_and_sg condition. Examining the results for English-or, we find that all models demonstrate humanlike expectation in the pl_or_pl and sg_or_pl conditions. The LSTM (1B), LSTM (PTB), and RNNG models show zero or negative plural expectation for the pl_or_sg conditions, as expected. However, the LSTM (enWiki) and ActionLSTM models show positive plural expectation in this condition, indicating that they have not learned the humanlike generalizations. All models show significantly negative plural expectation in the sg_or_sg condition, as expected. In the French-or cases, models show almost identical behavior to the and conditions, except the LSTM (frWaC) shows smaller plural expectation when singular nouns are linearly proximal to the verb. These results indicate moderate success at learning coordinate NP agreement; however, this success may be the result of an overly simple heuristic. It appears that expectations for both plural and masculine continuations are driven by a linear combination of the two nominal number/gender features transferred into log-probability space, with the earlier noun mattering less than the later noun. 
For the and conditions, a model that optimally captures human grammatical preferences should show no or only a slight difference in the surprisal differential across conditions, and the differential should be greater than zero in all cases. Yet, all the models tested show gradient performance based on the number of plural conjuncts. Experiment 2: Simple Coordination ::: Gender Agreement In French, if two nouns are coordinated with et (and-coordination), agreement must be masculine if there is one masculine element in the coordinate structure. If the nouns are coordinated with ou (or-coordination), both masculine and feminine agreement is acceptable BIBREF23, BIBREF24. Although linear proximity effects have been tested for a number of languages that employ grammatical gender, as in e.g. Slavic languages BIBREF25, there is no systematic study for French. To assess whether the French neural models learned humanlike gender agreement, we created 24 test items, following the examples in Table TABREF16, and measured the masculine expectation. In our test items, the coordinated subject NP is followed by a predicative adjective, which takes on either masculine or feminine gender morphology. Results from the experiment can be seen in Figure FIGREF17. No model shows a qualitative difference based on the coordinator, and only the LSTM (frWaC) shows a significant difference in behavior between conditions. Here, we find positive masculine expectation in the m_and_m and f_and_m conditions, and negative masculine expectation in the f_and_f condition, as expected. However, in the m_and_f condition, the masculine expectation is not significantly different from zero, where we would expect it to be positive. In the or-coordination conditions, following our expectation, masculine expectation is positive when both conjuncts are masculine and negative when both are feminine. For the LSTM (FTB) and ActionLSTM models, the masculine expectation is positive (although not significantly so) in all conditions, consistent with results in Section SECREF3. Experiment 3: Complex Coordination One possible explanation for the results presented in the previous section is that the models are using a `bag of features' approach to plural and masculine licensing that is opaque to syntactic context: Following a coordinating conjunction surrounded by nouns, models simply expect the following verb to be plural, in proportion to the number of plural nouns. In this section, we control for this potential confound by conducting two experiments: In the Complex Coordination Control experiments, we assess models' ability to extend basic CoordNP licensing into sententially-embedded environments, where the CoordNP can serve as an embedded subject. In the Complex Coordination Critical experiments, we leverage the sentential embedding environment to demonstrate that when the CoordNPs cannot plausibly serve as the subject of the embedded phrase, models are able to suppress the previously-demonstrated expectations set up by these phrases. These results demonstrate that models are not following a simple strategy for predicting downstream number and gender features, but are building up CoordNP representations on the fly, conditioned on the local syntactic context. 
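Since each of the experiments below reports the same dependent measure, here is a minimal sketch of how the plural expectation could be computed; the surprisal function is an assumed interface (anything returning -log2 p(word | prefix) under the model being evaluated), and the example items are illustrative rather than drawn from the actual stimuli.

# Illustrative computation of the "plural expectation" used throughout these experiments.
# `surprisal(prefix, word)` is an assumed interface returning -log2 p(word | prefix).

def plural_expectation(surprisal, prefix, singular="is", plural="are"):
    """S(singular continuation) - S(plural continuation) for one item.
    Positive values mean the model more strongly expects the plural verb."""
    return surprisal(prefix, singular) - surprisal(prefix, plural)

def condition_mean(surprisal, items, singular="is", plural="are"):
    """Average plural expectation across the items of one experimental condition."""
    scores = [plural_expectation(surprisal, prefix, singular, plural) for prefix in items]
    return sum(scores) / len(scores)

# Hypothetical sg_and_pl items (singular first conjunct, plural second conjunct):
items_sg_and_pl = ["The star and the moons", "The teacher and the students"]
# mean_score = condition_mean(my_surprisal_fn, items_sg_and_pl)  # expected to be > 0
# The gender expectation is computed analogously, with masculine and feminine
# predicative adjectives in place of the singular and plural verb forms.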
Experiment 3: Complex Coordination ::: Complex Coordination Control Following certain sentential-embedding verbs, CoordNPs serve unambiguously as the subject of the verb's sentence complement and should trigger number agreement behavior in the main verb of the embedded clause, similar to the behavior presented in SECREF13. To assess this, we use the 37 test items in English and 24 items in French in section SECREF13, following the conditions in Table TABREF19 (for number agreement), testing only and coordination. For gender agreement, we use the same test items and conditions for and coordination in Section SECREF15, but with the Coordinated NPs embedded in a context similar to SECREF18. As before, we derived the plural expectation by measuring the difference in surprisal between the singular and plural continuations and the gender expectation by computing the difference in surprisal between the masculine and feminine predicates. . Je croyais que les prix et les dépenses étaient importants/importantes. I thought that the.pl price.mpl and the.pl expense.fpl were important.mpl/fpl I thought that the prices and the expenses were important. The results for the control experiments can be seen in Figure FIGREF20, with English number agreement on the top row, French number agreement in the middle row and French gender agreement on the bottom. The y-axis shows either plural or masculine expectation, with the various conditions along the x-axis. For English number agreement, we find that the models behave similarly as they do for simple coordination contexts. All models show significant plural expectation when the closest noun is plural, with only two models demonstrating plural expectation in the sg_and_sg case. The French number agreement tests show similar results, with all models except LSTM (FTB) demonstrating significant plural prediction in all cases. Turning to French gender agreement, only the LSTM (frWaC) shows sensitivity to the various conditions, with positive masculine expectation in the m_and_m condition and negative expectation in the f_and_f condition, as expected. These results indicate that the behavior shown in Section SECREF13 extends to more complex syntactic environments—in this case to sentential embeddings. Interestingly, for some models, such as the LSTM (1B), behavior is more humanlike when the CoordNP serves as the subject of an embedded sentence. This may be because the model, which has a large number of hidden states and may be extra sensitive to fine-grained syntactic information carried on lexical items BIBREF2, is using the complementizer, that, to drive more robust expectations. Experiment 3: Complex Coordination ::: Complex Coordination Critical In order to assess whether the models' strategy for CoordNP/verb number agreement is sensitive to syntactic context, we contrast the results presented above to those from a second, critical experiment. Here, two coordinated nouns follow a verb that cannot take a sentential complement, as in the examples given in Table TABREF23. Of the two possible continuations—are or is—the plural is only grammatically licensed when the second of the two conjuncts is plural. In these cases, the plural continuation may lead to a final sentence where the first noun serves as the verb's object and the second introduces a second main clause coordinated with the first, as in I fixed the doors and the windows are still broken. 
For the same reason, the singular-verb continuation is only licensed when the noun immediately following and is singular. We created 37 test items in both English and French, and calculated the plural expectation. If the models were following a simple strategy to drive CoordNP/verb number agreement, then we should see either no difference in plural expectation across the four conditions or behavior no different from the control experiment. If, however, the models are sensitive to the licensing context, we should see a contrast based solely on the number features of the second conjunct, where plural expectation is positive when the second conjunct is plural, and negative otherwise. Experimental items for a critical gender test were created similarly, as in Example SECREF22. As with plural agreement, gender expectation should be driven solely by the second conjunct: For the f_and_m and m_and_m conditions, the only grammatical continuation is one where the adjectival predicate bears masculine gender morphology. Conversely, for the m_and_f or f_and_f conditions, the only grammatical continuation is one where the adjectival predicate bears feminine morphology. As in SECREF13, we created 24 test items and measured the gender expectation by calculating the difference in surprisal between the masculine and feminine continuations. . Nous avons accepté les prix et les dépenses étaient importants/importantes. we have accepted the.pl price.mpl and the expense.fpl were important.mpl/fpl We have accepted the prices and the expenses were important. The results from the critical experiments are in Figure FIGREF21, with the English number agreement on the top row, French number agreement in the middle and gender expectation on the bottom row. Here, the y-axis shows either plural expectation or masculine expectation, with the various conditions on the x-axis. The results here are strikingly different from those in the control experiments. For number agreement, all models in both languages show strong plural expectation in conditions where the second noun is plural (blue and green bars), as they do in the control experiments. Crucially, when the second noun is singular, the plural expectation is significantly negative for all models (save for the French LSTM (FTB) pl_and_sg condition). Turning to gender agreement, only the LSTM (frWaC) model shows differentiation between the four conditions tested. However, whereas the f_and_m and m_and_f gender expectations are not significantly different from zero in the control condition, in the critical condition they pattern with the purely masculine and purely feminine conditions, indicating that, in this syntactic context, the model has successfully learned to base gender expectation solely on the second noun. These results are inconsistent with a simple `bag of features' strategy that is insensitive to local syntactic context. They indicate that the models can interpret the same string as either a coordinated noun phrase, or as an NP object and the start of a coordinated VP with the second NP as its subject. Experiment 4: Inverted Coordination In addition to using phrase-level features to drive expectation about downstream lexical items, human processors can do the inverse—use lexical features to drive expectations about upcoming syntactic chunks. In this experiment, we assess whether neural models use number features hosted on a verb to modulate their expectations for upcoming CoordNPs. 
To assess whether neural language models learn inverted coordination rules, we adapted items from Section SECREF13 in both English (37 items) and French (24 items), following the paradigm in Table TABREF24. The first part of the phrase contains either a plural or singular verb and a plural or singular noun. In this case, we sample the surprisal for the continuation and (or is grammatical in all conditions, so it is omitted from this study). Our expectation is that `and' is less surprising in the Vpl_Nsg condition than in the Vsg_Nsg condition, where a CoordNP is not licensed by the grammar in either French or English (as in *What is the pig and the cat eating?). We also expect lower surprisal for and in the Vpl_Nsg condition, where it is obligatory for a grammatical continuation, than in the Vpl_Npl condition, where it is optional. For French experimental items, the question is embedded under a sentential-complement-taking verb, following Example SECREF6, due to the fact that unembedded subject-verb inverted questions sound very formal and might be relatively rare in the training data. . Je me demande où vont le maire et I myself ask where go.3PL the.MSG mayor.MSG and The results for both languages are shown in Figure FIGREF25, with the surprisal at the coordinator on the y-axis and the various conditions on the x-axis. No model in either language shows the expected difference in surprisal between the Vpl_Nsg and Vpl_Npl conditions or between the Vpl_Nsg and Vsg_Nsg conditions. The LSTM (1B) shows a significant difference between the Vpl_Nsg and Vpl_Npl conditions, but in the opposite direction from what was expected, with the coordinator less surprising in the latter condition. These results indicate that the models are unable to use the fine-grained context sensitivity to drive expectations for CoordNPs, at least in the inversion setting. Discussion The experiments presented here extend and refine a line of research investigating what linguistic knowledge is acquired by neural language models. Previous studies have demonstrated that sequential models trained on a simple regime of optimizing the next word can learn long-distance syntactic dependencies in impressive detail. Our results provide complementary insights, demonstrating that a range of model architectures trained on a variety of datasets can learn fine-grained information about the interaction of CoordNPs and local syntactic context, but their behavior remains unhumanlike in many key ways. Furthermore, to the best of our knowledge, this work presents the first psycholinguistic analysis of neural language models trained on French, a high-resource language that has so far been under-investigated in this line of research. In the simple coordination experiment, we demonstrated that models were able to capture some of the agreement behaviors of humans, although their performance deviated in crucial aspects. Whereas human behavior is best modeled as a `percolation' process, the neural models appear to be using a linear combination of NP constituent number to drive CoordNP/verb number agreement, with the second noun weighted more heavily than the first. In these experiments, supervision afforded by the RNNG and ActionLSTM models did not translate into more robust or humanlike learning outcomes. The complex coordination experiments provided evidence that the neural models tested were not using a simple `bag of features' strategy, but were sensitive to syntactic context. 
All models tested were able to interpret material that had similar surface form in ways that corresponded to two different tree-structural descriptions, based on local context. The inverted coordination experiment provided a contrasting example, in which models were unable to modulate expectations based on subtleties in the syntactic environment. Across all our experiments, the French models performed consistently better on subject/verb number agreement than on subject/predicate gender agreement. Although there are likely more examples of subject/verb number agreement in the French training data, gender agreement is syntactically mandated and widespread in French. It remains an open question why all but one of the models tested were unable to leverage the numerous examples of gender agreement seen in various contexts during training to drive correct subject/predicate expectations. Acknowledgments This project is supported by a grant of Labex EFL ANR-10-LABX-0083 (and Idex ANR-18-IDEX-0001) for AA and MIT–IBM AI Laboratory and the MIT–SenseTime Alliance on Artificial Intelligence for RPL. We would like to thank the anonymous reviewers for their comments and Anne Abeillé for her advice and feedback. The Effect of Annotation Schemes This section further investigates the effects of CoordNP annotation schemes on the behaviors of structurally-supervised models. We test whether an explicit COORD phrasal tag improves model performance. We trained two additional RNNG models on 38,546 sentences from the Penn Treebank annotated with two different schemes: The first, RNNG (PTB-control), was trained with the original Penn Treebank annotation. The second, RNNG (PTB-coord), was trained on the same sentences, but with an extended coordination annotation scheme, meant to mirror the scheme employed in the FTB, adapted from BIBREF26. We stripped empty categories from their scheme and only kept the NP-COORD label for constituents inside a coordination structure. Figure FIGREF26 illustrates the detailed annotation differences between the two datasets. We tested both models on all the experiments presented in Sections SECREF3-SECREF6 above. Turning to the results of these six experiments: we see little difference between the two models in the Non-coordination agreement experiment. For the Complex coordination control and Complex coordination critical experiments, the two models behave largely the same as well. However, in the Simple and-coordination and Simple or-coordination experiments, the values for all conditions are shifted upwards for the RNNG PTB-coord model, indicating a higher overall preference for the plural continuation. Furthermore, the range of values is reduced in the RNNG PTB-coord model, compared to the RNNG PTB-control model. These results indicate that adding an explicit COORD phrasal label does not drastically change model performance: Both models still appear to be using a linear combination of number features to drive plural vs. singular expectation. However, the explicit representation has made the interior of the coordination phrase more opaque to the model (each feature matters less) and has slightly shifted model preference towards plural continuations. In this sense, the PTB-coord model may have learned a generalization about CoordNPs, but this generalization remains unlike the ones learned by humans. PTB/FTB Agreement Patterns We present statistics of subject/predicate agreement patterns in the Penn Treebank (PTB) and French Treebank (FTB) in Table TABREF28 and TABREF29.
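As a final concrete note on the surprisal estimates used for the syntactically supervised models above, the sketch below shows one way to approximate P(w_i | w_1 .. w_{i-1}) from a word-synchronous beam, by comparing the total probability mass of the analyses retained before and after the model consumes the next word; the beam representation here is an assumption for illustration, not the authors' implementation.

import math

def word_surprisal(beam_before, beam_after):
    """beam_before: joint log-probabilities (natural log) of the partial analyses kept after word i-1.
    beam_after: joint log-probabilities of the partial analyses kept after word i.
    Returns an approximation of -log2 p(w_i | w_1 .. w_{i-1})."""
    def logsumexp(xs):
        m = max(xs)
        return m + math.log(sum(math.exp(x - m) for x in xs))
    return -(logsumexp(beam_after) - logsumexp(beam_before)) / math.log(2)

# Example with made-up beam scores:
# word_surprisal([-12.1, -12.9, -14.0], [-15.3, -16.0])  # about 4.7 bits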
Unanswerable
3b090b416c4ad7d9b5b05df10c5e7770a4590f6a
3b090b416c4ad7d9b5b05df10c5e7770a4590f6a_0
Q: What is the best performance achieved by supervised models?
Methods ::: Psycholinguistics Paradigm To determine whether state-of-the-art neural architectures are capable of learning humanlike CoordNP/verb agreement properties, we adopt the psycholinguistics paradigm for model assessment. In this paradigm the models are tested using hand-crafted sentences designed to test underlying network knowledge. The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training, its expectations for upcoming words should qualitatively match human expectations in novel contexts. For example, BIBREF1 and BIBREF6 assessed how well neural models had learned the subject/verb number agreement by feeding them with the prefix The keys to the cabinet .... If the models predicted the grammatical continuation are over the ungrammatical continuation is, they can be said to have learned the number agreement insofar as the number of the head noun and not the number of the distractor noun, cabinet, drives expectations about the number of the matrix verb. If models are able to robustly modulate their expectations based on the internal components of the CoordNP, this will provide evidence that the networks are building up a context-sensitive phrase-level representation. We quantify model expectations as surprisal values. Surprisal is the negative log-conditional probability $S(x_i) = -\log _2 p(x_i|x_1 \dots x_{i-1})$ of a sentence's $i^{th}$ word $x_i$ given the previous words. Surprisal tells us how strongly $x_i$ is expected in context and is known to correlate with human processing difficulty BIBREF7, BIBREF0, BIBREF8. In the CoordNP/Verb agreement studies presented here, cases where the proceeding context sets high expectation for a number-inflected verb form $w_i$, (e.g. singular `is') we would expect $S(w_i)$ to be lower than its number-mismatched counterpart (e.g. plural `are'). Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of input word embedding and LSTM hidden layer of both models as 256. We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset. Methods ::: Models Tested ::: ActionLSTM models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. 
To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1)}$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17. Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG) jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18. The annotation schemes used to train the syntactically-supervised models differ slightly between French and English. In the PTB (English) CoordNPs are flat structures bearing an `NP' label. In FTB (French), CoordNPs are binary-branching, labeled as NPs, except for the phrasal node dominating the coordinating conjunction, which is labeled `COORD'. We examine the effects of annotation schemes on model performance in Appendix SECREF8. Experiment 1: Non-coordination Agreement In order to provide a baseline for following experiments, here we assess whether the models tested have learned basic representations of number and gender features for non-coordinated Noun Phrases. We test number agreement in English and French as well as gender agreement in French. Both English and French have two grammatical number feature: singular (sg) and plural (pl). French has two grammatical gender features: masculine (m) and feminine (f). The experimental materials include sentences where the subject NPs contain a single noun which can either match with the matrix verb (in the case of number agreement) or a following predicative adjective (in the case of gender agreement). Conditions are given in Table TABREF9 and Table TABREF10. We measure model behavior by computing the plural expectation, or the surprisal of the singular continuation minus the surprisal of the plural continuation for each condition and took the average for each condition. We expect a positive plural expectation in the Npl conditions and a negative plural expectation in the Nsg conditions. For gender expectation we compute a gender expectation, which is S(feminine continuation) $-$ S(masculine continuation). We measure surprisal at the verbs and predicative adjectives themselves. The results for this experiment are in Figure FIGREF11, with the plural expectation and gender expectation on the y-axis and conditions on the x-axis. For this and subsequent experiments error bars represent 95% confidence intervals for across-item means. For number agreement, all the models in English and French show positive plural expectation when the head noun is plural and negative plural expectation when it is singular. For gender agreement, however, only the LSTM (frWaC) shows modulation of gender expectation based on the gender of the head noun. This is most likely due to the lower frequency of predicative adjectives compared to matrix verbs in the corpus. Experiment 2: Simple Coordination In this section, we test whether neural language models can use grammatical features hosted on multiple components of a coordination phrase—the coordinated nouns as well as the coordinating conjunction—to drive downstream expectations. We test number agreement in both English and French and gender agreement in French. Experiment 2: Simple Coordination ::: Number Agreement In simple subject/verb number agreement, the number features of the CoordNP are determined by the coordinating conjunction and the number features of the two coordinated NPs. 
CoordNPs formed by and are plural and thus require plural verbs; CoordNPs formed by or allow either plural or singular verbs, often with the number features of the noun linearly closest to the verb playing a more important role, although this varies cross-linguistically BIBREF20. Forced-choice preference experiments in BIBREF21 reveal that English native speakers prefer singular agreement when the closest conjunct in an or-CoordNP is singular and plural agreement when the closest conjunct is plural. In French, both singular and plural verbs are possible when two singular NPs are joined via disjunction BIBREF22. In order to assess whether the neural models learn the basic CoordNP licensing for English, we adapted 37 items from BIBREF21, along the 16 conditions outlined in Table TABREF14. Test items consist of the sentence preamble, followed by either the singular or plural BE verb, half the time in present tense (is/are) and half the time in past tense (was/were). We measured the plural expectation, following the procedure in Section SECREF3. We created 24 items using the same conditions as the English experiment to test the models trained in French, using the 3rd person singular and plural form of verb aller, `to go' (va, vont). Within each item, nouns match in gender; across all conditions half the nouns are masculine, half feminine. The results for this experiment can be seen in Figure FIGREF12, with the results for English on the left and French on the right. The results for and are on the top row, or on the bottom row. For all figures the y-axis shows the plural expectation, or the difference in surprisal between the singular condition and the plural condition. Turning first to English-and (Figure FIGREF12), all models show plural expectation (the bars are significantly greater than zero) in the pl_and_pl and sg_and_pl conditions, as expected. For the pl_and_sg condition, only the LSTM (enWiki) and ActionLSTM are greater than zero, indicating humanlike behavior. For the sg_and_sg condition, only the LSTM (enWiki) model shows the correct plural expectation. For the French-and (Figure FIGREF12), all models show positive plural expectation in all conditions, as expected, except for the LSTM (FTB) in the sg_and_sg condition. Examining the results for English-or, we find that all models demonstrate humanlike expectation in the pl_or_pl and sg_or_pl conditions. The LSTM (1B), LSTM (PTB), and RNNG models show zero or negative singular expectation for the pl_or_sg conditions, as expected. However the LSTM (enWiki) and ActionLSTM models show positive plural expectation in this condition, indicating that they have not learned the humanlike generalizations. All models show significantly negative plural expectation in the sg_or_sg condition, as expected. In the French-or cases, models show almost identical behavior to the and conditions, except the LSTM (frWaC) shows smaller plural expectation when singular nouns are linearly proximal to the verb. These results indicate moderate success at learning coordinate NP agreement, however this success may be the result of an overly simple heuristic. It appears that expectation for both plural and masculine continuations are driven by a linear combination of the two nominal number/gender features transferred into log-probability space, with the earlier noun mattering less than the later noun. 
A model that optimally captures human grammatical preferences should show no or only slight difference across conditions in the surprisal differential for the and conditions, and be greater than zero in all cases. Yet, all the models tested show gradient performance based on the number of plural conjuncts. Experiment 2: Simple Coordination ::: Gender Agreement In French, if two nouns are coordinated with et (and-coordination), agreement must be masculine if there is one masculine element in the coordinate structure. If the nouns are coordinated with ou (or-coordination), both masculine and feminine agreement is acceptable BIBREF23, BIBREF24. Although linear proximity effects have been tested for a number of languages that employ grammatical gender, as in e.g. Slavic languages BIBREF25, there is no systematic study for French. To assess whether the French neural models learned humanlike gender agreement, we created 24 test items, following the examples in Table TABREF16, and measured the masculine expectation. In our test items, the coordinated subject NP is followed by a predicative adjective, which either takes on masculine or feminine gender morphology. Results from the experiment can be seen in Figure FIGREF17. No models shows qualitative difference based on the coordinator, and only the LSTM (frWaC) shows significant behavior difference between conditions. Here, we find positive masculine expectation in the m_and_m and f_and_m conditions, and negative masculine expectation in the f_and_f condition, as expected. However, in the m_and_f condition, the masculine expectation is not significantly different from zero, where we would expect it to be positive. In the or-coordination conditions, following our expectation, masculine expectation is positive when both conjuncts are masculine and negative when both are feminine. For the LSTM (FTB) and ActionLSTM models, the masculine expectation is positive (although not significantly so) in all conditions, consistent with results in Section SECREF3. Experiment 3: Complex Coordination One possible explanation for the results presented in the previous section is that the models are using a `bag of features' approach to plural and masculine licensing that is opaque to syntactic context: Following a coordinating conjunction surrounded by nouns, models simply expect the following verb to be plural, proportionally to the number of plural nouns. In this section, we control for this potential confound by conducting two experiments: In the Complex Coordination Control experiments we assess models' ability to extend basic CoordNP licensing into sententially-embedded environments, where the CoordNP can serve as an embedded subject. In the Complex Coordination Critical experiments, we leverage the sentential embedding environment to demonstrate that when the CoordNPs cannot plausibly serve as the subject of the embedded phrase, models are able to suppress the previously-demonstrated expectations set up by these phrases. These results demonstrate that models are not following a simple strategy for predicting downstream number and gender features, but are building up CoordNP representations on the fly, conditioned on the local syntactic context. 
Experiment 3: Complex Coordination ::: Complex Coordination Control Following certain sentential-embedding verbs, CoordNPs serve unambiguously as the subject of the verb's sentence complement and should trigger number agreement behavior in the main verb of the embedded clause, similar to the behavior presented in SECREF13. To assess this, we use the 37 test items in English and 24 items in French in section SECREF13, following the conditions in Table TABREF19 (for number agreement), testing only and coordination. For gender agreement, we use the same test items and conditions for and coordination in Section SECREF15, but with the Coordinated NPs embedded in a context similar to SECREF18. As before, we derived the plural expectation by measuring the difference in surprisal between the singular and plural continuations and the gender expectation by computing the difference in surprisal between the masculine and feminine predicates. . Je croyais que les prix et les dépenses étaient importants/importantes. I thought that the.pl price.mpl and the.pl expense.fpl were important.mpl/fpl I thought that the prices and the expenses were important. The results for the control experiments can be seen in Figure FIGREF20, with English number agreement on the top row, French number agreement in the middle row and French gender agreement on the bottom. The y-axis shows either plural or masculine expectation, with the various conditions along the x-axis. For English number agreement, we find that the models behave similarly as they do for simple coordination contexts. All models show significant plural expectation when the closest noun is plural, with only two models demonstrating plural expectation in the sg_and_sg case. The French number agreement tests show similar results, with all models except LSTM (FTB) demonstrating significant plural prediction in all cases. Turning to French gender agreement, only the LSTM (frWaC) shows sensitivity to the various conditions, with positive masculine expectation in the m_and_m condition and negative expectation in the f_and_f condition, as expected. These results indicate that the behavior shown in Section SECREF13 extends to more complex syntactic environments—in this case to sentential embeddings. Interestingly, for some models, such as the LSTM (1B), behavior is more humanlike when the CoordNP serves as the subject of an embedded sentence. This may be because the model, which has a large number of hidden states and may be extra sensitive to fine-grained syntactic information carried on lexical items BIBREF2, is using the complementizer, that, to drive more robust expectations. Experiment 3: Complex Coordination ::: Complex Coordination Critical In order to assess whether the models' strategy for CoordNP/verb number agreement is sensitive to syntactic context, we contrast the results presented above to those from a second, critical experiment. Here, two coordinated nouns follow a verb that cannot take a sentential complement, as in the examples given in Table TABREF23. Of the two possible continuations—are or is—the plural is only grammatically licensed when the second of the two conjuncts is plural. In these cases, the plural continuation may lead to a final sentence where the first noun serves as the verb's object and the second introduces a second main clause coordinated with the first, as in I fixed the doors and the windows are still broken. 
For the same reason, the singular-verb continuation is only licensed when the noun immediately following and is singular. We created 37 test items in both English and French, and calculated the plural expectation. If the models were following a simple strategy to drive CoordNP/verb number agreement, then we should see either no difference in plural expectation across the four conditions or behavior no different from the control experiment. If, however, the models are sensitive to the licensing context, we should see a contrast based solely on the number features of the second conjunct, where plural expectation is positive when the second conjunct is plural, and negative otherwise. Experimental items for a critical gender test were created similarly, as in Example SECREF22. As with plural agreement, gender expectation should be driven solely by the second conjunct: For the f_and_m and m_and_m conditions, the only grammatical continuation is one where the adjectival predicate bears masculine gender morphology. Conversely, for the m_and_f or f_and_f conditions, the only grammatical continuation is one where the adjectival predicate bears feminine morphology. As in SECREF13, we created 24 test items and measured the gender expectation by calculating the difference in surprisal between the masculine and feminine continuations. . Nous avons accepté les prix et les dépenses étaient importants/importantes. we have accepted the.pl price.mpl and the expense.fpl were important.mpl/fpl We have accepted the prices and the expenses were important. The results from the critical experiments are in Figure FIGREF21, with the English number agreement on the top row, French number agreement in the middle and gender expectation on the bottom row. Here the y-axis shows either plural expectation or masculine expectation, with the various conditions are on the x-axis. The results here are strikingly different from those in the control experiments. For number agreement, all models in both languages show strong plural expectation in conditions where the second noun is plural (blue and green bars), as they do in the control experiments. Crucially, when the second noun is singular, the plural expectation is significantly negative for all models (save for the French LSTM (FTB) pl_and_sg condition). Turning to gender agreement, only the LSTM (frWaC) model shows differentiation between the four conditions tested. However, whereas the f_and_m and m_and_f gender expectations are not significantly different from zero in the control condition, in the critical condition they pattern with the purely masculine and purely feminine conditions, indicating that, in this syntactic context, the model has successfully learned to base gender expectation solely off of the second noun. These results are inconsistent with a simple `bag of features' strategy that is insensitive to local syntactic context. They indicate that both models can interpret the same string as either a coordinated noun phrase, or as an NP object and the start of a coordinated VP with the second NP as its subject. Experiment 4: Inverted Coordination In addition to using phrase-level features to drive expectation about downstream lexical items, human processors can do the inverse—use lexical features to drive expectations about upcoming syntactic chunks. In this experiment, we assess whether neural models use number features hosted on a verb to modulate their expectations for upcoming CoordNPs. 
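The dependent measure in this experiment is the surprisal of the coordinator itself under different verb/noun number combinations; the full paradigm is spelled out just below. A minimal sketch of that comparison, written against a generic surprisal(prefix, continuation) helper like the one sketched earlier in this document. The condition labels follow the text, the example preambles are adapted from the text's example, and the placeholder scorer is only there so the snippet runs on its own.

```python
from typing import Callable, Dict

def coordinator_surprisal(
    surprisal: Callable[[str, str], float],  # e.g. an LM-based helper like the one sketched earlier
    preambles: Dict[str, str],               # condition label -> sentence preamble
    coordinator: str = " and",
) -> Dict[str, float]:
    """Surprisal of the coordinator under each condition, for a single item."""
    return {cond: surprisal(pre, coordinator) for cond, pre in preambles.items()}

# One hypothetical English item; the comments paraphrase the expectations stated in the text.
item = {
    "Vsg_Nsg": "What is the pig",    # CoordNP not licensed here, so 'and' should be most surprising
    "Vpl_Nsg": "What are the pig",   # 'and' is obligatory for a grammatical continuation
    "Vpl_Npl": "What are the pigs",  # 'and' is optional
}

# Placeholder scorer so the sketch runs on its own; substitute a real LM-based helper.
fake_surprisal = lambda prefix, cont: float(len(prefix)) / 10.0
print(coordinator_surprisal(fake_surprisal, item))
```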
To assess whether neural language models learn inverted coordination rules, we adapted items from Section SECREF13 in both English (37 items) and French (24 items), following the paradigm in Table TABREF24. The first part of the phrase contains either a plural or singular verb and a plural or singular noun. In this case, we measure the surprisal of the continuation `and' (the continuation `or' is grammatical in all conditions, so it is omitted from this study). Our expectation is that `and' is less surprising in the Vpl_Nsg condition than in the Vsg_Nsg condition, where a CoordNP is not licensed by the grammar in either French or English (as in *What is the pig and the cat eating?). We also expect lower surprisal for `and' in the Vpl_Nsg condition, where it is obligatory for a grammatical continuation, than in the Vpl_Npl condition, where it is optional. For French experimental items, the question is embedded under a sentential-complement-taking verb, following Example SECREF6, because unembedded subject-verb inverted questions sound very formal and might be relatively rare in the training data: Je me demande où vont le maire et, glossed as `I myself ask where go.3PL the.MSG mayor.MSG and'. The results for both languages are shown in Figure FIGREF25, with the surprisal at the coordinator on the y-axis and the various conditions on the x-axis. No model in either language shows the expected difference in surprisal between the Vpl_Nsg and Vpl_Npl conditions or between the Vpl_Nsg and Vsg_Nsg conditions. The LSTM (1B) does show a significant difference between the Vpl_Nsg and Vpl_Npl conditions, but in the opposite direction from what was expected, with the coordinator less surprising in the latter condition. These results indicate that the models are unable to use this kind of fine-grained context sensitivity to drive expectations for CoordNPs, at least in the inversion setting. Discussion The experiments presented here extend and refine a line of research investigating what linguistic knowledge is acquired by neural language models. Previous studies have demonstrated that sequential models trained on a simple regime of optimizing the next word can learn long-distance syntactic dependencies in impressive detail. Our results provide complementary insights, demonstrating that a range of model architectures trained on a variety of datasets can learn fine-grained information about the interaction of CoordNPs and local syntactic context, but their behavior remains unhumanlike in many key ways. Furthermore, to the best of our knowledge, this work presents the first psycholinguistic analysis of neural language models trained on French, a high-resource language that has so far been under-investigated in this line of research. In the simple coordination experiment, we demonstrated that models were able to capture some of the agreement behaviors of humans, although their performance deviated in crucial respects. Whereas human behavior is best modeled as a `percolation' process, the neural models appear to be using a linear combination of NP constituent number to drive CoordNP/verb number agreement, with the second noun weighted more heavily than the first. In these experiments, the supervision afforded by the RNNG and ActionLSTM models did not translate into more robust or humanlike learning outcomes. The complex coordination experiments provided evidence that the neural models tested were not using a simple `bag of features' strategy, but were sensitive to syntactic context. 
All models tested were able to interpret material that had similar surface form in ways that corresponded to two different tree-structural descriptions, based on local context. The inverted coordination experiment provided a contrasting example, in which models were unable to modulate expectations based on subtleties in the syntactic environment. Across all our experiments, the French models performed consistently better on subject/verb number agreement than on subject/predicate gender agreement. Although there are likely more examples of subject/verb number agreement in the French training data, gender agreement is syntactically mandated and widespread in French. It remains an open question why all but one of the models tested were unable to leverage the numerous examples of gender agreement seen in various contexts during training to drive correct subject/predicate expectations. Acknowledgments This project is supported by a grant of Labex EFL ANR-10-LABX-0083 (and Idex ANR-18-IDEX-0001) for AA and MIT–IBM AI Laboratory and the MIT–SenseTimeAlliance on Artificial Intelligence for RPL. We would like to thank the anonymous reviewers for their comments and Anne Abeillé for her advice and feedback. The Effect of Annotation Schemes This section further investigates the effects of CoordNP annotation schemes on the behaviors of structurally-supervised models. We test whether an explicit COORD phrasal tag improves model performance. We trained two additional RNNG models on 38,546 sentences from the Penn Treebank annotated with two different schemes: The first, RNNG (PTB-control) was trained with the original Penn Treebank annotation. The second, RNNG (PTB-coord), was trained on the same sentences, but with an extended coordination annotation scheme, meant to employ the scheme employed in the FTB, adapted from BIBREF26. We stripped empty categories from their scheme and only kept the NP-COORD label for constituents inside a coordination structure. Figure FIGREF26 illustrates the detailed annotation differences between two datasets. We tested both models on all the experiments presented in Sections SECREF3-SECREF6 above. Turning to the results of these six experiments: We see little difference between the two models in the Non-coordination agreement experiment. For the Complex coordination control and Complex coordination critical experiments, both models are largely the same as well. However, in the Simple and-coordination and Simple or-coordination experiments the values for all conditions are shifted upwards for the RNNG PTB-coord model, indicating higher over-all preference for the plural continuation. Furthermore, the range of values is reduced in the RNNG PTB-coord model, compared to the RNNG PTB-control model. These results indicate that adding an explicit COORD phrasal label does not drastically change model performance: Both models still appear to be using a linear combination of number features to drive plural vs. singular expectation. However, the explicit representation has made the interior of the coordination phrase more opaque to the model (each feature matters less) and has slightly shifted model preference towards plural continuations. In this sense, the PTB-coord model may have learned a generalization about CoordNPs, but this generalization remains unlike the ones learned by humans. PTB/FTB Agreement Patterns We present statistics of subject/predicate agreement patterns in the Penn Treebank (PTB) and French Treebank (FTB) in Table TABREF28 and TABREF29.
Unanswerable
a1e07c7563ad038ee2a7de5093ea08efdd6077d4
a1e07c7563ad038ee2a7de5093ea08efdd6077d4_0
Q: What is the size of the datasets employed? Text: Introduction Humans deploy structure-sensitive expectations to guide processing during natural language comprehension BIBREF0. While it has been shown that neural language models show similar structure-sensitivity in their predictions about upcoming material BIBREF1, BIBREF2, previous work has focused on dependencies that are conditioned by features attached to a single word, such as subject number BIBREF3, BIBREF4 or wh-question words BIBREF5. There has been no systematic investigation into models' ability to compute phrase-level features—features that are attached to a set of words—and whether models can deploy these more abstract properties to drive downstream expectations. In this work, we assess whether state-of-the-art neural models can compute and employ phrase-level gender and number features of coordinated subject Noun Phrases (CoordNPs) with two nouns. Typical syntactic phrases are endocentric: they are headed by a single child, whose features determine the agreement requirements for the entire phrase. In Figure FIGREF1, for example, the word star heads the subject NP The star; since star is singular, the verb must be singular. CoordNPs lack endocentricity: neither conjunct NP solely determines the features of the NP as a whole. Instead, these feature values are determined by compositional rules sensitive to the features of the conjuncts and the identity of the coordinator. In Figure FIGREF1, because the coordinator is and, the subject NP number is plural even though both conjuncts (the star and the moon) are singular. As this case demonstrates, the agreement behavior for CoordNPs must be driven by more abstract, constituent-level representations, and cannot be reduced to features hosted on a single lexical item. We use four suites of experiments to assess whether neural models are able to build up phrase-level representations of CoordNPs on the fly and deploy them to drive humanlike behavior. First, we present a simple control experiment to show that models can represent number and gender features of non-coordinate NPs (Non-coordination Agreement). Second, we show that models modulate their expectations for downstream verb number based on the CoordNP's coordinating conjunction combined with the features of the coordinated nouns (Simple Coordination). We rule out the possibility that models are using simple heuristics by designing a set of stimuli where a simple heuristic would fail due to structural ambiguity (Complex Coordination). The striking success for all models in this experiment indicates that even neural models with no explicit hierarchical bias, trained on a relatively small amount of text are able to learn fine-grained and robust generalizations about the interaction between CoordNPs and local syntactic context. Finally, we use subject–auxiliary inversion to test whether an upstream lexical item modulates model expectation for the phrasal-level features of a downstream CoordNP (Inverted Coordination). Here, we find that all models are insensitive to the fine-grained features of this particular syntactic context. Overall, our results indicate that neural models can learn fine-grained information about the interaction of Coordinated NPs and local syntactic context, but their behavior remains unhumanlike in many key respects. 
Methods ::: Psycholinguistics Paradigm To determine whether state-of-the-art neural architectures are capable of learning humanlike CoordNP/verb agreement properties, we adopt the psycholinguistics paradigm for model assessment. In this paradigm the models are tested using hand-crafted sentences designed to test underlying network knowledge. The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training, its expectations for upcoming words should qualitatively match human expectations in novel contexts. For example, BIBREF1 and BIBREF6 assessed how well neural models had learned the subject/verb number agreement by feeding them with the prefix The keys to the cabinet .... If the models predicted the grammatical continuation are over the ungrammatical continuation is, they can be said to have learned the number agreement insofar as the number of the head noun and not the number of the distractor noun, cabinet, drives expectations about the number of the matrix verb. If models are able to robustly modulate their expectations based on the internal components of the CoordNP, this will provide evidence that the networks are building up a context-sensitive phrase-level representation. We quantify model expectations as surprisal values. Surprisal is the negative log-conditional probability $S(x_i) = -\log _2 p(x_i|x_1 \dots x_{i-1})$ of a sentence's $i^{th}$ word $x_i$ given the previous words. Surprisal tells us how strongly $x_i$ is expected in context and is known to correlate with human processing difficulty BIBREF7, BIBREF0, BIBREF8. In the CoordNP/Verb agreement studies presented here, cases where the proceeding context sets high expectation for a number-inflected verb form $w_i$, (e.g. singular `is') we would expect $S(w_i)$ to be lower than its number-mismatched counterpart (e.g. plural `are'). Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of input word embedding and LSTM hidden layer of both models as 256. We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset. Methods ::: Models Tested ::: ActionLSTM models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. 
To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1})$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17. Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG) jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18. The annotation schemes used to train the syntactically-supervised models differ slightly between French and English. In the PTB (English) CoordNPs are flat structures bearing an `NP' label. In FTB (French), CoordNPs are binary-branching, labeled as NPs, except for the phrasal node dominating the coordinating conjunction, which is labeled `COORD'. We examine the effects of annotation schemes on model performance in Appendix SECREF8. Experiment 1: Non-coordination Agreement In order to provide a baseline for the following experiments, here we assess whether the models tested have learned basic representations of number and gender features for non-coordinated Noun Phrases. We test number agreement in English and French as well as gender agreement in French. Both English and French have two grammatical number features: singular (sg) and plural (pl). French has two grammatical gender features: masculine (m) and feminine (f). The experimental materials include sentences where the subject NPs contain a single noun, which agrees either with the matrix verb (in the case of number agreement) or with a following predicative adjective (in the case of gender agreement). Conditions are given in Table TABREF9 and Table TABREF10. We measure model behavior by computing the plural expectation, that is, the surprisal of the singular continuation minus the surprisal of the plural continuation, for each item and taking the average for each condition. We expect a positive plural expectation in the Npl conditions and a negative plural expectation in the Nsg conditions. For gender agreement, we compute a gender expectation, which is S(feminine continuation) $-$ S(masculine continuation). We measure surprisal at the verbs and predicative adjectives themselves. The results for this experiment are in Figure FIGREF11, with the plural expectation and gender expectation on the y-axis and conditions on the x-axis. For this and subsequent experiments, error bars represent 95% confidence intervals for across-item means. For number agreement, all the models in English and French show positive plural expectation when the head noun is plural and negative plural expectation when it is singular. For gender agreement, however, only the LSTM (frWaC) shows modulation of gender expectation based on the gender of the head noun. This is most likely due to the lower frequency of predicative adjectives compared to matrix verbs in the corpus. Experiment 2: Simple Coordination In this section, we test whether neural language models can use grammatical features hosted on multiple components of a coordination phrase—the coordinated nouns as well as the coordinating conjunction—to drive downstream expectations. We test number agreement in both English and French and gender agreement in French. Experiment 2: Simple Coordination ::: Number Agreement In simple subject/verb number agreement, the number features of the CoordNP are determined by the coordinating conjunction and the number features of the two coordinated NPs. 
CoordNPs formed by and are plural and thus require plural verbs; CoordNPs formed by or allow either plural or singular verbs, often with the number features of the noun linearly closest to the verb playing a more important role, although this varies cross-linguistically BIBREF20. Forced-choice preference experiments in BIBREF21 reveal that English native speakers prefer singular agreement when the closest conjunct in an or-CoordNP is singular and plural agreement when the closest conjunct is plural. In French, both singular and plural verbs are possible when two singular NPs are joined via disjunction BIBREF22. In order to assess whether the neural models learn the basic CoordNP licensing for English, we adapted 37 items from BIBREF21, along the 16 conditions outlined in Table TABREF14. Test items consist of the sentence preamble, followed by either the singular or plural BE verb, half the time in present tense (is/are) and half the time in past tense (was/were). We measured the plural expectation, following the procedure in Section SECREF3. We created 24 items using the same conditions as the English experiment to test the models trained in French, using the 3rd person singular and plural form of verb aller, `to go' (va, vont). Within each item, nouns match in gender; across all conditions half the nouns are masculine, half feminine. The results for this experiment can be seen in Figure FIGREF12, with the results for English on the left and French on the right. The results for and are on the top row, or on the bottom row. For all figures the y-axis shows the plural expectation, or the difference in surprisal between the singular condition and the plural condition. Turning first to English-and (Figure FIGREF12), all models show plural expectation (the bars are significantly greater than zero) in the pl_and_pl and sg_and_pl conditions, as expected. For the pl_and_sg condition, only the LSTM (enWiki) and ActionLSTM are greater than zero, indicating humanlike behavior. For the sg_and_sg condition, only the LSTM (enWiki) model shows the correct plural expectation. For the French-and (Figure FIGREF12), all models show positive plural expectation in all conditions, as expected, except for the LSTM (FTB) in the sg_and_sg condition. Examining the results for English-or, we find that all models demonstrate humanlike expectation in the pl_or_pl and sg_or_pl conditions. The LSTM (1B), LSTM (PTB), and RNNG models show zero or negative singular expectation for the pl_or_sg conditions, as expected. However the LSTM (enWiki) and ActionLSTM models show positive plural expectation in this condition, indicating that they have not learned the humanlike generalizations. All models show significantly negative plural expectation in the sg_or_sg condition, as expected. In the French-or cases, models show almost identical behavior to the and conditions, except the LSTM (frWaC) shows smaller plural expectation when singular nouns are linearly proximal to the verb. These results indicate moderate success at learning coordinate NP agreement, however this success may be the result of an overly simple heuristic. It appears that expectation for both plural and masculine continuations are driven by a linear combination of the two nominal number/gender features transferred into log-probability space, with the earlier noun mattering less than the later noun. 
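One way to make this `linear combination' description concrete is to regress per-item plural expectations on indicator variables for whether each conjunct is plural; under that reading, the claim corresponds to the second conjunct's coefficient being larger than the first's. The sketch below is purely illustrative and uses made-up placeholder numbers, not measurements from these experiments.

```python
import numpy as np

# Placeholder per-item plural expectations (NOT values from the paper):
# each row is (first conjunct plural?, second conjunct plural?, plural expectation in bits).
data = np.array([
    [1, 1,  2.3],
    [1, 0, -0.4],
    [0, 1,  1.8],
    [0, 0, -1.5],
    [1, 1,  2.0],
    [0, 1,  1.6],
])

X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])  # intercept, noun1_pl, noun2_pl
y = data[:, 2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, weight(noun1), weight(noun2):", coef)
# Under the `linear combination' reading, weight(noun2) > weight(noun1) > 0.
```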
A model that optimally captures human grammatical preferences should show no or only slight difference across conditions in the surprisal differential for the and conditions, and be greater than zero in all cases. Yet, all the models tested show gradient performance based on the number of plural conjuncts. Experiment 2: Simple Coordination ::: Gender Agreement In French, if two nouns are coordinated with et (and-coordination), agreement must be masculine if there is one masculine element in the coordinate structure. If the nouns are coordinated with ou (or-coordination), both masculine and feminine agreement is acceptable BIBREF23, BIBREF24. Although linear proximity effects have been tested for a number of languages that employ grammatical gender, as in e.g. Slavic languages BIBREF25, there is no systematic study for French. To assess whether the French neural models learned humanlike gender agreement, we created 24 test items, following the examples in Table TABREF16, and measured the masculine expectation. In our test items, the coordinated subject NP is followed by a predicative adjective, which either takes on masculine or feminine gender morphology. Results from the experiment can be seen in Figure FIGREF17. No models shows qualitative difference based on the coordinator, and only the LSTM (frWaC) shows significant behavior difference between conditions. Here, we find positive masculine expectation in the m_and_m and f_and_m conditions, and negative masculine expectation in the f_and_f condition, as expected. However, in the m_and_f condition, the masculine expectation is not significantly different from zero, where we would expect it to be positive. In the or-coordination conditions, following our expectation, masculine expectation is positive when both conjuncts are masculine and negative when both are feminine. For the LSTM (FTB) and ActionLSTM models, the masculine expectation is positive (although not significantly so) in all conditions, consistent with results in Section SECREF3. Experiment 3: Complex Coordination One possible explanation for the results presented in the previous section is that the models are using a `bag of features' approach to plural and masculine licensing that is opaque to syntactic context: Following a coordinating conjunction surrounded by nouns, models simply expect the following verb to be plural, proportionally to the number of plural nouns. In this section, we control for this potential confound by conducting two experiments: In the Complex Coordination Control experiments we assess models' ability to extend basic CoordNP licensing into sententially-embedded environments, where the CoordNP can serve as an embedded subject. In the Complex Coordination Critical experiments, we leverage the sentential embedding environment to demonstrate that when the CoordNPs cannot plausibly serve as the subject of the embedded phrase, models are able to suppress the previously-demonstrated expectations set up by these phrases. These results demonstrate that models are not following a simple strategy for predicting downstream number and gender features, but are building up CoordNP representations on the fly, conditioned on the local syntactic context. 
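As noted above, the figures throughout report by-condition means of the expectation measures with 95% confidence intervals over items. Below is a small sketch of that aggregation step, assuming per-item expectation values have already been computed; the interval uses a normal approximation, which may differ from the interval construction actually used for the figures, and the numbers are placeholders rather than results.

```python
import statistics
from typing import Dict, List, Tuple

def condition_summary(per_item: Dict[str, List[float]]) -> Dict[str, Tuple[float, float, float]]:
    """Across-item mean of an expectation measure per condition, with a rough 95% interval
    (normal approximation; the original analysis may have used a different construction)."""
    summary = {}
    for condition, values in per_item.items():
        mean = statistics.mean(values)
        sem = statistics.stdev(values) / len(values) ** 0.5
        summary[condition] = (mean, mean - 1.96 * sem, mean + 1.96 * sem)
    return summary

# Placeholder values only -- not measurements from the paper.
plural_expectation_by_condition = {
    "pl_and_pl": [2.1, 1.7, 2.4, 1.9],
    "sg_and_pl": [1.2, 0.9, 1.5, 1.1],
    "pl_and_sg": [0.2, -0.3, 0.4, 0.0],
    "sg_and_sg": [-0.6, -0.1, -0.4, -0.2],
}
for cond, (m, lo, hi) in condition_summary(plural_expectation_by_condition).items():
    print(f"{cond}: mean={m:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```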
Experiment 3: Complex Coordination ::: Complex Coordination Control Following certain sentential-embedding verbs, CoordNPs serve unambiguously as the subject of the verb's sentence complement and should trigger number agreement behavior in the main verb of the embedded clause, similar to the behavior presented in SECREF13. To assess this, we use the 37 test items in English and 24 items in French in section SECREF13, following the conditions in Table TABREF19 (for number agreement), testing only and coordination. For gender agreement, we use the same test items and conditions for and coordination in Section SECREF15, but with the Coordinated NPs embedded in a context similar to SECREF18. As before, we derived the plural expectation by measuring the difference in surprisal between the singular and plural continuations and the gender expectation by computing the difference in surprisal between the masculine and feminine predicates. . Je croyais que les prix et les dépenses étaient importants/importantes. I thought that the.pl price.mpl and the.pl expense.fpl were important.mpl/fpl I thought that the prices and the expenses were important. The results for the control experiments can be seen in Figure FIGREF20, with English number agreement on the top row, French number agreement in the middle row and French gender agreement on the bottom. The y-axis shows either plural or masculine expectation, with the various conditions along the x-axis. For English number agreement, we find that the models behave similarly as they do for simple coordination contexts. All models show significant plural expectation when the closest noun is plural, with only two models demonstrating plural expectation in the sg_and_sg case. The French number agreement tests show similar results, with all models except LSTM (FTB) demonstrating significant plural prediction in all cases. Turning to French gender agreement, only the LSTM (frWaC) shows sensitivity to the various conditions, with positive masculine expectation in the m_and_m condition and negative expectation in the f_and_f condition, as expected. These results indicate that the behavior shown in Section SECREF13 extends to more complex syntactic environments—in this case to sentential embeddings. Interestingly, for some models, such as the LSTM (1B), behavior is more humanlike when the CoordNP serves as the subject of an embedded sentence. This may be because the model, which has a large number of hidden states and may be extra sensitive to fine-grained syntactic information carried on lexical items BIBREF2, is using the complementizer, that, to drive more robust expectations. Experiment 3: Complex Coordination ::: Complex Coordination Critical In order to assess whether the models' strategy for CoordNP/verb number agreement is sensitive to syntactic context, we contrast the results presented above to those from a second, critical experiment. Here, two coordinated nouns follow a verb that cannot take a sentential complement, as in the examples given in Table TABREF23. Of the two possible continuations—are or is—the plural is only grammatically licensed when the second of the two conjuncts is plural. In these cases, the plural continuation may lead to a final sentence where the first noun serves as the verb's object and the second introduces a second main clause coordinated with the first, as in I fixed the doors and the windows are still broken. 
For the same reason, the singular-verb continuation is only licensed when the noun immediately following and is singular. We created 37 test items in both English and French, and calculated the plural expectation. If the models were following a simple strategy to drive CoordNP/verb number agreement, then we should see either no difference in plural expectation across the four conditions or behavior no different from the control experiment. If, however, the models are sensitive to the licensing context, we should see a contrast based solely on the number features of the second conjunct, where plural expectation is positive when the second conjunct is plural, and negative otherwise. Experimental items for a critical gender test were created similarly, as in Example SECREF22. As with plural agreement, gender expectation should be driven solely by the second conjunct: For the f_and_m and m_and_m conditions, the only grammatical continuation is one where the adjectival predicate bears masculine gender morphology. Conversely, for the m_and_f or f_and_f conditions, the only grammatical continuation is one where the adjectival predicate bears feminine morphology. As in SECREF13, we created 24 test items and measured the gender expectation by calculating the difference in surprisal between the masculine and feminine continuations. . Nous avons accepté les prix et les dépenses étaient importants/importantes. we have accepted the.pl price.mpl and the expense.fpl were important.mpl/fpl We have accepted the prices and the expenses were important. The results from the critical experiments are in Figure FIGREF21, with the English number agreement on the top row, French number agreement in the middle and gender expectation on the bottom row. Here the y-axis shows either plural expectation or masculine expectation, with the various conditions are on the x-axis. The results here are strikingly different from those in the control experiments. For number agreement, all models in both languages show strong plural expectation in conditions where the second noun is plural (blue and green bars), as they do in the control experiments. Crucially, when the second noun is singular, the plural expectation is significantly negative for all models (save for the French LSTM (FTB) pl_and_sg condition). Turning to gender agreement, only the LSTM (frWaC) model shows differentiation between the four conditions tested. However, whereas the f_and_m and m_and_f gender expectations are not significantly different from zero in the control condition, in the critical condition they pattern with the purely masculine and purely feminine conditions, indicating that, in this syntactic context, the model has successfully learned to base gender expectation solely off of the second noun. These results are inconsistent with a simple `bag of features' strategy that is insensitive to local syntactic context. They indicate that both models can interpret the same string as either a coordinated noun phrase, or as an NP object and the start of a coordinated VP with the second NP as its subject. Experiment 4: Inverted Coordination In addition to using phrase-level features to drive expectation about downstream lexical items, human processors can do the inverse—use lexical features to drive expectations about upcoming syntactic chunks. In this experiment, we assess whether neural models use number features hosted on a verb to modulate their expectations for upcoming CoordNPs. 
To assess whether neural language models learn inverted coordination rules, we adapted items from Section SECREF13 in both English (37 items) and French (24 items), following the paradigm in Table TABREF24. The first part of the phrase contains either a plural or singular verb and a plural or singular noun. In this case, we sample the surprisal for the continuations and (or is grammatical in all conditions, so it is omitted from this study). Our expectation is that `and' is less surprising in the Vpl_Nsg condition than in the Vsg_Nsg condition, where a CoordNP is not licensed by the grammar in either French or English (as in *What is the pig and the cat eating?). We also expect lower surprisal for and in the Vpl_Nsg condition, where it is obligatory for a grammatical continuation, than in the Vpl_Npl condition, where it is optional. For French experimental items, the question is embedded into a sentential-complement taking verb, following Example SECREF6, due to the fact that unembedded subject-verb inverted questions sound very formal and might be relatively rare in the training data. . Je me demande où vont le maire et I myself ask where go.3PL the.MSG mayor.MSG and The results for both languages are shown in Figure FIGREF25, with the surprisal at the coordinator on the y-axis and the various conditions on the x-axis. No model in either language shows a signficant difference in surprisal between the Vpl_Nsg and Vpl_Npl conditions or between the Vpl_Nsg and Vsg_Nsg conditions. The LSTM (1B) shows significant difference between the Vpl_Nsg and Vpl_Npl conditions, but in the opposite direction than expected, with the coordinator less surprising in the latter condition. These results indicate that the models are unable to use the fine-grained context sensitivity to drive expectations for CoordNPs, at least in the inversion setting. Discussion The experiments presented here extend and refine a line of research investigating what linguistic knowledge is acquired by neural language models. Previous studies have demonstrated that sequential models trained on a simple regime of optimizing the next word can learn long-distance syntactic dependencies in impressive detail. Our results provide complimentary insights, demonstrating that a range of model architectures trained on a variety of datasets can learn fine-grained information about the interaction of CoordNPs and local syntactic context, but their behavior remains unhumanlike in many key ways. Furthermore, to our best knowledge, this work presents the first psycholinguistic analysis of neural language models trained on French, a high-resource language that has so far been under-investigated in this line of research. In the simple coordination experiment, we demonstrated that models were able to capture some of the agreement behaviors of humans, although their performance deviated in crucial aspects. Whereas human behavior is best modeled as a `percolation' process, the neural models appear to be using a linear combination of NP constituent number to drive CoordNP/verb number agreement, with the second noun weighted more heavily than the first. In these experiments, supervision afforded by the RNNG and ActionLSTM models did not translate into more robust or humanlike learning outcomes. The complex coordination experiments provided evidence that the neural models tested were not using a simple `bag of features' strategy, but were sensitive to syntactic context. 
All models tested were able to interpret material that had similar surface form in ways that corresponded to two different tree-structural descriptions, based on local context. The inverted coordination experiment provided a contrasting example, in which models were unable to modulate expectations based on subtleties in the syntactic environment. Across all our experiments, the French models performed consistently better on subject/verb number agreement than on subject/predicate gender agreement. Although there are likely more examples of subject/verb number agreement in the French training data, gender agreement is syntactically mandated and widespread in French. It remains an open question why all but one of the models tested were unable to leverage the numerous examples of gender agreement seen in various contexts during training to drive correct subject/predicate expectations. Acknowledgments This project is supported by a grant of Labex EFL ANR-10-LABX-0083 (and Idex ANR-18-IDEX-0001) for AA and MIT–IBM AI Laboratory and the MIT–SenseTimeAlliance on Artificial Intelligence for RPL. We would like to thank the anonymous reviewers for their comments and Anne Abeillé for her advice and feedback. The Effect of Annotation Schemes This section further investigates the effects of CoordNP annotation schemes on the behaviors of structurally-supervised models. We test whether an explicit COORD phrasal tag improves model performance. We trained two additional RNNG models on 38,546 sentences from the Penn Treebank annotated with two different schemes: The first, RNNG (PTB-control) was trained with the original Penn Treebank annotation. The second, RNNG (PTB-coord), was trained on the same sentences, but with an extended coordination annotation scheme, meant to employ the scheme employed in the FTB, adapted from BIBREF26. We stripped empty categories from their scheme and only kept the NP-COORD label for constituents inside a coordination structure. Figure FIGREF26 illustrates the detailed annotation differences between two datasets. We tested both models on all the experiments presented in Sections SECREF3-SECREF6 above. Turning to the results of these six experiments: We see little difference between the two models in the Non-coordination agreement experiment. For the Complex coordination control and Complex coordination critical experiments, both models are largely the same as well. However, in the Simple and-coordination and Simple or-coordination experiments the values for all conditions are shifted upwards for the RNNG PTB-coord model, indicating higher over-all preference for the plural continuation. Furthermore, the range of values is reduced in the RNNG PTB-coord model, compared to the RNNG PTB-control model. These results indicate that adding an explicit COORD phrasal label does not drastically change model performance: Both models still appear to be using a linear combination of number features to drive plural vs. singular expectation. However, the explicit representation has made the interior of the coordination phrase more opaque to the model (each feature matters less) and has slightly shifted model preference towards plural continuations. In this sense, the PTB-coord model may have learned a generalization about CoordNPs, but this generalization remains unlike the ones learned by humans. PTB/FTB Agreement Patterns We present statistics of subject/predicate agreement patterns in the Penn Treebank (PTB) and French Treebank (FTB) in Table TABREF28 and TABREF29.
(about 4 million sentences, 138 million word tokens), one trained on the Billion Word benchmark
a1c4f9e8661d4d488b8684f055e0ee0e2275f767
a1c4f9e8661d4d488b8684f055e0ee0e2275f767_0
Q: What are the baseline models? Text: Introduction Humans deploy structure-sensitive expectations to guide processing during natural language comprehension BIBREF0. While it has been shown that neural language models show similar structure-sensitivity in their predictions about upcoming material BIBREF1, BIBREF2, previous work has focused on dependencies that are conditioned by features attached to a single word, such as subject number BIBREF3, BIBREF4 or wh-question words BIBREF5. There has been no systematic investigation into models' ability to compute phrase-level features—features that are attached to a set of words—and whether models can deploy these more abstract properties to drive downstream expectations. In this work, we assess whether state-of-the-art neural models can compute and employ phrase-level gender and number features of coordinated subject Noun Phrases (CoordNPs) with two nouns. Typical syntactic phrases are endocentric: they are headed by a single child, whose features determine the agreement requirements for the entire phrase. In Figure FIGREF1, for example, the word star heads the subject NP The star; since star is singular, the verb must be singular. CoordNPs lack endocentricity: neither conjunct NP solely determines the features of the NP as a whole. Instead, these feature values are determined by compositional rules sensitive to the features of the conjuncts and the identity of the coordinator. In Figure FIGREF1, because the coordinator is and, the subject NP number is plural even though both conjuncts (the star and the moon) are singular. As this case demonstrates, the agreement behavior for CoordNPs must be driven by more abstract, constituent-level representations, and cannot be reduced to features hosted on a single lexical item. We use four suites of experiments to assess whether neural models are able to build up phrase-level representations of CoordNPs on the fly and deploy them to drive humanlike behavior. First, we present a simple control experiment to show that models can represent number and gender features of non-coordinate NPs (Non-coordination Agreement). Second, we show that models modulate their expectations for downstream verb number based on the CoordNP's coordinating conjunction combined with the features of the coordinated nouns (Simple Coordination). We rule out the possibility that models are using simple heuristics by designing a set of stimuli where a simple heuristic would fail due to structural ambiguity (Complex Coordination). The striking success for all models in this experiment indicates that even neural models with no explicit hierarchical bias, trained on a relatively small amount of text are able to learn fine-grained and robust generalizations about the interaction between CoordNPs and local syntactic context. Finally, we use subject–auxiliary inversion to test whether an upstream lexical item modulates model expectation for the phrasal-level features of a downstream CoordNP (Inverted Coordination). Here, we find that all models are insensitive to the fine-grained features of this particular syntactic context. Overall, our results indicate that neural models can learn fine-grained information about the interaction of Coordinated NPs and local syntactic context, but their behavior remains unhumanlike in many key respects. 
Methods ::: Psycholinguistics Paradigm To determine whether state-of-the-art neural architectures are capable of learning humanlike CoordNP/verb agreement properties, we adopt the psycholinguistics paradigm for model assessment. In this paradigm the models are tested using hand-crafted sentences designed to test underlying network knowledge. The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training, its expectations for upcoming words should qualitatively match human expectations in novel contexts. For example, BIBREF1 and BIBREF6 assessed how well neural models had learned the subject/verb number agreement by feeding them with the prefix The keys to the cabinet .... If the models predicted the grammatical continuation are over the ungrammatical continuation is, they can be said to have learned the number agreement insofar as the number of the head noun and not the number of the distractor noun, cabinet, drives expectations about the number of the matrix verb. If models are able to robustly modulate their expectations based on the internal components of the CoordNP, this will provide evidence that the networks are building up a context-sensitive phrase-level representation. We quantify model expectations as surprisal values. Surprisal is the negative log-conditional probability $S(x_i) = -\log _2 p(x_i|x_1 \dots x_{i-1})$ of a sentence's $i^{th}$ word $x_i$ given the previous words. Surprisal tells us how strongly $x_i$ is expected in context and is known to correlate with human processing difficulty BIBREF7, BIBREF0, BIBREF8. In the CoordNP/Verb agreement studies presented here, cases where the proceeding context sets high expectation for a number-inflected verb form $w_i$, (e.g. singular `is') we would expect $S(w_i)$ to be lower than its number-mismatched counterpart (e.g. plural `are'). Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of input word embedding and LSTM hidden layer of both models as 256. We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset. Methods ::: Models Tested ::: ActionLSTM models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. 
To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1)}$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17. Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG) jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18. The annotation schemes used to train the syntactically-supervised models differ slightly between French and English. In the PTB (English) CoordNPs are flat structures bearing an `NP' label. In FTB (French), CoordNPs are binary-branching, labeled as NPs, except for the phrasal node dominating the coordinating conjunction, which is labeled `COORD'. We examine the effects of annotation schemes on model performance in Appendix SECREF8. Experiment 1: Non-coordination Agreement In order to provide a baseline for following experiments, here we assess whether the models tested have learned basic representations of number and gender features for non-coordinated Noun Phrases. We test number agreement in English and French as well as gender agreement in French. Both English and French have two grammatical number feature: singular (sg) and plural (pl). French has two grammatical gender features: masculine (m) and feminine (f). The experimental materials include sentences where the subject NPs contain a single noun which can either match with the matrix verb (in the case of number agreement) or a following predicative adjective (in the case of gender agreement). Conditions are given in Table TABREF9 and Table TABREF10. We measure model behavior by computing the plural expectation, or the surprisal of the singular continuation minus the surprisal of the plural continuation for each condition and took the average for each condition. We expect a positive plural expectation in the Npl conditions and a negative plural expectation in the Nsg conditions. For gender expectation we compute a gender expectation, which is S(feminine continuation) $-$ S(masculine continuation). We measure surprisal at the verbs and predicative adjectives themselves. The results for this experiment are in Figure FIGREF11, with the plural expectation and gender expectation on the y-axis and conditions on the x-axis. For this and subsequent experiments error bars represent 95% confidence intervals for across-item means. For number agreement, all the models in English and French show positive plural expectation when the head noun is plural and negative plural expectation when it is singular. For gender agreement, however, only the LSTM (frWaC) shows modulation of gender expectation based on the gender of the head noun. This is most likely due to the lower frequency of predicative adjectives compared to matrix verbs in the corpus. Experiment 2: Simple Coordination In this section, we test whether neural language models can use grammatical features hosted on multiple components of a coordination phrase—the coordinated nouns as well as the coordinating conjunction—to drive downstream expectations. We test number agreement in both English and French and gender agreement in French. Experiment 2: Simple Coordination ::: Number Agreement In simple subject/verb number agreement, the number features of the CoordNP are determined by the coordinating conjunction and the number features of the two coordinated NPs. 
CoordNPs formed by and are plural and thus require plural verbs; CoordNPs formed by or allow either plural or singular verbs, often with the number features of the noun linearly closest to the verb playing a more important role, although this varies cross-linguistically BIBREF20. Forced-choice preference experiments in BIBREF21 reveal that English native speakers prefer singular agreement when the closest conjunct in an or-CoordNP is singular and plural agreement when the closest conjunct is plural. In French, both singular and plural verbs are possible when two singular NPs are joined via disjunction BIBREF22. In order to assess whether the neural models learn the basic CoordNP licensing for English, we adapted 37 items from BIBREF21, along the 16 conditions outlined in Table TABREF14. Test items consist of the sentence preamble, followed by either the singular or plural BE verb, half the time in present tense (is/are) and half the time in past tense (was/were). We measured the plural expectation, following the procedure in Section SECREF3. We created 24 items using the same conditions as the English experiment to test the models trained in French, using the 3rd person singular and plural form of verb aller, `to go' (va, vont). Within each item, nouns match in gender; across all conditions half the nouns are masculine, half feminine. The results for this experiment can be seen in Figure FIGREF12, with the results for English on the left and French on the right. The results for and are on the top row, or on the bottom row. For all figures the y-axis shows the plural expectation, or the difference in surprisal between the singular condition and the plural condition. Turning first to English-and (Figure FIGREF12), all models show plural expectation (the bars are significantly greater than zero) in the pl_and_pl and sg_and_pl conditions, as expected. For the pl_and_sg condition, only the LSTM (enWiki) and ActionLSTM are greater than zero, indicating humanlike behavior. For the sg_and_sg condition, only the LSTM (enWiki) model shows the correct plural expectation. For the French-and (Figure FIGREF12), all models show positive plural expectation in all conditions, as expected, except for the LSTM (FTB) in the sg_and_sg condition. Examining the results for English-or, we find that all models demonstrate humanlike expectation in the pl_or_pl and sg_or_pl conditions. The LSTM (1B), LSTM (PTB), and RNNG models show zero or negative singular expectation for the pl_or_sg conditions, as expected. However the LSTM (enWiki) and ActionLSTM models show positive plural expectation in this condition, indicating that they have not learned the humanlike generalizations. All models show significantly negative plural expectation in the sg_or_sg condition, as expected. In the French-or cases, models show almost identical behavior to the and conditions, except the LSTM (frWaC) shows smaller plural expectation when singular nouns are linearly proximal to the verb. These results indicate moderate success at learning coordinate NP agreement, however this success may be the result of an overly simple heuristic. It appears that expectation for both plural and masculine continuations are driven by a linear combination of the two nominal number/gender features transferred into log-probability space, with the earlier noun mattering less than the later noun. 
A model that optimally captures human grammatical preferences should show no or only slight difference across conditions in the surprisal differential for the and conditions, and be greater than zero in all cases. Yet, all the models tested show gradient performance based on the number of plural conjuncts. Experiment 2: Simple Coordination ::: Gender Agreement In French, if two nouns are coordinated with et (and-coordination), agreement must be masculine if there is one masculine element in the coordinate structure. If the nouns are coordinated with ou (or-coordination), both masculine and feminine agreement is acceptable BIBREF23, BIBREF24. Although linear proximity effects have been tested for a number of languages that employ grammatical gender, as in e.g. Slavic languages BIBREF25, there is no systematic study for French. To assess whether the French neural models learned humanlike gender agreement, we created 24 test items, following the examples in Table TABREF16, and measured the masculine expectation. In our test items, the coordinated subject NP is followed by a predicative adjective, which either takes on masculine or feminine gender morphology. Results from the experiment can be seen in Figure FIGREF17. No models shows qualitative difference based on the coordinator, and only the LSTM (frWaC) shows significant behavior difference between conditions. Here, we find positive masculine expectation in the m_and_m and f_and_m conditions, and negative masculine expectation in the f_and_f condition, as expected. However, in the m_and_f condition, the masculine expectation is not significantly different from zero, where we would expect it to be positive. In the or-coordination conditions, following our expectation, masculine expectation is positive when both conjuncts are masculine and negative when both are feminine. For the LSTM (FTB) and ActionLSTM models, the masculine expectation is positive (although not significantly so) in all conditions, consistent with results in Section SECREF3. Experiment 3: Complex Coordination One possible explanation for the results presented in the previous section is that the models are using a `bag of features' approach to plural and masculine licensing that is opaque to syntactic context: Following a coordinating conjunction surrounded by nouns, models simply expect the following verb to be plural, proportionally to the number of plural nouns. In this section, we control for this potential confound by conducting two experiments: In the Complex Coordination Control experiments we assess models' ability to extend basic CoordNP licensing into sententially-embedded environments, where the CoordNP can serve as an embedded subject. In the Complex Coordination Critical experiments, we leverage the sentential embedding environment to demonstrate that when the CoordNPs cannot plausibly serve as the subject of the embedded phrase, models are able to suppress the previously-demonstrated expectations set up by these phrases. These results demonstrate that models are not following a simple strategy for predicting downstream number and gender features, but are building up CoordNP representations on the fly, conditioned on the local syntactic context. 
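The gender measure contrasts masculine and feminine forms of a predicative adjective (e.g. importants vs. importantes), which a subword vocabulary will typically split into several tokens, so surprisal has to be summed over the whole continuation. Below is a minimal sketch, assuming some per-token surprisal function is available; the function signature and the placeholder scorer are illustrative assumptions, and the example preamble is adapted from the French examples in this paper.

```python
from typing import Callable, List

def masculine_expectation(
    token_surprisals: Callable[[str, str], List[float]],  # per-token surprisals (bits) of a continuation given a prefix
    preamble: str,
    feminine_form: str,
    masculine_form: str,
) -> float:
    """S(feminine continuation) - S(masculine continuation); positive means masculine is preferred."""
    s_fem = sum(token_surprisals(preamble, " " + feminine_form))
    s_masc = sum(token_surprisals(preamble, " " + masculine_form))
    return s_fem - s_masc

# The scorer below is a placeholder so the sketch runs on its own; substitute a real LM-based helper.
fake_scorer = lambda prefix, continuation: [1.0] * max(1, len(continuation.split()))
print(masculine_expectation(fake_scorer,
                            "Je croyais que les prix et les dépenses étaient",
                            "importantes", "importants"))
```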
Experiment 3: Complex Coordination ::: Complex Coordination Control Following certain sentential-embedding verbs, CoordNPs serve unambiguously as the subject of the verb's sentence complement and should trigger number agreement behavior in the main verb of the embedded clause, similar to the behavior presented in SECREF13. To assess this, we use the 37 test items in English and 24 items in French in section SECREF13, following the conditions in Table TABREF19 (for number agreement), testing only and coordination. For gender agreement, we use the same test items and conditions for and coordination in Section SECREF15, but with the Coordinated NPs embedded in a context similar to SECREF18. As before, we derived the plural expectation by measuring the difference in surprisal between the singular and plural continuations and the gender expectation by computing the difference in surprisal between the masculine and feminine predicates. . Je croyais que les prix et les dépenses étaient importants/importantes. I thought that the.pl price.mpl and the.pl expense.fpl were important.mpl/fpl I thought that the prices and the expenses were important. The results for the control experiments can be seen in Figure FIGREF20, with English number agreement on the top row, French number agreement in the middle row and French gender agreement on the bottom. The y-axis shows either plural or masculine expectation, with the various conditions along the x-axis. For English number agreement, we find that the models behave similarly as they do for simple coordination contexts. All models show significant plural expectation when the closest noun is plural, with only two models demonstrating plural expectation in the sg_and_sg case. The French number agreement tests show similar results, with all models except LSTM (FTB) demonstrating significant plural prediction in all cases. Turning to French gender agreement, only the LSTM (frWaC) shows sensitivity to the various conditions, with positive masculine expectation in the m_and_m condition and negative expectation in the f_and_f condition, as expected. These results indicate that the behavior shown in Section SECREF13 extends to more complex syntactic environments—in this case to sentential embeddings. Interestingly, for some models, such as the LSTM (1B), behavior is more humanlike when the CoordNP serves as the subject of an embedded sentence. This may be because the model, which has a large number of hidden states and may be extra sensitive to fine-grained syntactic information carried on lexical items BIBREF2, is using the complementizer, that, to drive more robust expectations. Experiment 3: Complex Coordination ::: Complex Coordination Critical In order to assess whether the models' strategy for CoordNP/verb number agreement is sensitive to syntactic context, we contrast the results presented above to those from a second, critical experiment. Here, two coordinated nouns follow a verb that cannot take a sentential complement, as in the examples given in Table TABREF23. Of the two possible continuations—are or is—the plural is only grammatically licensed when the second of the two conjuncts is plural. In these cases, the plural continuation may lead to a final sentence where the first noun serves as the verb's object and the second introduces a second main clause coordinated with the first, as in I fixed the doors and the windows are still broken. 
For the same reason, the singular-verb continuation is only licensed when the noun immediately following and is singular. We created 37 test items in both English and French, and calculated the plural expectation. If the models were following a simple strategy to drive CoordNP/verb number agreement, then we should see either no difference in plural expectation across the four conditions or behavior no different from the control experiment. If, however, the models are sensitive to the licensing context, we should see a contrast based solely on the number features of the second conjunct, where plural expectation is positive when the second conjunct is plural, and negative otherwise. Experimental items for a critical gender test were created similarly, as in Example SECREF22. As with plural agreement, gender expectation should be driven solely by the second conjunct: For the f_and_m and m_and_m conditions, the only grammatical continuation is one where the adjectival predicate bears masculine gender morphology. Conversely, for the m_and_f or f_and_f conditions, the only grammatical continuation is one where the adjectival predicate bears feminine morphology. As in SECREF13, we created 24 test items and measured the gender expectation by calculating the difference in surprisal between the masculine and feminine continuations. . Nous avons accepté les prix et les dépenses étaient importants/importantes. we have accepted the.pl price.mpl and the expense.fpl were important.mpl/fpl We have accepted the prices and the expenses were important. The results from the critical experiments are in Figure FIGREF21, with the English number agreement on the top row, French number agreement in the middle and gender expectation on the bottom row. Here the y-axis shows either plural expectation or masculine expectation, with the various conditions are on the x-axis. The results here are strikingly different from those in the control experiments. For number agreement, all models in both languages show strong plural expectation in conditions where the second noun is plural (blue and green bars), as they do in the control experiments. Crucially, when the second noun is singular, the plural expectation is significantly negative for all models (save for the French LSTM (FTB) pl_and_sg condition). Turning to gender agreement, only the LSTM (frWaC) model shows differentiation between the four conditions tested. However, whereas the f_and_m and m_and_f gender expectations are not significantly different from zero in the control condition, in the critical condition they pattern with the purely masculine and purely feminine conditions, indicating that, in this syntactic context, the model has successfully learned to base gender expectation solely off of the second noun. These results are inconsistent with a simple `bag of features' strategy that is insensitive to local syntactic context. They indicate that both models can interpret the same string as either a coordinated noun phrase, or as an NP object and the start of a coordinated VP with the second NP as its subject. Experiment 4: Inverted Coordination In addition to using phrase-level features to drive expectation about downstream lexical items, human processors can do the inverse—use lexical features to drive expectations about upcoming syntactic chunks. In this experiment, we assess whether neural models use number features hosted on a verb to modulate their expectations for upcoming CoordNPs. 
To assess whether neural language models learn inverted coordination rules, we adapted items from Section SECREF13 in both English (37 items) and French (24 items), following the paradigm in Table TABREF24. The first part of the phrase contains either a plural or singular verb and a plural or singular noun. In this case, we sample the surprisal for the continuation and (or is grammatical in all conditions, so it is omitted from this study). Our expectation is that `and' is less surprising in the Vpl_Nsg condition than in the Vsg_Nsg condition, where a CoordNP is not licensed by the grammar in either French or English (as in *What is the pig and the cat eating?). We also expect lower surprisal for and in the Vpl_Nsg condition, where it is obligatory for a grammatical continuation, than in the Vpl_Npl condition, where it is optional. For French experimental items, the question is embedded into a sentential-complement taking verb, following Example SECREF6, due to the fact that unembedded subject-verb inverted questions sound very formal and might be relatively rare in the training data. . Je me demande où vont le maire et I myself ask where go.3PL the.MSG mayor.MSG and The results for both languages are shown in Figure FIGREF25, with the surprisal at the coordinator on the y-axis and the various conditions on the x-axis. No model in either language shows the expected difference in surprisal between the Vpl_Nsg and Vpl_Npl conditions or between the Vpl_Nsg and Vsg_Nsg conditions. The LSTM (1B) shows a significant difference between the Vpl_Nsg and Vpl_Npl conditions, but in the opposite direction from the one expected, with the coordinator less surprising in the latter condition. These results indicate that the models are unable to use this fine-grained context sensitivity to drive expectations for CoordNPs, at least in the inversion setting. Discussion The experiments presented here extend and refine a line of research investigating what linguistic knowledge is acquired by neural language models. Previous studies have demonstrated that sequential models trained on a simple regime of optimizing the next word can learn long-distance syntactic dependencies in impressive detail. Our results provide complementary insights, demonstrating that a range of model architectures trained on a variety of datasets can learn fine-grained information about the interaction of CoordNPs and local syntactic context, but their behavior remains unhumanlike in many key ways. Furthermore, to the best of our knowledge, this work presents the first psycholinguistic analysis of neural language models trained on French, a high-resource language that has so far been under-investigated in this line of research. In the simple coordination experiment, we demonstrated that models were able to capture some of the agreement behaviors of humans, although their performance deviated in crucial aspects. Whereas human behavior is best modeled as a `percolation' process, the neural models appear to be using a linear combination of NP constituent number to drive CoordNP/verb number agreement, with the second noun weighted more heavily than the first. In these experiments, supervision afforded by the RNNG and ActionLSTM models did not translate into more robust or humanlike learning outcomes. The complex coordination experiments provided evidence that the neural models tested were not using a simple `bag of features' strategy, but were sensitive to syntactic context. 
All models tested were able to interpret material that had similar surface form in ways that corresponded to two different tree-structural descriptions, based on local context. The inverted coordination experiment provided a contrasting example, in which models were unable to modulate expectations based on subtleties in the syntactic environment. Across all our experiments, the French models performed consistently better on subject/verb number agreement than on subject/predicate gender agreement. Although there are likely more examples of subject/verb number agreement in the French training data, gender agreement is syntactically mandated and widespread in French. It remains an open question why all but one of the models tested were unable to leverage the numerous examples of gender agreement seen in various contexts during training to drive correct subject/predicate expectations. Acknowledgments This project is supported by a grant of Labex EFL ANR-10-LABX-0083 (and Idex ANR-18-IDEX-0001) for AA and MIT–IBM AI Laboratory and the MIT–SenseTimeAlliance on Artificial Intelligence for RPL. We would like to thank the anonymous reviewers for their comments and Anne Abeillé for her advice and feedback. The Effect of Annotation Schemes This section further investigates the effects of CoordNP annotation schemes on the behaviors of structurally-supervised models. We test whether an explicit COORD phrasal tag improves model performance. We trained two additional RNNG models on 38,546 sentences from the Penn Treebank annotated with two different schemes: The first, RNNG (PTB-control) was trained with the original Penn Treebank annotation. The second, RNNG (PTB-coord), was trained on the same sentences, but with an extended coordination annotation scheme, meant to employ the scheme employed in the FTB, adapted from BIBREF26. We stripped empty categories from their scheme and only kept the NP-COORD label for constituents inside a coordination structure. Figure FIGREF26 illustrates the detailed annotation differences between two datasets. We tested both models on all the experiments presented in Sections SECREF3-SECREF6 above. Turning to the results of these six experiments: We see little difference between the two models in the Non-coordination agreement experiment. For the Complex coordination control and Complex coordination critical experiments, both models are largely the same as well. However, in the Simple and-coordination and Simple or-coordination experiments the values for all conditions are shifted upwards for the RNNG PTB-coord model, indicating higher over-all preference for the plural continuation. Furthermore, the range of values is reduced in the RNNG PTB-coord model, compared to the RNNG PTB-control model. These results indicate that adding an explicit COORD phrasal label does not drastically change model performance: Both models still appear to be using a linear combination of number features to drive plural vs. singular expectation. However, the explicit representation has made the interior of the coordination phrase more opaque to the model (each feature matters less) and has slightly shifted model preference towards plural continuations. In this sense, the PTB-coord model may have learned a generalization about CoordNPs, but this generalization remains unlike the ones learned by humans. PTB/FTB Agreement Patterns We present statistics of subject/predicate agreement patterns in the Penn Treebank (PTB) and French Treebank (FTB) in Table TABREF28 and TABREF29.
Recurrent Neural Network (RNN), ActionLSTM, Generative Recurrent Neural Network Grammars (RNNG)
c5171daf82107fce0f285fa18f19e91fbd1215c5
c5171daf82107fce0f285fa18f19e91fbd1215c5_0
Q: What evaluation metrics are used? Text: Introduction Spoken dialogue systems that can help users to solve complex tasks have become an emerging research topic in artificial intelligence and natural language processing areas BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions. Today, there are several virtual intelligent assistants, such as Apple's Siri, Google's Home, Microsoft's Cortana, and Amazon's Alexa, in the market. A typical dialogue system pipeline can be divided into several parts: a recognized result of a user's speech input is fed into a natural language understanding module (NLU) to classify the domain along with domain-specific intents and fill in a set of slots to form a semantic frame BIBREF4 , BIBREF5 , BIBREF6 . A dialogue state tracking (DST) module predicts the current state of the dialogue by means of the semantic frames extracted from multi-turn conversations. Then the dialogue policy determines the system action for the next step given the current dialogue state. Finally the semantic frame of the system action is then fed into a natural language generation (NLG) module to construct a response utterance to the user BIBREF7 , BIBREF8 . As a key component to a dialogue system, the goal of NLG is to generate natural language sentences given the semantics provided by the dialogue manager to feedback to users. As the endpoint of interacting with users, the quality of generated sentences is crucial for better user experience. The common and mostly adopted method is the rule-based (or template-based) method BIBREF9 , which can ensure the natural language quality and fluency. In spite of robustness and adequacy of the rule-based methods, frequent repetition of identical, tedious output makes talking to a template-based machine unsatisfactory. Furthermore, scalability is an issue, because designing sophisticated rules for a specific domain is time-consuming BIBREF10 . Recurrent neural network-based language model (RNNLM) have demonstrated the capability of modeling long-term dependency in sequence prediction by leveraging recurrent structures BIBREF11 , BIBREF12 . Previous work proposed an RNNLM-based NLG that can be trained on any corpus of dialogue act-utterance pairs without hand-crafted features and any semantic alignment BIBREF13 . The following work based on sequence-to-sequence (seq2seq) further obtained better performance by employing encoder-decoder structure with linguistic knowledge such as syntax trees BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . However, due to grammar complexity and lack of diction knowledge, it is still challenging to generate long and complex sentences by a simple encoder-decoder structure. To address the issue, previous work attempted separating decoding jobs in a decoding hierarchy, which is constructed in terms of part-of-speech (POS) tags BIBREF8 . The original single decoding process is separated into a multi-level decoding hierarchy, where each decoding layer generates words associated with a specific POS set. This paper extends the idea to a more flexible design by incorporating attention mechanisms into the decoding hierarchy. 
Because prior work designs the decoding hierarchy in a hand-crafted manner based on a subjective intuition BIBREF8 , in this work, we experiment on various generating hierarchies to investigate the importance of linguistic pattern ordering in hierarchical language generation. The experiments show that our proposed method outperforms the classic seq2seq model with a smaller model size; in addition, the concept of the hierarchical decoder is proven general enough for various generating hierarchies. Furthermore, this paper also provides the design guidelines and insights of designing the decoding hierarchy. Hierarchical Natural Language Generation (HNLG) The framework of the proposed hierarchical NLG model is illustrated in Figure FIGREF2 , where the model architecture is based on an encoder-decoder (seq2seq) structure with attentional hierarchical decoders BIBREF14 , BIBREF15 . In the encoder-decoder architecture, a typical generation process includes encoding and decoding phases: First, a given semantic representation sequence INLINEFORM0 is fed into a RNN-based encoder to capture the temporal dependency and project the input to a latent feature space; the semantic representation sequence is also encoded into an one-hot representation as the initial state of the encoder in order to maintain the temporal-independent condition as shown in the left part of Figure FIGREF2 . The recurrent unit of the encoder is bidirectional gated recurrent unit (GRU) BIBREF14 , DISPLAYFORM0 Then the encoded semantic vector, INLINEFORM0 , is fed into an RNN-based decoder as the initial state to decode word sequences, as shown in the right part of Figure FIGREF2 . Attentional Hierarchical Decoder In spite of the intuitive and elegant design of the seq2seq model, it is still difficult to generate complex and decent sequences by a simple encoder-decoder structure, because a single decoder is not capable of learning all diction, grammar, and other related linguistic knowledge at the same time. Some prior work applied additional techniques such as reranker and beam-search to select a better result among multiple generated sequences BIBREF13 , BIBREF16 . However, it is still an unsolved issue to the NLG community. Therefore, we propose a hierarchical decoder to address the above issue, where the core idea is to allow the decoding layers to focus on learning different types of patterns instead of learning all relevant knowledge together. The hierarchical decoder is composed of several decoding layers, each of which is only responsible for learning a portion of the required knowledge. Namely, the linguistic knowledge can be incorporated into the decoding process and divided into several subsets. We use part-of-speech (POS) tags as the additional linguistic features to construct the decoding hierarchy in this paper, where POS tags of the words in the target sentence are separated into several subsets, and each layer is responsible for decoding the words associated with a specific set of POS patterns. An example is shown in the right part of Figure FIGREF2 , where the first layer at the bottom is in charge of decoding nouns, pronouns, and proper nouns, and the second layer is for verbs, and so on. The prior work manually designed the decoding hierarchy by considering the subjective intuition about how children learn to speak BIBREF8 : infants first learn to say keywords, which are often nouns. For example, when an infant says “Daddy, toilet.”, it actually means “Daddy, I want to go to the toilet.”. 
Along with the growth of the age, children learn more grammars and vocabulary and then start adding verbs to the sentences, further adding adverbs, and so on. However, the hand-crafted linguistic order may not be optimal, so we experiment and analyze the model on various generating linguistic hierarchies to deeply investigate the effect of linguistic pattern ordering. In the hierarchical decoder, the initial state of each GRU-based decoding layer INLINEFORM0 is the extracted feature INLINEFORM1 from the encoder, and the input at every step is the last predicted token INLINEFORM2 concatenated with the output from the previous layer INLINEFORM3 , DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 -th hidden state of the INLINEFORM2 -th GRU decoding layer and INLINEFORM3 is the INLINEFORM4 -th outputted word in the INLINEFORM5 -th layer. We use the cross entropy loss as our training objective for optimization, where the difference between the predicted distribution and target distribution is minimized. To facilitate training and improve the performance, several strategies including scheduled sampling, a repeat input mechanism, curriculum learning, and an attention mechanism are utilized. Scheduled Sampling Teacher forcing BIBREF18 is a strategy for training RNN that uses model output from a prior time step as an input, and it works by using the expected output at the current time step INLINEFORM0 as the input at the next time step, rather than the output generated by the network. The teacher forcing techniques can also be triggered only with a certain probability, which is known as the scheduled sampling approach BIBREF19 . We adopt scheduled sampling methods in our experiments. In the proposed framework, an input of a decoder contains not only the output from the last step but one from the last decoding layer. Therefore, we design two types of scheduled sampling approaches – inner-layer and inter-layer. Inner-layer schedule sampling is the classic teacher forcing strategy: DISPLAYFORM0 Inter-layer schedule sampling uses the labels instead of the actual output tokens of the last layer: DISPLAYFORM0 Curriculum Learning The proposed hierarchical decoder consists of several decoding layers, the expected output sequences of upper layers are longer than the ones in the lower layers. The framework is suitable for applying the curriculum learning BIBREF20 , of which core concept is that a curriculum of progressively harder tasks could significantly accelerate a network’s training. The training procedure is to train each decoding layer for some epochs from the bottommost layer to the topmost one. Repeat-Input Mechanism The concept of the hierarchical decoding is to hierarchically generate the sequence, gradually adding words associated with different linguistic patterns. Therefore, the generated sequences from the decoders become longer as the generating process proceeds to the higher decoding layers, and the sequence generated by a upper layer should contain the words predicted by the lower layers. To facilitate the behavior, previous work designs a strategy that repeats the outputs from the last layer as inputs until the current decoding layer outputs the same token, so-called the repeat-input mechanism BIBREF8 . This approach offers at least two merits: (1) Repeating inputs tells the decoder that the repeated tokens are important to encourage the decoder to generate them. 
(2) If the expected output sequence of a layer is much shorter than the one of the next layer, the large difference in length becomes a critical issue of the hierarchical decoder, because the output sequence of a layer will be fed into the next layer. With the repeat-input mechanism, the impact of length difference can be mitigated. Attention Mechanism In order to model the relationship between layers in a generating hierarchy, we further design attention mechanisms for the hierarchical decoder. The proposed attention mechanisms are content-based, which means the weights are determined based on hidden states of neural models: DISPLAYFORM0 where INLINEFORM0 is the hidden state at the current step, INLINEFORM1 are the hidden states from the previous decoder layer, and INLINEFORM2 is a learned weight matrix. At each decoding step, attention values INLINEFORM3 are calculated by these methods and then used to compute the weighted sum as a context vector, which is then concatenated to decoder inputs as additional information. Training The objective of the proposed model is to optimize the conditional probability INLINEFORM0 , so that the difference between the predicted distribution and the target distribution, INLINEFORM1 , can be minimized: DISPLAYFORM0 where INLINEFORM0 is the number of samples and the labels INLINEFORM1 are the word labels. Each decoder in the hierarchical NLG is trained based on curriculum learning with the objective. Setup The E2E NLG challenge dataset BIBREF21 is utilized in our experiments, which is a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. As shown in Figure FIGREF2 , the inputs are semantic frames containing specific slots and corresponding values, and the outputs are the associated natural language utterances with the given semantics. For example, a semantic frame with the slot-value pairs “name[Bibimbap House], food[English], priceRange[moderate], area [riverside], near [Clare Hall]” corresponds to the target sentence “Bibimbap House is a moderately priced restaurant who's main cuisine is English food. You will find this local gem near Clare Hall in the Riverside area.”. The data preprocessing includes trimming punctuation marks, lemmatization, and turning all words into lowercase. To prepare the labels of each layer within the hierarchical structure of the proposed method, we utilize spaCy toolkit to perform POS tagging for the target word sequences. Some properties such as names of restaurants are delexicalized (for example, replaced with a symbol “RESTAURANT_NAME”) to avoid data sparsity. In our experiments, we perform six different generating linguistic orders, in which each hierarchy is constructed based on different permutations of the POS tag sets: (1) nouns, proper nouns, and pronouns (2) verbs (3) adjectives and adverbs (4) others. The probability of activating inter-layer and inner-layer teacher forcing is set to 0.5, the probability of teacher forcing is attenuated every epoch, and the decaying ratio is 0.9. The models are trained for 20 training epochs without early stop; when curriculum learning is applied, only the first layer is trained during first five epochs, the second decoder layer starts to be trained at the sixth epoch, and so on. To evaluate the quality of the generated sequences regarding both precision and recall, the evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references BIBREF22 . 
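As an illustration of how the per-layer targets described in this setup could be prepared, here is a hedged Python sketch using spaCy. It assumes the targets are cumulative over the POS-tag sets (each layer keeps all words of the lower layers plus its own), which matches the description of upper layers containing the lower layers' words; the tag names are spaCy's universal POS tags rather than the authors' exact configuration.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# One of the six permutations: the ordering used as an example in the text.
POS_SETS = [
    {"NOUN", "PROPN", "PRON"},   # layer 1: nouns, proper nouns, pronouns
    {"VERB"},                    # layer 2: verbs
    {"ADJ", "ADV"},              # layer 3: adjectives and adverbs
    None,                        # layer 4: all remaining POS tags (full sentence)
]

def layer_targets(sentence):
    """Cumulative target word sequences, one per decoding layer."""
    doc = nlp(sentence)
    allowed = set()
    targets = []
    for pos_set in POS_SETS:
        if pos_set is None:
            targets.append([tok.text for tok in doc])
        else:
            allowed |= pos_set
            targets.append([tok.text for tok in doc if tok.pos_ in allowed])
    return targets
```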
Results and Analysis In the experiments, we borrow the idea of hierarchical decoding proposed by the previous work BIBREF8 and investigate various extensions of generating hierarchies. To examine the effectiveness of hierarchical decoders, we control our model size to be smaller than the baseline's. Specifically, the decoder in the baseline seq2seq model has hidden layers of size 400, while our models with hierarchical decoders have four decoding layers of size 100 for fair comparison. Table TABREF13 compares the performance between the baseline and the proposed models with different generating linguistic orders. For all generating hierarchies with different orders, simply replacing the decoder with a hierarchical decoder achieves significant improvement in every evaluation metric; for example, the topmost generating hierarchy in Table TABREF13 achieves a 49.25% improvement in BLEU, 30.03% in ROUGE-1, 96.48% in ROUGE-2, and 25.99% in ROUGE-L. In other words, separating the generation process into several phases is proven to be a promising method. Performing the curriculum learning strategy offers a considerable improvement; taking the topmost generating hierarchy in Table TABREF13 as an example, this method yields a 102.07% improvement in BLEU, 48.26% in ROUGE-1, 144.8% in ROUGE-2, and 39.18% in ROUGE-L. Although applying the repeat-input mechanism alone does not offer a benefit, combining these two strategies achieves the best performance. Note that these methods do not require any additional parameters. Unfortunately, even though some of the attentional hierarchical decoders achieve the best results within their generating hierarchies (Table TABREF18), in most cases the additional attention mechanisms are not capable of bringing a benefit to model performance. The reason may be that the decoding process is designed to gradually import words of a specific set of linguistic patterns into the output sequence: each decoder layer is responsible for copying the output tokens from the previous layer and inserting new words into the sequence precisely. Because of this nature, a decoder needs explicit information about the structure of a sentence rather than implicit high-level latent information. For instance, when a decoder is trying to insert some verb words into the output sequence, knowing the positions of the subject and object would be very helpful. The above results show that among these six different generating hierarchies, the generating order (1) verbs, (2) nouns, proper nouns, and pronouns, (3) adjectives and adverbs, (4) the other POS tags yields the worst performance. Table TABREF23 shows that the gap in average target-sequence length between the first and the second decoder layers is the largest among all the hierarchies; on average, the second decoder needs to insert up to 8 words into the sequence based on only 3.62 words from the first decoder layer in this generation process, which is extremely difficult. The essence of the hierarchical design is to separate the job of the decoder into several phases; if the job of each phase is balanced, it is intuitively more suitable for applying curriculum learning and improving the model performance. The model performance is also related to the linguistic structures of sentences: the fifth and the sixth generating hierarchies in Table TABREF13 have very similar trends, where the lengths of target sentences of each decoder layer are almost identical, as shown in Table TABREF23. However, the model performance differs a lot. 
An adverb word could be used to modify anything but nouns and pronouns, which means that the number of adverbs used for modifying verbs would be a factor to determine the generating order as well. In our cases, almost all adverbs in the dataset are used to describe adjectives, indicating that generating verbs before inserting adverbs to sequences may not provide enough useful information; instead, it would possibly obstruct the model learning. We can also find that in all experiments, inserting adverbs before verbs would be better. In summary, the concept of the hierarchical decoder is simple and useful, separating a difficult job to many phases is demonstrated to be a promising direction and not limited to a specific generating hierarchy. Furthermore, the generating linguistic orders should be determined based on the dataset, and the important factors include the distribution over length of subsequences and the linguistic nature of the dataset for designing a proper generating hierarchy in NLG. Conclusion This paper investigates the seq2seq-based model with a hierarchical decoder that leverages various linguistic patterns. The experiments on different generating linguistic orders demonstrates the generalization about the proposed hierarchical decoder, which is not limited to a specific generating hierarchy. However, there is no universal decoding hierarchy, while the main factor for designing a suitable generating order is the nature of the dataset. Acknowledgements We would like to thank reviewers for their insightful comments on the paper. This work was financially supported by Ministry of Science and Technology (MOST) in Taiwan.
the evaluation metrics include BLEU and ROUGE (1, 2, L) scores
baeb6785077931e842079e9d0c9c9040947ffa4e
baeb6785077931e842079e9d0c9c9040947ffa4e_0
Q: What datasets did they use? Text: Introduction Spoken dialogue systems that can help users to solve complex tasks have become an emerging research topic in artificial intelligence and natural language processing areas BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions. Today, there are several virtual intelligent assistants, such as Apple's Siri, Google's Home, Microsoft's Cortana, and Amazon's Alexa, in the market. A typical dialogue system pipeline can be divided into several parts: a recognized result of a user's speech input is fed into a natural language understanding module (NLU) to classify the domain along with domain-specific intents and fill in a set of slots to form a semantic frame BIBREF4 , BIBREF5 , BIBREF6 . A dialogue state tracking (DST) module predicts the current state of the dialogue by means of the semantic frames extracted from multi-turn conversations. Then the dialogue policy determines the system action for the next step given the current dialogue state. Finally the semantic frame of the system action is then fed into a natural language generation (NLG) module to construct a response utterance to the user BIBREF7 , BIBREF8 . As a key component to a dialogue system, the goal of NLG is to generate natural language sentences given the semantics provided by the dialogue manager to feedback to users. As the endpoint of interacting with users, the quality of generated sentences is crucial for better user experience. The common and mostly adopted method is the rule-based (or template-based) method BIBREF9 , which can ensure the natural language quality and fluency. In spite of robustness and adequacy of the rule-based methods, frequent repetition of identical, tedious output makes talking to a template-based machine unsatisfactory. Furthermore, scalability is an issue, because designing sophisticated rules for a specific domain is time-consuming BIBREF10 . Recurrent neural network-based language model (RNNLM) have demonstrated the capability of modeling long-term dependency in sequence prediction by leveraging recurrent structures BIBREF11 , BIBREF12 . Previous work proposed an RNNLM-based NLG that can be trained on any corpus of dialogue act-utterance pairs without hand-crafted features and any semantic alignment BIBREF13 . The following work based on sequence-to-sequence (seq2seq) further obtained better performance by employing encoder-decoder structure with linguistic knowledge such as syntax trees BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . However, due to grammar complexity and lack of diction knowledge, it is still challenging to generate long and complex sentences by a simple encoder-decoder structure. To address the issue, previous work attempted separating decoding jobs in a decoding hierarchy, which is constructed in terms of part-of-speech (POS) tags BIBREF8 . The original single decoding process is separated into a multi-level decoding hierarchy, where each decoding layer generates words associated with a specific POS set. This paper extends the idea to a more flexible design by incorporating attention mechanisms into the decoding hierarchy. Because prior work designs the decoding hierarchy in a hand-crafted manner based on a subjective intuition BIBREF8 , in this work, we experiment on various generating hierarchies to investigate the importance of linguistic pattern ordering in hierarchical language generation. 
The experiments show that our proposed method outperforms the classic seq2seq model with a smaller model size; in addition, the concept of the hierarchical decoder is proven general enough for various generating hierarchies. Furthermore, this paper also provides the design guidelines and insights of designing the decoding hierarchy. Hierarchical Natural Language Generation (HNLG) The framework of the proposed hierarchical NLG model is illustrated in Figure FIGREF2 , where the model architecture is based on an encoder-decoder (seq2seq) structure with attentional hierarchical decoders BIBREF14 , BIBREF15 . In the encoder-decoder architecture, a typical generation process includes encoding and decoding phases: First, a given semantic representation sequence INLINEFORM0 is fed into a RNN-based encoder to capture the temporal dependency and project the input to a latent feature space; the semantic representation sequence is also encoded into an one-hot representation as the initial state of the encoder in order to maintain the temporal-independent condition as shown in the left part of Figure FIGREF2 . The recurrent unit of the encoder is bidirectional gated recurrent unit (GRU) BIBREF14 , DISPLAYFORM0 Then the encoded semantic vector, INLINEFORM0 , is fed into an RNN-based decoder as the initial state to decode word sequences, as shown in the right part of Figure FIGREF2 . Attentional Hierarchical Decoder In spite of the intuitive and elegant design of the seq2seq model, it is still difficult to generate complex and decent sequences by a simple encoder-decoder structure, because a single decoder is not capable of learning all diction, grammar, and other related linguistic knowledge at the same time. Some prior work applied additional techniques such as reranker and beam-search to select a better result among multiple generated sequences BIBREF13 , BIBREF16 . However, it is still an unsolved issue to the NLG community. Therefore, we propose a hierarchical decoder to address the above issue, where the core idea is to allow the decoding layers to focus on learning different types of patterns instead of learning all relevant knowledge together. The hierarchical decoder is composed of several decoding layers, each of which is only responsible for learning a portion of the required knowledge. Namely, the linguistic knowledge can be incorporated into the decoding process and divided into several subsets. We use part-of-speech (POS) tags as the additional linguistic features to construct the decoding hierarchy in this paper, where POS tags of the words in the target sentence are separated into several subsets, and each layer is responsible for decoding the words associated with a specific set of POS patterns. An example is shown in the right part of Figure FIGREF2 , where the first layer at the bottom is in charge of decoding nouns, pronouns, and proper nouns, and the second layer is for verbs, and so on. The prior work manually designed the decoding hierarchy by considering the subjective intuition about how children learn to speak BIBREF8 : infants first learn to say keywords, which are often nouns. For example, when an infant says “Daddy, toilet.”, it actually means “Daddy, I want to go to the toilet.”. Along with the growth of the age, children learn more grammars and vocabulary and then start adding verbs to the sentences, further adding adverbs, and so on. 
However, the hand-crafted linguistic order may not be optimal, so we experiment and analyze the model on various generating linguistic hierarchies to deeply investigate the effect of linguistic pattern ordering. In the hierarchical decoder, the initial state of each GRU-based decoding layer INLINEFORM0 is the extracted feature INLINEFORM1 from the encoder, and the input at every step is the last predicted token INLINEFORM2 concatenated with the output from the previous layer INLINEFORM3 , DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 -th hidden state of the INLINEFORM2 -th GRU decoding layer and INLINEFORM3 is the INLINEFORM4 -th outputted word in the INLINEFORM5 -th layer. We use the cross entropy loss as our training objective for optimization, where the difference between the predicted distribution and target distribution is minimized. To facilitate training and improve the performance, several strategies including scheduled sampling, a repeat input mechanism, curriculum learning, and an attention mechanism are utilized. Scheduled Sampling Teacher forcing BIBREF18 is a strategy for training RNN that uses model output from a prior time step as an input, and it works by using the expected output at the current time step INLINEFORM0 as the input at the next time step, rather than the output generated by the network. The teacher forcing techniques can also be triggered only with a certain probability, which is known as the scheduled sampling approach BIBREF19 . We adopt scheduled sampling methods in our experiments. In the proposed framework, an input of a decoder contains not only the output from the last step but one from the last decoding layer. Therefore, we design two types of scheduled sampling approaches – inner-layer and inter-layer. Inner-layer schedule sampling is the classic teacher forcing strategy: DISPLAYFORM0 Inter-layer schedule sampling uses the labels instead of the actual output tokens of the last layer: DISPLAYFORM0 Curriculum Learning The proposed hierarchical decoder consists of several decoding layers, the expected output sequences of upper layers are longer than the ones in the lower layers. The framework is suitable for applying the curriculum learning BIBREF20 , of which core concept is that a curriculum of progressively harder tasks could significantly accelerate a network’s training. The training procedure is to train each decoding layer for some epochs from the bottommost layer to the topmost one. Repeat-Input Mechanism The concept of the hierarchical decoding is to hierarchically generate the sequence, gradually adding words associated with different linguistic patterns. Therefore, the generated sequences from the decoders become longer as the generating process proceeds to the higher decoding layers, and the sequence generated by a upper layer should contain the words predicted by the lower layers. To facilitate the behavior, previous work designs a strategy that repeats the outputs from the last layer as inputs until the current decoding layer outputs the same token, so-called the repeat-input mechanism BIBREF8 . This approach offers at least two merits: (1) Repeating inputs tells the decoder that the repeated tokens are important to encourage the decoder to generate them. (2) If the expected output sequence of a layer is much shorter than the one of the next layer, the large difference in length becomes a critical issue of the hierarchical decoder, because the output sequence of a layer will be fed into the next layer. 
With the repeat-input mechanism, the impact of length difference can be mitigated. Attention Mechanism In order to model the relationship between layers in a generating hierarchy, we further design attention mechanisms for the hierarchical decoder. The proposed attention mechanisms are content-based, which means the weights are determined based on hidden states of neural models: DISPLAYFORM0 where INLINEFORM0 is the hidden state at the current step, INLINEFORM1 are the hidden states from the previous decoder layer, and INLINEFORM2 is a learned weight matrix. At each decoding step, attention values INLINEFORM3 are calculated by these methods and then used to compute the weighted sum as a context vector, which is then concatenated to decoder inputs as additional information. Training The objective of the proposed model is to optimize the conditional probability INLINEFORM0 , so that the difference between the predicted distribution and the target distribution, INLINEFORM1 , can be minimized: DISPLAYFORM0 where INLINEFORM0 is the number of samples and the labels INLINEFORM1 are the word labels. Each decoder in the hierarchical NLG is trained based on curriculum learning with the objective. Setup The E2E NLG challenge dataset BIBREF21 is utilized in our experiments, which is a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. As shown in Figure FIGREF2 , the inputs are semantic frames containing specific slots and corresponding values, and the outputs are the associated natural language utterances with the given semantics. For example, a semantic frame with the slot-value pairs “name[Bibimbap House], food[English], priceRange[moderate], area [riverside], near [Clare Hall]” corresponds to the target sentence “Bibimbap House is a moderately priced restaurant who's main cuisine is English food. You will find this local gem near Clare Hall in the Riverside area.”. The data preprocessing includes trimming punctuation marks, lemmatization, and turning all words into lowercase. To prepare the labels of each layer within the hierarchical structure of the proposed method, we utilize spaCy toolkit to perform POS tagging for the target word sequences. Some properties such as names of restaurants are delexicalized (for example, replaced with a symbol “RESTAURANT_NAME”) to avoid data sparsity. In our experiments, we perform six different generating linguistic orders, in which each hierarchy is constructed based on different permutations of the POS tag sets: (1) nouns, proper nouns, and pronouns (2) verbs (3) adjectives and adverbs (4) others. The probability of activating inter-layer and inner-layer teacher forcing is set to 0.5, the probability of teacher forcing is attenuated every epoch, and the decaying ratio is 0.9. The models are trained for 20 training epochs without early stop; when curriculum learning is applied, only the first layer is trained during first five epochs, the second decoder layer starts to be trained at the sixth epoch, and so on. To evaluate the quality of the generated sequences regarding both precision and recall, the evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references BIBREF22 . Results and Analysis In the experiments, we borrow the idea of hierarchical decoding proposed by the previous work BIBREF8 and investigate various extensions of generating hierarchies. 
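Before turning to the results, here is a minimal PyTorch sketch of the content-based inter-layer attention described above; the bilinear scoring form, the weighted-sum context vector, and the concatenation to the decoder input follow the description but are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class InterLayerAttention(nn.Module):
    """Attend from the current decoder state over the previous layer's hidden states."""

    def __init__(self, d_hidden):
        super().__init__()
        self.W = nn.Linear(d_hidden, d_hidden, bias=False)  # learned weight matrix

    def forward(self, h_t, prev_states):
        # h_t: (batch, d_hidden); prev_states: (batch, steps, d_hidden)
        scores = torch.bmm(prev_states, self.W(h_t).unsqueeze(-1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)                        # attention values
        context = torch.bmm(alpha.unsqueeze(1), prev_states).squeeze(1)
        return torch.cat([h_t, context], dim=-1)  # context concatenated to the decoder input
```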
To examine the effectiveness of hierarchical decoders, we control our model size to be smaller than the baseline's. Specifically, the decoder in the baseline seq2seq model has hidden layers of size 400, while our models with hierarchical decoders have four decoding layers of size 100 for fair comparison. Table TABREF13 compares the performance between a baseline and proposed models with different generating linguistic orders. For all generating hierarchies with different orders, simply replacing the decoder by a hierarchical decoder achieves significant improvement in every evaluation metrics; for example, the topmost generating hierarchy in Table TABREF13 has 49.25% improvement in BLEU, 30.03% in ROUGE-1, 96.48% in ROUGE-2, and 25.99% in ROUGE-L respectively. In other words, separating the generation process into several phases is proven to be a promising method. Performing curriculum learning strategy offers a considerable improvement, take the topmost generating hierarchy in Table TABREF13 for example, this method yields a 102.07% improvement in BLEU, 48.26% in ROUGE-1, 144.8% in ROUGE-2, and 39.18% in ROUGE-L. Despite that applying repeat-input mechanism alone does not offer benefit, combining these two strategies together further achieves the best performance. Note that these methods do not require any additional parameters. Unfortunately, even some of the attentional hierarchical decoders achieve the best results in the generating hierarchies (Table TABREF18 ). Mostly, the additional attention mechanisms are not capable of bringing benefit for model performance. The reason may be that the decoding process is designed for gradually importing words in the specific set of linguistic patterns to the output sequence, each decoder layer is responsible of copying the output tokens from the previous layer and insert new words into the sequence precisely. Because of this nature, a decoder needs explicit information of the structure of a sentence rather than implicit high-level latent information. For instance, when a decoder is trying to insert some Verb words into the output sequence, knowing the position of subject and object would be very helpful. The above results show that among these six different generating hierarchy, the generating order: (1) verbs INLINEFORM0 (2) nouns, proper nouns, and pronouns INLINEFORM1 (3) adjectives and adverbs INLINEFORM2 (4) the other POS tags yields the worst performance. Table TABREF23 shows that the gap of average length of target sequences between the first and the second decoder layer is the largest among all the hierarchies; in average, the second decoder needs to insert up to 8 words into the sequence based on 3.62 words from the first decoder layer in this generation process, which is absolutely difficult. The essence of the hierarchical design is to separate the job of the decoder into several phases; if the job of each phase is balanced, it is intuitive that it is more suitable for applying curriculum learning and improve the model performance. The model performance is also related to linguistic structures of sentences: the fifth and the sixth generating hierarchies in Table TABREF13 have very similar trends, where the length of target sentences of each decoder layer is almost identical as shown in Table TABREF23 . However, the model performance differs a lot. 
An adverb word could be used to modify anything but nouns and pronouns, which means that the number of adverbs used for modifying verbs would be a factor to determine the generating order as well. In our cases, almost all adverbs in the dataset are used to describe adjectives, indicating that generating verbs before inserting adverbs to sequences may not provide enough useful information; instead, it would possibly obstruct the model learning. We can also find that in all experiments, inserting adverbs before verbs would be better. In summary, the concept of the hierarchical decoder is simple and useful, separating a difficult job to many phases is demonstrated to be a promising direction and not limited to a specific generating hierarchy. Furthermore, the generating linguistic orders should be determined based on the dataset, and the important factors include the distribution over length of subsequences and the linguistic nature of the dataset for designing a proper generating hierarchy in NLG. Conclusion This paper investigates the seq2seq-based model with a hierarchical decoder that leverages various linguistic patterns. The experiments on different generating linguistic orders demonstrates the generalization about the proposed hierarchical decoder, which is not limited to a specific generating hierarchy. However, there is no universal decoding hierarchy, while the main factor for designing a suitable generating order is the nature of the dataset. Acknowledgements We would like to thank reviewers for their insightful comments on the paper. This work was financially supported by Ministry of Science and Technology (MOST) in Taiwan.
The E2E NLG challenge dataset BIBREF21
bb570d4a1b814f508a07e74baac735bf6ca0f040
bb570d4a1b814f508a07e74baac735bf6ca0f040_0
Q: Why does their model do better than prior models? Text: Introduction Discourse parsing is a fundamental task in natural language processing (NLP) which determines the structure of the whole discourse and identifies the relations between discourse spans such as clauses and sentences. Improving this task can be helpful to many downstream tasks such as machine translation BIBREF0 , question answering BIBREF1 , and so on. As one of the important parts of discourse parsing, the implicit discourse relation recognition task is to find the relation between two spans without explicit connectives (e.g., but, so, etc.), and requires recovering the relation from a semantic understanding of the texts. The Penn Discourse Treebank 2.0 (PDTB 2.0) BIBREF2 is a benchmark corpus for discourse relations. In PDTB style, the connectives can be explicit or implicit, and one entry of the data is separated into Arg1 and Arg2, accompanied with a relation sense. Since the release of the PDTB 2.0 dataset, many methods have been proposed, ranging from traditional feature-based methods BIBREF3 , BIBREF4 to the latest neural methods BIBREF5 , BIBREF6 . Thanks to the many neural network methods applied to this task, such as convolutional neural networks (CNN) BIBREF7 , recursive neural networks BIBREF8 , embedding improvements BIBREF9 , attention mechanisms BIBREF10 , gate mechanisms BIBREF11 , and multi-task methods BIBREF6 , performance on this task has improved a lot since it was first introduced. However, this task remains very challenging, with the highest reported accuracy still lower than 50%, because it is hard for machines to understand the text meaning and the task corpus is relatively small. In this work, we focus on improving the learned representations of sentence pairs to address implicit discourse relation recognition. It is well known that text representation is the core part of state-of-the-art deep learning methods for NLP tasks, and improving the representation from all perspectives will benefit the concerned task. Our model improves the representation in two ways through a three-level hierarchy. The first way is embedding augmentation. Only with informative embeddings can the final representations be better. This is implemented in our word-level module. We augment word embeddings with subword-level embeddings and pre-trained ELMo embeddings. Subwords obtained from unsupervised segmentation yield better downstream performance than characters because they are a better minimal representation unit. The pre-trained contextualized word embeddings (ELMo) enrich the embeddings with contextual information and also involve character-level inputs. The second way is a deep residual bi-attention encoder. Since this task is about classifying sentence pairs, the encoder is implemented at the sentence and sentence-pair levels. A deeper model can support richer representations but is hard to train, especially with a small dataset. We therefore apply residual connections BIBREF12 to each module to facilitate signal propagation and alleviate gradient degradation. The stacked encoder blocks make the single-sentence representation richer, and the bi-attention module mixes the two sentence representations in a focused way. By introducing richer and deeper representation enhancement, we report the deepest model so far for the task. 
Our representation-enhanced model is evaluated on the PDTB 2.0 benchmark and demonstrates state-of-the-art performance, verifying its effectiveness. This paper is organized as follows. Section 2 reviews related work. Section 3 introduces our model. Section 4 shows our experiments and analyses the results. Section 5 concludes this work. Related Work After the release of the Penn Discourse Treebank 2.0, many works have attempted to solve this task. lin-kan-ng:2009:EMNLP is the first work that considered the second-level classification of the task by empirically evaluating the impact of surface features. Feature-based methods BIBREF4 , BIBREF13 , BIBREF14 , BIBREF15 mainly focused on linguistic or semantic features from the discourse units, or on the relations between unit pairs and word pairs. zhang-EtAl:2015:EMNLP4 is the first work to model this task with an end-to-end neural network, and it gained a great performance improvement. Neural network methods were also used in many other works BIBREF16 , BIBREF17 for better performance. Since then, a lot of methods have been proposed. braud2015comparing found that word embeddings trained by neural networks are very useful to this task. qin-zhang-zhao:2016:COLING augmented their system with character-level and contextualized embeddings. Recurrent networks and convolutional networks have been used as basic blocks in many works BIBREF18 , BIBREF19 , BIBREF7 . TACL536 used recursive neural networks. Attention mechanisms were used by liu-li:2016:EMNLP2016, cai2017discourse and others. wu-EtAl:2016:EMNLP2016 and lan-EtAl:2017:EMNLP20172 applied multi-task components. qin-EtAl:2017:Long utilized adversarial nets to migrate the connective-based features to implicit ones. Sentence representation is a key component in many NLP tasks. Usually, a better representation means better performance. Plenty of work on language modeling has been done, as language modeling can supply better sentence representations. Since the pioneering work of Bengio2006, neural language models have been well developed BIBREF20 , BIBREF21 , BIBREF22 . Sentence representation is directly handled in a series of works. lin2017structured used a self-attention mechanism and represented each sentence as a matrix, and conneau-EtAl:2017:EMNLP2017 used encoders pre-trained on SNLI BIBREF23 and MultiNLI BIBREF24 . Different from all existing work, and for the first time to the best of our knowledge, this work is devoted to an empirical study of different levels of representation enhancement for the implicit discourse relation classification task. Overview Figure 1 illustrates an overview of our model, which mainly consists of three parts: the word-level module, the sentence-level module, and the pair-level module. Token sequences of the sentence pairs (Arg1 and Arg2) are first encoded by the word-level module, and every token becomes a word embedding augmented with subword and ELMo embeddings. Then these embeddings are fed to the sentence-level module and processed by stacked encoder blocks (CNN or RNN encoder blocks). Every block layer outputs a representation for each token. Furthermore, the output of each layer is processed by the bi-attention module in the pair-level module and concatenated into a pair representation, which is finally sent to the classifiers, which are multi-layer perceptrons (MLP) with softmax. The model details are given in the rest of this section. 
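To make the three-level pipeline concrete, the following is a minimal PyTorch skeleton, not the authors' implementation: the plain unidirectional GRU blocks, the dot-product bi-attention with max pooling, and all dimensions are simplifying assumptions, and the word-level embedding construction described next is abstracted into pre-computed inputs e1 and e2.

```python
import torch
import torch.nn as nn

class PairLevelClassifier(nn.Module):
    """Sketch of the word/sentence/pair-level pipeline with argument-aware encoders."""

    def __init__(self, d_emb, d_model, n_blocks, n_classes):
        super().__init__()
        self.proj = nn.Linear(d_emb, d_model)
        # Separate encoder-block stacks for Arg1 and Arg2 (argument-aware parameters).
        self.blocks1 = nn.ModuleList([nn.GRU(d_model, d_model, batch_first=True)
                                      for _ in range(n_blocks)])
        self.blocks2 = nn.ModuleList([nn.GRU(d_model, d_model, batch_first=True)
                                      for _ in range(n_blocks)])
        self.classifier = nn.Sequential(
            nn.Linear(2 * d_model * n_blocks, d_model), nn.ReLU(),
            nn.Linear(d_model, n_classes))

    def bi_attend(self, h1, h2):
        # Align the two arguments with dot-product attention, then max-pool each side.
        scores = torch.bmm(h1, h2.transpose(1, 2))                  # (B, N1, N2)
        c1 = torch.bmm(torch.softmax(scores, dim=2), h2)            # Arg2 summarized for Arg1
        c2 = torch.bmm(torch.softmax(scores, dim=1).transpose(1, 2), h1)
        return torch.cat([c1.max(dim=1).values, c2.max(dim=1).values], dim=-1)

    def forward(self, e1, e2):
        # e1, e2: augmented word embeddings of Arg1 and Arg2, shape (B, N, d_emb).
        x1, x2 = self.proj(e1), self.proj(e2)
        pair_features = []
        for b1, b2 in zip(self.blocks1, self.blocks2):
            h1, _ = b1(x1)
            h2, _ = b2(x2)
            x1, x2 = x1 + h1, x2 + h2                               # residual connections
            pair_features.append(self.bi_attend(x1, x2))
        return self.classifier(torch.cat(pair_features, dim=-1))    # softmax applied in the loss
```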
Word-Level Module An input token sequence of length $N$ is encoded by the word-level module into an embedding sequence $(\mathbf {e}_1, \mathbf {e}_2, \mathbf {e}_3, \cdots , \mathbf {e}_N)$ . Each embedded token $\mathbf {e}_i$ is concatenated from three parts, $$\mathbf {e}_i = [\mathbf {e}_i^w;~ \mathbf {e}_i^s;~ \mathbf {e}_i^c] \in \mathbb {R}^{d_e}$$ (Eq. 4) $\mathbf {e}_i^w \in \mathbb {R}^{d_w}$ is the pre-trained word embedding for this token, and is fixed during the training procedure. Our experiments show that fine-tuning the embeddings slowed down training without improving performance. $\mathbf {e}_i^s \in \mathbb {R}^{d_s}$ is the subword-level embedding encoded by the subword encoder. $\mathbf {e}_i^c \in \mathbb {R}^{d_c}$ is the contextualized word embedding encoded by the pre-trained ELMo encoder, whose parameters are also fixed during training. Subwords are merged from single-character segmentation, and the input of the ELMo encoder is also characters. Character-level embeddings have been used widely in many works and their effectiveness is verified for out-of-vocabulary (OOV) or rare word representation. However, a character is not a natural minimal unit since words have internal structure, so we introduce a subword-level embedding instead. Subword units can be computationally discovered by unsupervised segmentation over words that are regarded as character sequences. We adopt the byte pair encoding (BPE) algorithm introduced by sennrich-haddow-birch:2016:P16-12 for this segmentation. BPE segmentation relies on a series of iterative merging operations over bigrams with the highest frequency. The number of merging operations is roughly equal to the resulting subword vocabulary size. For each word, the subword-level embedding is encoded by a subword encoder as in Figure 2 . First, the subword sequence (of length $n$ ) of the word is mapped to a subword embedding sequence $(\mathbf {se}_1, \mathbf {se}_2, \mathbf {se}_3, \cdots , \mathbf {se}_n)$ (after padding), which is randomly initialized. Then $K$ (we empirically set $K$ =2) convolutional operations $Conv_1, Conv_2, \cdots , Conv_K$ followed by a max pooling operation are applied to the embedding sequence, and the sequence is padded before the convolutional operation. For the $i$ -th convolution kernel $Conv_i$ , suppose the kernel size is $k_i$ ; then the output of $Conv_i$ on embeddings $\mathbf {se}_{j}$ to $\mathbf {se}_{j+k_i-1}$ is $ \mathbf {C}_j = Conv_i(\mathbf {se}_{j},~ \cdots ,~ \mathbf {se}_{j+k_i-1}) $ The final output of $Conv_i$ after max pooling is $\begin{split} \mathbf {u}_i &= \mathop {maxpool}{(\mathbf {C}_1,~ \cdots ,~ \mathbf {C}_j,~ \cdots ,~ \mathbf {C}_n)} \end{split}$ Finally, the $K$ outputs are concatenated, $ \mathbf {u} = [\mathbf {u}_1;~ \mathbf {u}_2;~ \cdots ;~ \mathbf {u}_K] \in \mathbb {R}^{d_s} $ and fed to a highway network BIBREF25 , $$\mathbf {g} &=& \sigma (\mathbf {W}_g \mathbf {u}^T + \mathbf {b}_g) \in \mathbb {R}^{d_s} \nonumber \\ \mathbf {e}_i^s &=& \mathbf {g} \odot \mathop {ReLU}(\mathbf {W}_h \mathbf {u}^T + \mathbf {b}_h) + (\mathbf {1} - \mathbf {g}) \odot \mathbf {u} \nonumber \\ &\in & \mathbb {R}^{d_s}$$ (Eq. 6) where $\mathbf {g}$ denotes the gate, and $\mathbf {W}_g \in \mathbb {R}^{d_s \times d_s}, \mathbf {b}_g \in \mathbb {R}^{d_s}, \mathbf {W}_h \in \mathbb {R}^{d_s \times d_s}, \mathbf {b}_h \in \mathbb {R}^{d_s}$ are parameters. $\odot $ is element-wise multiplication. The above Eq.
6 gives the subword-level embedding for the $i$ -th word. ELMo (Embeddings from Language Models) BIBREF26 is a pre-trained contextualized word embedding involving character-level representation. It has been shown useful in several works BIBREF27 , BIBREF28 . This embedding is trained by bidirectional language models on a large corpus, using the character sequence of each word token as input. The ELMo encoder employs CNN and highway networks over characters, whose output is given to a multiple-layer biLSTM with residual connections. The output is then a contextualized embedding for each word. It can also be seen as a hybrid encoder for character, word, and sentence. This encoder adds a lot of contextual information to each word and eases the semantic learning of the model. For the pre-trained ELMo encoder, the output is the result of the last two biLSTM layers. Suppose $\mathbf {c}_i$ is the character sequence of the $i$ -th word in a sentence; then the encoder output is $ [\cdots , \mathbf {h}_i^0, \cdots ;~ \cdots , \mathbf {h}_i^1, \cdots ] = \mathop {ELMo}(\cdots , \mathbf {c}_i, \cdots ) $ where $\mathbf {h}_i^0$ and $\mathbf {h}_i^1$ denote the outputs of the first and second layers of the ELMo encoder for the $i$ -th word. Following Peters2018ELMo, we use a self-adjusted weighted average of $\mathbf {h}_i^0, \mathbf {h}_i^1$ , $\begin{split} \mathbf {s} &= \mathop {softmax}(\mathbf {w}) \in \mathbb {R}^2\\ \mathbf {h} &= \gamma \sum _{j=0}^1 s_j \mathbf {h}_i^j \in \mathbb {R}^{d_c^{\prime }} \end{split}$ where $\gamma \in \mathbb {R}$ and $\mathbf {w} \in \mathbb {R}^2$ are parameters tuned during training and $d_c^{\prime }$ is the dimension of the ELMo encoder's outputs. The result is then fed to a feed forward network to reduce its dimension, $$\mathbf {e}_i^c = \mathbf {W}_c \mathbf {h}^T + \mathbf {b}_c \in \mathbb {R}^{d_c}$$ (Eq. 7) $\mathbf {W}_c \in \mathbb {R}^{d_c^{\prime } \times d_c}$ and $\mathbf {b}_c \in \mathbb {R}^{d_c}$ are parameters. The above Eq. 7 gives the ELMo embedding for the $i$ -th word. Sentence-Level Module The resulting word embeddings $\mathbf {e}_i$ (Eq. 4 ) are sent to the sentence-level module, which is composed of stacked encoder blocks. The block in each layer receives the output of the previous layer as input and sends its output to the next layer. It also sends its output to the pair-level module. Parameters in different layers are not shared. We consider two encoder types, a convolutional type and a recurrent type, and only use one encoder type in any one experiment. For the sentence-level modules of the two arguments (Arg1 and Arg2), many previous works used the same parameters to encode both arguments, that is, one encoder for the two argument types. But as indicated by prasad2008penn, Arg1 and Arg2 may have different semantic perspectives, so we introduce argument-specific parameter settings for the two arguments. Figure 3 shows the convolutional encoder block. Suppose the input to the encoder block is $\mathbf {x}_i ~ (i=1, \cdots , N)$ , with $\mathbf {x}_i \in \mathbb {R}^{d_e}$ . The input is sent to a convolutional layer and mapped to the output $\mathbf {y}_i = [\mathbf {A}_i \; \mathbf {B}_i] \in \mathbb {R}^{2d_e}$ .
After the convolutional operation, a gated linear unit (GLU) BIBREF29 is applied, i.e., $ \mathbf {z}_i = \mathbf {A}_i \odot \sigma (\mathbf {B}_i) \in \mathbb {R}^{d_e} $ There is also a residual connection (Res 1) in the block, which means adding the output of $\mathop {GLU}$ and the input of the block as the final output, so $\mathbf {z}_i + \mathbf {x}_i$ is the output of the block corresponding to the input $\mathbf {x}_i$ . The output $\mathbf {z}_i + \mathbf {x}_i$ for all $i = 1, \cdots , N$ is sent to both the next layer and the pair-level module as input. Similar to the convolutional one, the recurrent encoder block is shown in Figure 3 . The input $\mathbf {x}_i$ is first encoded by a biGRU BIBREF30 layer, $ \mathbf {y}_i = \mathop {biGRU}(\mathbf {x}_i) \in \mathbb {R}^{2d_e} $ and then sent to a feed forward network, $$\mathbf {z}_i = \mathbf {W}_r \mathbf {y}_i^T + \mathbf {b}_r \in \mathbb {R}^{d_e}$$ (Eq. 10) $\mathbf {W}_r \in \mathbb {R}^{2d_e \times d_e}$ and $\mathbf {b}_r \in \mathbb {R}^{d_e}$ are parameters. There is also a similar residual connection (Res 1) in the block, so $\mathbf {z}_i + \mathbf {x}_i$ for all $i = 1, \cdots , N$ is the final output of the recurrent encoder block. Pair-Level Module Through the sentence-level module, the word representations are contextualized, and these contextualized representations from each layer are sent to the pair-level module. Suppose the number of encoder block layers is $l$ , and the outputs of the $j$ -th block layer for Arg1 and Arg2 are $\mathbf {v}_1^j, \mathbf {v}_2^j \in \mathbb {R}^{N \times d_e}$ , each row of which is the embedding for the corresponding word. $N$ is the length of the word sequence (sentence). Each sentence is padded or truncated so that all sentences have the same length. They are sent to a bi-attention module, whose attention matrix is $ \mathbf {M}_j = (\mathop {FFN}(\mathbf {v}_1^j)) {\mathbf {v}_2^j}^T \in \mathbb {R}^{N \times N} $ $\mathop {FFN}$ is a feed forward network (similar to Eq. 10 ) applied to the last dimension corresponding to the word. The projected representations are then $\begin{split} \mathbf {w}_2^j &= \mathop {softmax}(\mathbf {M}_j) {\mathbf {v}_2^j} \in \mathbb {R}^{N \times d_e}\\ \mathbf {w}_1^j &= \mathop {softmax}(\mathbf {M}_j^T) {\mathbf {v}_1^j} \in \mathbb {R}^{N \times d_e} \end{split}$ where the $\mathop {softmax}$ is applied to each row of the matrix. We apply 2-max pooling on each projected representation and concatenate the results as the output of the $j$ -th bi-attention module $ \mathbf {o}_j = [\mathop {top2}(\mathbf {w}_1^j);~ \mathop {top2}(\mathbf {w}_2^j)] \in \mathbb {R}^{4 d_e} $ The pooling size (top-2) is selected by experiment as a balance between more salient features and less noise. The final pair representation is $$\mathbf {o} = [\mathbf {o}_1, \mathbf {o}_2, \cdots , \mathbf {o}_l] \in \mathbb {R}^{4 l d_e}$$ (Eq. 12) Since the output is concatenated from different layers and the outputs of lower layers are sent directly to the final representation, this can also be seen as a residual connection (Res 2). The output of Eq. 12 is then fed to an MLP classifier with softmax. The parameters of the bi-attention modules in different layers are shared. Classifier We use two classifiers in our model. One is for relation classification, and the other is for connective classification. Each classifier is simply a multiple layer perceptron (MLP) with a softmax layer.
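Returning briefly to the convolutional variant of the encoder block, the following is a minimal PyTorch sketch of one block with GLU gating and the in-block residual connection (Res 1). It is our own reconstruction under stated assumptions: the 'same' padding scheme, the default sizes, and the usage line at the bottom are not taken from the paper.

```python
import torch
import torch.nn as nn

class ConvEncoderBlock(nn.Module):
    """One convolutional encoder block: a 1-D convolution producing
    2*d_e channels split into A and B, a gated linear unit
    z = A * sigmoid(B), and the in-block residual connection (Res 1)."""

    def __init__(self, d_e=700, kernel_size=3):
        super().__init__()
        # Padding chosen so the sequence length N is preserved
        # (assumes an odd kernel size).
        self.conv = nn.Conv1d(d_e, 2 * d_e, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):
        # x: (batch, N, d_e) -- one argument's token representations
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, N, 2*d_e)
        a, b = y.chunk(2, dim=-1)                         # A_i and B_i
        z = a * torch.sigmoid(b)                          # GLU gating
        return z + x                                      # Res 1

block = ConvEncoderBlock()
out = block(torch.randn(2, 100, 700))   # -> torch.Size([2, 100, 700])
```

In the full model, the output of each such block is passed both to the next block and to the per-layer bi-attention module.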
qin-EtAl:2017:Long used an adversarial method to utilize the connectives, but this method is not suitable for our adopted attention module since the attended part of a sentence will be distinctly different depending on whether the argument appears with or without connectives. They also proposed a multi-task method that augments the model with an additional classifier for connective prediction, whose input is also the pair representation. It is straightforward and simple, and can help the model learn better representations, so we include this module in our model. The implicit connectives are provided by the PDTB 2.0 dataset, and the connective classifier is only used during training. The loss function for both classifiers is cross entropy loss, and the total loss is the sum of the two losses, i.e., $Loss = Loss_{relation} + Loss_{connective}$ . Experiments The code for this paper is available at https://github.com/diccooo/Deep_Enhanced_Repr_for_IDRR . Our model is evaluated on the benchmark PDTB 2.0 for two types of classification tasks. PDTB 2.0 has three levels of senses: Level-1 Class, Level-2 Type, and Level-3 Subtypes. The first level consists of four major relation Classes: COMPARISON, CONTINGENCY, EXPANSION, and TEMPORAL. The second level contains 16 Types. All our experiments are implemented in PyTorch. The pre-trained ELMo encoder is from the AllenNLP toolkit BIBREF31 . 11-way Classification Following the settings of qin-EtAl:2017:Long, we use two splitting methods of the PDTB dataset for comprehensive comparison. The first is PDTB-Lin BIBREF3 , which uses sections 2-21, 22 and 23 as training, dev and test sets respectively. The second is PDTB-Ji BIBREF8 , which uses sections 2-20, 0-1, and 21-22 as training, dev and test sets respectively. According to TACL536, five relation types have few training instances and no dev or test instances. After removing these five types, 11 second-level types remain. During training, instances with more than one annotated relation type are treated as multiple instances, each of which carries one of the annotations. At test time, a prediction that matches one of the gold types is considered correct. All sentences in the dataset are padded or truncated to the same 100-word length. For both splitting methods, we share some hyperparameters. Table 1 lists some of the shared hyperparameter settings. The pre-trained word embeddings are 300-dim word2vec BIBREF32 pre-trained on Google News. So $d_w = 300, d_s = 100, d_c = 300$ , and for the final embedding ( $\mathbf {e}_i$ ), $d_e = 700$ . For the encoder block in the sentence-level module, the kernel size is the same for every layer. We use AdaGrad optimization BIBREF33 . The number of encoder block layers is different for the two splitting methods: 4 for the PDTB-Ji splitting method and 5 for the PDTB-Lin splitting method. Compared to other recent state-of-the-art systems in Table 2 , our model achieves new state-of-the-art performance under both splitting methods with large improvements. To the best of our knowledge, our model is the first to exceed 48% accuracy in 11-way classification. Ablation Study To illustrate the effectiveness of our model and the contribution of each module, we run a group of experiments on the PDTB-Ji split. For the baseline model, we use 4 stacked convolutional encoder block layers without the in-block residual connection, with only pre-trained word embeddings.
We only use the output of the last layer; it is processed by 2-max pooling without attention and sent to the relation classifier and connective classifier. Without the two residual connections, using 4 layers may not be optimal for the baseline model, but it makes the comparison more convenient. Firstly, we add modules from high level to low level cumulatively to observe the performance improvement. Table 3 shows the results, which demonstrate that every module has a considerable effect on the performance. Then we test the effects of the two residual connections on the performance. The results are also in Table 3 . The baseline $^+$ means baseline + bi-attention, i.e., the second row of Table 3 . We find that Res 1 (the residual connection in the block) is much more useful than Res 2 (the residual connection for the pair representation), and together they bring even better performance. Without ELMo (the same setting as the 4th row in Table 3 ), our data settings are the same as qin-EtAl:2017:Long, whose performance was state-of-the-art and can be compared directly. We see that even without the pre-trained ELMo encoder, our performance is better, which is mostly attributed to our better sentence pair representations. Subword-Level Embedding To assess the usefulness of subword-level embedding, we compare its performance to a model with character-level embedding, which was used in qin-zhang-zhao:2016:COLING. We use the same model setting as the 4th row of Table 3 , and then replace the subword sequence with the character sequence. The subword-embedding-augmented result is 47.03%, while the character embedding result is 46.37%, which verifies that the former is a better input representation for the task. Parameters for Sentence-Level Module As previously discussed, argument-specific parameter settings may result in better sentence-level encoders. We use the same model as in the third row of Table 3 . If shared parameters are used, the result is 45.97%, which is lower than with argument-specific parameter settings (46.29%). The comparison shows that argument-specific parameter settings indeed capture the difference between argument representations and facilitate the sentence pair representation. Encoder Block Type and Layer Number In Section 3.3 we considered two encoder types; here we compare their effects on model performance. As in the previous part, the model setting is the same as the third row in Table 3 except for the block type and layer number. The results are shown in Figure 4 . They show that both types can reach a similar level of top accuracy, as word order is not important to the task. We also tried adding position information to the convolutional encoder and observed a drop in accuracy. This further verifies that order information does not matter much for the task. For most of the other numbers of layers, the recurrent type performs better, as the number of layers has an impact on the window size of convolutional encoders. When the convolutional type is used, the training procedure is much faster, but choosing a suitable kernel size needs extra effort. Bi-Attention We visualize the attention weights of one instance in Figure 5 . For lower layers, the attended part is more concentrated. For higher layers, the weights are more evenly distributed and the attended part moves toward the sentence border. This is because the window size is bigger for higher layers, and the convolutional kernel may place higher weights on words at the window edge.
Binary and 4-way Classification Settings For the first-level classification, we perform both 4-way classification and one-vs-others binary classification. Following the settings of previous works, the dataset splitting method is the same as PDTB-Ji, without removing instances. The model uses 5 block layers with kernel size 3; other details are the same as those for 11-way classification on PDTB-Ji. Results Table 4 compares results on first-level classification. For binary classification, the result is reported as $F_1$ score (%), and for 4-way classification, as macro-average $F_1$ score (%). Our model gives state-of-the-art performance for 4-way classification, providing an $F_1$ score greater than 50% for the first time to the best of our knowledge. Conclusion In this paper, we propose a deeper neural model augmented by different-grained text representations for implicit discourse relation recognition. These different module levels work together and produce task-related representations of the sentence pair. Our experiments show that the model is effective and achieves state-of-the-art performance. To the best of our knowledge, this is the first time that an implicit discourse relation classifier gives an accuracy higher than 48% for 11-way and an $F_1$ score higher than 50% for 4-way classification tasks.
better sentence pair representations
1771a55236823ed44d3ee537de2e85465bf03eaf
1771a55236823ed44d3ee537de2e85465bf03eaf_0
Q: What is the difference in recall score between the systems? Text: Introduction & Related Work Named-Entity-Recognition (NER) approaches can be categorised broadly into three types: detecting NERs with predefined dictionaries and rulesBIBREF2, with statistical approachesBIBREF3, and with deep learning approachesBIBREF4. Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5. Its authors used Gibbs sampling and a conditional random field (CRF) for non-local information gathering and then the Viterbi algorithm to infer the most likely state in the CRF sequence outputBIBREF6. Deep learning approaches in NLP use document, word or token representations instead of one-hot encoded vectors. With the rise of transfer learning, pretrained Word2VecBIBREF7, GloVeBIBREF8 and fasttextBIBREF9, which provide word embeddings, were used with recurrent neural networks (RNN) to detect NERs. Using LSTM layers followed by CRF layers with pretrained word embeddings as input has been explored hereBIBREF10. Also, CNNs with character embeddings as inputs followed by bi-directional LSTM and CRF layers were explored hereBIBREF11. With the introduction of attention and transformersBIBREF12, many deep architectures have emerged in the last few years. The approach of using pretrained models like ElmoBIBREF13, FlairBIBREF14 and BERTBIBREF0 for word representations, followed by a variety of LSTM and CRF combinations, was tested by the authors in BIBREF15, and these approaches show state-of-the-art performance. There are very few approaches where the caseless NER task is explored. In a recent paperBIBREF16, authors explored the effects of "Cased" entities and how a variety of networks perform, and they show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text. In another paperBIBREF17, authors proposed True-Case pre-training before using a BiLSTM+CRF approach to detect NERs effectively. Though it shows good results over previous approaches, it is not useful in the Indian-language context as there is no concept of case. In our approach, we focus more on data preparation for our definition of topics, using some of the state-of-the-art architectures based on BERT, LSTM/GRU and CRF layers as they have been explored in the previous approaches mentioned above. Detecting caseless topics with higher recall and reasonable precision has been given priority over F1 score. Comparisons have been made with available and ready-to-use open-source libraries from the productionization perspective. Data Preparation We need a good amount of data to try deep learning state-of-the-art algorithms. There are a lot of open datasets available for names, locations and organisations, but not for topics as defined in the Abstract above. Also, defining and inferring topics is an individual preference and there is no fixed set of rules for its definition. But according to our definition, we can use Wikipedia titles as our target topics. The English Wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk, as Wikipedia titles contain almost all the words we use daily.
To remove such titles, we deployed simple rules as follows: (i) remove titles with common words such as "are", "the", "which"; (ii) remove titles with numeric values such as 29 or 101; (iii) remove titles with technical components, driver names or transistor names such as X00 or lga-775; (iv) remove 1-gram titles except locations (almost 80% of these also appear in the remaining n-gram titles). After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in the past 4 years. Further, we reduced the number of articles by removing duplicate and near-similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost-similar news articles, and then selected the minimum number of articles required to cover all possible 2-grams to 5-grams. This step was done to save some training time without losing accuracy. Do note that in the future we are planning to use the whole dataset and hope to see further gains in F1 and Recall. But as per manual inspection, our dataset contains enough variation of sentences, with a rich vocabulary containing names of celebrities, politicians, local authorities, national/local organisations and almost all locations, Indian and international, mentioned in news text in the last 4 years. We then created a parallel corpus format as shown in Table 1. Using the pre-trained BERT tokenizer from Hugging Face, we converted the words in sentences to tokens. The caseless-BERT pre-trained tokenizer is used. Notice that some of the topic words are broken into tokens and the NER tag is repeated accordingly. For example, in the second row of Table 1, the word "harassment" is broken into "har ##ass ##ment". Similarly, the "NER" tag is repeated three times to keep the lengths of the sequence pair the same. Finally, for around 3 million news articles, a parallel corpus is created, consisting of around 150 million sentences, with around 3 billion words (all lower cased) and approximately 5 billion tokens. Experiments ::: Model Architecture We tried multiple variations of LSTM and GRU layers, with and without a CRF layer. There is a marginal gain in using GRU layers over LSTM. Also, we saw a gain in using just one GRU layer instead of more. Finally, we settled on the architecture shown in Figure 1 for the final training, based on validation set scores with a sample training set. Text had to be tokenized using pytorch-pretrained-bert as explained above before being passed to the network. The architecture is built using tensorflow/keras. Coding inspiration was taken from BERT-keras and, for the CRF layer, keras-contrib. If one is more comfortable with pytorch there are many examples available on github, but pytorch-bert-crf-ner is better for an easy start. We used the BERT-Multilingual model so that we can train and fine-tune the same model for other Indian languages. You can take BERT-base or BERT-large for better performance with an English-only dataset, or use DistilBERT for English and DistilmBERT for 104 languages for faster pre-training and inference. Also, we did not choose an AutoML approach for hyper-parameter tuning, which could have resulted in more accurate results but could also have taken a very long time. Instead, we chose and tweaked the parameters based on initial results. We trained two models, one with sequence length 512 to capture document-level important n-grams and a second with sequence length 64 to capture sentence/paragraph-level important n-grams. Through experiments it was evident that sequence length plays a vital role in deciding context and locally/globally important n-grams.
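Before the training details, here is a small sketch of the token/tag alignment used for the parallel corpus described above. It is illustrative rather than the authors' pipeline: the B/I/O tag scheme and the example words are our assumptions, and it uses the current transformers package instead of the older pytorch-pretrained-bert library named in the text.

```python
from transformers import BertTokenizer

# Caseless multilingual tokenizer, matching the caseless BERT-Multilingual
# setup described above.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")

def align_tags(words, tags):
    """Split each lowercased word into wordpieces and repeat its tag once
    per wordpiece so the token and tag sequences stay the same length."""
    tokens, tok_tags = [], []
    for word, tag in zip(words, tags):
        pieces = tokenizer.tokenize(word) or [word]
        tokens.extend(pieces)
        tok_tags.extend([tag] * len(pieces))
    return tokens, tok_tags

# Hypothetical example: the text mentions "harassment" splitting into
# "har ##ass ##ment", with its tag repeated three times to keep alignment.
print(align_tags(["sexual", "harassment", "case"], ["B", "I", "O"]))
```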
The final output is a concatenation of both model outputs. Experiments ::: Training We trained the topic model on a single 32 GB NVidia V100, and it took around 50 hours to train the model with sequence length 512. We had to use a 256 GB RAM machine to accommodate all the data in memory for faster read/write. The model with sequence length 64 trained in around 17 hours. It is very important to note that sequence length decides how many BERT tokens you can pass for inference and also affects training time and accuracy. Ideally more is better, because inference would be faster as well. For sequence length 64, we move a 64-token window over the whole token text and recognise topics in each window. So, one should choose the sequence length according to their use case; we have also explained above our motivation for choosing two separate sequence-length models. We stopped the training for both models when they crossed 70% precision and 90% recall on the training and testing sets, as we were just looking to get maximum recall and were not bothered about precision in our case. Both models reach this point at around 16 epochs. Experiments ::: Results Comparison with existing open-source NER libraries is not exactly fair, as they are NOT trained for detecting topics and important n-grams, and also NOT trained for caseless text. But they are useful for testing and benchmarking whether our model detects traditional NERs, which it should capture, as Wikipedia titles contain almost all names, places and organisation names. You can check the sample output here. Comparisons have been made among Flair-NER, Stanford-caseless-NER (we used english.conll.4class.caseless as it performed better than 3class and 7class), Spacy-NER and our models. Of these, only Stanford-NER provides caseless models. In Table 2, scores are calculated by taking the traditional NER list as reference. In Table 3, the same is done with the Wikipedia Titles reference set. As you can see in Tables 2 & 3, recall is great for our model but precision is not good, as the model is also trying to detect new potential topics which are not present even in the reference Wikipedia-Titles and NER sets. In capturing Wikipedia topics our model clearly surpasses the other models in all scores. Spacy results are good despite it not being trained for caseless data. In terms of F1 and overall stability Spacy did better than Stanford NER on our news validation set. Similarly, Stanford did well in precision but could not catch up with Spacy and our model in terms of recall. Flair overall performed poorly, but as said before, these open-source models are not trained for our particular use case. Experiments ::: Discussions Let's check some examples for a detailed analysis of the models and their results. The following is economy-related news. Example 1 : around $1–1.5 trillion or around two percent of global gdp, are lost to corruption every year, president of the natural resource governance institute nrgi has said. speaking at a panel on integrity in public governance during the world bank group and international monetary fund annual meeting on sunday, daniel kaufmann, president of nrgi, presented the statistic, result of a study by the nrgi, an independent, non-profit organisation based in new york. however, according to kaufmann, the figure is only the direct costs of corruption as it does not factor in the opportunities lost on innovation and productivity, xinhua news agency reported.
a country that addresses corruption and significantly improves rule of law can expect a huge increase in per capita income in the long run, the study showed. it will also see similar gains in reducing infant mortality and improving education, said kaufmann. Detected NERs can be seen per model in Table 4. Our model does not capture numbers, as we have removed all numbers from our wiki-titles as topics. The reason is that we can easily write regexes to detect currency, prices, time and date, and deep learning is not required for that. The following are a few important n-grams only our model was able to capture: capita income; infant mortality; international monetary fund annual meeting; natural resource governance institute; public governance. At the same time, we can see that Spacy did much better than Stanford-caseless NER, and Flair could not capture any of the NERs. Another example of news in the political domain, with detected NERs per model, can be seen in Table 5. Example 2 : wearing the aam aadmi party's trademark cap and with copies of the party's five-year report card in hand, sunita kejriwal appears completely at ease. it's a cold winter afternoon in delhi, as the former indian revenue service (irs) officer hits the campaign trail to support her husband and batchmate, chief minister arvind kejriwal. emerging from the background for the first time, she is lending her shoulder to the aap bandwagon in the new delhi assembly constituency from where the cm, then a political novice, had emerged as the giant killer by defeating congress incumbent sheila dikshit in 2013. Correct n-grams captured only by our model are: aam aadmi party; aap bandwagon; delhi assembly constituency; giant killer; indian revenue service; political novice. In this example, the Stanford model did better and captured names properly, for example "sheila dikshit", which Spacy could not detect; but Spacy captured almost all numeric values along with numbers expressed in words. It is important to note that our model captures NERs with some additional words around them. For example, "president of nrgi" is detected by the model but not "nrgi" alone. But the model output does convey more information than the latter. To account for this for all models (and to make the comparison fair), partial match has been enabled: if the correct NER is part of the predicted NER, then the prediction is marked as matched. This could be the reason for the good score for Spacy. Note that partial match is disabled for the Wikipedia Titles match task shown in Table 3. Here, our model outperformed all the other models. Conclusion and Future Work Through this exercise, we were able to identify the most suitable model architecture and data preparation steps so that similar models can be trained for Indian languages. Building cased or caseless NERs for English was not the final goal, and this has already been benchmarked and explored in the previous approaches explained in the "Related Work" section. We didn't use traditional datasets for model performance comparisons and benchmarks. As mentioned before, all the comparisons are done with open-source models and libraries from the productionization point of view. We used an English-news validation dataset which is important and relevant to our specific task, and all validation datasets and raw output results can be found at our github link . Wikipedia titles for Indian languages are very few, and the resulting tagged data is even scarcer, too little to train deep architectures.
We are trying out translations/transliterations of the English Wiki titles to improve Indic-language entity/topic data. This approach is also useful in building news-summarization models, as it detects almost all important n-grams present in the news. The output of this model can be fed into a summarization network to add more bias towards important words and their inclusion.
Between their model and Stanford, Spacy and Flair, the recall differences are 42.91, 25.03 and 69.8 with traditional NERs as reference, and 49.88, 43.36 and 62.43 with Wikipedia titles as reference.
1d74fd1d38a5532d20ffae4abbadaeda225b6932
1d74fd1d38a5532d20ffae4abbadaeda225b6932_0
Q: What is their f1 score and recall? Text: Introduction & Related Work Named-Entity-Recognition(NER) approaches can be categorised broadly in three types. Detecting NER with predefined dictionaries and rulesBIBREF2, with some statistical approachesBIBREF3 and with deep learning approachesBIBREF4. Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5. Authors have used approaches of Gibbs sampling and conditional random field (CRF) for non-local information gathering and then Viterbi algorithm to infer the most likely state in the CRF sequence outputBIBREF6. Deep learning approaches in NLP use document, word or token representations instead of one-hot encoded vectors. With the rise of transfer learning, pretrained Word2VecBIBREF7, GloVeBIBREF8, fasttextBIBREF9 which provides word embeddings were being used with recurrent neural networks (RNN) to detect NERs. Using LSTM layers followed by CRF layes with pretrained word-embeddings as input has been explored hereBIBREF10. Also, CNNs with character embeddings as inputs followed by bi-directional LSTM and CRF layers, were explored hereBIBREF11. With the introduction of attentions and transformersBIBREF12 many deep architectures emerged in last few years. Approach of using these pretrained models like ElmoBIBREF13, FlairBIBREF14 and BERTBIBREF0 for word representations followed by variety of LSMT and CRF combinations were tested by authors in BIBREF15 and these approaches show state-of-the-art performance. There are very few approaches where caseless NER task is explored. In this recent paperBIBREF16 authors have explored effects of "Cased" entities and how variety of networks perform and they show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text. In another paperBIBREF17, authors have proposed True-Case pre-training before using BiLSTM+CRF approach to detect NERs effectively. Though it shows good results over previous approaches, it is not useful in Indian Languages context as there is no concept of cases. In our approach, we are focusing more on data preparation for our definition of topics using some of the state-of-art architectures based on BERT, LSTM/GRU and CRF layers as they have been explored in previous approaches mentioned above. Detecting caseless topics with higher recall and reasonable precision has been given a priority over f1 score. And comparisons have been made with available and ready-to-use open-source libraries from the productionization perspective. Data Preparation We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. 
To remove such titles, we deployed simple rules as follows - Remove titles with common words : "are", "the", "which" Remove titles with numeric values : 29, 101 Remove titles with technical components, driver names, transistor names : X00, lga-775 Remove 1-gram titles except locations (almost 80% of these also appear in remaining n-gram titles) After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years. We then created a parallel corpus format as shown in Table 1. Using pre-trained Bert-Tokenizer from hugging-face, converted words in sentences to tokenes. Caseless-BERT pre-trained tokenizer is used. Notice that some of the topic words are broken into tokens and NER tag has been repeated accordingly. For example, in Table 1 second row, word "harassment" is broken into "har ##ass ##ment". Similarly, one "NER" tag is repeated three times to keep the length of sequence-pair same. Finally, for around 3 million news articles, parallel corpus is created, which is of around 150 million sentences, with around 3 billion words (all lower cased) and with around 5 billion tokens approximately. Experiments ::: Model Architecture We tried multiple variations of LSTM and GRU layes, with/without CRF layer. There is a marginal gain in using GRU layers over LSTM. Also, we saw gain in using just one layers of GRU instead of more. Finally, we settled on the architecture, shown in Figure 1 for the final training, based on validation set scores with sample training set. Text had to be tokenized using pytorch-pretrained-bert as explained above before passing to the network. Architecture is built using tensorflow/keras. Coding inspiration taken from BERT-keras and for CRF layer keras-contrib. If one is more comfortable in pytorch there are many examples available on github, but pytorch-bert-crf-ner is better for an easy start. We used BERT-Multilingual model so that we can train and fine-tune the same model for other Indian languages. You can take BERT-base or BERT-large for better performance with only English dataset. Or you can use DistilBERT for English and DistilmBERT for 104 languages for faster pre-training and inferences. Also, we did not choose AutoML approach for hyper-parameter tuning which could have resulted in much more accurate results but at the same time could have taken very long time as well. So instead, chose and tweaked the parameters based on initial results. We trained two models, one with sequence length 512 to capture document level important n-grams and second with sequence length 64 to capture sentence/paragraph level important n-grams. Through experiments it was evident that, sequence length plays a vital role in deciding context and locally/globally important n-grams. 
Final output is a concatenation of both the model outputs. Experiments ::: Training Trained the topic model on single 32gb NVidia-V100 and it took around 50 hours to train the model with sequence length 512. We had to take 256gb ram machine to accommodate all data in memory for faster read/write. Also, trained model with 64 sequence length in around 17 hours. It is very important to note that sequence length decides how many bert-tokens you can pass for inference and also decides training time and accuracy. Ideally more is better because inference would be faster as well. For 64 sequence length, we are moving 64-token window over whole token-text and recognising topics in each window. So, one should choose sequence length according to their use case. Also, we have explained before our motivation of choosing 2 separate sequence lengths models. We stopped the training for both the models when it crossed 70% precision, 90% recall on training and testing sets, as we were just looking to get maximum recall and not bothered about precision in our case. Both the models reach this point at around 16 epochs. Experiments ::: Results Comparison with existing open-source NER libraries is not exactly fair as they are NOT trained for detecting topics and important n-grams, also NOT trained for case-less text. But they are useful in testing and benchmarking if our model is detecting traditional NERs or not, which it should capture, as Wikipedia titles contains almost all Names, Places and Organisation names. You can check the sample output here Comparisons have been made among Flair-NER, Stanford-caseless-NER (used english.conll.4class.caseless as it performed better than 3class and 7class), Spacy-NER and our models. Of which only Stanford-NER provides case-less models. In Table 2, scores are calculated by taking traditional NER list as reference. In Table 4, same is done with Wikipedia Titles reference set. As you can see in Table 2 & 3, recall is great for our model but precision is not good as Model is also trying to detect new potential topics which are not there even in reference Wikipedia-Titles and NER sets. In capturing Wikipedia topics our model clearly surpasses other models in all scores. Spacy results are good despite not being trained for case-less data. In terms of F1 and overall stability Spacy did better than Stanford NER, on our News Validation set. Similarly, Stanford did well in Precision but could not catch up with Spacy and our model in terms of Recall. Flair overall performed poorly, but as said before these open-source models are not trained for our particular use-case. Experiments ::: Discussions Lets check some examples for detailed analysis of the models and their results. Following is the economy related news. Example 1 : around $1–1.5 trillion or around two percent of global gdp, are lost to corruption every year, president of the natural resource governance institute nrgi has said. speaking at a panel on integrity in public governance during the world bank group and international monetary fund annual meeting on sunday, daniel kaufmann, president of nrgi, presented the statistic, result of a study by the nrgi, an independent, non-profit organisation based in new york. however, according to kaufmann, the figure is only the direct costs of corruption as it does not factor in the opportunities lost on innovation and productivity, xinhua news agency reported. 
a country that addresses corruption and significantly improves rule of law can expect a huge increase in per capita income in the long run, the study showed. it will also see similar gains in reducing infant mortality and improving education, said kaufmann. Detected NERs can be seen per model in Table 4. Our model do not capture numbers as we have removed all numbers from my wiki-titles as topics. Reason behind the same is that we can easily write regex to detect currency, prices, time, date and deep learning is not required for the same. Following are few important n-grams only our models was able to capture - capita income infant mortality international monetary fund annual meeting natural resource governance institute public governance At the same time, we can see that Spacy did much better than Stanford-caseless NER and Flair could not capture any of the NERs. Another example of a news in political domain and detected NERs can be seen per model in Table 5. Example 2 : wearing the aam aadmi party's trademark cap and with copies of the party's five-year report card in hand, sunita kejriwal appears completely at ease. it's a cold winter afternoon in delhi, as the former indian revenue service (irs) officer hits the campaign trail to support her husband and batchmate, chief minister arvind kejriwal. emerging from the background for the first time, she is lending her shoulder to the aap bandwagon in the new delhi assembly constituency from where the cm, then a political novice, had emerged as the giant killer by defeating congress incumbent sheila dikshit in 2013. Correct n-grams captured only by our model are - aam aadmi party aap bandwagon delhi assembly constituency giant killer indian revenue service political novice In this example, Stanford model did better and captured names properly, for example "sheila dikshit" which Spacy could not detect but Spacy captureed almost all numeric values along with numbers expressed in words. It is important to note that, our model captures NERs with some additional words around them. For example, "president of nrgi" is detected by the model but not "ngri". But model output does convey more information than the later. To capture the same for all models (and to make comparison fair), partial match has been enabled and if correct NER is part of predictied NER then later one is marked as matched. This could be the reason for good score for Spacy. Note that, partial match is disabled for Wikipedia Titles match task as shown in Table 3. Here, our model outperformed all the models. Conclusion and Future Work Through this exercise, we were able to test out the best suitable model architecture and data preparation steps so that similar models could be trained for Indian languages. Building cased or caseless NERs for English was not the final goal and this has already been benchmarked and explored before in previous approaches explained in "Related Work" section. We didn't use traditional datasets for model performance comparisons & benchmarks. As mentioned before, all the comparisons are being done with open-source models and libraries from the productionization point of view. We used a english-news validation dataset which is important and relevant to our specific task and all validation datasets and raw output results can be found at our github link . Wikipedia titles for Indian languages are very very less and resulting tagged data is even less to run deep architectures. 
We are trying out translations/transliterations of the English Wiki titles to improve Indic-language entity/topic data. This approach is also useful in building news-summarization models, as it detects almost all important n-grams present in the news. The output of this model can be fed into a summarization network to add more bias towards important words and their inclusion.
F1 score and Recall are 68.66, 80.08 with Traditional NERs as reference and 59.56, 69.76 with Wikipedia titles as reference.
da8bda963f179f5517a864943dc0ee71249ee1ce
da8bda963f179f5517a864943dc0ee71249ee1ce_0
Q: How many layers does their system have? Text: Introduction & Related Work Named-Entity-Recognition(NER) approaches can be categorised broadly in three types. Detecting NER with predefined dictionaries and rulesBIBREF2, with some statistical approachesBIBREF3 and with deep learning approachesBIBREF4. Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5. Authors have used approaches of Gibbs sampling and conditional random field (CRF) for non-local information gathering and then Viterbi algorithm to infer the most likely state in the CRF sequence outputBIBREF6. Deep learning approaches in NLP use document, word or token representations instead of one-hot encoded vectors. With the rise of transfer learning, pretrained Word2VecBIBREF7, GloVeBIBREF8, fasttextBIBREF9 which provides word embeddings were being used with recurrent neural networks (RNN) to detect NERs. Using LSTM layers followed by CRF layes with pretrained word-embeddings as input has been explored hereBIBREF10. Also, CNNs with character embeddings as inputs followed by bi-directional LSTM and CRF layers, were explored hereBIBREF11. With the introduction of attentions and transformersBIBREF12 many deep architectures emerged in last few years. Approach of using these pretrained models like ElmoBIBREF13, FlairBIBREF14 and BERTBIBREF0 for word representations followed by variety of LSMT and CRF combinations were tested by authors in BIBREF15 and these approaches show state-of-the-art performance. There are very few approaches where caseless NER task is explored. In this recent paperBIBREF16 authors have explored effects of "Cased" entities and how variety of networks perform and they show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text. In another paperBIBREF17, authors have proposed True-Case pre-training before using BiLSTM+CRF approach to detect NERs effectively. Though it shows good results over previous approaches, it is not useful in Indian Languages context as there is no concept of cases. In our approach, we are focusing more on data preparation for our definition of topics using some of the state-of-art architectures based on BERT, LSTM/GRU and CRF layers as they have been explored in previous approaches mentioned above. Detecting caseless topics with higher recall and reasonable precision has been given a priority over f1 score. And comparisons have been made with available and ready-to-use open-source libraries from the productionization perspective. Data Preparation We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. 
To remove such titles, we deployed simple rules as follows - Remove titles with common words : "are", "the", "which" Remove titles with numeric values : 29, 101 Remove titles with technical components, driver names, transistor names : X00, lga-775 Remove 1-gram titles except locations (almost 80% of these also appear in remaining n-gram titles) After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years. We then created a parallel corpus format as shown in Table 1. Using pre-trained Bert-Tokenizer from hugging-face, converted words in sentences to tokenes. Caseless-BERT pre-trained tokenizer is used. Notice that some of the topic words are broken into tokens and NER tag has been repeated accordingly. For example, in Table 1 second row, word "harassment" is broken into "har ##ass ##ment". Similarly, one "NER" tag is repeated three times to keep the length of sequence-pair same. Finally, for around 3 million news articles, parallel corpus is created, which is of around 150 million sentences, with around 3 billion words (all lower cased) and with around 5 billion tokens approximately. Experiments ::: Model Architecture We tried multiple variations of LSTM and GRU layes, with/without CRF layer. There is a marginal gain in using GRU layers over LSTM. Also, we saw gain in using just one layers of GRU instead of more. Finally, we settled on the architecture, shown in Figure 1 for the final training, based on validation set scores with sample training set. Text had to be tokenized using pytorch-pretrained-bert as explained above before passing to the network. Architecture is built using tensorflow/keras. Coding inspiration taken from BERT-keras and for CRF layer keras-contrib. If one is more comfortable in pytorch there are many examples available on github, but pytorch-bert-crf-ner is better for an easy start. We used BERT-Multilingual model so that we can train and fine-tune the same model for other Indian languages. You can take BERT-base or BERT-large for better performance with only English dataset. Or you can use DistilBERT for English and DistilmBERT for 104 languages for faster pre-training and inferences. Also, we did not choose AutoML approach for hyper-parameter tuning which could have resulted in much more accurate results but at the same time could have taken very long time as well. So instead, chose and tweaked the parameters based on initial results. We trained two models, one with sequence length 512 to capture document level important n-grams and second with sequence length 64 to capture sentence/paragraph level important n-grams. Through experiments it was evident that, sequence length plays a vital role in deciding context and locally/globally important n-grams. 
Final output is a concatenation of both the model outputs. Experiments ::: Training Trained the topic model on single 32gb NVidia-V100 and it took around 50 hours to train the model with sequence length 512. We had to take 256gb ram machine to accommodate all data in memory for faster read/write. Also, trained model with 64 sequence length in around 17 hours. It is very important to note that sequence length decides how many bert-tokens you can pass for inference and also decides training time and accuracy. Ideally more is better because inference would be faster as well. For 64 sequence length, we are moving 64-token window over whole token-text and recognising topics in each window. So, one should choose sequence length according to their use case. Also, we have explained before our motivation of choosing 2 separate sequence lengths models. We stopped the training for both the models when it crossed 70% precision, 90% recall on training and testing sets, as we were just looking to get maximum recall and not bothered about precision in our case. Both the models reach this point at around 16 epochs. Experiments ::: Results Comparison with existing open-source NER libraries is not exactly fair as they are NOT trained for detecting topics and important n-grams, also NOT trained for case-less text. But they are useful in testing and benchmarking if our model is detecting traditional NERs or not, which it should capture, as Wikipedia titles contains almost all Names, Places and Organisation names. You can check the sample output here Comparisons have been made among Flair-NER, Stanford-caseless-NER (used english.conll.4class.caseless as it performed better than 3class and 7class), Spacy-NER and our models. Of which only Stanford-NER provides case-less models. In Table 2, scores are calculated by taking traditional NER list as reference. In Table 4, same is done with Wikipedia Titles reference set. As you can see in Table 2 & 3, recall is great for our model but precision is not good as Model is also trying to detect new potential topics which are not there even in reference Wikipedia-Titles and NER sets. In capturing Wikipedia topics our model clearly surpasses other models in all scores. Spacy results are good despite not being trained for case-less data. In terms of F1 and overall stability Spacy did better than Stanford NER, on our News Validation set. Similarly, Stanford did well in Precision but could not catch up with Spacy and our model in terms of Recall. Flair overall performed poorly, but as said before these open-source models are not trained for our particular use-case. Experiments ::: Discussions Lets check some examples for detailed analysis of the models and their results. Following is the economy related news. Example 1 : around $1–1.5 trillion or around two percent of global gdp, are lost to corruption every year, president of the natural resource governance institute nrgi has said. speaking at a panel on integrity in public governance during the world bank group and international monetary fund annual meeting on sunday, daniel kaufmann, president of nrgi, presented the statistic, result of a study by the nrgi, an independent, non-profit organisation based in new york. however, according to kaufmann, the figure is only the direct costs of corruption as it does not factor in the opportunities lost on innovation and productivity, xinhua news agency reported. 
a country that addresses corruption and significantly improves rule of law can expect a huge increase in per capita income in the long run, the study showed. it will also see similar gains in reducing infant mortality and improving education, said kaufmann. The NERs detected by each model can be seen in Table 4. Our model does not capture numbers, as we removed all numbers from the wiki-titles used as topics; the reason is that regexes can easily detect currency, prices, times and dates, and deep learning is not required for that. The following are a few important n-grams that only our model was able to capture:
- capita income
- infant mortality
- international monetary fund annual meeting
- natural resource governance institute
- public governance
At the same time, we can see that Spacy did much better than Stanford-caseless NER, and Flair could not capture any of the NERs. Another example, a news item from the political domain, and the NERs detected per model can be seen in Table 5. Example 2 : wearing the aam aadmi party's trademark cap and with copies of the party's five-year report card in hand, sunita kejriwal appears completely at ease. it's a cold winter afternoon in delhi, as the former indian revenue service (irs) officer hits the campaign trail to support her husband and batchmate, chief minister arvind kejriwal. emerging from the background for the first time, she is lending her shoulder to the aap bandwagon in the new delhi assembly constituency from where the cm, then a political novice, had emerged as the giant killer by defeating congress incumbent sheila dikshit in 2013. Correct n-grams captured only by our model are:
- aam aadmi party
- aap bandwagon
- delhi assembly constituency
- giant killer
- indian revenue service
- political novice
In this example the Stanford model did better and captured names properly, for example "sheila dikshit", which Spacy could not detect; Spacy, however, captured almost all numeric values along with numbers expressed in words. It is important to note that our model captures NERs with some additional words around them: for example, "president of nrgi" is detected by the model but not "nrgi" alone. The model output does, however, convey more information than the latter. To account for this across all models (and to make the comparison fair), partial matching has been enabled: if the correct NER is part of the predicted NER, the prediction is marked as matched. This could be the reason for Spacy's good score. Note that partial matching is disabled for the Wikipedia Titles match task shown in Table 3; there, our model outperformed all the other models. Conclusion and Future Work Through this exercise, we were able to work out the most suitable model architecture and data preparation steps so that similar models can be trained for Indian languages. Building cased or caseless NERs for English was not the final goal, and this has already been benchmarked and explored in the previous approaches discussed in the "Related Work" section. We did not use traditional datasets for model performance comparisons & benchmarks; as mentioned before, all comparisons are done against open-source models and libraries from a productionization point of view. We used an English-news validation dataset which is important and relevant to our specific task, and all validation datasets and raw output results can be found at our github link. Wikipedia titles for Indian languages are very scarce, and the resulting tagged data is even scarcer, making it hard to train deep architectures. 
We are trying out translations/transliterations of the English Wiki-Titles to improve entity/topic data for Indic languages. This approach is also useful for building news-summarization models, as it detects almost all important n-grams present in the news. The output of this model can be fed into a summarization network to add more bias towards important words and towards their inclusion in the summary.
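As a side note on the evaluation protocol described above, the partial-match scoring used in the NER comparison (a prediction counts as a match if the reference entity is contained in it) can be expressed in a few lines. The function below is an illustrative sketch with made-up example spans, not the exact scoring script used for Tables 2-5.

```python
# Illustrative sketch of partial-match precision/recall scoring (not the paper's exact script).
def partial_match(reference, prediction):
    """A predicted span matches if the reference entity is contained in it (or vice versa)."""
    return reference in prediction or prediction in reference

def score(reference_entities, predicted_entities, allow_partial=True):
    match = partial_match if allow_partial else (lambda r, p: r == p)
    tp_pred = sum(any(match(r, p) for r in reference_entities) for p in predicted_entities)
    tp_ref = sum(any(match(r, p) for p in predicted_entities) for r in reference_entities)
    precision = tp_pred / len(predicted_entities) if predicted_entities else 0.0
    recall = tp_ref / len(reference_entities) if reference_entities else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical reference NERs vs. one model's predictions for Example 1
reference = ["nrgi", "daniel kaufmann", "new york", "xinhua news agency"]
predicted = ["president of nrgi", "daniel kaufmann", "world bank group"]
print(score(reference, predicted))                        # partial match enabled
print(score(reference, predicted, allow_partial=False))   # exact match only (Wikipedia-Titles task)
```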
4 layers
5c059a13d59947f30877bed7d0180cca20a83284
5c059a13d59947f30877bed7d0180cca20a83284_0
Q: Which news corpus is used? Text: Introduction & Related Work Named-Entity-Recognition(NER) approaches can be categorised broadly in three types. Detecting NER with predefined dictionaries and rulesBIBREF2, with some statistical approachesBIBREF3 and with deep learning approachesBIBREF4. Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5. Authors have used approaches of Gibbs sampling and conditional random field (CRF) for non-local information gathering and then Viterbi algorithm to infer the most likely state in the CRF sequence outputBIBREF6. Deep learning approaches in NLP use document, word or token representations instead of one-hot encoded vectors. With the rise of transfer learning, pretrained Word2VecBIBREF7, GloVeBIBREF8, fasttextBIBREF9 which provides word embeddings were being used with recurrent neural networks (RNN) to detect NERs. Using LSTM layers followed by CRF layes with pretrained word-embeddings as input has been explored hereBIBREF10. Also, CNNs with character embeddings as inputs followed by bi-directional LSTM and CRF layers, were explored hereBIBREF11. With the introduction of attentions and transformersBIBREF12 many deep architectures emerged in last few years. Approach of using these pretrained models like ElmoBIBREF13, FlairBIBREF14 and BERTBIBREF0 for word representations followed by variety of LSMT and CRF combinations were tested by authors in BIBREF15 and these approaches show state-of-the-art performance. There are very few approaches where caseless NER task is explored. In this recent paperBIBREF16 authors have explored effects of "Cased" entities and how variety of networks perform and they show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text. In another paperBIBREF17, authors have proposed True-Case pre-training before using BiLSTM+CRF approach to detect NERs effectively. Though it shows good results over previous approaches, it is not useful in Indian Languages context as there is no concept of cases. In our approach, we are focusing more on data preparation for our definition of topics using some of the state-of-art architectures based on BERT, LSTM/GRU and CRF layers as they have been explored in previous approaches mentioned above. Detecting caseless topics with higher recall and reasonable precision has been given a priority over f1 score. And comparisons have been made with available and ready-to-use open-source libraries from the productionization perspective. Data Preparation We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. 
To remove such titles, we deployed simple rules as follows - Remove titles with common words : "are", "the", "which" Remove titles with numeric values : 29, 101 Remove titles with technical components, driver names, transistor names : X00, lga-775 Remove 1-gram titles except locations (almost 80% of these also appear in remaining n-gram titles) After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years. We then created a parallel corpus format as shown in Table 1. Using pre-trained Bert-Tokenizer from hugging-face, converted words in sentences to tokenes. Caseless-BERT pre-trained tokenizer is used. Notice that some of the topic words are broken into tokens and NER tag has been repeated accordingly. For example, in Table 1 second row, word "harassment" is broken into "har ##ass ##ment". Similarly, one "NER" tag is repeated three times to keep the length of sequence-pair same. Finally, for around 3 million news articles, parallel corpus is created, which is of around 150 million sentences, with around 3 billion words (all lower cased) and with around 5 billion tokens approximately. Experiments ::: Model Architecture We tried multiple variations of LSTM and GRU layes, with/without CRF layer. There is a marginal gain in using GRU layers over LSTM. Also, we saw gain in using just one layers of GRU instead of more. Finally, we settled on the architecture, shown in Figure 1 for the final training, based on validation set scores with sample training set. Text had to be tokenized using pytorch-pretrained-bert as explained above before passing to the network. Architecture is built using tensorflow/keras. Coding inspiration taken from BERT-keras and for CRF layer keras-contrib. If one is more comfortable in pytorch there are many examples available on github, but pytorch-bert-crf-ner is better for an easy start. We used BERT-Multilingual model so that we can train and fine-tune the same model for other Indian languages. You can take BERT-base or BERT-large for better performance with only English dataset. Or you can use DistilBERT for English and DistilmBERT for 104 languages for faster pre-training and inferences. Also, we did not choose AutoML approach for hyper-parameter tuning which could have resulted in much more accurate results but at the same time could have taken very long time as well. So instead, chose and tweaked the parameters based on initial results. We trained two models, one with sequence length 512 to capture document level important n-grams and second with sequence length 64 to capture sentence/paragraph level important n-grams. Through experiments it was evident that, sequence length plays a vital role in deciding context and locally/globally important n-grams. 
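Returning to the corpus construction step described above: when the caseless BERT tokenizer splits a topic word into word-pieces, the NER tag is repeated once per piece so that the token and tag sequences stay aligned. A minimal sketch of that alignment step, using the hugging-face tokenizer, is shown below; the surrounding words and tag names are illustrative assumptions rather than rows copied from Table 1.

```python
# Minimal sketch of word-piece/label alignment for the parallel corpus (tag names are illustrative).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")

def align_tokens_and_tags(words, word_tags):
    """Tokenize each word into word-pieces and repeat its tag once per piece."""
    tokens, tags = [], []
    for word, tag in zip(words, word_tags):
        pieces = tokenizer.tokenize(word)
        tokens.extend(pieces)
        tags.extend([tag] * len(pieces))
    return tokens, tags

words = ["facing", "sexual", "harassment", "allegations"]
word_tags = ["O", "NER", "NER", "O"]
tokens, tags = align_tokens_and_tags(words, word_tags)
print(tokens)  # e.g. ['facing', 'sexual', 'har', '##ass', '##ment', 'allegations']
print(tags)    # ['O', 'NER', 'NER', 'NER', 'NER', 'O']
```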
Final output is a concatenation of both the model outputs. Experiments ::: Training Trained the topic model on single 32gb NVidia-V100 and it took around 50 hours to train the model with sequence length 512. We had to take 256gb ram machine to accommodate all data in memory for faster read/write. Also, trained model with 64 sequence length in around 17 hours. It is very important to note that sequence length decides how many bert-tokens you can pass for inference and also decides training time and accuracy. Ideally more is better because inference would be faster as well. For 64 sequence length, we are moving 64-token window over whole token-text and recognising topics in each window. So, one should choose sequence length according to their use case. Also, we have explained before our motivation of choosing 2 separate sequence lengths models. We stopped the training for both the models when it crossed 70% precision, 90% recall on training and testing sets, as we were just looking to get maximum recall and not bothered about precision in our case. Both the models reach this point at around 16 epochs. Experiments ::: Results Comparison with existing open-source NER libraries is not exactly fair as they are NOT trained for detecting topics and important n-grams, also NOT trained for case-less text. But they are useful in testing and benchmarking if our model is detecting traditional NERs or not, which it should capture, as Wikipedia titles contains almost all Names, Places and Organisation names. You can check the sample output here Comparisons have been made among Flair-NER, Stanford-caseless-NER (used english.conll.4class.caseless as it performed better than 3class and 7class), Spacy-NER and our models. Of which only Stanford-NER provides case-less models. In Table 2, scores are calculated by taking traditional NER list as reference. In Table 4, same is done with Wikipedia Titles reference set. As you can see in Table 2 & 3, recall is great for our model but precision is not good as Model is also trying to detect new potential topics which are not there even in reference Wikipedia-Titles and NER sets. In capturing Wikipedia topics our model clearly surpasses other models in all scores. Spacy results are good despite not being trained for case-less data. In terms of F1 and overall stability Spacy did better than Stanford NER, on our News Validation set. Similarly, Stanford did well in Precision but could not catch up with Spacy and our model in terms of Recall. Flair overall performed poorly, but as said before these open-source models are not trained for our particular use-case. Experiments ::: Discussions Lets check some examples for detailed analysis of the models and their results. Following is the economy related news. Example 1 : around $1–1.5 trillion or around two percent of global gdp, are lost to corruption every year, president of the natural resource governance institute nrgi has said. speaking at a panel on integrity in public governance during the world bank group and international monetary fund annual meeting on sunday, daniel kaufmann, president of nrgi, presented the statistic, result of a study by the nrgi, an independent, non-profit organisation based in new york. however, according to kaufmann, the figure is only the direct costs of corruption as it does not factor in the opportunities lost on innovation and productivity, xinhua news agency reported. 
a country that addresses corruption and significantly improves rule of law can expect a huge increase in per capita income in the long run, the study showed. it will also see similar gains in reducing infant mortality and improving education, said kaufmann. Detected NERs can be seen per model in Table 4. Our model do not capture numbers as we have removed all numbers from my wiki-titles as topics. Reason behind the same is that we can easily write regex to detect currency, prices, time, date and deep learning is not required for the same. Following are few important n-grams only our models was able to capture - capita income infant mortality international monetary fund annual meeting natural resource governance institute public governance At the same time, we can see that Spacy did much better than Stanford-caseless NER and Flair could not capture any of the NERs. Another example of a news in political domain and detected NERs can be seen per model in Table 5. Example 2 : wearing the aam aadmi party's trademark cap and with copies of the party's five-year report card in hand, sunita kejriwal appears completely at ease. it's a cold winter afternoon in delhi, as the former indian revenue service (irs) officer hits the campaign trail to support her husband and batchmate, chief minister arvind kejriwal. emerging from the background for the first time, she is lending her shoulder to the aap bandwagon in the new delhi assembly constituency from where the cm, then a political novice, had emerged as the giant killer by defeating congress incumbent sheila dikshit in 2013. Correct n-grams captured only by our model are - aam aadmi party aap bandwagon delhi assembly constituency giant killer indian revenue service political novice In this example, Stanford model did better and captured names properly, for example "sheila dikshit" which Spacy could not detect but Spacy captureed almost all numeric values along with numbers expressed in words. It is important to note that, our model captures NERs with some additional words around them. For example, "president of nrgi" is detected by the model but not "ngri". But model output does convey more information than the later. To capture the same for all models (and to make comparison fair), partial match has been enabled and if correct NER is part of predictied NER then later one is marked as matched. This could be the reason for good score for Spacy. Note that, partial match is disabled for Wikipedia Titles match task as shown in Table 3. Here, our model outperformed all the models. Conclusion and Future Work Through this exercise, we were able to test out the best suitable model architecture and data preparation steps so that similar models could be trained for Indian languages. Building cased or caseless NERs for English was not the final goal and this has already been benchmarked and explored before in previous approaches explained in "Related Work" section. We didn't use traditional datasets for model performance comparisons & benchmarks. As mentioned before, all the comparisons are being done with open-source models and libraries from the productionization point of view. We used a english-news validation dataset which is important and relevant to our specific task and all validation datasets and raw output results can be found at our github link . Wikipedia titles for Indian languages are very very less and resulting tagged data is even less to run deep architectures. 
We are trying out translations/transliterations of the English-Wiki-Titles to improve Indic-languages entity/topics data. This approach is also useful in building news-summarizing models as it detects almost all important n-grams present in the news. Output of this model can be introduced in a summarization network to add more bias towards important words and bias for their inclusion.
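As an aside on the inference procedure mentioned in the Training subsection above: for the sequence-length-64 model, topics are recognised by sliding a 64-token window over the full token sequence. A hedged sketch of that loop is given below; the predict_tags function stands in for the trained BERT+GRU+CRF tagger and is an assumption, not part of the paper's released code, and the stride choice is likewise illustrative.

```python
# Sketch of sliding-window inference for the sequence-length-64 model.
# `predict_tags` is a placeholder for the trained tagger; window/stride values are illustrative.
SEQ_LEN = 64

def predict_tags(token_window):
    """Placeholder: would run the trained BERT+GRU+CRF tagger on one window of tokens."""
    raise NotImplementedError

def sliding_window_topics(tokens, seq_len=SEQ_LEN, stride=SEQ_LEN):
    """Move a fixed-size window over the token sequence and collect detected topic spans."""
    topics = set()
    for start in range(0, len(tokens), stride):
        window = tokens[start:start + seq_len]
        tags = predict_tags(window)
        # Collect maximal runs of tokens tagged as part of a topic
        # (word-pieces such as '##ment' would be merged back into words in post-processing).
        current = []
        for token, tag in zip(window, tags):
            if tag != "O":
                current.append(token)
            elif current:
                topics.add(" ".join(current))
                current = []
        if current:
            topics.add(" ".join(current))
    return topics
```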
Unanswerable
a1885f807753cff7a59f69b5cf6d0fdef8484057
a1885f807753cff7a59f69b5cf6d0fdef8484057_0
Q: How large is the dataset they used? Text: Introduction & Related Work Named-Entity-Recognition(NER) approaches can be categorised broadly in three types. Detecting NER with predefined dictionaries and rulesBIBREF2, with some statistical approachesBIBREF3 and with deep learning approachesBIBREF4. Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5. Authors have used approaches of Gibbs sampling and conditional random field (CRF) for non-local information gathering and then Viterbi algorithm to infer the most likely state in the CRF sequence outputBIBREF6. Deep learning approaches in NLP use document, word or token representations instead of one-hot encoded vectors. With the rise of transfer learning, pretrained Word2VecBIBREF7, GloVeBIBREF8, fasttextBIBREF9 which provides word embeddings were being used with recurrent neural networks (RNN) to detect NERs. Using LSTM layers followed by CRF layes with pretrained word-embeddings as input has been explored hereBIBREF10. Also, CNNs with character embeddings as inputs followed by bi-directional LSTM and CRF layers, were explored hereBIBREF11. With the introduction of attentions and transformersBIBREF12 many deep architectures emerged in last few years. Approach of using these pretrained models like ElmoBIBREF13, FlairBIBREF14 and BERTBIBREF0 for word representations followed by variety of LSMT and CRF combinations were tested by authors in BIBREF15 and these approaches show state-of-the-art performance. There are very few approaches where caseless NER task is explored. In this recent paperBIBREF16 authors have explored effects of "Cased" entities and how variety of networks perform and they show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text. In another paperBIBREF17, authors have proposed True-Case pre-training before using BiLSTM+CRF approach to detect NERs effectively. Though it shows good results over previous approaches, it is not useful in Indian Languages context as there is no concept of cases. In our approach, we are focusing more on data preparation for our definition of topics using some of the state-of-art architectures based on BERT, LSTM/GRU and CRF layers as they have been explored in previous approaches mentioned above. Detecting caseless topics with higher recall and reasonable precision has been given a priority over f1 score. And comparisons have been made with available and ready-to-use open-source libraries from the productionization perspective. Data Preparation We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. 
To remove such titles, we deployed simple rules as follows - Remove titles with common words : "are", "the", "which" Remove titles with numeric values : 29, 101 Remove titles with technical components, driver names, transistor names : X00, lga-775 Remove 1-gram titles except locations (almost 80% of these also appear in remaining n-gram titles) After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years. We then created a parallel corpus format as shown in Table 1. Using pre-trained Bert-Tokenizer from hugging-face, converted words in sentences to tokenes. Caseless-BERT pre-trained tokenizer is used. Notice that some of the topic words are broken into tokens and NER tag has been repeated accordingly. For example, in Table 1 second row, word "harassment" is broken into "har ##ass ##ment". Similarly, one "NER" tag is repeated three times to keep the length of sequence-pair same. Finally, for around 3 million news articles, parallel corpus is created, which is of around 150 million sentences, with around 3 billion words (all lower cased) and with around 5 billion tokens approximately. Experiments ::: Model Architecture We tried multiple variations of LSTM and GRU layes, with/without CRF layer. There is a marginal gain in using GRU layers over LSTM. Also, we saw gain in using just one layers of GRU instead of more. Finally, we settled on the architecture, shown in Figure 1 for the final training, based on validation set scores with sample training set. Text had to be tokenized using pytorch-pretrained-bert as explained above before passing to the network. Architecture is built using tensorflow/keras. Coding inspiration taken from BERT-keras and for CRF layer keras-contrib. If one is more comfortable in pytorch there are many examples available on github, but pytorch-bert-crf-ner is better for an easy start. We used BERT-Multilingual model so that we can train and fine-tune the same model for other Indian languages. You can take BERT-base or BERT-large for better performance with only English dataset. Or you can use DistilBERT for English and DistilmBERT for 104 languages for faster pre-training and inferences. Also, we did not choose AutoML approach for hyper-parameter tuning which could have resulted in much more accurate results but at the same time could have taken very long time as well. So instead, chose and tweaked the parameters based on initial results. We trained two models, one with sequence length 512 to capture document level important n-grams and second with sequence length 64 to capture sentence/paragraph level important n-grams. Through experiments it was evident that, sequence length plays a vital role in deciding context and locally/globally important n-grams. 
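As an illustration of the title-filtering rules listed at the start of this section, a rough sketch of the cleaning pass might look as follows; the stop-word list, location gazetteer and regex pattern are simplified assumptions, not the full production rules.

```python
# Simplified sketch of the Wikipedia-title cleaning rules (word lists and patterns are illustrative).
import re

COMMON_WORDS = {"are", "the", "which", "is", "was", "of"}    # assumption: the real list is larger
TECHNICAL_PATTERN = re.compile(r"^[a-z]*-?\d+[a-z\d-]*$")    # e.g. "x00", "lga-775"
LOCATIONS = {"delhi", "new york", "mumbai"}                  # assumption: a location gazetteer is used

def keep_title(title: str) -> bool:
    title = title.lower().strip()
    words = title.split()
    if any(w in COMMON_WORDS for w in words):
        return False                         # rule 1: common words
    if any(w.isdigit() for w in words):
        return False                         # rule 2: numeric values
    if any(TECHNICAL_PATTERN.match(w) for w in words):
        return False                         # rule 3: technical components / driver / transistor names
    if len(words) == 1 and title not in LOCATIONS:
        return False                         # rule 4: 1-gram titles unless they are locations
    return True

titles = ["Aam Aadmi Party", "The", "LGA-775", "Delhi", "29", "Infant mortality"]
print([t for t in titles if keep_title(t)])  # ['Aam Aadmi Party', 'Delhi', 'Infant mortality']
```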
Final output is a concatenation of both the model outputs. Experiments ::: Training Trained the topic model on single 32gb NVidia-V100 and it took around 50 hours to train the model with sequence length 512. We had to take 256gb ram machine to accommodate all data in memory for faster read/write. Also, trained model with 64 sequence length in around 17 hours. It is very important to note that sequence length decides how many bert-tokens you can pass for inference and also decides training time and accuracy. Ideally more is better because inference would be faster as well. For 64 sequence length, we are moving 64-token window over whole token-text and recognising topics in each window. So, one should choose sequence length according to their use case. Also, we have explained before our motivation of choosing 2 separate sequence lengths models. We stopped the training for both the models when it crossed 70% precision, 90% recall on training and testing sets, as we were just looking to get maximum recall and not bothered about precision in our case. Both the models reach this point at around 16 epochs. Experiments ::: Results Comparison with existing open-source NER libraries is not exactly fair as they are NOT trained for detecting topics and important n-grams, also NOT trained for case-less text. But they are useful in testing and benchmarking if our model is detecting traditional NERs or not, which it should capture, as Wikipedia titles contains almost all Names, Places and Organisation names. You can check the sample output here Comparisons have been made among Flair-NER, Stanford-caseless-NER (used english.conll.4class.caseless as it performed better than 3class and 7class), Spacy-NER and our models. Of which only Stanford-NER provides case-less models. In Table 2, scores are calculated by taking traditional NER list as reference. In Table 4, same is done with Wikipedia Titles reference set. As you can see in Table 2 & 3, recall is great for our model but precision is not good as Model is also trying to detect new potential topics which are not there even in reference Wikipedia-Titles and NER sets. In capturing Wikipedia topics our model clearly surpasses other models in all scores. Spacy results are good despite not being trained for case-less data. In terms of F1 and overall stability Spacy did better than Stanford NER, on our News Validation set. Similarly, Stanford did well in Precision but could not catch up with Spacy and our model in terms of Recall. Flair overall performed poorly, but as said before these open-source models are not trained for our particular use-case. Experiments ::: Discussions Lets check some examples for detailed analysis of the models and their results. Following is the economy related news. Example 1 : around $1–1.5 trillion or around two percent of global gdp, are lost to corruption every year, president of the natural resource governance institute nrgi has said. speaking at a panel on integrity in public governance during the world bank group and international monetary fund annual meeting on sunday, daniel kaufmann, president of nrgi, presented the statistic, result of a study by the nrgi, an independent, non-profit organisation based in new york. however, according to kaufmann, the figure is only the direct costs of corruption as it does not factor in the opportunities lost on innovation and productivity, xinhua news agency reported. 
a country that addresses corruption and significantly improves rule of law can expect a huge increase in per capita income in the long run, the study showed. it will also see similar gains in reducing infant mortality and improving education, said kaufmann. Detected NERs can be seen per model in Table 4. Our model do not capture numbers as we have removed all numbers from my wiki-titles as topics. Reason behind the same is that we can easily write regex to detect currency, prices, time, date and deep learning is not required for the same. Following are few important n-grams only our models was able to capture - capita income infant mortality international monetary fund annual meeting natural resource governance institute public governance At the same time, we can see that Spacy did much better than Stanford-caseless NER and Flair could not capture any of the NERs. Another example of a news in political domain and detected NERs can be seen per model in Table 5. Example 2 : wearing the aam aadmi party's trademark cap and with copies of the party's five-year report card in hand, sunita kejriwal appears completely at ease. it's a cold winter afternoon in delhi, as the former indian revenue service (irs) officer hits the campaign trail to support her husband and batchmate, chief minister arvind kejriwal. emerging from the background for the first time, she is lending her shoulder to the aap bandwagon in the new delhi assembly constituency from where the cm, then a political novice, had emerged as the giant killer by defeating congress incumbent sheila dikshit in 2013. Correct n-grams captured only by our model are - aam aadmi party aap bandwagon delhi assembly constituency giant killer indian revenue service political novice In this example, Stanford model did better and captured names properly, for example "sheila dikshit" which Spacy could not detect but Spacy captureed almost all numeric values along with numbers expressed in words. It is important to note that, our model captures NERs with some additional words around them. For example, "president of nrgi" is detected by the model but not "ngri". But model output does convey more information than the later. To capture the same for all models (and to make comparison fair), partial match has been enabled and if correct NER is part of predictied NER then later one is marked as matched. This could be the reason for good score for Spacy. Note that, partial match is disabled for Wikipedia Titles match task as shown in Table 3. Here, our model outperformed all the models. Conclusion and Future Work Through this exercise, we were able to test out the best suitable model architecture and data preparation steps so that similar models could be trained for Indian languages. Building cased or caseless NERs for English was not the final goal and this has already been benchmarked and explored before in previous approaches explained in "Related Work" section. We didn't use traditional datasets for model performance comparisons & benchmarks. As mentioned before, all the comparisons are being done with open-source models and libraries from the productionization point of view. We used a english-news validation dataset which is important and relevant to our specific task and all validation datasets and raw output results can be found at our github link . Wikipedia titles for Indian languages are very very less and resulting tagged data is even less to run deep architectures. 
We are trying out translations/transliterations of the English-Wiki-Titles to improve Indic-languages entity/topics data. This approach is also useful in building news-summarizing models as it detects almost all important n-grams present in the news. Output of this model can be introduced in a summarization network to add more bias towards important words and bias for their inclusion.
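As a concrete illustration of the de-duplication step mentioned in the data preparation above (pre-trained doc2vec vectors plus cosine similarity to drop near-identical articles), a rough sketch with gensim is given below. The model path and the similarity threshold are assumptions, and a real run over 15 million articles would use approximate nearest-neighbour search rather than this quadratic loop.

```python
# Rough sketch of near-duplicate article removal with a pre-trained doc2vec model (gensim).
# The model path and the 0.9 similarity threshold are illustrative assumptions.
from gensim.models.doc2vec import Doc2Vec
from gensim.utils import simple_preprocess
import numpy as np

model = Doc2Vec.load("news_doc2vec.model")   # hypothetical path to the pre-trained model

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def deduplicate(articles, threshold=0.9):
    """Keep an article only if it is not too similar to any article already kept."""
    kept, kept_vectors = [], []
    for text in articles:
        vec = model.infer_vector(simple_preprocess(text))
        if all(cosine(vec, v) < threshold for v in kept_vectors):
            kept.append(text)
            kept_vectors.append(vec)
    return kept
```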
English wikipedia dataset has more than 18 million titles; a dump of 15 million English news articles
c2553166463b7b5ae4d9786f0446eb06a90af458
c2553166463b7b5ae4d9786f0446eb06a90af458_0
Q: Which coreference resolution systems are tested? Text: Introduction There is a classic riddle: A man and his son get into a terrible car crash. The father dies, and the boy is badly injured. In the hospital, the surgeon looks at the patient and exclaims, “I can't operate on this boy, he's my son!” How can this be? That a majority of people are reportedly unable to solve this riddle is taken as evidence of underlying implicit gender bias BIBREF0 : many first-time listeners have difficulty assigning both the role of “mother” and “surgeon” to the same entity. As the riddle reveals, the task of coreference resolution in English is tightly bound with questions of gender, for humans and automated systems alike (see Figure 1 ). As awareness grows of the ways in which data-driven AI technologies may acquire and amplify human-like biases BIBREF1 , BIBREF2 , BIBREF3 , this work investigates how gender biases manifest in coreference resolution systems. There are many ways one could approach this question; here we focus on gender bias with respect to occupations, for which we have corresponding U.S. employment statistics. Our approach is to construct a challenge dataset in the style of Winograd schemas, wherein a pronoun must be resolved to one of two previously-mentioned entities in a sentence designed to be easy for humans to interpret, but challenging for data-driven systems BIBREF4 . In our setting, one of these mentions is a person referred to by their occupation; by varying only the pronoun's gender, we are able to test the impact of gender on resolution. With these “Winogender schemas,” we demonstrate the presence of systematic gender bias in multiple publicly-available coreference resolution systems, and that occupation-specific bias is correlated with employment statistics. We release these test sentences to the public. In our experiments, we represent gender as a categorical variable with either two or three possible values: female, male, and (in some cases) neutral. These choices reflect limitations of the textual and real-world datasets we use. Coreference Systems In this work, we evaluate three publicly-available off-the-shelf coreference resolution systems, representing three different machine learning paradigms: rule-based systems, feature-driven statistical systems, and neural systems. Winogender Schemas Our intent is to reveal cases where coreference systems may be more or less likely to recognize a pronoun as coreferent with a particular occupation based on pronoun gender, as observed in Figure 1 . To this end, we create a specialized evaluation set consisting of 120 hand-written sentence templates, in the style of the Winograd Schemas BIBREF4 . Each sentence contains three referring expressions of interest: We use a list of 60 one-word occupations obtained from Caliskan183 (see supplement), with corresponding gender percentages available from the U.S. Bureau of Labor Statistics. For each occupation, we wrote two similar sentence templates: one in which pronoun is coreferent with occupation, and one in which it is coreferent with participant (see Figure 2 ). For each sentence template, there are three pronoun instantiations (female, male, or neutral), and two participant instantiations (a specific participant, e.g., “the passenger,” and a generic paricipant, “someone.”) With the templates fully instantiated, the evaluation set contains 720 sentences: 60 occupations $\times $ 2 sentence templates per occupation $\times $ 2 participants $\times $ 3 pronoun genders. 
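To make the construction concrete, the instantiation of the 720 evaluation sentences from the templates can be sketched as below. The example template text and pronoun forms are illustrative stand-ins, not the released schema files themselves.

```python
# Sketch of Winogender schema instantiation: 60 occupations x 2 templates x 2 participants x 3 genders = 720.
from itertools import product

PRONOUNS = {"female": "she", "male": "he", "neutral": "they"}

# One (occupation, participant, template-pair) entry; the real set has 60 occupations.
schemas = [
    ("surgeon", "patient",
     ["The {occupation} told the {participant} that {pronoun} would operate in the morning.",  # pronoun -> occupation
      "The {occupation} told the {participant} that {pronoun} would need more rest."]),         # pronoun -> participant
]

def instantiate(schemas):
    sentences = []
    for occupation, participant, templates in schemas:
        for template, chosen_participant, gender in product(templates, [participant, "someone"], PRONOUNS):
            sentences.append(template.format(occupation=occupation,
                                             participant=chosen_participant,
                                             pronoun=PRONOUNS[gender]))
    return sentences

for s in instantiate(schemas):
    print(s)   # 12 sentences per entry; 60 entries would give the full 720-sentence set
```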
Results and Discussion We evaluate examples of each of the three coreference system architectures described in "Coreference Systems" : the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL). By multiple measures, the Winogender schemas reveal varying degrees of gender bias in all three systems. First we observe that these systems do not behave in a gender-neutral fashion. That is to say, we have designed test sentences where correct pronoun resolution is not a function of gender (as validated by human annotators), but system predictions do exhibit sensitivity to pronoun gender: 68% of male-female minimal pair test sentences are resolved differently by the RULE system; 28% for STAT; and 13% for NEURAL. Overall, male pronouns are also more likely to be resolved as occupation than female or neutral pronouns across all systems: for RULE, 72% male vs 29% female and 1% neutral; for STAT, 71% male vs 63% female and 50% neutral; and for NEURAL, 87% male vs 80% female and 36% neutral. Neutral pronouns are often resolved as neither occupation nor participant, possibly due to the number ambiguity of “they/their/them.” When these systems' predictions diverge based on pronoun gender, they do so in ways that reinforce and magnify real-world occupational gender disparities. Figure 4 shows that systems' gender preferences for occupations correlate with real-world employment statistics (U.S. Bureau of Labor Statistics) and the gender statistics from text BIBREF14 which these systems access directly; correlation values are in Table 1 . We also identify so-called “gotcha” sentences in which pronoun gender does not match the occupation's majority gender (BLS) if occupation is the correct answer; all systems perform worse on these “gotchas.” (See Table 2 .) Because coreference systems need to make discrete choices about which mentions are coreferent, percentage-wise differences in real-world statistics may translate into absolute differences in system predictions. For example, the occupation “manager” is 38.5% female in the U.S. according to real-world statistics (BLS); mentions of “manager” in text are only 5.18% female (B&L resource); and finally, as viewed through the behavior of the three coreference systems we tested, no managers are predicted to be female. This illustrates two related phenomena: first, that data-driven NLP pipelines are susceptible to sequential amplification of bias throughout a pipeline, and second, that although the gender statistics from B&L correlate with BLS employment statistics, they are systematically male-skewed (Figure 3 ). Related Work Here we give a brief (and non-exhaustive) overview of prior work on gender bias in NLP systems and datasets. A number of papers explore (gender) bias in English word embeddings: how they capture implicit human biases in modern BIBREF1 and historical BIBREF15 text, and methods for debiasing them BIBREF16 . Further work on debiasing models with adversarial learning is explored by DBLP:journals/corr/BeutelCZC17 and zhang2018mitigating. Prior work also analyzes social and gender stereotyping in existing NLP and vision datasets BIBREF17 , BIBREF18 . tatman:2017:EthNLP investigates the impact of gender and dialect on deployed speech recognition systems, while zhao-EtAl:2017:EMNLP20173 introduce a method to reduce amplification effects on models trained with gender-biased datasets. 
koolen-vancranenburgh:2017:EthNLP examine the relationship between author gender and text attributes, noting the potential for researcher interpretation bias in such studies. Both larson:2017:EthNLP and koolen-vancranenburgh:2017:EthNLP offer guidelines to NLP researchers and computational social scientists who wish to predict gender as a variable. hovy-spruit:2016:P16-2 introduce a helpful set of terminology for identifying and categorizing types of bias that manifest in AI systems, including overgeneralization, which we observe in our work here. Finally, we note independent but closely related work by zhao-wang:2018:N18-1, published concurrently with this paper. In their work, zhao-wang:2018:N18-1 also propose a Winograd schema-like test for gender bias in coreference resolution systems (called “WinoBias”). Though similar in appearance, these two efforts have notable differences in substance and emphasis. The contribution of this work is focused primarily on schema construction and validation, with extensive analysis of observed system bias, revealing its correlation with biases present in real-world and textual statistics; by contrast, zhao-wang:2018:N18-1 present methods of debiasing existing systems, showing that simple approaches such as augmenting training data with gender-swapped examples or directly editing noun phrase counts in the B&L resource are effective at reducing system bias, as measured by the schemas. Complementary differences exist between the two schema formulations: Winogender schemas (this work) include gender-neutral pronouns, are syntactically diverse, and are human-validated; WinoBias includes (and delineates) sentences resolvable from syntax alone; a Winogender schema has one occupational mention and one “other participant” mention; WinoBias has two occupational mentions. Due to these differences, we encourage future evaluations to make use of both datasets. Conclusion and Future Work We have introduced “Winogender schemas,” a pronoun resolution task in the style of Winograd schemas that enables us to uncover gender bias in coreference resolution systems. We evaluate three publicly-available, off-the-shelf systems and find systematic gender bias in each: for many occupations, systems strongly prefer to resolve pronouns of one gender over another. We demonstrate that this preferential behavior correlates both with real-world employment statistics and the text statistics that these systems use. We posit that these systems overgeneralize the attribute of gender, leading them to make errors that humans do not make on this evaluation. We hope that by drawing attention to this issue, future systems will be designed in ways that mitigate gender-based overgeneralization. It is important to underscore the limitations of Winogender schemas. As a diagnostic test of gender bias, we view the schemas as having high positive predictive value and low negative predictive value; that is, they may demonstrate the presence of gender bias in a system, but not prove its absence. Here we have focused on examples of occupational gender bias, but Winogender schemas may be extended broadly to probe for other manifestations of gender bias. Though we have used human-validated schemas to demonstrate that existing NLP systems are comparatively more prone to gender-based overgeneralization, we do not presume that matching human judgment is the ultimate objective of this line of research. 
Rather, human judgements, which carry their own implicit biases, serve as a lower bound for equitability in automated systems. Acknowledgments The authors thank Rebecca Knowles and Chandler May for their valuable feedback on this work. This research was supported by the JHU HLTCOE, DARPA AIDA, and NSF-GRFP (1232825). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.
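To make the quantitative analysis in the Results and Discussion section above concrete, the two headline measurements (the fraction of male/female minimal pairs a system resolves differently, and the correlation between a system's per-occupation gender preference and BLS employment statistics) can be computed roughly as sketched below. The input data structures are assumptions about how one might store system predictions, not the released evaluation code.

```python
# Sketch of the minimal-pair disagreement and BLS-correlation analyses described above.
from scipy.stats import pearsonr

def minimal_pair_disagreement(predictions):
    """predictions: {sentence_id: {"male": resolved_mention, "female": resolved_mention}}.
    Returns the fraction of male/female minimal pairs a system resolves differently."""
    differing = sum(p["male"] != p["female"] for p in predictions.values())
    return differing / len(predictions)

def occupation_gender_correlation(system_pct_female, bls_pct_female):
    """Correlate, per occupation, the system's rate of resolving female pronouns to the
    occupation with the real-world percentage of women in that occupation (BLS)."""
    occupations = sorted(system_pct_female)
    xs = [bls_pct_female[o] for o in occupations]
    ys = [system_pct_female[o] for o in occupations]
    r, p_value = pearsonr(xs, ys)
    return r, p_value
```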
the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL).
cc9f0ac8ead575a9b485a51ddc06b9ecb2e2a44d
cc9f0ac8ead575a9b485a51ddc06b9ecb2e2a44d_0
Q: How big is improvement in performances of proposed model over state of the art? Text: Introduction Semantic parsing, which translates a natural language sentence into its corresponding executable logic form (e.g. Structured Query Language, SQL), relieves users from the burden of learning techniques behind the logic form. The majority of previous studies on semantic parsing assume that queries are context-independent and analyze them in isolation. However, in reality, users prefer to interact with systems in a dialogue, where users are allowed to ask context-dependent incomplete questions BIBREF0. That arises the task of Semantic Parsing in Context (SPC), which is quite challenging as there are complex contextual phenomena. In general, there are two sorts of contextual phenomena in dialogues: Coreference and Ellipsis BIBREF1. Figure FIGREF1 shows a dialogue from the dataset SParC BIBREF2. After the question “What is id of the car with the max horsepower?”, the user poses an elliptical question “How about with the max mpg?”, and a question containing pronouns “Show its Make!”. Only when completely understanding the context, could a parser successfully parse the incomplete questions into their corresponding SQL queries. A number of context modeling methods have been suggested in the literature to address SPC BIBREF3, BIBREF4, BIBREF2, BIBREF5, BIBREF6. These methods proposed to leverage two categories of context: recent questions and precedent logic form. It is natural to leverage recent questions as context. Taking the example from Figure FIGREF1, when parsing $Q_3$, we also need to take $Q_1$ and $Q_2$ as input. We can either simply concatenate the input questions, or use a model to encode them hierarchically BIBREF4. As for the second category, instead of taking a bag of recent questions as input, it only considers the precedent logic form. For instance, when parsing $Q_3$, we only need to take $S_2$ as context. With such a context, the decoder can attend over it, or reuse it via a copy mechanism BIBREF4, BIBREF5. Intuitively, methods that fall into this category enjoy better generalizability, as they only rely on the last logic form as context, no matter at which turn. Notably, these two categories of context can be used simultaneously. However, it remains unclear how far we are from effective context modeling. First, there is a lack of thorough comparisons of typical context modeling methods on complex SPC (e.g. cross-domain). Second, none of previous works verified their proposed context modeling methods with the grammar-based decoding technique, which has been developed for years and proven to be highly effective in semantic parsing BIBREF7, BIBREF8, BIBREF9. To obtain better performance, it is worthwhile to study how context modeling methods collaborate with the grammar-based decoding. Last but not the least, there is limited understanding of how context modeling methods perform on various contextual phenomena. An in-depth analysis can shed light on potential research directions. In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. 
Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance. Methodology In the task of semantic parsing in context, we are given a dataset composed of dialogues. Denoting $\langle \mathbf {x}_1,...,\mathbf {x}_n\rangle $ a sequence of natural language questions in a dialogue, $\langle \mathbf {y}_1,...,\mathbf {y}_n\rangle $ are their corresponding SQL queries. Each SQL query is conditioned on a multi-table database schema, and the databases used in test do not appear in training. In this section, we first present a base model without considering context. Then we introduce 6 typical context modeling methods and describe how we equip the base model with these methods. Finally, we present how to augment the model with BERT BIBREF10. Methodology ::: Base Model We employ the popularly used attention-based sequence-to-sequence architecture BIBREF11, BIBREF12 to build our base model. As shown in Figure FIGREF6, the base model consists of a question encoder and a grammar-based decoder. For each question, the encoder provides contextual representations, while the decoder generates its corresponding SQL query according to a predefined grammar. Methodology ::: Base Model ::: Question Encoder To capture contextual information within a question, we apply Bidirectional Long Short-Term Memory Neural Network (BiLSTM) as our question encoder BIBREF13, BIBREF14. Specifically, at turn $i$, firstly every token $x_{i,k}$ in $\mathbf {x}_{i}$ is fed into a word embedding layer $\mathbf {\phi }^x$ to get its embedding representation $\mathbf {\phi }^x{(x_{i,k})}$. On top of the embedding representation, the question encoder obtains a contextual representation $\mathbf {h}^{E}_{i,k}=[\mathop {{\mathbf {h}}^{\overrightarrow{E}}_{i,k}}\,;{\mathbf {h}}^{\overleftarrow{E}}_{i,k}]$, where the forward hidden state is computed as following: Methodology ::: Base Model ::: Grammar-based Decoder The decoder is grammar-based with attention on the input question BIBREF7. Different from producing a SQL query word by word, our decoder outputs a sequence of grammar rule (i.e. action). Such a sequence has one-to-one correspondence with the abstract syntax tree of the SQL query. Taking the SQL query in Figure FIGREF6 as an example, it is transformed to the action sequence $\langle $ $\rm \scriptstyle {Start}\rightarrow \rm {Root}$, $\rm \scriptstyle {Root}\rightarrow \rm {Select\ Order}$, $\rm \scriptstyle {Select}\rightarrow \rm {Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {max\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Id}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$, $\rm \scriptstyle {Order}\rightarrow \rm {desc\ limit\ Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {none\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Horsepower}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$ $\rangle $ by left-to-right depth-first traversing on the tree. At each decoding step, a nonterminal is expanded using one of its corresponding grammar rules. The rules are either schema-specific (e.g. $\rm \scriptstyle {Col}\rightarrow \rm {Horsepower}$), or schema-agnostic (e.g. 
$\rm \scriptstyle {Start}\rightarrow \rm {Root}$). More specifically, as shown at the top of Figure FIGREF6, we make a little modification on $\rm {Order}$-related rules upon the grammar proposed by BIBREF9, which has been proven to have better performance than vanilla SQL grammar. Denoting $\mathbf {LSTM}^{\overrightarrow{D}}$ the unidirectional LSTM used in the decoder, at each decoding step $j$ of turn $i$, it takes the embedding of the previous generated grammar rule $\mathbf {\phi }^y(y_{i,j-1})$ (indicated as the dash lines in Figure FIGREF6), and updates its hidden state as: where $\mathbf {c}_{i,j-1}$ is the context vector produced by attending on each encoder hidden state $\mathbf {h}^E_{i,k}$ in the previous step: where $\mathbf {W}^e$ is a learned matrix. $\mathbf {h}^{\overrightarrow{D}}_{i,0}$ is initialized by the final encoder hidden state $\mathbf {h}^E_{i,|\mathbf {x}_{i}|}$, while $\mathbf {c}_{i,0}$ is a zero-vector. For each schema-agnostic grammar rule, $\mathbf {\phi }^y$ returns a learned embedding. For schema-specific one, the embedding is obtained by passing its schema (i.e. table or column) through another unidirectional LSTM, namely schema encoder $\mathbf {LSTM}^{\overrightarrow{S}}$. For example, the embedding of $\rm \scriptstyle {Col}\rightarrow \rm {Id}$ is: As for the output $y_{i,j}$, if the expanded nonterminal corresponds to schema-agnostic grammar rules, we can obtain the output probability of action ${\gamma }$ as: where $\mathbf {W}^o$ is a learned matrix. When it comes to schema-specific grammar rules, the main challenge is that the model may encounter schemas never appeared in training due to the cross-domain setting. To deal with it, we do not directly compute the similarity between the decoder hidden state and the schema-specific grammar rule embedding. Instead, we first obtain the unnormalized linking score $l(x_{i,k},\gamma )$ between the $k$-th token in $\mathbf {x}_i$ and the schema in action $\gamma $. It is computed by both handcraft features (e.g. word exact match) BIBREF15 and learned similarity (i.e. dot product between word embedding and grammar rule embedding). With the input question as bridge, we reuse the attention score $a_{i,k}$ in Equation DISPLAY_FORM8 to measure the probability of outputting a schema-specific action $\gamma $ as: Methodology ::: Recent Questions as Context To take advantage of the question context, we provide the base model with recent $h$ questions as additional input. As shown in Figure FIGREF13, we summarize and generalize three ways to incorporate recent questions as context. Methodology ::: Recent Questions as Context ::: Concat The method concatenates recent questions with the current question in order, making the input of the question encoder be $[\mathbf {x}_{i-h},\dots ,\mathbf {x}_{i}]$, while the architecture of the base model remains the same. We do not insert special delimiters between questions, as there are punctuation marks. Methodology ::: Recent Questions as Context ::: Turn A dialogue can be seen as a sequence of questions which, in turn, are sequences of words. Considering such hierarchy, BIBREF4 employed a turn-level encoder (i.e. an unidirectional LSTM) to encode recent questions hierarchically. At turn $i$, the turn-level encoder takes the previous question vector $[\mathbf {h}^{\overleftarrow{E}}_{i-1,1},\mathbf {h}^{\overrightarrow{E}}_{i-1,|\mathbf {x}_{i-1}|}]$ as input, and updates its hidden state to $\mathbf {h}^{\overrightarrow{T}}_{i}$. 
Then $\mathbf {h}^{\overrightarrow{T}}_{i}$ is fed into $\mathbf {LSTM}^E$ as an implicit context. Accordingly Equation DISPLAY_FORM4 is rewritten as: Similar to Concat, BIBREF4 allowed the decoder to attend over all encoder hidden states. To make the decoder distinguish hidden states from different turns, they further proposed a relative distance embedding ${\phi }^{d}$ in attention computing. Taking the above into account, Equation DISPLAY_FORM8 is as: where $t{\in }[0,\dots ,h]$ represents the relative distance. Methodology ::: Recent Questions as Context ::: Gate To jointly model the decoder attention in token-level and question-level, inspired by the advances of open-domain dialogue area BIBREF16, we propose a gate mechanism to automatically compute the importance of each question. The importance is computed by: where $\lbrace \mathbf {V}^{g},\mathbf {W}^g,\mathbf {U}^g\rbrace $ are learned parameters and $0\,{\le }\,t\,{\le }\,h$. As done in Equation DISPLAY_FORM17 except for the relative distance embedding, the decoder of Gate also attends over all the encoder hidden states. And the question-level importance $\bar{g}_{i-t}$ is employed as the coefficient of the attention scores at turn $i\!-\!t$. Methodology ::: Precedent SQL as Context Besides recent questions, as mentioned in Section SECREF1, the precedent SQL can also be context. As shown in Figure FIGREF27, the usage of $\mathbf {y}_{i-1}$ requires a SQL encoder, where we employ another BiLSTM to achieve it. The $m$-th contextual action representation at turn $i\!-\!1$, $\mathbf {h}^A_{i-1,m}$, can be obtained by passing the action sequence through the SQL encoder. Methodology ::: Precedent SQL as Context ::: SQL Attn Attention over $\mathbf {y}_{i-1}$ is a straightforward method to incorporate the SQL context. Given $\mathbf {h}^A_{i-1,m}$, we employ a similar manner as Equation DISPLAY_FORM8 to compute attention score and thus obtain the SQL context vector. This vector is employed as an additional input for decoder in Equation DISPLAY_FORM7. Methodology ::: Precedent SQL as Context ::: Action Copy To reuse the precedent generated SQL, BIBREF5 presented a token-level copy mechanism on their non-grammar based parser. Inspired by them, we propose an action-level copy mechanism suited for grammar-based decoding. It enables the decoder to copy actions appearing in $\mathbf {y}_{i-1}$, when the actions are compatible to the current expanded nonterminal. As the copied actions lie in the same semantic space with the generated ones, the output probability for action $\gamma $ is a mix of generating ($\mathbf {g}$) and copying ($\mathbf {c}$). The generating probability $P(y_{i,j}\!=\!{\gamma }\,|\,\mathbf {g})$ follows Equation DISPLAY_FORM10 and DISPLAY_FORM11, while the copying probability is: where $\mathbf {W}^l$ is a learned matrix. Denoting $P^{copy}_{i,j}$ the probability of copying at decoding step $j$ of turn $i$, it can be obtained by $\sigma (\mathbf {W}^{c}\mathbf {h}^{\overrightarrow{D}}_{i,j}+\mathbf {b}^{c})$, where $\lbrace \mathbf {W}^{c},\mathbf {b}^{c}\rbrace $ are learned parameters and $\sigma $ is the sigmoid function. The final probability $P(y_{i,j}={\gamma })$ is computed by: Methodology ::: Precedent SQL as Context ::: Tree Copy Besides the action-level copy, we also introduce a tree-level copy mechanism. As illustrated in Figure FIGREF27, tree-level copy mechanism enables the decoder to copy action subtrees extracted from $\mathbf {y}_{i-1}$, which shrinks the number of decoding steps by a large margin. 
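Before turning to the details of the tree-level variant, the action-level generate/copy mixture just described can be sketched as follows in PyTorch; the tensor names and shapes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative PyTorch sketch of the action-level copy mechanism: the final distribution mixes a
# generation distribution over grammar rules with a copy distribution over actions in the
# precedent SQL. Shapes and parameter names are assumptions.
import torch
import torch.nn.functional as F

def action_copy_distribution(dec_hidden, gen_logits, prev_action_states, prev_action_ids,
                             W_l, W_c, b_c, num_actions):
    """dec_hidden: (d,) decoder state; gen_logits: (num_actions,) generation scores;
    prev_action_states: (m, d) SQL-encoder states of y_{i-1}; prev_action_ids: (m,) long tensor."""
    # Copy scores over actions appearing in the precedent SQL.
    copy_scores = prev_action_states @ (W_l @ dec_hidden)            # (m,)
    copy_probs = F.softmax(copy_scores, dim=0)
    # Scatter the per-position copy probabilities into the action vocabulary.
    copy_dist = torch.zeros(num_actions).index_add_(0, prev_action_ids, copy_probs)
    # Probability of copying at this step: sigmoid(W^c h + b^c).
    p_copy = torch.sigmoid(W_c @ dec_hidden + b_c)
    gen_dist = F.softmax(gen_logits, dim=0)
    # Final mixture P(y_{i,j} = gamma).
    return (1 - p_copy) * gen_dist + p_copy * copy_dist
```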
Methodology ::: Precedent SQL as Context ::: Tree Copy Besides the action-level copy, we also introduce a tree-level copy mechanism. As illustrated in Figure FIGREF27, the tree-level copy mechanism enables the decoder to copy action subtrees extracted from $\mathbf {y}_{i-1}$, which shrinks the number of decoding steps by a large margin. A similar idea has been proposed for a non-grammar-based decoder BIBREF4. In fact, a subtree is an action sequence starting from specific nonterminals, such as ${\rm Select}$. To give an example, $\langle $ $\rm \scriptstyle {Select}\rightarrow \rm {Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {max\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Id}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$ $\rangle $ makes up a subtree for the tree in Figure FIGREF6. For a subtree $\upsilon $, its representation $\phi ^{t}(\upsilon )$ is the final hidden state of the SQL encoder, which encodes its corresponding action sequence. Then we can obtain the output probability of subtree $\upsilon $ as: where $\mathbf {W}^t$ is a learned matrix. The output probabilities of subtrees are normalized together with Equation DISPLAY_FORM10 and DISPLAY_FORM11. Methodology ::: BERT Enhanced Embedding We employ BERT BIBREF10 to augment our model by enhancing the embeddings of questions and schemas. We first concatenate the input question and all the schemas in a deterministic order with [SEP] as the delimiter BIBREF17. For instance, the input for $Q_1$ in Figure FIGREF1 is “What is id ... max horsepower? [SEP] CARS_NAMES [SEP] MakeId ... [SEP] Horsepower”. Feeding it into BERT, we obtain the schema-aware question representations and question-aware schema representations. These contextual representations are subsequently used in place of $\phi ^x$, while other parts of the model remain the same. Experiment & Analysis We conduct experiments to study whether the introduced methods are able to effectively model context in the task of SPC (Section SECREF36), and further perform a fine-grained analysis on various contextual phenomena (Section SECREF40). Experiment & Analysis ::: Experimental Setup ::: Dataset Two large complex cross-domain datasets are used: SParC BIBREF2 consists of 3034 / 422 dialogues for train / development, and CoSQL BIBREF6 consists of 2164 / 292 dialogues. The average turn numbers of SParC and CoSQL are $3.0$ and $5.2$, respectively. Experiment & Analysis ::: Experimental Setup ::: Evaluation Metrics We evaluate each predicted SQL query using exact set match accuracy BIBREF2. Based on it, we consider three metrics: Question Match (Ques.Match), the match accuracy over all questions, Interaction Match (Int.Match), the match accuracy over all dialogues, and Turn $i$ Match, the match accuracy over questions at turn $i$. Experiment & Analysis ::: Experimental Setup ::: Implementation Detail Our implementation is based on PyTorch BIBREF18, AllenNLP BIBREF19 and the library transformers BIBREF20. We adopt the Adam optimizer and set the learning rate to 1e-3 on all modules except for BERT, for which a learning rate of 1e-5 is used BIBREF21. The dimensions of word embedding, action embedding and distance embedding are 100, while the hidden state dimensions of the question encoder, grammar-based decoder, turn-level encoder and SQL encoder are 200. We initialize word embeddings using GloVe BIBREF22 for non-BERT models. For methods which use the recent $h$ questions, $h$ is set to 5 on both datasets. 
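As a usage illustration of the evaluation metrics above, the following sketch computes Ques.Match, Int.Match and Turn $i$ Match from per-question exact-set-match flags; it assumes the usual convention that a dialogue counts as an interaction match only when every question in it is matched, which is not spelled out explicitly above.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def context_metrics(flags: List[Tuple[str, int, bool]]) -> Dict[str, float]:
    """flags: (dialogue_id, turn index starting at 1, exact-set-match flag) per question."""
    by_dialogue = defaultdict(list)
    by_turn = defaultdict(list)
    for dialogue_id, turn, matched in flags:
        by_dialogue[dialogue_id].append(matched)
        by_turn[turn].append(matched)

    metrics = {
        # Question Match: accuracy over all questions.
        "Ques.Match": sum(m for _, _, m in flags) / len(flags),
        # Interaction Match: accuracy over dialogues, counting a dialogue only if all its questions match.
        "Int.Match": sum(all(v) for v in by_dialogue.values()) / len(by_dialogue),
    }
    # Turn i Match: accuracy over the questions asked at turn i.
    for turn, values in sorted(by_turn.items()):
        metrics[f"Turn {turn} Match"] = sum(values) / len(values)
    return metrics

# Example with two short dialogues:
# context_metrics([("d1", 1, True), ("d1", 2, False), ("d2", 1, True), ("d2", 2, True)])
```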
Experiment & Analysis ::: Experimental Setup ::: Baselines We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history on top of the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a SOTA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy). Experiment & Analysis ::: Model Comparison Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms the baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively. To conduct a thorough comparison, we evaluate 13 different context modeling methods on top of the same parser, including the 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods is consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using the precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using the precedent SQL as context gives almost the same effect as using recent questions. As for the combinations of different context modeling methods, they do not significantly improve performance as we had expected. As mentioned in Section SECREF1, intuitively, methods which only use the precedent SQL enjoy better generalizability. To validate this, we further conduct an out-of-distribution experiment to assess the generalizability of different context modeling methods. Concretely, we select three representative methods, train them on questions at turns 1 and 2, and test them at turns 3, 4 and beyond. As shown in Figure FIGREF38, Action Copy has a consistently comparable or better performance, validating the intuition. Meanwhile, Concat appears to be strikingly competitive, demonstrating that it also has good generalizability. Compared with them, Turn is more vulnerable to out-of-distribution questions. In conclusion, existing context modeling methods in the task of SPC are not as effective as expected, since they do not show a significant advantage over the simple concatenation method. Experiment & Analysis ::: Fine-grained Analysis Through a careful investigation of contextual phenomena, we summarize them in multiple hierarchies. Roughly, there are three kinds of contextual phenomena in questions: semantically complete, coreference and ellipsis. Semantically complete means a question can reflect all the meaning of its corresponding SQL. Coreference means a question contains pronouns, while ellipsis means the question cannot reflect all of its SQL, even after resolving its pronouns. At the fine-grained level, coreference can be divided into 5 types according to its pronoun BIBREF1. Ellipsis can be characterized by its intention: continuation and substitution. Continuation augments extra semantics (e.g. 
${\rm Filter}$), and substitution refers to the situation where the current question is intended to substitute particular semantics in the precedent question. Substitution can be further branched into 4 types: explicit vs. implicit and schema vs. operator. Explicit means the current question provides contextual clues (i.e. partial context overlaps with the precedent question) to help locate the substitution target, while implicit does not. In most cases, the target is a schema or an operator. In order to study the effect of context modeling methods on various phenomena, as shown in Table TABREF39, we take the development set of SParC as an example to perform our analysis. The analysis begins by presenting Ques.Match of three representative models on the above fine-grained types in Figure FIGREF42. As shown, though different methods have different strengths, they all perform poorly on certain types, which will be elaborated below. Experiment & Analysis ::: Fine-grained Analysis ::: Coreference Diving deep into coreference (left of Figure FIGREF42), we observe that all methods struggle with two fine-grained types: definite noun phrases and one anaphora. Through our study, we find that the scope of the antecedent is a key factor. An antecedent is one or more entities referred to by a pronoun. Its scope is either whole, where the antecedent is the precedent answer, or partial, where the antecedent is part of the precedent question. The above-mentioned fine-grained types are more challenging as their partial proportion is nearly $40\%$, while for demonstrative pronouns it is only $22\%$. This is reasonable, as partial antecedents require complex inference on context. Considering the 4th example in Table TABREF39, “one” refers to “pets” instead of “age” because the accompanying verb is “weigh”. From this observation, we draw the conclusion that current context modeling methods do not succeed on pronouns which require complex inference on context. Experiment & Analysis ::: Fine-grained Analysis ::: Ellipsis As for ellipsis (right of Figure FIGREF42), we obtain three interesting findings through comparisons along three aspects. The first finding is that all models have a better performance on continuation than on substitution. This is expected since there are redundant semantics in substitution, but not in continuation. Considering the 8th example in Table TABREF39, “horsepower” is a redundant semantic which may introduce noise into SQL prediction. The second finding comes from the unexpected drop from implicit(substitution) to explicit(substitution). Intuitively, explicit should surpass implicit on substitution as it provides more contextual clues. This finding demonstrates that contextual clues are clearly not well utilized by the context modeling methods. Third, compared with schema(substitution), operator(substitution) achieves a comparable or better performance consistently. We believe this is caused by the cross-domain setting, which makes schema-related substitution more difficult. Related Work The most related work is the line of semantic parsing in context. On the topic of SQL, BIBREF24 proposed a context-independent CCG parser and then applied it to context-dependent substitution, BIBREF3 applied a search-based method to sequential questions, and BIBREF4 provided the first sequence-to-sequence solution in the area. More recently, BIBREF5 presented an edit-based method to reuse the previously generated SQL. 
With respect to other logic forms, BIBREF25 focuses on understanding execution commands in context, BIBREF26 on question answering over a knowledge base in a conversation, and BIBREF27 on code generation in an environment context. Our work is different from theirs in that we perform an exploratory study that previous works have not carried out. There are also several related works that provide studies on context. BIBREF17 explored contextual representations in context-independent semantic parsing, and BIBREF28 studied how conversational agents use conversation history to generate responses. Different from them, our task focuses on context modeling for semantic parsing. Under the same task, BIBREF1 summarized contextual phenomena at a coarse-grained level, while BIBREF0 performed a wizard-of-oz experiment to study the most frequent phenomena. What makes our work different from theirs is that we not only summarize contextual phenomena by fine-grained types, but also perform an analysis of context modeling methods. Conclusion & Future Work This work conducts an exploratory study on semantic parsing in context, to understand how far we are from effective context modeling. Through a thorough comparison, we find that existing context modeling methods are not as effective as expected; a simple concatenation method can be highly competitive. Furthermore, by performing a fine-grained analysis, we summarize two potential directions for future work: incorporating common sense for better pronoun inference, and modeling contextual clues in a more explicit manner. By open-sourcing our code and materials, we believe our work can help the community debug models at a fine-grained level and make further progress.
Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.
69e678666d11731c9bfa99953e2cd5a5d11a4d4f
69e678666d11731c9bfa99953e2cd5a5d11a4d4f_0
Q: What two large datasets are used for evaluation?
SParC BIBREF2 and CoSQL BIBREF6
471d624498ab48549ce492ada9e6129da05debac
471d624498ab48549ce492ada9e6129da05debac_0
Q: What context modelling methods are evaluated?
Concat, Turn, Gate, Action Copy, Tree Copy, SQL Attn, Concat + Action Copy, Concat + Tree Copy, Concat + SQL Attn, Turn + Action Copy, Turn + Tree Copy, Turn + SQL Attn, Turn + SQL Attn + Action Copy
f858031ebe57b6139af46ee0f25c10870bb00c3c
f858031ebe57b6139af46ee0f25c10870bb00c3c_0
Q: What are two datasets models are tested on? Text: Introduction Semantic parsing, which translates a natural language sentence into its corresponding executable logic form (e.g. Structured Query Language, SQL), relieves users from the burden of learning techniques behind the logic form. The majority of previous studies on semantic parsing assume that queries are context-independent and analyze them in isolation. However, in reality, users prefer to interact with systems in a dialogue, where users are allowed to ask context-dependent incomplete questions BIBREF0. That arises the task of Semantic Parsing in Context (SPC), which is quite challenging as there are complex contextual phenomena. In general, there are two sorts of contextual phenomena in dialogues: Coreference and Ellipsis BIBREF1. Figure FIGREF1 shows a dialogue from the dataset SParC BIBREF2. After the question “What is id of the car with the max horsepower?”, the user poses an elliptical question “How about with the max mpg?”, and a question containing pronouns “Show its Make!”. Only when completely understanding the context, could a parser successfully parse the incomplete questions into their corresponding SQL queries. A number of context modeling methods have been suggested in the literature to address SPC BIBREF3, BIBREF4, BIBREF2, BIBREF5, BIBREF6. These methods proposed to leverage two categories of context: recent questions and precedent logic form. It is natural to leverage recent questions as context. Taking the example from Figure FIGREF1, when parsing $Q_3$, we also need to take $Q_1$ and $Q_2$ as input. We can either simply concatenate the input questions, or use a model to encode them hierarchically BIBREF4. As for the second category, instead of taking a bag of recent questions as input, it only considers the precedent logic form. For instance, when parsing $Q_3$, we only need to take $S_2$ as context. With such a context, the decoder can attend over it, or reuse it via a copy mechanism BIBREF4, BIBREF5. Intuitively, methods that fall into this category enjoy better generalizability, as they only rely on the last logic form as context, no matter at which turn. Notably, these two categories of context can be used simultaneously. However, it remains unclear how far we are from effective context modeling. First, there is a lack of thorough comparisons of typical context modeling methods on complex SPC (e.g. cross-domain). Second, none of previous works verified their proposed context modeling methods with the grammar-based decoding technique, which has been developed for years and proven to be highly effective in semantic parsing BIBREF7, BIBREF8, BIBREF9. To obtain better performance, it is worthwhile to study how context modeling methods collaborate with the grammar-based decoding. Last but not the least, there is limited understanding of how context modeling methods perform on various contextual phenomena. An in-depth analysis can shed light on potential research directions. In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. 
Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance. Methodology In the task of semantic parsing in context, we are given a dataset composed of dialogues. Denoting $\langle \mathbf {x}_1,...,\mathbf {x}_n\rangle $ a sequence of natural language questions in a dialogue, $\langle \mathbf {y}_1,...,\mathbf {y}_n\rangle $ are their corresponding SQL queries. Each SQL query is conditioned on a multi-table database schema, and the databases used in test do not appear in training. In this section, we first present a base model without considering context. Then we introduce 6 typical context modeling methods and describe how we equip the base model with these methods. Finally, we present how to augment the model with BERT BIBREF10. Methodology ::: Base Model We employ the popularly used attention-based sequence-to-sequence architecture BIBREF11, BIBREF12 to build our base model. As shown in Figure FIGREF6, the base model consists of a question encoder and a grammar-based decoder. For each question, the encoder provides contextual representations, while the decoder generates its corresponding SQL query according to a predefined grammar. Methodology ::: Base Model ::: Question Encoder To capture contextual information within a question, we apply Bidirectional Long Short-Term Memory Neural Network (BiLSTM) as our question encoder BIBREF13, BIBREF14. Specifically, at turn $i$, firstly every token $x_{i,k}$ in $\mathbf {x}_{i}$ is fed into a word embedding layer $\mathbf {\phi }^x$ to get its embedding representation $\mathbf {\phi }^x{(x_{i,k})}$. On top of the embedding representation, the question encoder obtains a contextual representation $\mathbf {h}^{E}_{i,k}=[\mathop {{\mathbf {h}}^{\overrightarrow{E}}_{i,k}}\,;{\mathbf {h}}^{\overleftarrow{E}}_{i,k}]$, where the forward hidden state is computed as following: Methodology ::: Base Model ::: Grammar-based Decoder The decoder is grammar-based with attention on the input question BIBREF7. Different from producing a SQL query word by word, our decoder outputs a sequence of grammar rule (i.e. action). Such a sequence has one-to-one correspondence with the abstract syntax tree of the SQL query. Taking the SQL query in Figure FIGREF6 as an example, it is transformed to the action sequence $\langle $ $\rm \scriptstyle {Start}\rightarrow \rm {Root}$, $\rm \scriptstyle {Root}\rightarrow \rm {Select\ Order}$, $\rm \scriptstyle {Select}\rightarrow \rm {Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {max\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Id}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$, $\rm \scriptstyle {Order}\rightarrow \rm {desc\ limit\ Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {none\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Horsepower}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$ $\rangle $ by left-to-right depth-first traversing on the tree. At each decoding step, a nonterminal is expanded using one of its corresponding grammar rules. The rules are either schema-specific (e.g. $\rm \scriptstyle {Col}\rightarrow \rm {Horsepower}$), or schema-agnostic (e.g. $\rm \scriptstyle {Start}\rightarrow \rm {Root}$). 
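To make the question encoder described above concrete, here is a minimal PyTorch sketch of a BiLSTM that maps token ids to contextual representations $[\overrightarrow{h};\overleftarrow{h}]$. The class and variable names, and the choice of splitting the hidden size across the two directions, are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """Minimal BiLSTM question encoder (illustrative; not the authors' code)."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Each direction uses hidden_dim // 2 units so the concatenated
        # state [forward; backward] is hidden_dim-dimensional.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        emb = self.embed(token_ids)      # (batch, seq_len, emb_dim)
        states, _ = self.bilstm(emb)     # (batch, seq_len, hidden_dim)
        return states                    # h^E_{i,k} for every token

encoder = QuestionEncoder(vocab_size=1000)
print(encoder(torch.randint(0, 1000, (2, 7))).shape)  # torch.Size([2, 7, 200])
```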
More specifically, as shown at the top of Figure FIGREF6, we make a little modification on $\rm {Order}$-related rules upon the grammar proposed by BIBREF9, which has been proven to have better performance than vanilla SQL grammar. Denoting $\mathbf {LSTM}^{\overrightarrow{D}}$ the unidirectional LSTM used in the decoder, at each decoding step $j$ of turn $i$, it takes the embedding of the previous generated grammar rule $\mathbf {\phi }^y(y_{i,j-1})$ (indicated as the dash lines in Figure FIGREF6), and updates its hidden state as: where $\mathbf {c}_{i,j-1}$ is the context vector produced by attending on each encoder hidden state $\mathbf {h}^E_{i,k}$ in the previous step: where $\mathbf {W}^e$ is a learned matrix. $\mathbf {h}^{\overrightarrow{D}}_{i,0}$ is initialized by the final encoder hidden state $\mathbf {h}^E_{i,|\mathbf {x}_{i}|}$, while $\mathbf {c}_{i,0}$ is a zero-vector. For each schema-agnostic grammar rule, $\mathbf {\phi }^y$ returns a learned embedding. For schema-specific one, the embedding is obtained by passing its schema (i.e. table or column) through another unidirectional LSTM, namely schema encoder $\mathbf {LSTM}^{\overrightarrow{S}}$. For example, the embedding of $\rm \scriptstyle {Col}\rightarrow \rm {Id}$ is: As for the output $y_{i,j}$, if the expanded nonterminal corresponds to schema-agnostic grammar rules, we can obtain the output probability of action ${\gamma }$ as: where $\mathbf {W}^o$ is a learned matrix. When it comes to schema-specific grammar rules, the main challenge is that the model may encounter schemas never appeared in training due to the cross-domain setting. To deal with it, we do not directly compute the similarity between the decoder hidden state and the schema-specific grammar rule embedding. Instead, we first obtain the unnormalized linking score $l(x_{i,k},\gamma )$ between the $k$-th token in $\mathbf {x}_i$ and the schema in action $\gamma $. It is computed by both handcraft features (e.g. word exact match) BIBREF15 and learned similarity (i.e. dot product between word embedding and grammar rule embedding). With the input question as bridge, we reuse the attention score $a_{i,k}$ in Equation DISPLAY_FORM8 to measure the probability of outputting a schema-specific action $\gamma $ as: Methodology ::: Recent Questions as Context To take advantage of the question context, we provide the base model with recent $h$ questions as additional input. As shown in Figure FIGREF13, we summarize and generalize three ways to incorporate recent questions as context. Methodology ::: Recent Questions as Context ::: Concat The method concatenates recent questions with the current question in order, making the input of the question encoder be $[\mathbf {x}_{i-h},\dots ,\mathbf {x}_{i}]$, while the architecture of the base model remains the same. We do not insert special delimiters between questions, as there are punctuation marks. Methodology ::: Recent Questions as Context ::: Turn A dialogue can be seen as a sequence of questions which, in turn, are sequences of words. Considering such hierarchy, BIBREF4 employed a turn-level encoder (i.e. an unidirectional LSTM) to encode recent questions hierarchically. At turn $i$, the turn-level encoder takes the previous question vector $[\mathbf {h}^{\overleftarrow{E}}_{i-1,1},\mathbf {h}^{\overrightarrow{E}}_{i-1,|\mathbf {x}_{i-1}|}]$ as input, and updates its hidden state to $\mathbf {h}^{\overrightarrow{T}}_{i}$. 
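(The description of the Turn encoder resumes immediately after this aside.) To illustrate the decoder machinery described earlier in this passage — attention over encoder hidden states to form a context vector, and a softmax over schema-agnostic grammar rules — here is a rough PyTorch sketch. Because the exact equations are not reproduced in this text, the bilinear attention form and the output projection below are plausible stand-ins, not the paper's exact formulation; all names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveActionScorer(nn.Module):
    """Illustrative sketch: bilinear attention over encoder states plus a
    softmax over schema-agnostic grammar rules (not the authors' code)."""
    def __init__(self, enc_dim=200, dec_dim=200, num_rules=50):
        super().__init__()
        self.W_e = nn.Linear(dec_dim, enc_dim, bias=False)   # attention matrix W^e
        self.W_o = nn.Linear(dec_dim + enc_dim, num_rules)   # output matrix W^o

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, dec_dim); enc_states: (batch, seq_len, enc_dim)
        scores = torch.bmm(enc_states, self.W_e(dec_state).unsqueeze(-1)).squeeze(-1)
        attn = F.softmax(scores, dim=-1)                                # a_{i,k}
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)   # c_{i,j}
        rule_logits = self.W_o(torch.cat([dec_state, context], dim=-1))
        return attn, context, F.softmax(rule_logits, dim=-1)            # P(action)

scorer = AttentiveActionScorer()
attn, ctx, p = scorer(torch.randn(2, 200), torch.randn(2, 7, 200))
print(attn.shape, ctx.shape, p.shape)
```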
Then $\mathbf {h}^{\overrightarrow{T}}_{i}$ is fed into $\mathbf {LSTM}^E$ as an implicit context. Accordingly Equation DISPLAY_FORM4 is rewritten as: Similar to Concat, BIBREF4 allowed the decoder to attend over all encoder hidden states. To make the decoder distinguish hidden states from different turns, they further proposed a relative distance embedding ${\phi }^{d}$ in attention computing. Taking the above into account, Equation DISPLAY_FORM8 is as: where $t{\in }[0,\dots ,h]$ represents the relative distance. Methodology ::: Recent Questions as Context ::: Gate To jointly model the decoder attention in token-level and question-level, inspired by the advances of open-domain dialogue area BIBREF16, we propose a gate mechanism to automatically compute the importance of each question. The importance is computed by: where $\lbrace \mathbf {V}^{g},\mathbf {W}^g,\mathbf {U}^g\rbrace $ are learned parameters and $0\,{\le }\,t\,{\le }\,h$. As done in Equation DISPLAY_FORM17 except for the relative distance embedding, the decoder of Gate also attends over all the encoder hidden states. And the question-level importance $\bar{g}_{i-t}$ is employed as the coefficient of the attention scores at turn $i\!-\!t$. Methodology ::: Precedent SQL as Context Besides recent questions, as mentioned in Section SECREF1, the precedent SQL can also be context. As shown in Figure FIGREF27, the usage of $\mathbf {y}_{i-1}$ requires a SQL encoder, where we employ another BiLSTM to achieve it. The $m$-th contextual action representation at turn $i\!-\!1$, $\mathbf {h}^A_{i-1,m}$, can be obtained by passing the action sequence through the SQL encoder. Methodology ::: Precedent SQL as Context ::: SQL Attn Attention over $\mathbf {y}_{i-1}$ is a straightforward method to incorporate the SQL context. Given $\mathbf {h}^A_{i-1,m}$, we employ a similar manner as Equation DISPLAY_FORM8 to compute attention score and thus obtain the SQL context vector. This vector is employed as an additional input for decoder in Equation DISPLAY_FORM7. Methodology ::: Precedent SQL as Context ::: Action Copy To reuse the precedent generated SQL, BIBREF5 presented a token-level copy mechanism on their non-grammar based parser. Inspired by them, we propose an action-level copy mechanism suited for grammar-based decoding. It enables the decoder to copy actions appearing in $\mathbf {y}_{i-1}$, when the actions are compatible to the current expanded nonterminal. As the copied actions lie in the same semantic space with the generated ones, the output probability for action $\gamma $ is a mix of generating ($\mathbf {g}$) and copying ($\mathbf {c}$). The generating probability $P(y_{i,j}\!=\!{\gamma }\,|\,\mathbf {g})$ follows Equation DISPLAY_FORM10 and DISPLAY_FORM11, while the copying probability is: where $\mathbf {W}^l$ is a learned matrix. Denoting $P^{copy}_{i,j}$ the probability of copying at decoding step $j$ of turn $i$, it can be obtained by $\sigma (\mathbf {W}^{c}\mathbf {h}^{\overrightarrow{D}}_{i,j}+\mathbf {b}^{c})$, where $\lbrace \mathbf {W}^{c},\mathbf {b}^{c}\rbrace $ are learned parameters and $\sigma $ is the sigmoid function. The final probability $P(y_{i,j}={\gamma })$ is computed by: Methodology ::: Precedent SQL as Context ::: Tree Copy Besides the action-level copy, we also introduce a tree-level copy mechanism. As illustrated in Figure FIGREF27, tree-level copy mechanism enables the decoder to copy action subtrees extracted from $\mathbf {y}_{i-1}$, which shrinks the number of decoding steps by a large margin. 
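Before the tree-level copy discussion continues below, here is a small sketch of how the action-level copy described above could combine the generating and copying distributions. The mixing equation itself is not shown in this text, so the pointer-generator-style interpolation below — $P(\gamma ) = (1-P^{copy})\,P(\gamma \,|\,\mathbf {g}) + P^{copy}\,P(\gamma \,|\,\mathbf {c})$ with $P^{copy}=\sigma (\mathbf {W}^c\mathbf {h}+\mathbf {b}^c)$ — should be read as an assumed, typical formulation rather than the authors' exact one; names are illustrative.

```python
import torch
import torch.nn as nn

class ActionCopyMixer(nn.Module):
    """Assumed pointer-generator-style mix of generate and copy distributions
    over grammar actions (illustrative, not the paper's exact equation)."""
    def __init__(self, dec_dim=200):
        super().__init__()
        self.copy_gate = nn.Linear(dec_dim, 1)   # plays the role of W^c, b^c

    def forward(self, dec_state, p_gen, p_copy):
        # dec_state: (batch, dec_dim)
        # p_gen, p_copy: (batch, num_actions) distributions over the same action space
        p_copy_gate = torch.sigmoid(self.copy_gate(dec_state))     # P^{copy}_{i,j}
        return (1.0 - p_copy_gate) * p_gen + p_copy_gate * p_copy  # final P(y_{i,j} = gamma)

mixer = ActionCopyMixer()
p_gen = torch.softmax(torch.randn(2, 30), dim=-1)
p_copy = torch.softmax(torch.randn(2, 30), dim=-1)
print(mixer(torch.randn(2, 200), p_gen, p_copy).sum(dim=-1))  # each row sums to 1
```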
Similar idea has been proposed in a non-grammar based decoder BIBREF4. In fact, a subtree is an action sequence starting from specific nonterminals, such as ${\rm Select}$. To give an example, $\langle $ $\rm \scriptstyle {Select}\rightarrow \rm {Agg}$, $\rm \scriptstyle {Agg}\rightarrow \rm {max\ Col\ Tab}$, $\rm \scriptstyle {Col}\rightarrow \rm {Id}$, $\rm \scriptstyle {Tab}\rightarrow \rm {CARS\_DATA}$ $\rangle $ makes up a subtree for the tree in Figure FIGREF6. For a subtree $\upsilon $, its representation $\phi ^{t}(\upsilon )$ is the final hidden state of SQL encoder, which encodes its corresponding action sequence. Then we can obtain the output probability of subtree $\upsilon $ as: where $\mathbf {W}^t$ is a learned matrix. The output probabilities of subtrees are normalized together with Equation DISPLAY_FORM10 and DISPLAY_FORM11. Methodology ::: BERT Enhanced Embedding We employ BERT BIBREF10 to augment our model via enhancing the embedding of questions and schemas. We first concatenate the input question and all the schemas in a deterministic order with [SEP] as delimiter BIBREF17. For instance, the input for $Q_1$ in Figure FIGREF1 is “What is id ... max horsepower? [SEP] CARS_NAMES [SEP] MakeId ... [SEP] Horsepower”. Feeding it into BERT, we obtain the schema-aware question representations and question-aware schema representations. These contextual representations are used to substitute $\phi ^x$ subsequently, while other parts of the model remain the same. Experiment & Analysis We conduct experiments to study whether the introduced methods are able to effectively model context in the task of SPC (Section SECREF36), and further perform a fine-grained analysis on various contextual phenomena (Section SECREF40). Experiment & Analysis ::: Experimental Setup ::: Dataset Two large complex cross-domain datasets are used: SParC BIBREF2 consists of 3034 / 422 dialogues for train / development, and CoSQL BIBREF6 consists of 2164 / 292 ones. The average turn numbers of SParC and CoSQL are $3.0$ and $5.2$, respectively. Experiment & Analysis ::: Experimental Setup ::: Evaluation Metrics We evaluate each predicted SQL query using exact set match accuracy BIBREF2. Based on it, we consider three metrics: Question Match (Ques.Match), the match accuracy over all questions, Interaction Match (Int.Match), the match accuracy over all dialogues, and Turn $i$ Match, the match accuracy over questions at turn $i$. Experiment & Analysis ::: Experimental Setup ::: Implementation Detail Our implementation is based on PyTorch BIBREF18, AllenNLP BIBREF19 and the library transformers BIBREF20. We adopt the Adam optimizer and set the learning rate as 1e-3 on all modules except for BERT, for which a learning rate of 1e-5 is used BIBREF21. The dimensions of word embedding, action embedding and distance embedding are 100, while the hidden state dimensions of question encoder, grammar-based decoder, turn-level encoder and SQL encoder are 200. We initialize word embedding using Glove BIBREF22 for non-BERT models. For methods which use recent $h$ questions, $h$ is set as 5 on both datasets. Experiment & Analysis ::: Experimental Setup ::: Baselines We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. 
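As a side note (the baseline descriptions continue right after this), the BERT input construction described in the "BERT Enhanced Embedding" part of this passage — the question and all schema items joined with [SEP] in a deterministic order — can be sketched as follows. The helper name and the dict-based schema layout are hypothetical choices for illustration only.

```python
def build_bert_input(question, tables):
    """Concatenate the question with all schema items, using [SEP] as delimiter.
    `tables` is assumed to be a dict mapping table name -> list of column names."""
    pieces = [question]
    for table, columns in tables.items():   # deterministic order assumed (insertion order)
        pieces.append(table)
        pieces.extend(columns)
    return " [SEP] ".join(pieces)

schema = {"CARS_NAMES": ["MakeId", "Model"], "CARS_DATA": ["Id", "Horsepower"]}
print(build_bert_input("What is id of the car with the max horsepower?", schema))
# prints the question followed by "[SEP] CARS_NAMES [SEP] MakeId [SEP] Model
# [SEP] CARS_DATA [SEP] Id [SEP] Horsepower"
```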
SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy). Experiment & Analysis ::: Model Comparison Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively. To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect with using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected. As mentioned in Section SECREF1, intuitively, methods which only use the precedent SQL enjoys better generalizability. To validate it, we further conduct an out-of-distribution experiment to assess the generalizability of different context modeling methods. Concretely, we select three representative methods and train them on questions at turn 1 and 2, whereas test them at turn 3, 4 and beyond. As shown in Figure FIGREF38, Action Copy has a consistently comparable or better performance, validating the intuition. Meanwhile, Concat appears to be strikingly competitive, demonstrating it also has a good generalizability. Compared with them, Turn is more vulnerable to out-of-distribution questions. In conclusion, existing context modeling methods in the task of SPC are not as effective as expected, since they do not show a significant advantage over the simple concatenation method. Experiment & Analysis ::: Fine-grained Analysis By a careful investigation on contextual phenomena, we summarize them in multiple hierarchies. Roughly, there are three kinds of contextual phenomena in questions: semantically complete, coreference and ellipsis. Semantically complete means a question can reflect all the meaning of its corresponding SQL. Coreference means a question contains pronouns, while ellipsis means the question cannot reflect all of its SQL, even if resolving its pronouns. In the fine-grained level, coreference can be divided into 5 types according to its pronoun BIBREF1. Ellipsis can be characterized by its intention: continuation and substitution. Continuation is to augment extra semantics (e.g. 
${\rm Filter}$), and substitution refers to the situation where current question is intended to substitute particular semantics in the precedent question. Substitution can be further branched into 4 types: explicit vs. implicit and schema vs. operator. Explicit means the current question provides contextual clues (i.e. partial context overlaps with the precedent question) to help locate the substitution target, while implicit does not. On most cases, the target is schema or operator. In order to study the effect of context modeling methods on various phenomena, as shown in Table TABREF39, we take the development set of SParC as an example to perform our analysis. The analysis begins by presenting Ques.Match of three representative models on above fine-grained types in Figure FIGREF42. As shown, though different methods have different strengths, they all perform poorly on certain types, which will be elaborated below. Experiment & Analysis ::: Fine-grained Analysis ::: Coreference Diving deep into the coreference (left of Figure FIGREF42), we observe that all methods struggle with two fine-grained types: definite noun phrases and one anaphora. Through our study, we find the scope of antecedent is a key factor. An antecedent is one or more entities referred by a pronoun. Its scope is either whole, where the antecedent is the precedent answer, or partial, where the antecedent is part of the precedent question. The above-mentioned fine-grained types are more challenging as their partial proportion are nearly $40\%$, while for demonstrative pronoun it is only $22\%$. It is reasonable as partial requires complex inference on context. Considering the 4th example in Table TABREF39, “one” refers to “pets” instead of “age” because the accompanying verb is “weigh”. From this observation, we draw the conclusion that current context modeling methods do not succeed on pronouns which require complex inference on context. Experiment & Analysis ::: Fine-grained Analysis ::: Ellipsis As for ellipsis (right of Figure FIGREF42), we obtain three interesting findings by comparisons in three aspects. The first finding is that all models have a better performance on continuation than substitution. This is expected since there are redundant semantics in substitution, while not in continuation. Considering the 8th example in Table TABREF39, “horsepower” is a redundant semantic which may raise noise in SQL prediction. The second finding comes from the unexpected drop from implicit(substitution) to explicit(substitution). Intuitively, explicit should surpass implicit on substitution as it provides more contextual clues. The finding demonstrates that contextual clues are obviously not well utilized by the context modeling methods. Third, compared with schema(substitution), operator(substitution) achieves a comparable or better performance consistently. We believe it is caused by the cross-domain setting, which makes schema related substitution more difficult. Related Work The most related work is the line of semantic parsing in context. In the topic of SQL, BIBREF24 proposed a context-independent CCG parser and then applied it to do context-dependent substitution, BIBREF3 applied a search-based method for sequential questions, and BIBREF4 provided the first sequence-to-sequence solution in the area. More recently, BIBREF5 presented a edit-based method to reuse the precedent generated SQL. 
With respect to other logic forms, BIBREF25 focuses on understanding execution commands in context, BIBREF26 on question answering over knowledge base in a conversation, and BIBREF27 on code generation in environment context. Our work is different from theirs as we perform an exploratory study, not fulfilled by previous works. There are also several related works that provided studies on context. BIBREF17 explored the contextual representations in context-independent semantic parsing, and BIBREF28 studied how conversational agents use conversation history to generate response. Different from them, our task focuses on context modeling for semantic parsing. Under the same task, BIBREF1 summarized contextual phenomena in a coarse-grained level, while BIBREF0 performed a wizard-of-oz experiment to study the most frequent phenomena. What makes our work different from them is that we not only summarize contextual phenomena by fine-grained types, but also perform an analysis on context modeling methods. Conclusion & Future Work This work conducts an exploratory study on semantic parsing in context, to realize how far we are from effective context modeling. Through a thorough comparison, we find that existing context modeling methods are not as effective as expected. A simple concatenation method can be much competitive. Furthermore, by performing a fine-grained analysis, we summarize two potential directions as our future work: incorporating common sense for better pronouns inference, and modeling contextual clues in a more explicit manner. By open-sourcing our code and materials, we believe our work can facilitate the community to debug models in a fine-grained level and make more progress.
SParC BIBREF2 and CoSQL BIBREF6
1763a029daca7cab10f18634aba02a6bd1b6faa7
1763a029daca7cab10f18634aba02a6bd1b6faa7_0
Q: How big is the improvement over the state-of-the-art results? Text: Introduction Aspect based sentiment analysis (ABSA) is a fine-grained task in sentiment analysis, which can provide important sentiment information for other natural language processing (NLP) tasks. There are two different subtasks in ABSA, namely, aspect-category sentiment analysis and aspect-term sentiment analysis BIBREF0, BIBREF1. Aspect-category sentiment analysis aims at predicting the sentiment polarity towards the given aspect, which is in predefined several categories and it may not appear in the sentence. For instance, in Table TABREF2, the aspect-category sentiment analysis is going to predict the sentiment polarity towards the aspect “food”, which is not appeared in the sentence. By contrast, the goal of aspect-term sentiment analysis is to predict the sentiment polarity over the aspect term which is a subsequence of the sentence. For instance, the aspect-term sentiment analysis will predict the sentiment polarity towards the aspect term “The appetizers”, which is a subsequence of the sentence. Additionally, the number of categories of the aspect term is more than one thousand in the training corpus. As shown in Table TABREF2, sentiment polarity may be different when different aspects are considered. Thus, the given aspect (term) is crucial to ABSA tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Besides, BIBREF7 show that not all words of a sentence are useful for the sentiment prediction towards a given aspect (term). For instance, when the given aspect is the “service”, the words “appetizers” and “ok” are irrelevant for the sentiment prediction. Therefore, an aspect-independent (weakly associative) encoder may encode such background words (e.g., “appetizers” and “ok”) into the final representation, which may lead to an incorrect prediction. Numerous existing models BIBREF8, BIBREF9, BIBREF10, BIBREF1 typically utilize an aspect-independent encoder to generate the sentence representation, and then apply the attention mechanism BIBREF11 or gating mechanism to conduct feature selection and extraction, while feature selection and extraction may base on noised representations. In addition, some models BIBREF12, BIBREF13, BIBREF14 simply concatenate the aspect embedding with each word embedding of the sentence, and then leverage conventional Long Short-Term Memories (LSTMs) BIBREF15 to generate the sentence representation. However, it is insufficient to exploit the given aspect and conduct potentially complex feature selection and extraction. To address this issue, we investigate a novel architecture to enhance the capability of feature selection and extraction with the guidance of the given aspect from scratch. Based on the deep transition Gated Recurrent Unit (GRU) BIBREF16, BIBREF17, BIBREF18, BIBREF19, an aspect-guided GRU encoder is thus proposed, which utilizes the given aspect to guide the sentence encoding procedure at the very beginning stage. In particular, we specially design an aspect-gate for the deep transition GRU to control the information flow of each token input, with the aim of guiding feature selection and extraction from scratch, i.e. sentence representation generation. Furthermore, we design an aspect-oriented objective to enforce our model to reconstruct the given aspect, with the sentence representation generated by the aspect-guided encoder. We name this Aspect-Guided Deep Transition model as AGDT. 
With all the above contributions, our AGDT can accurately generate an aspect-specific representation for a sentence, and thus conduct more accurate sentiment predictions towards the given aspect. We evaluate the AGDT on multiple datasets of two subtasks in ABSA. Experimental results demonstrate the effectiveness of our proposed approach. And the AGDT significantly surpasses existing models with the same setting and achieves state-of-the-art performance among the models without using additional features (e.g., BERT BIBREF20). Moreover, we also provide empirical and visualization analysis to reveal the advantages of our model. Our contributions can be summarized as follows: We propose an aspect-guided encoder, which utilizes the given aspect to guide the encoding of a sentence from scratch, in order to conduct the aspect-specific feature selection and extraction at the very beginning stage. We propose an aspect-reconstruction approach to further guarantee that the aspect-specific information has been fully embedded into the sentence representation. Our AGDT substantially outperforms previous systems with the same setting, and achieves state-of-the-art results on benchmark datasets compared to those models without leveraging additional features (e.g., BERT). Model Description As shown in Figure FIGREF6, the AGDT model mainly consists of three parts: aspect-guided encoder, aspect-reconstruction and aspect concatenated embedding. The aspect-guided encoder is specially designed to guide the encoding of a sentence from scratch for conducting the aspect-specific feature selection and extraction at the very beginning stage. The aspect-reconstruction aims to guarantee that the aspect-specific information has been fully embedded in the sentence representation for more accurate predictions. The aspect concatenated embedding part is used to concatenate the aspect embedding and the generated sentence representation so as to make the final prediction. Model Description ::: Aspect-Guided Encoder The aspect-guided encoder is the core module of AGDT, which consists of two key components: Aspect-guided GRU and Transition GRU BIBREF16. A-GRU: Aspect-guided GRU (A-GRU) is a specially-designed unit for the ABSA tasks, which is an extension of the L-GRU proposed by BIBREF19. In particular, we design an aspect-gate to select aspect-specific representations through controlling the transformation scale of token embeddings at each time step. At time step $t$, the hidden state $\mathbf {h}_{t}$ is computed as follows: where $\odot $ represents element-wise product; $\mathbf {z}_{t}$ is the update gate BIBREF16; and $\widetilde{\mathbf {h}}_{t}$ is the candidate activation, which is computed as: where $\mathbf {g}_{t}$ denotes the aspect-gate; $\mathbf {x}_{t}$ represents the input word embedding at time step $t$; $\mathbf {r}_{t}$ is the reset gate BIBREF16; $\textbf {H}_1(\mathbf {x}_{t})$ and $\textbf {H}_2(\mathbf {x}_{t})$ are the linear transformation of the input $\mathbf {x}_{t}$, and $\mathbf {l}_{t}$ is the linear transformation gate for $\mathbf {x}_{t}$ BIBREF19. $\mathbf {r}_{t}$, $\mathbf {z}_{t}$, $\mathbf {l}_{t}$, $\mathbf {g}_{t}$, $\textbf {H}_{1}(\mathbf {x}_{t})$ and $\textbf {H}_{2}(\mathbf {x}_{t})$ are computed as: where “$\mathbf {a}$" denotes the embedding of the given aspect, which is the same at each time step. The update gate $\mathbf {z}_t$ and reset gate $\mathbf {r}_t$ are the same as them in the conventional GRU. In Eq. 
(DISPLAY_FORM9) $\sim $ (), the aspect-gate $\mathbf {g}_{t}$ controls both nonlinear and linear transformations of the input $\mathbf {x}_{t}$ under the guidance of the given aspect at each time step. Besides, we also exploit a linear transformation gate $\mathbf {l}_{t}$ to control the linear transformation of the input, according to the current input $\mathbf {x}_t$ and previous hidden state $\mathbf {h}_{t-1}$, which has been proved powerful in the deep transition architecture BIBREF19. As a consequence, A-GRU can control both non-linear transformation and linear transformation for input $\mathbf {x}_{t}$ at each time step, with the guidance of the given aspect, i.e., A-GRU can guide the encoding of aspect-specific features and block the aspect-irrelevant information at the very beginning stage. T-GRU: Transition GRU (T-GRU) BIBREF17 is a crucial component of deep transition block, which is a special case of GRU with only “state” as an input, namely its input embedding is zero embedding. As in Figure FIGREF6, a deep transition block consists of an A-GRU followed by several T-GRUs at each time step. For the current time step $t$, the output of one A-GRU/T-GRU is fed into the next T-GRU as the input. The output of the last T-GRU at time step $t$ is fed into A-GRU at the time step $t+1$. For a T-GRU, each hidden state at both time step $t$ and transition depth $i$ is computed as: where the update gate $\mathbf {z}_{t}^i$ and the reset gate $\mathbf {r}_{t}^i$ are computed as: The AGDT encoder is based on deep transition cells, where each cell is composed of one A-GRU at the bottom, followed by several T-GRUs. Such AGDT model can encode the sentence representation with the guidance of aspect information by utilizing the specially designed architecture. Model Description ::: Aspect-Reconstruction We propose an aspect-reconstruction approach to guarantee the aspect-specific information has been fully embedded in the sentence representation. Particularly, we devise two objectives for two subtasks in ABSA respectively. In terms of aspect-category sentiment analysis datasets, there are only several predefined aspect categories. While in aspect-term sentiment analysis datasets, the number of categories of term is more than one thousand. In a real-life scenario, the number of term is infinite, while the words that make up terms are limited. Thus we design different loss-functions for these two scenarios. For the aspect-category sentiment analysis task, we aim to reconstruct the aspect according to the aspect-specific representation. It is a multi-class problem. We take the softmax cross-entropy as the loss function: where C1 is the number of predefined aspects in the training example; ${y}_{i}^{c}$ is the ground-truth and ${p}_{i}^{c}$ is the estimated probability of a aspect. For the aspect-term sentiment analysis task, we intend to reconstruct the aspect term (may consist of multiple words) according to the aspect-specific representation. It is a multi-label problem and thus the sigmoid cross-entropy is applied: where C2 denotes the number of words that constitute all terms in the training example, ${y}_{i}^{t}$ is the ground-truth and ${p}_{i}^{t}$ represents the predicted value of a word. Our aspect-oriented objective consists of $\mathcal {L}_{c}$ and $\mathcal {L}_{t}$, which guarantee that the aspect-specific information has been fully embedded into the sentence representation. 
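Since the two reconstruction objectives above are standard losses — softmax cross-entropy over the predefined aspect categories for $\mathcal {L}_{c}$, and sigmoid (binary) cross-entropy over the words that can form aspect terms for $\mathcal {L}_{t}$ — a minimal PyTorch sketch is given here. It only shows the loss computation given an aspect-specific sentence representation; the projection heads, dimensions and names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

C1, C2, HIDDEN = 8, 1000, 300   # num. aspect categories, num. term words, repr. size (illustrative)

category_head = nn.Linear(HIDDEN, C1)   # scores for L_c (multi-class)
term_head = nn.Linear(HIDDEN, C2)       # scores for L_t (multi-label)

ce_loss = nn.CrossEntropyLoss()          # softmax cross-entropy for L_c
bce_loss = nn.BCEWithLogitsLoss()        # sigmoid cross-entropy for L_t

repr_batch = torch.randn(4, HIDDEN)      # aspect-guided sentence representations

# Aspect-category reconstruction: one gold category id per example.
gold_category = torch.randint(0, C1, (4,))
loss_c = ce_loss(category_head(repr_batch), gold_category)

# Aspect-term reconstruction: multi-hot vector marking the words of the gold term.
gold_term_words = torch.zeros(4, C2)
gold_term_words[0, [12, 57]] = 1.0       # e.g. a two-word aspect term
loss_t = bce_loss(term_head(repr_batch), gold_term_words)

print(loss_c.item(), loss_t.item())
```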
Model Description ::: Training Objective The final loss function is as follows: where the underlined part denotes the conventional loss function; C is the number of sentiment labels; ${y}_{i}$ is the ground-truth and ${p}_{i}$ represents the estimated probability of the sentiment label; $\mathcal {L}$ is the aspect-oriented objective, where Eq. DISPLAY_FORM14 is for the aspect-category sentiment analysis task and Eq. DISPLAY_FORM15 is for the aspect-term sentiment analysis task; and $\lambda $ is the weight of $\mathcal {L}$. As shown in Figure FIGREF6, we employ the aspect-reconstruction approach to reconstruct the aspect (term), where “softmax” is used for the aspect-category sentiment analysis task and “sigmoid” for the aspect-term sentiment analysis task. Additionally, we concatenate the aspect embedding with the aspect-guided sentence representation to predict the sentiment polarity. Under this loss function (Eq. DISPLAY_FORM17), the AGDT can produce aspect-specific sentence representations. Experiments ::: Datasets and Metrics ::: Data Preparation. We conduct experiments on two datasets of the aspect-category-based task and two datasets of the aspect-term-based task. For each of these four datasets, we name the full dataset “DS”. In each “DS”, there are sentences like the example in Table TABREF2 that contain different sentiment labels, each of which is associated with an aspect (term). For instance, Table TABREF2 shows the customer's different attitudes towards two aspects: “food” (“The appetizers”) and “service”. In order to measure whether a model can detect different sentiment polarities in one sentence towards different aspects, we extract a hard dataset from each “DS”, named “HDS”, in which each sentence has different sentiment labels associated with different aspects. When processing an original sentence $s$ that has multiple aspects ${a}_{1},{a}_{2},...,{a}_{n}$ and corresponding sentiment labels ${l}_{1},{l}_{2},...,{l}_{n}$ ($n$ is the number of aspects or terms in the sentence), the sentence is expanded into (s, ${a}_{1}$, ${l}_{1}$), (s, ${a}_{2}$, ${l}_{2}$), ..., (s, ${a}_{n}$, ${l}_{n}$) in each dataset BIBREF21, BIBREF22, BIBREF1, i.e., there will be $n$ duplicated sentences associated with different aspects and labels. Experiments ::: Datasets and Metrics ::: Aspect-Category Sentiment Analysis. For comparison, we follow BIBREF1 and use the restaurant reviews dataset of SemEval 2014 Task 4 (“restaurant-14”) BIBREF0 to evaluate our AGDT model. The dataset contains five predefined aspects and four sentiment labels. A larger dataset (“restaurant-large”) involves restaurant reviews of three years, i.e., 2014 $\sim $ 2016 BIBREF0. There are eight predefined aspects and three labels in that dataset. When creating the “restaurant-large” dataset, we follow the same procedure as in BIBREF1. Statistics of the datasets are shown in Table TABREF19. Experiments ::: Datasets and Metrics ::: Aspect-Term Sentiment Analysis. We use the restaurant and laptop review datasets of SemEval 2014 Task 4 BIBREF0 to evaluate our model. Both datasets contain four sentiment labels. Meanwhile, we also conduct a three-class experiment in order to compare with work BIBREF13, BIBREF3, BIBREF7 that removed the “conflict” label. Statistics of both datasets are shown in Table TABREF20. Experiments ::: Datasets and Metrics ::: Metrics. The evaluation metric is accuracy. All instances are shown in Table TABREF19 and Table TABREF20. Each experiment is repeated five times.
The mean and the standard deviation are reported. Experiments ::: Implementation Details We use the pre-trained 300d Glove embeddings BIBREF23 to initialize word embeddings, which is fixed in all models. For out-of-vocabulary words, we randomly sample their embeddings by the uniform distribution $U(-0.25, 0.25)$. Following BIBREF8, BIBREF24, BIBREF25, we take the averaged word embedding as the aspect representation for multi-word aspect terms. The transition depth of deep transition model is 4 (see Section SECREF30). The hidden size is set to 300. We set the dropout rate BIBREF26 to 0.5 for input token embeddings and 0.3 for hidden states. All models are optimized using Adam optimizer BIBREF27 with gradient clipping equals to 5 BIBREF28. The initial learning rate is set to 0.01 and the batch size is set to 4096 at the token level. The weight of the reconstruction loss $\lambda $ in Eq. DISPLAY_FORM17 is fine-tuned (see Section SECREF30) and respectively set to 0.4, 0.4, 0.2 and 0.5 for four datasets. Experiments ::: Baselines To comprehensively evaluate our AGDT, we compare the AGDT with several competitive models. ATAE-LSTM. It is an attention-based LSTM model. It appends the given aspect embedding with each word embedding, and then the concatenated embedding is taken as the input of LSTM. The output of LSTM is appended aspect embedding again. Furthermore, attention is applied to extract features for final predictions. CNN. This model focuses on extracting n-gram features to generate sentence representation for the sentiment classification. TD-LSTM. This model uses two LSTMs to capture the left and right context of the term to generate target-dependent representations for the sentiment prediction. IAN. This model employs two LSTMs and interactive attention mechanism to learn representations of the sentence and the aspect, and concatenates them for the sentiment prediction. RAM. This model applies multiple attentions and memory networks to produce the sentence representation. GCAE. It uses CNNs to extract features and then employs two Gated Tanh-Relu units to selectively output the sentiment information flow towards the aspect for predicting sentiment labels. Experiments ::: Main Results and Analysis ::: Aspect-Category Sentiment Analysis Task We present the overall performance of our model and baseline models in Table TABREF27. Results show that our AGDT outperforms all baseline models on both “restaurant-14” and “restaurant-large” datasets. ATAE-LSTM employs an aspect-weakly associative encoder to generate the aspect-specific sentence representation by simply concatenating the aspect, which is insufficient to exploit the given aspect. Although GCAE incorporates the gating mechanism to control the sentiment information flow according to the given aspect, the information flow is generated by an aspect-independent encoder. Compared with GCAE, our AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two dataset, respectively. These results demonstrate that our AGDT can sufficiently exploit the given aspect to generate the aspect-guided sentence representation, and thus conduct accurate sentiment prediction. Our model benefits from the following aspects. First, our AGDT utilizes an aspect-guided encoder, which leverages the given aspect to guide the sentence encoding from scratch and generates the aspect-guided representation. 
Second, the AGDT guarantees that the aspect-specific information has been fully embedded in the sentence representation by reconstructing the given aspect. Third, the given aspect embedding is concatenated with the aspect-guided sentence representation for the final predictions. The “HDS” part, which is designed to measure whether a model can detect different sentiment polarities in a sentence, consists of replicated sentences with different sentiments towards multiple aspects. Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9%, respectively) on both datasets. This indicates that the given aspect information is pivotal to accurate sentiment prediction, especially when the sentence has different sentiment labels, which is consistent with existing work BIBREF2, BIBREF3, BIBREF4. These results demonstrate the effectiveness of our model and suggest that our AGDT has a better ability than GCAE to distinguish the different sentiments of multiple aspects. Experiments ::: Main Results and Analysis ::: Aspect-Term Sentiment Analysis Task As shown in Table TABREF28, our AGDT consistently outperforms all compared methods on both domains. In this task, TD-LSTM and ATAE-LSTM use an aspect-weakly associative encoder, while IAN, RAM and GCAE employ an aspect-independent encoder. In the “DS” part, our AGDT model surpasses all baseline models, which shows that the inclusion of A-GRU (the aspect-guided encoder), aspect-reconstruction and aspect concatenated embedding has an overall positive impact on the classification process. In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain, which shows that our AGDT handles the multi-sentiment problem better than GCAE. These results further demonstrate that our model works well across tasks and datasets. Experiments ::: Main Results and Analysis ::: Ablation Study We conduct ablation experiments to investigate the impact of each part of AGDT, where the GRU is stacked with 4 layers. Here “AC” represents the aspect concatenated embedding, “AG” stands for A-GRU (Eq. (DISPLAY_FORM8) $\sim $ ()) and “AR” denotes the aspect-reconstruction (Eq. (DISPLAY_FORM14) $\sim $ (DISPLAY_FORM17)). From Table TABREF31 and Table TABREF32, we can conclude the following. Deep Transition (DT) achieves superior performance to GRU, which is consistent with previous work BIBREF18, BIBREF19 (2 vs. 1). Utilizing “AG” to guide the encoding of aspect-related features from scratch has a significant impact, yielding highly competitive results, particularly in the “HDS” part, which demonstrates a stronger ability to identify different sentiment polarities towards different aspects (3 vs. 2). The aspect concatenated embedding improves the accuracy to a degree (4 vs. 3). The aspect-reconstruction approach (“AR”) substantially improves the performance, especially in the “HDS” part (5 vs. 4). The results in row 6 show that all modules together have an overall positive impact on the sentiment classification. Experiments ::: Main Results and Analysis ::: Impact of Model Depth We have demonstrated the effectiveness of the AGDT. Here, we investigate the impact of the model depth of AGDT, varying the depth from 1 to 6. Table TABREF39 shows the change of accuracy on the test sets as the depth increases. We find that the best results are obtained when the depth is equal to 4 in most cases, and further depth does not provide considerable performance improvement.
Experiments ::: Main Results and Analysis ::: Effectiveness of Aspect-reconstruction Approach Here, we investigate how well the AGDT can reconstruct the aspect information. For the aspect-term reconstruction, we count the construction is correct when all words of the term are reconstructed. Table TABREF40 shows all results on four test datasets, which shows the effectiveness of aspect-reconstruction approach again. Experiments ::: Main Results and Analysis ::: Impact of Loss Weight @!START@$\lambda $@!END@ We randomly sample a temporary development set from the “HDS" part of the training set to choose the lambda for each dataset. And we investigate the impact of $\lambda $ for aspect-oriented objectives. Specifically, $\lambda $ is increased from 0.1 to 1.0. Figure FIGREF33 illustrates all results on four “HDS" datasets, which show that reconstructing the given aspect can enhance aspect-specific sentiment features and thus obtain better performances. Experiments ::: Main Results and Analysis ::: Comparison on Three-Class for the Aspect-Term Sentiment Analysis Task We also conduct a three-class experiment to compare our AGDT with previous models, i.e., IARM, TNet, VAE, PBAN, AOA and MGAN, in Table TABREF41. These previous models are based on an aspect-independent (weakly associative) encoder to generate sentence representations. Results on all domains suggest that our AGDT substantially outperforms most competitive models, except for the TNet on the laptop dataset. The reason may be TNet incorporates additional features (e.g., position features, local ngrams and word-level features) compared to ours (only word-level features). Analysis and Discussion ::: Case Study and Visualization. To give an intuitive understanding of how the proposed A-GRU works from scratch with different aspects, we take a review sentence as an example. As the example “the appetizers are ok, but the service is slow.” shown in Table TABREF2, it has different sentiment labels towards different aspects. The color depth denotes the semantic relatedness level between the given aspect and each word. More depth means stronger relation to the given aspect. Figure FIGREF43 shows that the A-GRU can effectively guide encoding the aspect-related features with the given aspect and identify corresponding sentiment. In another case, “overpriced Japanese food with mediocre service.”, there are two extremely strong sentiment words. As the above of Figure FIGREF44 shows, our A-GRU generates almost the same weight to the word “overpriced” and “mediocre”. The bottom of Figure FIGREF44 shows that reconstructing the given aspect can effectively enhance aspect-specific sentiment features and produce correct sentiment predictions. Analysis and Discussion ::: Error Analysis. We further investigate the errors from AGDT, which can be roughly divided into 3 types. 1) The decision boundary among the sentiment polarity is unclear, even the annotators can not sure what sentiment orientation over the given aspect in the sentence. 2) The “conflict/neutral” instances are extremely easily misclassified as “positive” or “negative”, due to the imbalanced label distribution in training corpus. 3) The polarity of complex instances is hard to predict, such as the sentence that express subtle emotions, which are hardly effectively captured, or containing negation words (e.g., never, less and not), which easily affect the sentiment polarity. Related Work ::: Sentiment Analysis. 
There are kinds of sentiment analysis tasks, such as document-level BIBREF34, sentence-level BIBREF35, BIBREF36, aspect-level BIBREF0, BIBREF37 and multimodal BIBREF38, BIBREF39 sentiment analysis. For the aspect-level sentiment analysis, previous work typically apply attention mechanism BIBREF11 combining with memory network BIBREF40 or gating units to solve this task BIBREF8, BIBREF41, BIBREF42, BIBREF1, BIBREF43, BIBREF44, BIBREF45, BIBREF46, where an aspect-independent encoder is used to generate the sentence representation. In addition, some work leverage the aspect-weakly associative encoder to generate aspect-specific sentence representation BIBREF12, BIBREF13, BIBREF14. All of these methods make insufficient use of the given aspect information. There are also some work which jointly extract the aspect term (and opinion term) and predict its sentiment polarity BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51, BIBREF52, BIBREF53, BIBREF54, BIBREF55. In this paper, we focus on the latter problem and leave aspect extraction BIBREF56 to future work. And some work BIBREF57, BIBREF58, BIBREF59, BIBREF30, BIBREF60, BIBREF51 employ the well-known BERT BIBREF20 or document-level corpora to enhance ABSA tasks, which will be considered in our future work to further improve the performance. Related Work ::: Deep Transition. Deep transition has been proved its superiority in language modeling BIBREF17 and machine translation BIBREF18, BIBREF19. We follow the deep transition architecture in BIBREF19 and extend it by incorporating a novel A-GRU for ABSA tasks. Conclusions In this paper, we propose a novel aspect-guided encoder (AGDT) for ABSA tasks, based on a deep transition architecture. Our AGDT can guide the sentence encoding from scratch for the aspect-specific feature selection and extraction. Furthermore, we design an aspect-reconstruction approach to enforce AGDT to reconstruct the given aspect with the generated sentence representation. Empirical studies on four datasets suggest that the AGDT outperforms existing state-of-the-art models substantially on both aspect-category sentiment analysis task and aspect-term sentiment analysis task of ABSA without additional features. Acknowledgments We sincerely thank the anonymous reviewers for their thorough reviewing and insightful suggestions. Liang, Xu, and Chen are supported by the National Natural Science Foundation of China (Contract 61370130, 61976015, 61473294 and 61876198), and the Beijing Municipal Natural Science Foundation (Contract 4172047), and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010).
AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two datasets; it surpasses GCAE by a very large margin (+11.4% and +4.9%, respectively) on both datasets; and in the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain.
f9de9ddea0c70630b360167354004ab8cbfff041
f9de9ddea0c70630b360167354004ab8cbfff041_0
Q: Is the model evaluated against other Aspect-Based models? Text: Introduction Aspect based sentiment analysis (ABSA) is a fine-grained task in sentiment analysis, which can provide important sentiment information for other natural language processing (NLP) tasks. There are two different subtasks in ABSA, namely, aspect-category sentiment analysis and aspect-term sentiment analysis BIBREF0, BIBREF1. Aspect-category sentiment analysis aims at predicting the sentiment polarity towards the given aspect, which is in predefined several categories and it may not appear in the sentence. For instance, in Table TABREF2, the aspect-category sentiment analysis is going to predict the sentiment polarity towards the aspect “food”, which is not appeared in the sentence. By contrast, the goal of aspect-term sentiment analysis is to predict the sentiment polarity over the aspect term which is a subsequence of the sentence. For instance, the aspect-term sentiment analysis will predict the sentiment polarity towards the aspect term “The appetizers”, which is a subsequence of the sentence. Additionally, the number of categories of the aspect term is more than one thousand in the training corpus. As shown in Table TABREF2, sentiment polarity may be different when different aspects are considered. Thus, the given aspect (term) is crucial to ABSA tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Besides, BIBREF7 show that not all words of a sentence are useful for the sentiment prediction towards a given aspect (term). For instance, when the given aspect is the “service”, the words “appetizers” and “ok” are irrelevant for the sentiment prediction. Therefore, an aspect-independent (weakly associative) encoder may encode such background words (e.g., “appetizers” and “ok”) into the final representation, which may lead to an incorrect prediction. Numerous existing models BIBREF8, BIBREF9, BIBREF10, BIBREF1 typically utilize an aspect-independent encoder to generate the sentence representation, and then apply the attention mechanism BIBREF11 or gating mechanism to conduct feature selection and extraction, while feature selection and extraction may base on noised representations. In addition, some models BIBREF12, BIBREF13, BIBREF14 simply concatenate the aspect embedding with each word embedding of the sentence, and then leverage conventional Long Short-Term Memories (LSTMs) BIBREF15 to generate the sentence representation. However, it is insufficient to exploit the given aspect and conduct potentially complex feature selection and extraction. To address this issue, we investigate a novel architecture to enhance the capability of feature selection and extraction with the guidance of the given aspect from scratch. Based on the deep transition Gated Recurrent Unit (GRU) BIBREF16, BIBREF17, BIBREF18, BIBREF19, an aspect-guided GRU encoder is thus proposed, which utilizes the given aspect to guide the sentence encoding procedure at the very beginning stage. In particular, we specially design an aspect-gate for the deep transition GRU to control the information flow of each token input, with the aim of guiding feature selection and extraction from scratch, i.e. sentence representation generation. Furthermore, we design an aspect-oriented objective to enforce our model to reconstruct the given aspect, with the sentence representation generated by the aspect-guided encoder. We name this Aspect-Guided Deep Transition model as AGDT. 
With all the above contributions, our AGDT can accurately generate an aspect-specific representation for a sentence, and thus conduct more accurate sentiment predictions towards the given aspect. We evaluate the AGDT on multiple datasets of two subtasks in ABSA. Experimental results demonstrate the effectiveness of our proposed approach. And the AGDT significantly surpasses existing models with the same setting and achieves state-of-the-art performance among the models without using additional features (e.g., BERT BIBREF20). Moreover, we also provide empirical and visualization analysis to reveal the advantages of our model. Our contributions can be summarized as follows: We propose an aspect-guided encoder, which utilizes the given aspect to guide the encoding of a sentence from scratch, in order to conduct the aspect-specific feature selection and extraction at the very beginning stage. We propose an aspect-reconstruction approach to further guarantee that the aspect-specific information has been fully embedded into the sentence representation. Our AGDT substantially outperforms previous systems with the same setting, and achieves state-of-the-art results on benchmark datasets compared to those models without leveraging additional features (e.g., BERT). Model Description As shown in Figure FIGREF6, the AGDT model mainly consists of three parts: aspect-guided encoder, aspect-reconstruction and aspect concatenated embedding. The aspect-guided encoder is specially designed to guide the encoding of a sentence from scratch for conducting the aspect-specific feature selection and extraction at the very beginning stage. The aspect-reconstruction aims to guarantee that the aspect-specific information has been fully embedded in the sentence representation for more accurate predictions. The aspect concatenated embedding part is used to concatenate the aspect embedding and the generated sentence representation so as to make the final prediction. Model Description ::: Aspect-Guided Encoder The aspect-guided encoder is the core module of AGDT, which consists of two key components: Aspect-guided GRU and Transition GRU BIBREF16. A-GRU: Aspect-guided GRU (A-GRU) is a specially-designed unit for the ABSA tasks, which is an extension of the L-GRU proposed by BIBREF19. In particular, we design an aspect-gate to select aspect-specific representations through controlling the transformation scale of token embeddings at each time step. At time step $t$, the hidden state $\mathbf {h}_{t}$ is computed as follows: where $\odot $ represents element-wise product; $\mathbf {z}_{t}$ is the update gate BIBREF16; and $\widetilde{\mathbf {h}}_{t}$ is the candidate activation, which is computed as: where $\mathbf {g}_{t}$ denotes the aspect-gate; $\mathbf {x}_{t}$ represents the input word embedding at time step $t$; $\mathbf {r}_{t}$ is the reset gate BIBREF16; $\textbf {H}_1(\mathbf {x}_{t})$ and $\textbf {H}_2(\mathbf {x}_{t})$ are the linear transformation of the input $\mathbf {x}_{t}$, and $\mathbf {l}_{t}$ is the linear transformation gate for $\mathbf {x}_{t}$ BIBREF19. $\mathbf {r}_{t}$, $\mathbf {z}_{t}$, $\mathbf {l}_{t}$, $\mathbf {g}_{t}$, $\textbf {H}_{1}(\mathbf {x}_{t})$ and $\textbf {H}_{2}(\mathbf {x}_{t})$ are computed as: where “$\mathbf {a}$" denotes the embedding of the given aspect, which is the same at each time step. The update gate $\mathbf {z}_t$ and reset gate $\mathbf {r}_t$ are the same as them in the conventional GRU. In Eq. 
(DISPLAY_FORM9) $\sim $ (), the aspect-gate $\mathbf {g}_{t}$ controls both nonlinear and linear transformations of the input $\mathbf {x}_{t}$ under the guidance of the given aspect at each time step. Besides, we also exploit a linear transformation gate $\mathbf {l}_{t}$ to control the linear transformation of the input, according to the current input $\mathbf {x}_t$ and previous hidden state $\mathbf {h}_{t-1}$, which has been proved powerful in the deep transition architecture BIBREF19. As a consequence, A-GRU can control both non-linear transformation and linear transformation for input $\mathbf {x}_{t}$ at each time step, with the guidance of the given aspect, i.e., A-GRU can guide the encoding of aspect-specific features and block the aspect-irrelevant information at the very beginning stage. T-GRU: Transition GRU (T-GRU) BIBREF17 is a crucial component of deep transition block, which is a special case of GRU with only “state” as an input, namely its input embedding is zero embedding. As in Figure FIGREF6, a deep transition block consists of an A-GRU followed by several T-GRUs at each time step. For the current time step $t$, the output of one A-GRU/T-GRU is fed into the next T-GRU as the input. The output of the last T-GRU at time step $t$ is fed into A-GRU at the time step $t+1$. For a T-GRU, each hidden state at both time step $t$ and transition depth $i$ is computed as: where the update gate $\mathbf {z}_{t}^i$ and the reset gate $\mathbf {r}_{t}^i$ are computed as: The AGDT encoder is based on deep transition cells, where each cell is composed of one A-GRU at the bottom, followed by several T-GRUs. Such AGDT model can encode the sentence representation with the guidance of aspect information by utilizing the specially designed architecture. Model Description ::: Aspect-Reconstruction We propose an aspect-reconstruction approach to guarantee the aspect-specific information has been fully embedded in the sentence representation. Particularly, we devise two objectives for two subtasks in ABSA respectively. In terms of aspect-category sentiment analysis datasets, there are only several predefined aspect categories. While in aspect-term sentiment analysis datasets, the number of categories of term is more than one thousand. In a real-life scenario, the number of term is infinite, while the words that make up terms are limited. Thus we design different loss-functions for these two scenarios. For the aspect-category sentiment analysis task, we aim to reconstruct the aspect according to the aspect-specific representation. It is a multi-class problem. We take the softmax cross-entropy as the loss function: where C1 is the number of predefined aspects in the training example; ${y}_{i}^{c}$ is the ground-truth and ${p}_{i}^{c}$ is the estimated probability of a aspect. For the aspect-term sentiment analysis task, we intend to reconstruct the aspect term (may consist of multiple words) according to the aspect-specific representation. It is a multi-label problem and thus the sigmoid cross-entropy is applied: where C2 denotes the number of words that constitute all terms in the training example, ${y}_{i}^{t}$ is the ground-truth and ${p}_{i}^{t}$ represents the predicted value of a word. Our aspect-oriented objective consists of $\mathcal {L}_{c}$ and $\mathcal {L}_{t}$, which guarantee that the aspect-specific information has been fully embedded into the sentence representation. 
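To make the gating scheme above more concrete, the following is a minimal PyTorch-style sketch of an aspect-gated GRU cell in the spirit of the A-GRU described here. The excerpt does not spell out the exact gate parameterisations, so the formulas below follow the standard GRU/L-GRU pattern and are assumptions rather than the authors' released implementation; all class, layer and variable names are illustrative.

```python
import torch
import torch.nn as nn


class AspectGatedGRUCell(nn.Module):
    """Sketch of an A-GRU-style cell: a GRU cell extended with an aspect gate g_t
    and a linear-transformation gate l_t. Gate formulas are assumed, not copied
    from the paper."""

    def __init__(self, input_size, hidden_size, aspect_size):
        super().__init__()
        self.W_r = nn.Linear(input_size + hidden_size, hidden_size)   # reset gate
        self.W_z = nn.Linear(input_size + hidden_size, hidden_size)   # update gate
        self.W_l = nn.Linear(input_size + hidden_size, hidden_size)   # linear-transform gate
        self.W_g = nn.Linear(input_size + aspect_size, hidden_size)   # aspect gate
        self.H1 = nn.Linear(input_size, hidden_size)                  # non-linear path of x_t
        self.H2 = nn.Linear(input_size, hidden_size)                  # linear path of x_t
        self.U = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x_t, h_prev, aspect):
        xh = torch.cat([x_t, h_prev], dim=-1)
        r_t = torch.sigmoid(self.W_r(xh))
        z_t = torch.sigmoid(self.W_z(xh))
        l_t = torch.sigmoid(self.W_l(xh))
        # the aspect gate conditions the transformation of x_t on the given aspect
        g_t = torch.sigmoid(self.W_g(torch.cat([x_t, aspect], dim=-1)))
        # candidate state: the aspect gate scales both the non-linear and the
        # (l_t-gated) linear transformation of the input
        h_tilde = torch.tanh(g_t * self.H1(x_t) + self.U(r_t * h_prev)) + l_t * g_t * self.H2(x_t)
        return (1.0 - z_t) * h_prev + z_t * h_tilde
```

In a deep transition block, the output of such a cell at each time step would then be passed through several T-GRU cells (GRU cells that take no token input) before moving on to the next token.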
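The two aspect-reconstruction objectives just described are combined with the ordinary sentiment cross-entropy through a scalar weight $\lambda $ (detailed in the Training Objective part that follows). A hedged sketch of that combination, with illustrative function and argument names:

```python
import torch.nn.functional as F


def agdt_loss(sent_logits, sent_gold, aspect_logits, aspect_gold, lam=0.4, task="category"):
    """Sketch of the joint objective: sentiment cross-entropy plus a weighted
    aspect-reconstruction term. `lam` plays the role of the lambda weight."""
    sentiment_loss = F.cross_entropy(sent_logits, sent_gold)
    if task == "category":
        # one aspect out of a few predefined categories -> softmax cross-entropy
        recon_loss = F.cross_entropy(aspect_logits, aspect_gold)
    else:
        # aspect terms: multi-label over the words that form terms -> sigmoid cross-entropy
        recon_loss = F.binary_cross_entropy_with_logits(aspect_logits, aspect_gold.float())
    return sentiment_loss + lam * recon_loss
```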
Model Description ::: Training Objective The final loss function is as follows: where the underlined part denotes the conventional loss function; C is the number of sentiment labels; ${y}_{i}$ is the ground-truth and ${p}_{i}$ represents the estimated probability of the sentiment label; $\mathcal {L}$ is the aspect-oriented objective, where Eq. DISPLAY_FORM14 is for the aspect-category sentiment analysis task and Eq. DISPLAY_FORM15 is for the aspect-term sentiment analysis task. And $\lambda $ is the weight of $\mathcal {L}$. As shown in Figure FIGREF6, we employ the aspect reconstruction approach to reconstruct the aspect (term), where “softmax” is for the aspect-category sentiment analysis task and “sigmoid” is for the aspect-term sentiment analysis task. Additionally, we concatenate the aspect embedding on the aspect-guided sentence representation to predict the sentiment polarity. Under that loss function (Eq. DISPLAY_FORM17), the AGDT can produce aspect-specific sentence representations. Experiments ::: Datasets and Metrics ::: Data Preparation. We conduct experiments on two datasets of the aspect-category based task and two datasets of the aspect-term based task. For these four datasets, we name the full dataset as “DS". In each “DS", there are some sentences like the example in Table TABREF2, containing different sentiment labels, each of which associates with an aspect (term). For instance, Table TABREF2 shows the customer's different attitude towards two aspects: “food” (“The appetizers") and “service”. In order to measure whether a model can detect different sentiment polarities in one sentence towards different aspects, we extract a hard dataset from each “DS”, named “HDS”, in which each sentence only has different sentiment labels associated with different aspects. When processing the original sentence $s$ that has multiple aspects ${a}_{1},{a}_{2},...,{a}_{n}$ and corresponding sentiment labels ${l}_{1},{l}_{2},...,{l}_{n}$ ($n$ is the number of aspects or terms in a sentence), the sentence will be expanded into (s, ${a}_{1}$, ${l}_{1}$), (s, ${a}_{2}$, ${l}_{2}$), ..., (s, ${a}_{n}$, ${l}_{n}$) in each dataset BIBREF21, BIBREF22, BIBREF1, i.e, there will be $n$ duplicated sentences associated with different aspects and labels. Experiments ::: Datasets and Metrics ::: Aspect-Category Sentiment Analysis. For comparison, we follow BIBREF1 and use the restaurant reviews dataset of SemEval 2014 (“restaurant-14”) Task 4 BIBREF0 to evaluate our AGDT model. The dataset contains five predefined aspects and four sentiment labels. A large dataset (“restaurant-large”) involves restaurant reviews of three years, i.e., 2014 $\sim $ 2016 BIBREF0. There are eight predefined aspects and three labels in that dataset. When creating the “restaurant-large” dataset, we follow the same procedure as in BIBREF1. Statistics of datasets are shown in Table TABREF19. Experiments ::: Datasets and Metrics ::: Aspect-Term Sentiment Analysis. We use the restaurant and laptop review datasets of SemEval 2014 Task 4 BIBREF0 to evaluate our model. Both datasets contain four sentiment labels. Meanwhile, we also conduct a three-class experiment, in order to compare with some work BIBREF13, BIBREF3, BIBREF7 which removed “conflict” labels. Statistics of both datasets are shown in Table TABREF20. Experiments ::: Datasets and Metrics ::: Metrics. The evaluation metrics are accuracy. All instances are shown in Table TABREF19 and Table TABREF20. Each experiment is repeated five times. 
The mean and the standard deviation are reported. Experiments ::: Implementation Details We use the pre-trained 300d Glove embeddings BIBREF23 to initialize word embeddings, which is fixed in all models. For out-of-vocabulary words, we randomly sample their embeddings by the uniform distribution $U(-0.25, 0.25)$. Following BIBREF8, BIBREF24, BIBREF25, we take the averaged word embedding as the aspect representation for multi-word aspect terms. The transition depth of deep transition model is 4 (see Section SECREF30). The hidden size is set to 300. We set the dropout rate BIBREF26 to 0.5 for input token embeddings and 0.3 for hidden states. All models are optimized using Adam optimizer BIBREF27 with gradient clipping equals to 5 BIBREF28. The initial learning rate is set to 0.01 and the batch size is set to 4096 at the token level. The weight of the reconstruction loss $\lambda $ in Eq. DISPLAY_FORM17 is fine-tuned (see Section SECREF30) and respectively set to 0.4, 0.4, 0.2 and 0.5 for four datasets. Experiments ::: Baselines To comprehensively evaluate our AGDT, we compare the AGDT with several competitive models. ATAE-LSTM. It is an attention-based LSTM model. It appends the given aspect embedding with each word embedding, and then the concatenated embedding is taken as the input of LSTM. The output of LSTM is appended aspect embedding again. Furthermore, attention is applied to extract features for final predictions. CNN. This model focuses on extracting n-gram features to generate sentence representation for the sentiment classification. TD-LSTM. This model uses two LSTMs to capture the left and right context of the term to generate target-dependent representations for the sentiment prediction. IAN. This model employs two LSTMs and interactive attention mechanism to learn representations of the sentence and the aspect, and concatenates them for the sentiment prediction. RAM. This model applies multiple attentions and memory networks to produce the sentence representation. GCAE. It uses CNNs to extract features and then employs two Gated Tanh-Relu units to selectively output the sentiment information flow towards the aspect for predicting sentiment labels. Experiments ::: Main Results and Analysis ::: Aspect-Category Sentiment Analysis Task We present the overall performance of our model and baseline models in Table TABREF27. Results show that our AGDT outperforms all baseline models on both “restaurant-14” and “restaurant-large” datasets. ATAE-LSTM employs an aspect-weakly associative encoder to generate the aspect-specific sentence representation by simply concatenating the aspect, which is insufficient to exploit the given aspect. Although GCAE incorporates the gating mechanism to control the sentiment information flow according to the given aspect, the information flow is generated by an aspect-independent encoder. Compared with GCAE, our AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two dataset, respectively. These results demonstrate that our AGDT can sufficiently exploit the given aspect to generate the aspect-guided sentence representation, and thus conduct accurate sentiment prediction. Our model benefits from the following aspects. First, our AGDT utilizes an aspect-guided encoder, which leverages the given aspect to guide the sentence encoding from scratch and generates the aspect-guided representation. 
Second, the AGDT guarantees that the aspect-specific information has been fully embedded in the sentence representation via reconstructing the given aspect. Third, the given aspect embedding is concatenated on the aspect-guided sentence representation for final predictions. The “HDS”, which is designed to measure whether a model can detect different sentiment polarities in a sentence, consists of replicated sentences with different sentiments towards multiple aspects. Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9% respectively) on both datasets. This indicates that the given aspect information is very pivotal to the accurate sentiment prediction, especially when the sentence has different sentiment labels, which is consistent with existing work BIBREF2, BIBREF3, BIBREF4. Those results demonstrate the effectiveness of our model and suggest that our AGDT has better ability to distinguish the different sentiments of multiple aspects compared to GCAE. Experiments ::: Main Results and Analysis ::: Aspect-Term Sentiment Analysis Task As shown in Table TABREF28, our AGDT consistently outperforms all compared methods on both domains. In this task, TD-LSTM and ATAE-LSTM use a aspect-weakly associative encoder. IAN, RAM and GCAE employ an aspect-independent encoder. In the “DS” part, our AGDT model surpasses all baseline models, which shows that the inclusion of A-GRU (aspect-guided encoder), aspect-reconstruction and aspect concatenated embedding has an overall positive impact on the classification process. In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain, which shows that our AGDT has stronger ability for the multi-sentiment problem against GCAE. These results further demonstrate that our model works well across tasks and datasets. Experiments ::: Main Results and Analysis ::: Ablation Study We conduct ablation experiments to investigate the impacts of each part in AGDT, where the GRU is stacked with 4 layers. Here “AC” represents aspect concatenated embedding , “AG” stands for A-GRU (Eq. (DISPLAY_FORM8) $\sim $ ()) and “AR” denotes the aspect-reconstruction (Eq. (DISPLAY_FORM14) $\sim $ (DISPLAY_FORM17)). From Table TABREF31 and Table TABREF32, we can conclude: Deep Transition (DT) achieves superior performances than GRU, which is consistent with previous work BIBREF18, BIBREF19 (2 vs. 1). Utilizing “AG” to guide encoding aspect-related features from scratch has a significant impact for highly competitive results and particularly in the “HDS” part, which demonstrates that it has the stronger ability to identify different sentiment polarities towards different aspects. (3 vs. 2). Aspect concatenated embedding can promote the accuracy to a degree (4 vs. 3). The aspect-reconstruction approach (“AR”) substantially improves the performance, especially in the “HDS" part (5 vs. 4). the results in 6 show that all modules have an overall positive impact on the sentiment classification. Experiments ::: Main Results and Analysis ::: Impact of Model Depth We have demonstrated the effectiveness of the AGDT. Here, we investigate the impact of model depth of AGDT, varying the depth from 1 to 6. Table TABREF39 shows the change of accuracy on the test sets as depth increases. We find that the best results can be obtained when the depth is equal to 4 at most case, and further depth do not provide considerable performance improvement. 
Experiments ::: Main Results and Analysis ::: Effectiveness of Aspect-reconstruction Approach Here, we investigate how well the AGDT can reconstruct the aspect information. For the aspect-term reconstruction, we count a reconstruction as correct only when all words of the term are reconstructed. Table TABREF40 shows the results on all four test datasets, which again demonstrate the effectiveness of the aspect-reconstruction approach. Experiments ::: Main Results and Analysis ::: Impact of Loss Weight $\lambda $ We randomly sample a temporary development set from the “HDS” part of the training set to choose $\lambda $ for each dataset, and we investigate the impact of $\lambda $ on the aspect-oriented objective. Specifically, $\lambda $ is increased from 0.1 to 1.0. Figure FIGREF33 illustrates the results on the four “HDS” datasets, which show that reconstructing the given aspect can enhance aspect-specific sentiment features and thus yield better performance. Experiments ::: Main Results and Analysis ::: Comparison on Three-Class for the Aspect-Term Sentiment Analysis Task We also conduct a three-class experiment to compare our AGDT with previous models, i.e., IARM, TNet, VAE, PBAN, AOA and MGAN, in Table TABREF41. These previous models are based on an aspect-independent (weakly associative) encoder to generate sentence representations. Results on all domains suggest that our AGDT substantially outperforms most competitive models, except for TNet on the laptop dataset. The reason may be that TNet incorporates additional features (e.g., position features, local ngrams and word-level features) compared to ours (only word-level features). Analysis and Discussion ::: Case Study and Visualization. To give an intuitive understanding of how the proposed A-GRU works from scratch with different aspects, we take a review sentence as an example. As shown in Table TABREF2, the example “the appetizers are ok, but the service is slow.” has different sentiment labels towards different aspects. The color depth denotes the level of semantic relatedness between the given aspect and each word; a deeper color means a stronger relation to the given aspect. Figure FIGREF43 shows that the A-GRU can effectively guide the encoding of aspect-related features under the given aspect and identify the corresponding sentiment. In another case, “overpriced Japanese food with mediocre service.”, there are two extremely strong sentiment words. As the top of Figure FIGREF44 shows, our A-GRU assigns almost the same weight to the words “overpriced” and “mediocre”. The bottom of Figure FIGREF44 shows that reconstructing the given aspect can effectively enhance aspect-specific sentiment features and produce correct sentiment predictions. Analysis and Discussion ::: Error Analysis. We further investigate the errors from the AGDT, which can be roughly divided into three types. 1) The decision boundary among sentiment polarities is unclear; even the annotators cannot be sure of the sentiment orientation towards the given aspect in the sentence. 2) The “conflict/neutral” instances are very easily misclassified as “positive” or “negative”, due to the imbalanced label distribution in the training corpus. 3) The polarity of complex instances is hard to predict, such as sentences that express subtle emotions, which are hard to capture effectively, or sentences containing negation words (e.g., never, less and not), which easily flip the sentiment polarity. Related Work ::: Sentiment Analysis.
There are various kinds of sentiment analysis tasks, such as document-level BIBREF34, sentence-level BIBREF35, BIBREF36, aspect-level BIBREF0, BIBREF37 and multimodal BIBREF38, BIBREF39 sentiment analysis. For aspect-level sentiment analysis, previous work typically applies an attention mechanism BIBREF11 combined with memory networks BIBREF40 or gating units to solve this task BIBREF8, BIBREF41, BIBREF42, BIBREF1, BIBREF43, BIBREF44, BIBREF45, BIBREF46, where an aspect-independent encoder is used to generate the sentence representation. In addition, some work leverages an aspect-weakly associative encoder to generate aspect-specific sentence representations BIBREF12, BIBREF13, BIBREF14. All of these methods make insufficient use of the given aspect information. There is also work that jointly extracts the aspect term (and opinion term) and predicts its sentiment polarity BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51, BIBREF52, BIBREF53, BIBREF54, BIBREF55. In this paper, we focus on the latter problem and leave aspect extraction BIBREF56 to future work. Some work BIBREF57, BIBREF58, BIBREF59, BIBREF30, BIBREF60, BIBREF51 employs the well-known BERT BIBREF20 or document-level corpora to enhance ABSA tasks, which we will consider in future work to further improve performance. Related Work ::: Deep Transition. Deep transition has proved its superiority in language modeling BIBREF17 and machine translation BIBREF18, BIBREF19. We follow the deep transition architecture in BIBREF19 and extend it by incorporating a novel A-GRU for ABSA tasks. Conclusions In this paper, we propose a novel aspect-guided encoder (AGDT) for ABSA tasks, based on a deep transition architecture. Our AGDT guides the sentence encoding from scratch for aspect-specific feature selection and extraction. Furthermore, we design an aspect-reconstruction approach to enforce that the AGDT reconstructs the given aspect from the generated sentence representation. Empirical studies on four datasets suggest that the AGDT substantially outperforms existing state-of-the-art models on both the aspect-category and aspect-term sentiment analysis tasks of ABSA without additional features. Acknowledgments We sincerely thank the anonymous reviewers for their thorough reviewing and insightful suggestions. Liang, Xu, and Chen are supported by the National Natural Science Foundation of China (Contract 61370130, 61976015, 61473294 and 61876198), the Beijing Municipal Natural Science Foundation (Contract 4172047), and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010).
Yes
fc8bc6a3c837a9d1c869b7ee90cf4e3c39bcd102
fc8bc6a3c837a9d1c869b7ee90cf4e3c39bcd102_0
Q: Is the baseline a non-heirarchical model like BERT? Text: Introduction Automatic document summarization is the task of rewriting a document into its shorter form while still retaining its important content. Over the years, many paradigms for document summarization have been explored (see Nenkova:McKeown:2011 for an overview). The most popular two among them are extractive approaches and abstractive approaches. As the name implies, extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases which are not in the original document. Extractive summarization is usually modeled as a sentence ranking problem with length constraints (e.g., max number of words or sentences). Top ranked sentences (under constraints) are selected as summaries. Early attempts mostly leverage manually engineered features BIBREF1 . Based on these sparse features, sentence are selected using a classifier or a regression model. Later, the feature engineering part in this paradigm is replaced with neural networks. cheng:2016:acl propose a hierarchical long short-term memory network (LSTM; BIBREF2 ) to encode a document and then use another LSTM to predict binary labels for each sentence in the document. This architecture is widely adopted recently BIBREF3 , BIBREF4 , BIBREF5 . Our model also employs a hierarchical document encoder, but we adopt a hierarchical transformer BIBREF6 rather a hierarchical LSTM. Because recent studies BIBREF6 , BIBREF0 show the transformer model performs better than LSTM in many tasks. Abstractive models do not attract much attention until recently. They are mostly based on sequence to sequence (seq2seq) models BIBREF7 , where a document is viewed a sequence and its summary is viewed as another sequence. Although seq2seq based summarizers can be equipped with copy mechanism BIBREF8 , BIBREF9 , coverage model BIBREF9 and reinforcement learning BIBREF10 , there is still no guarantee that the generated summaries are grammatical and convey the same meaning as the original document does. It seems that extractive models are more reliable than their abstractive counterparts. However, extractive models require sentence level labels, which are usually not included in most summarization datasets (most datasets only contain document-summary pairs). Sentence labels are usually obtained by rule-based methods (e.g., maximizing the ROUGE score between a set of sentences and reference summaries) and may not be accurate. Extractive models proposed recently BIBREF11 , BIBREF3 employ hierarchical document encoders and even have neural decoders, which are complex. Training such complex neural models with inaccurate binary labels is challenging. We observed in our initial experiments on one of our dataset that our extractive model (see Section "Extractive Summarization" for details) overfits to the training set quickly after the second epoch, which indicates the training set may not be fully utilized. Inspired by the recent pre-training work in natural language processing BIBREF12 , BIBREF13 , BIBREF0 , our solution to this problem is to first pre-train the “complex”' part (i.e., the hierarchical encoder) of the extractive model on unlabeled data and then we learn to classify sentences with our model initialized from the pre-trained encoder. In this paper, we propose Hibert, which stands for HIerachical Bidirectional Encoder Representations from Transformers. 
We design an unsupervised method to pre-train Hibert for document modeling. We apply the pre-trained Hibert to the task of document summarization and achieve state-of-the-art performance on both the CNN/Dailymail and New York Times dataset. Related Work In this section, we introduce work on extractive summarization, abstractive summarization and pre-trained natural language processing models. For a more comprehensive review of summarization, we refer the interested readers to Nenkova:McKeown:2011 and Mani:01. Model In this section, we present our model Hibert. We first introduce how documents are represented in Hibert. We then describe our method to pre-train Hibert and finally move on to the application of Hibert to summarization. Document Representation Let $\mathcal {D} = (S_1, S_2, \dots , S_{| \mathcal {D} |})$ denote a document, where $S_i = (w_1^i, w_2^i, \dots , w_{|S_i|}^i)$ is a sentence in $\mathcal {D}$ and $w_j^i$ a word in $S_i$ . Note that following common practice in natural language processing literatures, $w_{|S_i|}^i$ is an artificial EOS (End Of Sentence) token. To obtain the representation of $\mathcal {D}$ , we use two encoders: a sentence encoder to transform each sentence in $\mathcal {D}$ to a vector and a document encoder to learn sentence representations given their surrounding sentences as context. Both the sentence encoder and document encoder are based on the Transformer encoder described in vaswani:2017:nips. As shown in Figure 1 , they are nested in a hierarchical fashion. A transformer encoder usually has multiple layers and each layer is composed of a multi-head self attentive sub-layer followed by a feed-forward sub-layer with residual connections BIBREF30 and layer normalizations BIBREF31 . For more details of the Transformer encoder, we refer the interested readers to vaswani:2017:nips. To learn the representation of $S_i$ , $S_i= (w_1^i, w_2^i, \dots , w_{|S_i|}^i)$ is first mapped into continuous space $$\begin{split} \mathbf {E}_i = (\mathbf {e}_1^i, \mathbf {e}_2^i, \dots , \mathbf {e}_{|S_i|}^i) \\ \quad \quad \text{where} \quad \mathbf {e}_j^i = e(w_j^i) + \mathbf {p}_j \end{split}$$ (Eq. 6) where $e(w_j^i)$ and $\mathbf {p}_j$ are the word and positional embeddings of $w_j^i$ , respectively. The word embedding matrix is randomly initialized and we adopt the sine-cosine positional embedding BIBREF6 . Then the sentence encoder (a Transformer) transforms $\mathbf {E}_i$ into a list of hidden representations $(\mathbf {h}_1^i, \mathbf {h}_2^i, \dots , \mathbf {h}_{|S_i|}^i)$ . We take the last hidden representation $\mathbf {h}_{|S_i|}^i$ (i.e., the representation at the EOS token) as the representation of sentence $S_i$ . Similar to the representation of each word in $S_i$ , we also take the sentence position into account. The final representation of $S_i$ is $$\hat{\mathbf {h}}_i = \mathbf {h}_{|S_i|}^i + \mathbf {p}_i$$ (Eq. 8) Note that words and sentences share the same positional embedding matrix. In analogy to the sentence encoder, as shown in Figure 1 , the document encoder is yet another Transformer but applies on the sentence level. After running the Transformer on a sequence of sentence representations $( \hat{\mathbf {h}}_1, \hat{\mathbf {h}}_2, \dots , \hat{\mathbf {h}}_{|\mathcal {D}|} )$ , we obtain the context sensitive sentence representations $( \mathbf {d}_1, \mathbf {d}_2, \dots , \mathbf {d}_{|\mathcal {D}|} )$ . Now we have finished the encoding of a document with a hierarchical bidirectional transformer encoder Hibert. 
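A compact PyTorch sketch of this two-level encoding is given below. It is a simplification rather than the paper's exact setup: it uses a learned positional embedding table (shared by words and sentences, as in the text) instead of sine-cosine embeddings, processes one document at a time, ignores padding masks, and assumes the EOS token sits at the last position of every padded sentence. Names and sizes are illustrative.

```python
import torch
import torch.nn as nn


class HierarchicalEncoder(nn.Module):
    """Sketch of HIBERT-style encoding: a sentence-level Transformer followed by
    a document-level Transformer over the per-sentence vectors."""

    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6, max_pos=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_pos, d_model)  # shared by words and sentences
        sent_layer = nn.TransformerEncoderLayer(d_model, nhead, 4 * d_model, batch_first=True)
        doc_layer = nn.TransformerEncoderLayer(d_model, nhead, 4 * d_model, batch_first=True)
        self.sent_enc = nn.TransformerEncoder(sent_layer, num_layers)
        self.doc_enc = nn.TransformerEncoder(doc_layer, num_layers)

    def forward(self, doc):  # doc: (num_sents, max_words) token ids, EOS at last position
        n_sent, n_word = doc.shape
        word_pos = torch.arange(n_word, device=doc.device)
        h = self.sent_enc(self.emb(doc) + self.pos(word_pos))     # (S, W, d)
        sent_vecs = h[:, -1, :]                                   # representation at EOS
        sent_pos = torch.arange(n_sent, device=doc.device)
        sent_vecs = sent_vecs + self.pos(sent_pos)                # add sentence position
        return self.doc_enc(sent_vecs.unsqueeze(0)).squeeze(0)    # (S, d), context-sensitive
```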
Note that in previous work, document representation are also learned with hierarchical models, but each hierarchy is a Recurrent Neural Network BIBREF3 , BIBREF21 or Convolutional Neural Network BIBREF11 . We choose the Transformer because it outperforms CNN and RNN in machine translation BIBREF6 , semantic role labeling BIBREF32 and other NLP tasks BIBREF0 . In the next section we will introduce how we train Hibert with an unsupervised training objective. Pre-training Most recent encoding neural models used in NLP (e.g., RNNs, CNNs or Transformers) can be pre-trained by predicting a word in a sentence (or a text span) using other words within the same sentence (or span). For example, ELMo BIBREF12 and OpenAI-GPT BIBREF13 predict a word using all words on its left (or right); while word2vec BIBREF33 predicts one word with its surrounding words in a fixed window and BERT BIBREF0 predicts (masked) missing words in a sentence given all the other words. All the models above learn the representation of a sentence, where its basic units are words. Hibert aims to learn the representation of a document, where its basic units are sentences. Therefore, a natural way of pre-training a document level model (e.g., Hibert) is to predict a sentence (or sentences) instead of a word (or words). We could predict a sentence in a document with all the sentences on its left (or right) as in a (document level) language model. However, in summarization, context on both directions are available. We therefore opt to predict a sentence using all sentences on both its left and right. Specifically, suppose $\mathcal {D} = (S_1, S_2, \dots , S_{| \mathcal {D} |})$ is a document, where $S_i = (w_1^i, w_2^i, \dots , w_{|S_i|}^i)$ is a sentence in it. We randomly select 15% of the sentences in $\mathcal {D}$ and mask them. Then, we predict these masked sentences. The prediction task here is similar with the Cloze task BIBREF34 , BIBREF0 , but the missing part is a sentence. However, during test time the input document is not masked, to make our model can adapt to documents without masks, we do not always mask the selected sentences. Once a sentence is selected (as one of the 15% selected masked sentences), we transform it with one of three methods below. We will use an example to demonstrate the transformation. For instance, we have the following document and the second sentence is selected: William Shakespeare is a poet . He died in 1616 . He is regarded as the greatest writer . In 80% of the cases, we mask the selected sentence (i.e., we replace each word in the sentence with a mask token [MASK]). The document above becomes William Shakespeare is a poet . [MASK] [MASK] [MASK] [MASK] [MASK] He is regarded as the greatest writer . (where “He died in 1616 . ” is masked). In 10% of the cases, we keep the selected sentence as it is. This strategy is to simulate the input document during test time (with no masked sentences). In the rest 10% cases, we replace the selected sentence with a random sentence. In this case, the document after transformation is William Shakespeare is a poet . Birds can fly . He is regarded as the greatest writer . The second sentence is replaced with “Birds can fly .” This strategy intends to add some noise during training and make the model more robust. After the application of the above procedures to a document $\mathcal {D} = (S_1, S_2, \dots , S_{| \mathcal {D} |})$ , we obtain the masked document $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$ . 
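A small sketch of this sentence-masking step is shown below. It selects each sentence independently with probability 0.15 rather than exactly 15%, and for simplicity draws the "random replacement" sentence from the same document; helper names are illustrative.

```python
import random

MASK = "[MASK]"


def mask_document(sentences, select_prob=0.15, rng=random):
    """Sketch of the 80/10/10 sentence-masking scheme used for pre-training."""
    masked, targets = [], {}
    for i, sent in enumerate(sentences):
        if rng.random() < select_prob:
            targets[i] = sent                         # sentence to be predicted
            roll = rng.random()
            if roll < 0.8:
                masked.append([MASK] * len(sent))     # replace every word with [MASK]
            elif roll < 0.9:
                masked.append(sent)                   # keep as-is (mimics test time)
            else:
                masked.append(rng.choice(sentences))  # random sentence as noise
        else:
            masked.append(sent)
    return masked, targets
```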
Let $\mathcal {K} $ denote the set of indicies of selected sentences in $\mathcal {D}$ . Now we are ready to predict the masked sentences $\mathcal {M} = \lbrace S_k | k \in \mathcal {K} \rbrace $ using $\widetilde{ \mathcal {D} }$ . We first apply the hierarchical encoder Hibert in Section "Conclusions" to $\widetilde{ \mathcal {D} }$ and obtain its context sensitive sentence representations $( \tilde{ \mathbf {d}_1 }, \tilde{ \mathbf {d}_2 }, \dots , \tilde{ \mathbf {d}_{| \mathcal {D} |} } )$ . We will demonstrate how we predict the masked sentence $S_k = (w_0^k, w_1^k, w_2^k, \dots , w_{|S_k|}^k)$ one word per step ( $w_0^k$ is an artificially added BOS token). At the $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$0 th step, we predict $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$1 given $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$2 and $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$3 . $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$4 already encodes the information of $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$5 with a focus around its $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$6 th sentence $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$7 . As shown in Figure 1 , we employ a Transformer decoder BIBREF6 to predict $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$8 with $\widetilde{ \mathcal {D} }= (\tilde{S_1}, \tilde{S_2}, \dots , \tilde{S_{| \mathcal {D} |}})$9 as its additional input. The transformer decoder we used here is slightly different from the original one. The original decoder employs two multi-head attention layers to include both the context in encoder and decoder, while we only need one to learn the decoder context, since the context in encoder is a vector (i.e., $\mathcal {K} $0 ). Specifically, after applying the word and positional embeddings to ( $\mathcal {K} $1 ), we obtain $\mathcal {K} $2 (also see Equation 6 ). Then we apply multi-head attention sub-layer to $\mathcal {K} $3 : $$\begin{split} \tilde{\mathbf {h}_{j-1}} &= \text{MultiHead}(\mathbf {q}_{j-1}, \mathbf {K}_{j-1}, \mathbf {V}_{j-1}) \\ \mathbf {q}_{j-1} &= \mathbf {W}^Q \: \tilde{\mathbf {e}_{j-1}^k} \\ \mathbf {K}_{j-1} &= \mathbf {W}^K \: \widetilde{ \mathbf {E} }^k_{1:j-1} \\ \mathbf {K}_{j-1} &= \mathbf {W}^V \: \widetilde{ \mathbf {E} }^k_{1:j-1} \end{split}$$ (Eq. 13) where $\mathbf {q}_{j-1}$ , $\mathbf {K}_{j-1}$ , $\mathbf {V}_{j-1}$ are the input query, key and value matrices of the multi-head attention function BIBREF6 $\text{MultiHead}(\cdot , \cdot , \cdot )$ , respectively. $\mathbf {W}^Q \in \mathbb {R}^{d \times d}$ , $\mathbf {W}^K \in \mathbb {R}^{d \times d}$ and $\mathbf {W}^V \in \mathbb {R}^{d \times d}$ are weight matrices. Then we include the information of $\widetilde{ \mathcal {D} }$ by addition: $$\tilde{\mathbf {x}_{j-1}} = \tilde{\mathbf {h}_{j-1}} + \tilde{ \mathbf {d}_k }$$ (Eq. 
14) We also follow a feedforward sub-layer (one hidden layer with ReLU BIBREF35 activation function) after $\tilde{\mathbf {x}_{j-1}}$ as in vaswani:2017:nips: $$\tilde{\mathbf {g}_{j-1}} = \mathbf {W}^{ff}_2 \max (0, \mathbf {W}^{ff}_1 \tilde{\mathbf {x}_{j-1}} + \mathbf {b}_1) + \mathbf {b}_2$$ (Eq. 15) Note that the transformer decoder can have multiple layers by applying Equation ( 13 ) to ( 15 ) multiple times and we only show the computation of one layer for simplicity. The probability of $w_j^k$ given $w_0^k,\dots ,w_{j-1}^k$ and $\widetilde{ \mathcal {D} }$ is: $$p( w_j^k | w_{0:j-1}^k, \widetilde{ \mathcal {D} } ) = \text{softmax}( \mathbf {W}^O \: \tilde{\mathbf {g}_{j-1}} )$$ (Eq. 16) Finally the probability of all masked sentences $ \mathcal {M} $ given $\widetilde{ \mathcal {D} }$ is $$p(\mathcal {M} | \widetilde{ \mathcal {D} }) = \prod _{k \in \mathcal {K}} \prod _{j=1}^{|S_k|} p(w_j^k | w_{0:j-1}^k, \widetilde{ \mathcal {D} })$$ (Eq. 17) The model above can be trained by minimizing the negative log-likelihood of all masked sentences given their paired documents. We can in theory have unlimited amount of training data for Hibert, since they can be generated automatically from (unlabeled) documents. Therefore, we can first train Hibert on large amount of data and then apply it to downstream tasks. In the next section, we will introduce its application to document summarization. Extractive Summarization Extractive summarization selects the most important sentences in a document as its summary. In this section, summarization is modeled as a sequence labeling problem. Specifically, a document is viewed as a sequence of sentences and a summarization model is expected to assign a True or False label for each sentence, where True means this sentence should be included in the summary. In the following, we will introduce the details of our summarization model based Hibert. Let $\mathcal {D} = (S_1, S_2, \dots , S_{| \mathcal {D} |})$ denote a document and $Y = (y_1, y_2, \dots , y_{| \mathcal {D} |})$ its sentence labels (methods for obtaining these labels are in Section "Datasets" ). As shown in Figure 2 , we first apply the hierarchical bidirectional transformer encoder Hibert to $\mathcal {D}$ and yields the context dependent representations for all sentences $( \mathbf {d}_1, \mathbf {d}_2, \dots , \mathbf {d}_{|\mathcal {D}|} )$ . The probability of the label of $S_i$ can be estimated using an additional linear projection and a softmax: $$p( y_i | \mathcal {D} ) = \text{softmax}(\mathbf {W}^S \: \mathbf {d}_i)$$ (Eq. 20) where $\mathbf {W}^S \in \mathbb {R}^{2 \times d}$ . The summarization model can be trained by minimizing the negative log-likelihood of all sentence labels given their paired documents. Experiments In this section we assess the performance of our model on the document summarization task. We first introduce the dataset we used for pre-training and the summarization task and give implementation details of our model. We also compare our model against multiple previous models. Datasets We conducted our summarization experiments on the non-anonymous version CNN/Dailymail (CNNDM) dataset BIBREF36 , BIBREF9 , and the New York Times dataset BIBREF37 , BIBREF38 . For the CNNDM dataset, we preprocessed the dataset using the scripts from the authors of see:2017:acl. The resulting dataset contains 287,226 documents with summaries for training, 13,368 for validation and 11,490 for test. 
Following BIBREF38 , BIBREF37 , we created the NYT50 dataset by removing the documents whose summaries are shorter than 50 words from New York Times dataset. We used the same training/validation/test splits as in xu:2019:arxiv, which contain 137,778 documents for training, 17,222 for validation and 17,223 for test. To create sentence level labels for extractive summarization, we used a strategy similar to nallapati:2017:aaai. We label the subset of sentences in a document that maximizes Rouge BIBREF39 (against the human summary) as True and all other sentences as False. To unsupervisedly pre-train our document model Hibert (see Section "Pre-training" for details), we created the GIGA-CM dataset (totally 6,626,842 documents and 2,854 million words), which includes 6,339,616 documents sampled from the English Gigaword dataset and the training split of the CNNDM dataset. We used the validation set of CNNDM as the validation set of GIGA-CM as well. As in see:2017:acl, documents and summaries in CNNDM, NYT50 and GIGA-CM are all segmented and tokenized using Stanford CoreNLP toolkit BIBREF40 . To reduce the vocabulary size, we applied byte pair encoding (BPE; BIBREF41 ) to all of our datasets. To limit the memory consumption during training, we limit the length of each sentence to be 50 words (51th word and onwards are removed) and split documents with more than 30 sentences into smaller documents with each containing at most 30 sentences. Implementation Details Our model is trained in three stages, which includes two pre-training stages and one finetuning stage. The first stage is the open-domain pre-training and in this stage we train Hibert with the pre-training objective (Section "Pre-training" ) on GIGA-CM dataset. In the second stage, we perform the in-domain pre-training on the CNNDM (or NYT50) dataset still with the same pre-training objective. In the final stage, we finetune Hibert in the summarization model (Section "Extractive Summarization" ) to predict extractive sentence labels on CNNDM (or NYT50). The sizes of the sentence and document level Transformers as well as the Transformer decoder in Hibert are the same. Let $L$ denote the number of layers in Transformer, $H$ the hidden size and $A$ the number of attention heads. As in BIBREF6 , BIBREF0 , the hidden size of the feedforward sublayer is $4H$ . We mainly trained two model sizes: $\text{\sc Hibert}_S$ ( $L=6$ , $H=512$ and $A=8$ ) and $\text{\sc Hibert}_M$ ( $L=6$ , $H$0 and $H$1 ). We trained both $H$2 and $H$3 on a single machine with 8 Nvidia Tesla V100 GPUs with a batch size of 256 documents. We optimized our models using Adam with learning rate of 1e-4, $H$4 , $H$5 , L2 norm of 0.01, learning rate warmup 10,000 steps and learning rate decay afterwards using the strategies in vaswani:2017:nips. The dropout rate in all layers are 0.1. In pre-training stages, we trained our models until validation perplexities do not decrease significantly (around 45 epochs on GIGA-CM dataset and 100 to 200 epochs on CNNDM and NYT50). Training $H$6 for one epoch on GIGA-CM dataset takes approximately 20 hours. Our models during fine-tuning stage can be trained on a single GPU. The hyper-parameters are almost identical to these in the pre-training stages except that the learning rate is 5e-5, the batch size is 32, the warmup steps are 4,000 and we train our models for 5 epochs. 
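For reference, the part that is actually fine-tuned in this final stage — the sentence-labelling layer of Eq. (20) — together with the top-$K$ selection used at inference time (described in the next sentence) can be sketched as follows. This is an illustrative simplification, not the released code.

```python
import torch
import torch.nn as nn


class ExtractiveHead(nn.Module):
    """Sketch of the labelling layer: a 2-way projection (keep / skip) applied to
    each context-sensitive sentence vector from the hierarchical encoder."""

    def __init__(self, d_model=512):
        super().__init__()
        self.proj = nn.Linear(d_model, 2)  # plays the role of W^S in Eq. (20)

    def forward(self, sent_reprs):         # (num_sents, d_model)
        return torch.softmax(self.proj(sent_reprs), dim=-1)


def select_summary(sentences, keep_probs, k=3):
    """Rank sentences by the probability of the 'keep' label and take the top-k,
    preserving the original document order (k would be tuned on validation data)."""
    ranked = sorted(range(len(sentences)), key=lambda i: float(keep_probs[i]), reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]
```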
During inference, we rank sentences using $p( y_i | \mathcal {D} ) $ (Equation ( 20 )) and choose the top $K$ sentences as summary, where $K$ is tuned on the validation set. Evaluations We evaluated the quality of summaries from different systems automatically using ROUGE BIBREF39 . We reported the full length F1 based ROUGE-1, ROUGE-2 and ROUGE-L on the CNNDM and NYT50 datasets. We compute ROUGE scores using the ROUGE-1.5.5.pl script. Additionally, we also evaluated the generated summaries by eliciting human judgments. Following BIBREF11 , BIBREF4 , we randomly sampled 20 documents from the CNNDM test set. Participants were presented with a document and a list of summaries produced by different systems. We asked subjects to rank these summaries (ties allowed) by taking informativeness (is the summary capture the important information from the document?) and fluency (is the summary grammatical?) into account. Each document is annotated by three different subjects. Results Our main results on the CNNDM dataset are shown in Table 1 , with abstractive models in the top block and extractive models in the bottom block. Pointer+Coverage BIBREF9 , Abstract-ML+RL BIBREF10 and DCA BIBREF42 are all sequence to sequence learning based models with copy and coverage modeling, reinforcement learning and deep communicating agents extensions. SentRewrite BIBREF26 and InconsisLoss BIBREF25 all try to decompose the word by word summary generation into sentence selection from document and “sentence” level summarization (or compression). Bottom-Up BIBREF27 generates summaries by combines a word prediction model with the decoder attention model. The extractive models are usually based on hierarchical encoders (SummaRuNNer; BIBREF3 and NeuSum; BIBREF11 ). They have been extended with reinforcement learning (Refresh; BIBREF4 and BanditSum; BIBREF20 ), Maximal Marginal Relevance (NeuSum-MMR; BIBREF21 ), latent variable modeling (LatentSum; BIBREF5 ) and syntactic compression (JECS; BIBREF38 ). Lead3 is a baseline which simply selects the first three sentences. Our model $\text{\sc Hibert}_S$ (in-domain), which only use one pre-training stage on the in-domain CNNDM training set, outperforms all of them and differences between them are all significant with a 0.95 confidence interval (estimated with the ROUGE script). Note that pre-training $\text{\sc Hibert}_S$ (in-domain) is very fast and it only takes around 30 minutes for one epoch on the CNNDM training set. Our models with two pre-training stages ( $\text{\sc Hibert}_S$ ) or larger size ( $\text{\sc Hibert}_M$ ) perform even better and $\text{\sc Hibert}_M$ outperforms BERT by 0.5 ROUGE. We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in "Extractive Summarization" ) without pre-training. Note the setting for HeriTransfomer is ( $L=4$ , $H=300$ and $A=4$ ) . We can see that the pre-training (details in Section "Pre-training" ) leads to a +1.25 ROUGE improvement. Another baseline is based on a pre-trained BERT BIBREF0 and finetuned on the CNNDM dataset. We used the $\text{BERT}_{\text{base}}$ model because our 16G RAM V100 GPU cannot fit $\text{BERT}_{\text{large}}$ for the summarization task even with batch size of 1. The positional embedding of BERT supports input length up to 512 words, we therefore split documents with more than 10 sentences into multiple blocks (each block with 10 sentences). 
We feed each block (the BOS and EOS tokens of each sentence are replaced with [CLS] and [SEP] tokens) into BERT and use the representation at [CLS] token to classify each sentence. Our model $\text{\sc Hibert}_S$1 outperforms BERT by 0.4 to 0.5 ROUGE despite with only half the number of model parameters ( $\text{\sc Hibert}_S$2 54.6M v.s. BERT 110M). Results on the NYT50 dataset show the similar trends (see Table 2 ). EXTRACTION is a extractive model based hierarchical LSTM and we use the numbers reported by xu:2019:arxiv. The improvement of $\text{\sc Hibert}_S$3 over the baseline without pre-training (HeriTransformer) becomes 2.0 ROUGE. $\text{\sc Hibert}_S$4 (in-domain), $\text{\sc Hibert}_S$5 (in-domain), $\text{\sc Hibert}_S$6 and $\text{\sc Hibert}_S$7 all outperform BERT significantly according to the ROUGE script. We also conducted human experiment with 20 randomly sampled documents from the CNNDM test set. We compared our model $\text{\sc Hibert}_M$ against Lead3, DCA, Latent, BERT and the human reference (Human). We asked the subjects to rank the outputs of these systems from best to worst. As shown in Table 4 , the output of $\text{\sc Hibert}_M$ is selected as the best in 30% of cases and we obtained lower mean rank than all systems except for Human. We also converted the rank numbers into ratings (rank $i$ to $7-i$ ) and applied student $t$ -test on the ratings. $\text{\sc Hibert}_M$ is significantly different from all systems in comparison ( $p < 0.05$ ), which indicates our model still lags behind Human, but is better than all other systems. As mentioned earlier, our pre-training includes two stages. The first stage is the open-domain pre-training stage on the GIGA-CM dataset and the following stage is the in-domain pre-training on the CNNDM (or NYT50) dataset. As shown in Table 3 , we pretrained $\text{\sc Hibert}_S$ using only open-domain stage (Open-Domain), only in-domain stage (In-Domain) or both stages (Open+In-Domain) and applied it to the CNNDM summarization task. Results on the validation set of CNNDM indicate the two-stage pre-training process is necessary. Conclusions The core part of a neural extractive summarization model is the hierarchical document encoder. We proposed a method to pre-train document level hierarchical bidirectional transformer encoders on unlabeled data. When we only pre-train hierarchical transformers on the training sets of summarization datasets with our proposed objective, application of the pre-trained hierarchical transformers to extractive summarization models already leads to wide improvement of summarization performance. Adding the large open-domain dataset to pre-training leads to even better performance. In the future, we plan to apply models to other tasks that also require hierarchical document encodings (e.g., document question answering). We are also interested in improving the architectures of hierarchical document encoders and designing other objectives to train hierarchical transformers.
There were hierarchical and non-hierarchical baselines; BERT was one of those baselines
58e65741184c81c9e7fe0ca15832df2d496beb6f
58e65741184c81c9e7fe0ca15832df2d496beb6f_0
Q: Do they build a model to recognize discourse relations on their dataset? Text: Introduction Researchers have recognized that performance improvements in natural language processing (NLP) tasks such as summarization BIBREF0, question answering BIBREF1, and machine translation BIBREF2 can come from recognizing discourse-level properties of text. These include properties such as the how new entities are introduced into the text, how entities are subsequently referenced (e.g., coreference chains), and how clauses and sentences relate to one another. Corpora in which such properties have been manually annotated by experts can be used as training data for such tasks, or seed data for creating additional "silver annotated” data. Penn Discourse Treebank (PDTB), a lexically grounded method for annotation, is a shallow approach to discourse structure which can be adapted to different genres. Annotating discourse relations both within and across sentences, it aims to have wide application in the field of natural language processing. PDTB can effectively help extract discourse semantic features, thus serving as a useful substrate for the development and evaluation of neural models in many downstream NLP applications. Few Chinese corpora are both annotated for discourse properties and publicly available. The available annotated texts are primarily newspaper articles. The work described here annotates another type of text – the planned monologues found in TED talks, following the annotation style used in the Penn Discourse TreeBank, but adapted to take account of properties of Chinese described in Section 3. TED talks (TED is short for technology, entertainment, design), as examples of planned monologues delivered to a live audience BIBREF3, are scrupulously translated to various languages. Although TED talks have been annotated for discourse relations in several languages BIBREF4, this is the first attempt to annotate TED talks in Chinese (either translated into Chinese, or presented in Chinese), providing data on features of Chinese spoken discourse. Our annotation by and large follows the annotation scheme in the PDTB-3, adapted to features of Chinese spoken discourse described below. The rest of the paper is organized as follows: in Section 2, we review the related existing discourse annotation work. In Section 3, we briefly introduce PDTB-3 BIBREF5 and our adapted annotation scheme by examples. In Section 4, we elaborate our annotation process and the results of our inteannotator-agreement study. Finally, in Section 5, we display the results of our annotation and preliminarily analyze corpus statistics, which we compare to the relation distribution of the CUHK Discourse TreeBank for Chinese. (CUHK-DTBC)BIBREF6. Related work Following the release of the Penn Discourse Treebank (PDTB-2) in 2008 BIBREF7, several remarkable Chinese discourse corpora have since adapted the PDTB framework BIBREF8, including the Chinese Discourse Treebank BIBREF9, HIT Chinese Discourse Treebank (HIT-CDTB) zhou2014cuhk, and the Discourse Treebank for Chinese (DTBC) BIBREF6. Specifically, Xue proposed the Chinese Discourse Treebank (CDTB) Project BIBREF10. From their annotation work, they discussed the matters such as features of Chinese discourse connectives, definition and scope of arguments, and senses disambiguation, and they argued that determining the argument scope is the most challenging part of the annotation. To further promote their research, zhou2012pdtb presented a PDTB-style discourse corpus for Chinese. 
They also discussed the key characteristics of Chinese text which differs from English, e.g., the parallel connectives, comma-delimited intra-sentential implicit relations etc. Their data set contains 98 documents from the Chinese Treebank BIBREF10. In 2015, Zhou and Xue expanded their corpus to 164 documents, with more than 5000 relations being annotated. huang-chen-2011-chinese constructed a Chinese discourse corpus with 81 articles. They adopted the top-level senses from PDTB sense hierarchy and focused on the annotation of inter-sentential discourse relations. zhang2014chinese analyzed the differences between Chinese and English, and then presented a new Chinese discourse relation hierarchy based on the PDTB system, in which the discourse relations are divided into 6 types: temporal, causal, condition, comparison, expansion and conjunction. And they constructed a Chinese Discourse Relation corpus called HIT-CDTB based on this hierarchy. Then, zhou2014cuhk presented the first open discourse treebank for Chinese, the CUHK Discourse Treebank for Chinese. They adapted the annotation scheme of Penn Discourse Treebank 2 (PDTB-2) to Chinese language and made adjustments to 3 aspects according to the previous study of Chinese linguistics. However, they just reannotated the documents of the Chinese Treebank and did not annotate inter-sentence level discourse relations. It is worth noting that, all these corpora display a similar unbalanced distribution that is likely to be associated with them all being limited to text from the same NEWS genre. In particular, these two senses (Expansion and Conjunction) represent 80 % of the relations annotated in the CDTB. In addition, although annotating spoken TED talks has been done on other several languages before BIBREF4, to our knowledge, there is no recent annotation work for Chinese spoken discourses, or particularly for Chinese Ted talks. However, there is some evidence that noticeable differences in the use of discourse connectives and discourse relations can be found between written and spoken discourses BIBREF11. Here, by using the new PDTB-3 sense hierarchy and annotator, which has not been used for Chinese annotation before, we annotated Chinese Ted talks to help others be aware of the differences between the Chinese discourse structure of written and spoken texts and will make our corpus publicly available to benefit the discourse-level NLP researches for spoken discourses. PDTB and our Annotation Scheme The annotation scheme we adopted in this work is based on the framework of PDTB, incorporating the most recent PDTB (PDTB-3) relational taxonomy and sense hierarchy BIBREF5, shown in Table 1. PDTB follows a lexically grounded approach to the representation of discourse relations BIBREF12. Discourse relations are taken to hold between two abstract object arguments, named Arg1 and Arg2 using syntactic conventions, and are triggered either by explicit connectives or, otherwise, by adjacency between clauses and sentences. As we can see from Table 1, the PDTB-3 sense hierarchy has 4 top-level senses (Expansion, Temporal, Contingency, Contrast) and second- and third-level senses for some cases. With obvious differences ranging from the conventions used in annotation, to differences in senses hierarchy, PDTB-3 gives rigorous attention to achieving as much consistency as possible while annotating discourse relations. Previously, all Chinese annotation work using PDTB style followed the settings of PDTB-2. 
Some researchers tried to adapt it in lines of the Chinese characteristics. For example, zhou2012pdtb annotated the parallel connectives continuously rather than discontinuously due to the greater use of parallel connectives in Chinese and a reduced use of explicit connectives in general. zhou2014cuhk added some additional senses into the hierarchy. However, PDTB-3, as a new and enriched version, not only has paid greater attention to intra-sentential senses, but also has incorporated some of those additional senses. Therefore, we just made several modifications including removing, adding, or disambiguating for the practical use of PDTB-3 into our Chinese annotation. In practice, using the PDTB annotator tool, we annotated an explicit connective, identified its two arguments in which the connective occurs, and then labeled the sense. For implicit relations, when we inferred the type of relation between two arguments, we tried to insert a connective for this relation, and also the inserted connective is not so strictly restricted, extending to expressions that can convey the sense of the arguments. If a connective conveys more than one sense or more than one relation can be inferred, multiple senses would be assigned to the token. Our adaptations towards PDTB-3 will be introduced from the perspectives of arguments, relations and senses as follows. PDTB and our Annotation Scheme ::: Arguments The argument-labelling conventions used in the PDTB-2 had to be modified to deal with the wider variety of discourse relations that needed to be annotated consistently within sentences in the PDTB-3. In particular, in labelling intra-sentential discourse relations, a distinction was made between relations whose arguments were in coordinating syntactic structures and ones whose arguments were in subordinating syntactic structures. For coordinating structures, arguments were labelled by position (Arg1 first, then Arg2), while for subordinating structures, the argument in subordinate position was labelled Arg2, and the other, Arg1, independent of position. For discourse in Chinese, this can introduce an unwanted ambiguity. Example 1 is a typical example for illustrate this phenomenon. In the examples throughout the paper, explicit connectives are underlined, while implicit Discourse Connectives and the lexicalizing expression for Alternative Lexicalizations are shown in parentheses and square brackets respectively. The position of the arguments is indicated by the attached composite labels to the right square brackets, and the relation lables and sense lables can be seen in the parentheses at the end of arguments. When the arguments, relations or senses are ambiguous, there may be no corresponding labels shown in the examples. UTF8gbsn 因为 你 让 我 生气, 所以,我 要让 Because you make me angry, so I want 你 更难过。(Explicit, Cause.Result) you to be sadder. “You made me angry, so I return it double back.” While“because”and“so”are rarely found together as connectives in a sentence in English, it is not uncommon to find them used concurrently as a paired connective in Chinese. Therefore, due to this difference, the annotators tend to have no idea about which clause is subordinate. Therefore, if we regard the first clause as subordinating structure and “因 为”(because)as connective, then the sense would be Contingency.Cause.Reason. By contrast, the sense would be Contingency.Cause.Result, when the second clause is regarded as Arg2. 
To get rid of this kind of ambiguity, we just take the first as Arg1 and the second Arg2 regardless of the fact that the parallel connectives are surbodinating or coordinating. PDTB and our Annotation Scheme ::: Relations There are two new types of relation in PDTB-3: AltlexC and Hypophora. Hypophora is an explicitly marked question-response pairs, first used in annotating the TED- MDB BIBREF4. In Hypophora relations, Arg1 expresses a question and Arg2 offers an answer, with no explicit or implicit connective being annotated (Example 2). Because of the nature of TED talks, many relations in both the TED-MDB and in our Chinese TED talks are examples of “Hypophora”. However, not all discourse relations whose first argument is a question are Hypophora. Example 3, instead of seeking information and giving answer, is just a rhetorical question expressing negation by imposing a dramatic effect. [我到底 要 讲 什么Arg1]? I on earth am going to talk about what ? [最后 我决定 要 讲 教育Arg2]。(Hypophora) Finally, I decided to talk about education . “what am I gonna say? Finally, I decided to talk about education.” 他说 : “ 我 是 三 天 一 小 哭 、 五 天 He said, " I am three days a little cry, five days 一 大 哭 。 " 这样 你 有 比较 健康 吗 ? a lot cry." In this way, you are more healthier? 都 是 悲伤 , 并 不 是 每 一 个 人 ,每 一 次 All are sadness, not everyone, every time 感受 到 悲伤 的 时候 ,都 一定 会 流泪 、 甚至 大哭 。 feel sad 's time, would shed tears、even cry. “He said, "Three times I cry a little, and five times I cry a lot." Is that healthier? Everyone gets sad, but that's not to say that whenever someone feels sad, they necessarily will cry.” In addition, we found a new issue when identifying Hypophora, which is shown in Example 4. In this example, we have a series of questions, rather than a series of assertions or a question-response pair. We attempted to capture the rhetorical links by taking advantage of our current inventory of discourse relations. Here, two implicit relations were annotated for this example, and the senses are Arg2-as-detail and Result+SpeechAct respectively. Therefore, when there are subsequent texts related to a question or a sequence of questions, we would not just annotated them as Hypophora but had to do such analysis as what we did for the examples shown. [情绪 , 它 到底 是 什么Rel1-Arg1 ]? (具体来说) Emotion, it on earth is what? (Specially) [它 是 好还是 不 好Rel1-Arg2,Rel2-Arg1]?(Implicit,Arg2-as-detail) It is good or bad? (所以)[你 会 想要 拥有 它 吗Rel2-Arg2]? (Implicit,Result+SpeechAct) (So) You want to have it ? “What is it exactly? Is it good or bad? Do you want to have them?” Besides, it is widely accepted that the ellipsis of subject or object are frequently seen in Chinese. Then for EntRel, if facing this situation where one of the entities in Arg1 or Arg2 is omitted, we still need to annotate this as EntRel (Example 5). In this following example, we can see in Arg2, the pronoun which means “that”is omitted, but in fact which refers to the phenomenon mentioned in Arg1, so here there is still an EntRel relation between this pair of arguments. [我们会以讽刺的口吻来谈论, 并且会 We in ironic terms talk about, and 加上引号 : “进步”Arg1 ] add quotes: “Progress”. [我想是有原因的, 我们也知道 是 什么 I think there are reasons, we also know are what 原因Arg2]。(EntRel) reasons. 
“We talk about it in ironic terms with little quotes around it:“Progress.”Okay, there are reasons for that, and I think we know what those reasons are.” PDTB and our Annotation Scheme ::: Senses The great improvement in the sense hierarchy in PDTB-3 enables us to capture more senses with additional types and assign the senses more clearly. For example, the senses under the category of Expansion such as level of detail, manner, disjunction and similarity are indispensable to our annotation. Therefore, we nearly adopted the sense hierarchy in PDTB-3, just with few adaptations. On the one hand, we removed the third level sense “Negative condition+SpeechAct”, since it was not used to label anything in the corpus. On the other hand, we added the Level-2 sense “Expansion.Progression”. This type of sense applies when Arg1 and Arg2 are coordinating structure with different emphasis. The first argument is annotated as Arg1 and the second as Arg2. This sense is usually conveyed by such typical connectives as “不 但 (not only)... 而 且 (but also)...”, “甚 至 (even)... 何 况 (let alone)...”,“... 更 (even more)...”(Example 6). [我 去 了 聋人 俱乐部 ,观看 了 聋人 的 I went to deaf clubs, saw the deaf person’s 表演 Arg1]。[我甚至 去 了 田纳西州 纳什维尔的 performances. I even went to the Nashville ’s “ 美国 聋人 小姐 ” 选秀赛Arg2]。(Explicit, Progression.Arg2-as-progr) “the Miss Deaf” America contest. “ I went to deaf clubs. I saw performances of deaf theater and of deaf poetry. I even went to the Miss Deaf America contest in Nashville.” Another issue about sense is the inconsistency when we annotated the implicit relations. zhou2012pdtb did not insert connective for implicit relations, but we did this for further researches with regard to implicit relations. However, we found that in some cases where different connectives can be inserted into the same arguments to express the same relation, the annotators found themselves in a dilemma. We can see that Example 7 and Example 8 respectively insert “so” and “because” into the arguments between which there is a causal relation, but the senses in these two examples would be Cause.Result and Cause.Reason. The scheme we adopted for this is that we only take the connectives that we would insert into account, and the position and sense relations of arguments would depend on the inserted connectives. [“克服 逆境 ” 这一说法 对我 “Overcome the adversity” this phrase for me 根本 不 成立 Arg1],(所以)[别人 让 我 completely not justified, (so) others asked me 就 这一话题 说 几 句 的时候, 我很不自在Arg2]。(Implicit, Cause.Result) to this topic talk about some, I felt uneasy. (因为)[“克服 逆境 ” 这一说法 (Because) “overcome the adversity” this phrase 对 我来说根本 不 成立Arg2], [ 别人 for me completely not justified, (so) others 让 我就 这一话题 说几句的时候,我很 asked me to this topic, talk about some, I felt 不自在Arg1]。(Implicit, Cause.Reason) uneasy. ““overcome the adversity” this phrase never sat right with me, and I always felt uneasy trying to answer people's questions about it.” Annotation Procedure In this section, we describe our annotation process in creating the Chinese TED discourse treebank. To ensure annotation quality, the whole annotation process has three stages: annotators training, annotation, post-annotation. The training process intends to improve the annotators’ annotation ability, while after the formal annotation, the annotated work was carefully checked by the supervisor, and the possible errors and inconsistencies were dealt with through discussions and further study. 
Annotation Procedure ::: Annotator training The annotator team consists of a professor as the supervisor, an experienced annotator and a researcher of PDTB as counselors, two master degree candidates as annotators. Both of the annotators have a certain theoretical foundation of linguistics. To guarantee annotation quality, the annotators were trained through the following steps: firstly, the annotators read the PDTB-3 annotation manual, the PDTB-2 annotation manual and also other related papers carefully; next, the annotators tried to independently annotate same texts, finding out their own uncertainties or problems respectively and discussing these issues together; then, the annotators were asked to create sample annotations on TED talks transcripts for each sense from the top level to the third. They discussed the annotations with the researchers of the team and tried to settle disputes. When sample annotations are created, this part of process is completed; based on the manuals, previous annotation work and also the annotators’ own pre-annotation work, they made a Chinese tutorial on PDTB guidelines, in which major difficulties and perplexities, such as the position and the span of the arguments, the insert of connectives, and the distinction of different categories of relations and senses, are explained clearly in detail by typical samples. This Chinese tutorial is beneficial for those who want to carry out similar Chinese annotation, so we made this useful tutorial available to those who want to carry out similar annotation; finally, to guarantee annotation consistency, the annotators were required to repeat their annotation-discussion process until their annotation results show the Kappa value > 0.8 for each of the indicators for agreement. Annotation Procedure ::: Corpus building At present, our corpus has been released publiclyFOOTREF19. Our corpus consists of two parts with equal number of texts: (1) 8 English TED talks translated into Chinese, just like the talks in the TED-MDB, all of which were originally presented in English and translated into other languages (including German, Lithuanian, Portuguese,Polish, Russian and Turkish) BIBREF4. (2) 8 Chinese TED talks originally presented in Taipei and translated into English. We got the texts by means of extracting Chinese and English subtitles from TED talks videos . Firstly, we just annotated the talks given in English and translated in Chinese. But after considering the possible divergencies between translated texts and the original texts, we did our annotation for the Taipei TED talks, which were delivered in Chinese. The parallel English texts are also being annotated for discourse relations, but they are not ready for carrying out a systematic comparison between them. At the current stage, we annotated 3212 relations for the TED talks transcripts with 55307 words,and the length of each talk (in words) and the number of annotated relations in each talks can be found from Table 2. These TED talks we annotated were prudently selected from dozens of candidate texts. The quality of texts which is principally embodied in content, logic, punctuation and the translation are the major concerns for us. Moreover, when selecting the texts from the Taipei talks, we ruled out those texts which are heavy in dialogues. Some speakers try to interact with the audience, asking the questions, and then commenting on how they have replied. However, what we were annotating was not dialogues. 
In spite of critically picking over the texts, we still spent considerable time preparing them before annotation, such as inserting punctuation and correcting the translation. Moreover, before annotation, we performed word segmentation with the Stanford Segmenter and corrected improper segmentation. While annotating, we assigned the vast majority of the relations a single sense and a small proportion of relations multiple senses. Unlike previous similar Chinese corpora, which primarily or exclusively annotated relations between sentences, we annotated not only discourse relations between sentences but intra-sentential discourse relations as well. To ensure a high-quality corpus, the annotators regularly discussed their difficulties and confusions with the researcher and the experienced annotator throughout the annotation process. After discussion, the annotators reached agreement or retained their differences for a few ambiguous cases. Annotation Procedure ::: Agreement study We measured inter-annotator agreement between the two annotators in three aspects: relations, senses, and arguments. Specifically, we measured the annotators' consistency in annotating the type of a specific relation or sense and the position and scope of arguments. To assess the consistency of the annotations, we used agreement rates, calculated by dividing the number of senses in each category that the annotators annotated consistently by the total number of senses of that kind; to correct for chance agreement and the potential impact of the unbalanced distribution of senses, we also used the Kappa value. The final agreement study was carried out on the first 300 relations in our corpus. We obtained high agreement rates and Kappa values for the discourse relation type and the top-level senses ($\ge 0.9$). We went further than this: to meet our own demand for high quality, we also achieved good results on the second-level and third-level senses, finally reaching an agreement rate of 0.85 and a Kappa value of 0.83 for these two deeper levels of senses. Table 3 also shows that agreement on argument order is almost 1.0 (Kappa = 0.99). This means that the guidelines were sufficiently clear that the annotators rarely had difficulty in deciding the location of Arg1 and Arg2 once the senses were determined. Concerning the scope of arguments, which is seen as the most challenging part of the annotation work BIBREF10, our agreement rate and Kappa value are 0.88 and 0.86 respectively, where two annotations of argument scope count as agreeing only if the annotated scopes are completely the same. Under such a strict requirement, our consistency in this respect is still significantly higher than that of earlier annotation work, because we strictly obeyed the “minimality principle” in the PDTB-3 annotation manual and kept a clear view of what counts as supplementary information. Therefore, the annotators are better at excluding information that does not fall within the scope of the discourse relation. It is useful to determine where the annotators disagreed most with each other. The three senses where most disagreement occurred are shown in Table 4. The disagreements were primarily in labelling implicit relations. The highest level of disagreement occurred between Expansion.Conjunction and Expansion.Detail, accounting for 12.5% of all inconsistent senses. 
It is because, more often than not, the annotators failed to judge whether the two arguments make the same contribution with respect to that situation or both arguments describing the same has different level of details. The second highest level of disagreement is reflected in Conjunction and Asynchronous, accounting for 9.3 %. Besides, Contrast and Concession are two similar senses, which are usually signaled by the same connectives like “但是”, “而”, “不过”, and all these words can be translated into“but”in English. Hence, the annotators sometimes tend to be inconsistent when distinguishing them. Results In regard to discourse relations, there are 3212 relations, of which 1237 are explicit relations (39%) and 1174 are implicit relation (37%) (Figure 1). The remaining 801 relations include Hypophora, AltLex, EntRel, and NoRel. Among these 4 kinds of relations, what is worth mentioning is AltLex(Alternative Lexicalizations ),which only constitutes 3% but is of tremendous significance, for we are able to discover inter- or intra-sentential relations when there is no explicit expressions but AltLex expressions conveying the relations. but AltLex expressions(eg, 这导致了(this cause), 一个例子是(one example is... ), 原 因 是 (the reason is), etc.). Originally in English, AltLex is supposed to contain both an anaphoric or deictic reference to an actual argument and an indication of the type of sense BIBREF13. While for Chinese, the instances of Altlex do not differ significantly from those annotated in English. To prove this, two examples are given as below (Example 9 and Example 10). From our annotation, we realized that Altlex deserves more attention, for which can effectively help to recogonize types of discourse relations automatically. [“国内 许多被截肢者, 无法使用 in this country, many of the amputees, cannot use 他们的假肢Arg1],[这其中的原因是] [他们由于 their prostheses, the reason was their 假肢接受腔 无法 与残肢 适配 而 prosthetic sockets cannot their leg fit well so that 感到疼痛Arg2].(AltLex, Cause.Reason) felt painful.] “Many of the amputees in the country would not use their prostheses. The reason, I would come to find out, was that their prosthetic sockets were painful because they did not fit well.” 三年级的时候考进秀朗小学的游泳班, in third grade, got in the swimming class at Xiu Lang [ 这个班每天的 游泳 elementary school, this class everyday’s swimming 训练量高达 3000 米 Arg1], 我发现 [这样的训练量 volumm reach 3000 meters, I realized the training load 使][我 无法同时兼顾两种乐器 Arg2]。(AltLex, Cause.Result) make me cannot learn the two instruments at the same time “I got in the swimming class at Xiu Lang elementary school when I was in third grade. We had to swim up to 3000 meters every day. I realized the training load was too much for me to learn the two instruments at the same time.” Obviously, there is approximately the same number of explicit and implicit relations in the corpus. This may indicate that explicit connectives and relations are more likely to present in Chinese spoken texts rather than Chinese written texts. The figures shown by Table 4 illustrate the distributions of class level senses. We make a comparison for the class level senses between our corpus and the CUHK Discourse Treebank for Chinese (CUHK-DTBC). CUHK Discourse Treebank for Chinese is a corpus annotating news reports. Therefore, our comparison with it may shed light on the differences of discourse structures in different genres. According to the statistics of CUHK-DTBC for 400 documents and our corpus, while more than half of the senses is Expansion in CUHK-DTBC, it just represents 37.5% in our corpus. 
In addition, it is worth noting that the ranking of the class-level senses is the same in both corpora, although the other three senses all account for larger proportions in our corpus than in CUHK-DTBC. The most frequent second-level senses in our corpus can be seen in Table 5. We find that 20% of the senses are Cause (including Reason and Result), followed by Conjunction and Concession, each with 13%. The top 10 most frequent senses take up 86% of all senses annotated, which shows that the less frequent senses are nonetheless attested in our corpus. Therefore, these findings show that, compared with other corpora of Chinese shallow relations in which the majority of the documents are news reports, our corpus evidently shows a more balanced and varied distribution of both relations and senses, which to a large extent reflects the differences in discourse relations between Chinese written texts and Chinese spoken texts. Conclusions and Future Work In this paper, we describe our scheme and process in annotating shallow discourse relations in PDTB style. In view of the differences between English and Chinese, we made adaptations to the PDTB-3 scheme, such as removing AltLexC and adding Progression to our sense hierarchy. To ensure annotation quality, we formulated detailed annotation criteria and quality assurance strategies. After serious training, we annotated 3212 discourse relations, and we achieved a satisfactory labelling consistency, with a Kappa value greater than 0.85 for most of the indicators. Finally, we present our annotation results, in which the distribution of discourse relations and senses differs from that in other corpora that annotate news reports or newspaper texts. Our corpus contains more Contingency, Temporal and Comparison relations instead of being dominated by Expansion. In future work, we plan to 1) expand our corpus by annotating more TED talks or other spoken texts; 2) build a richer and more diverse connective set and AltLex expression set; 3) use the corpus to develop a shallow discourse parser for Chinese spoken discourses; 4) explore automatic approaches for implicit discourse relation recognition. Acknowledgement The present research was supported by the National Natural Science Foundation of China (Grant No. 61861130364) and the Royal Society (London) (NAF\R1\180122). We would like to thank the anonymous reviewers for their insightful comments.
No
269b05b74d5215b09c16e95a91ae50caedd9e2aa
269b05b74d5215b09c16e95a91ae50caedd9e2aa_0
Q: Which inter-annotator metric do they use?
agreement rates, Kappa value
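The two indicators named in this answer are a raw agreement rate and Cohen's Kappa, which corrects the raw rate for chance agreement. The sketch below is a generic, minimal illustration of how both can be computed for two annotators labelling the same relation tokens; the label sequences are invented toy data, and the authors' exact per-category computation may differ.

```python
from collections import Counter

def agreement_and_kappa(labels_a, labels_b):
    """Observed agreement rate and Cohen's Kappa for two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Chance agreement expected from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in freq_a.keys() | freq_b.keys())

    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Toy example: five relation tokens labelled by two hypothetical annotators.
annotator_1 = ["Cause", "Conjunction", "Cause", "Concession", "Cause"]
annotator_2 = ["Cause", "Conjunction", "Cause.Result", "Concession", "Cause"]
print(agreement_and_kappa(annotator_1, annotator_2))  # -> (0.8, ~0.71)
```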
0d7f514f04150468b2d1de9174c12c28e52c5511
0d7f514f04150468b2d1de9174c12c28e52c5511_0
Q: How high is the inter-annotator agreement?
agreement of 0.85 and Kappa value of 0.83
4d223225dbf84a80e2235448a4d7ba67bfb12490
4d223225dbf84a80e2235448a4d7ba67bfb12490_0
Q: How are resources adapted to properties of Chinese text?
They also discussed the key characteristics of Chinese text which differs from English, e.g., the parallel connectives, comma-delimited intra-sentential implicit relations etc. Their data set contains 98 documents from the Chinese Treebank BIBREF10. In 2015, Zhou and Xue expanded their corpus to 164 documents, with more than 5000 relations being annotated. huang-chen-2011-chinese constructed a Chinese discourse corpus with 81 articles. They adopted the top-level senses from PDTB sense hierarchy and focused on the annotation of inter-sentential discourse relations. zhang2014chinese analyzed the differences between Chinese and English, and then presented a new Chinese discourse relation hierarchy based on the PDTB system, in which the discourse relations are divided into 6 types: temporal, causal, condition, comparison, expansion and conjunction. And they constructed a Chinese Discourse Relation corpus called HIT-CDTB based on this hierarchy. Then, zhou2014cuhk presented the first open discourse treebank for Chinese, the CUHK Discourse Treebank for Chinese. They adapted the annotation scheme of Penn Discourse Treebank 2 (PDTB-2) to Chinese language and made adjustments to 3 aspects according to the previous study of Chinese linguistics. However, they just reannotated the documents of the Chinese Treebank and did not annotate inter-sentence level discourse relations. It is worth noting that, all these corpora display a similar unbalanced distribution that is likely to be associated with them all being limited to text from the same NEWS genre. In particular, these two senses (Expansion and Conjunction) represent 80 % of the relations annotated in the CDTB. In addition, although annotating spoken TED talks has been done on other several languages before BIBREF4, to our knowledge, there is no recent annotation work for Chinese spoken discourses, or particularly for Chinese Ted talks. However, there is some evidence that noticeable differences in the use of discourse connectives and discourse relations can be found between written and spoken discourses BIBREF11. Here, by using the new PDTB-3 sense hierarchy and annotator, which has not been used for Chinese annotation before, we annotated Chinese Ted talks to help others be aware of the differences between the Chinese discourse structure of written and spoken texts and will make our corpus publicly available to benefit the discourse-level NLP researches for spoken discourses. PDTB and our Annotation Scheme The annotation scheme we adopted in this work is based on the framework of PDTB, incorporating the most recent PDTB (PDTB-3) relational taxonomy and sense hierarchy BIBREF5, shown in Table 1. PDTB follows a lexically grounded approach to the representation of discourse relations BIBREF12. Discourse relations are taken to hold between two abstract object arguments, named Arg1 and Arg2 using syntactic conventions, and are triggered either by explicit connectives or, otherwise, by adjacency between clauses and sentences. As we can see from Table 1, the PDTB-3 sense hierarchy has 4 top-level senses (Expansion, Temporal, Contingency, Contrast) and second- and third-level senses for some cases. With obvious differences ranging from the conventions used in annotation, to differences in senses hierarchy, PDTB-3 gives rigorous attention to achieving as much consistency as possible while annotating discourse relations. Previously, all Chinese annotation work using PDTB style followed the settings of PDTB-2. 
Some researchers tried to adapt it in lines of the Chinese characteristics. For example, zhou2012pdtb annotated the parallel connectives continuously rather than discontinuously due to the greater use of parallel connectives in Chinese and a reduced use of explicit connectives in general. zhou2014cuhk added some additional senses into the hierarchy. However, PDTB-3, as a new and enriched version, not only has paid greater attention to intra-sentential senses, but also has incorporated some of those additional senses. Therefore, we just made several modifications including removing, adding, or disambiguating for the practical use of PDTB-3 into our Chinese annotation. In practice, using the PDTB annotator tool, we annotated an explicit connective, identified its two arguments in which the connective occurs, and then labeled the sense. For implicit relations, when we inferred the type of relation between two arguments, we tried to insert a connective for this relation, and also the inserted connective is not so strictly restricted, extending to expressions that can convey the sense of the arguments. If a connective conveys more than one sense or more than one relation can be inferred, multiple senses would be assigned to the token. Our adaptations towards PDTB-3 will be introduced from the perspectives of arguments, relations and senses as follows. PDTB and our Annotation Scheme ::: Arguments The argument-labelling conventions used in the PDTB-2 had to be modified to deal with the wider variety of discourse relations that needed to be annotated consistently within sentences in the PDTB-3. In particular, in labelling intra-sentential discourse relations, a distinction was made between relations whose arguments were in coordinating syntactic structures and ones whose arguments were in subordinating syntactic structures. For coordinating structures, arguments were labelled by position (Arg1 first, then Arg2), while for subordinating structures, the argument in subordinate position was labelled Arg2, and the other, Arg1, independent of position. For discourse in Chinese, this can introduce an unwanted ambiguity. Example 1 is a typical example for illustrate this phenomenon. In the examples throughout the paper, explicit connectives are underlined, while implicit Discourse Connectives and the lexicalizing expression for Alternative Lexicalizations are shown in parentheses and square brackets respectively. The position of the arguments is indicated by the attached composite labels to the right square brackets, and the relation lables and sense lables can be seen in the parentheses at the end of arguments. When the arguments, relations or senses are ambiguous, there may be no corresponding labels shown in the examples. UTF8gbsn 因为 你 让 我 生气, 所以,我 要让 Because you make me angry, so I want 你 更难过。(Explicit, Cause.Result) you to be sadder. “You made me angry, so I return it double back.” While“because”and“so”are rarely found together as connectives in a sentence in English, it is not uncommon to find them used concurrently as a paired connective in Chinese. Therefore, due to this difference, the annotators tend to have no idea about which clause is subordinate. Therefore, if we regard the first clause as subordinating structure and “因 为”(because)as connective, then the sense would be Contingency.Cause.Reason. By contrast, the sense would be Contingency.Cause.Result, when the second clause is regarded as Arg2. 
To get rid of this kind of ambiguity, we just take the first as Arg1 and the second Arg2 regardless of the fact that the parallel connectives are surbodinating or coordinating. PDTB and our Annotation Scheme ::: Relations There are two new types of relation in PDTB-3: AltlexC and Hypophora. Hypophora is an explicitly marked question-response pairs, first used in annotating the TED- MDB BIBREF4. In Hypophora relations, Arg1 expresses a question and Arg2 offers an answer, with no explicit or implicit connective being annotated (Example 2). Because of the nature of TED talks, many relations in both the TED-MDB and in our Chinese TED talks are examples of “Hypophora”. However, not all discourse relations whose first argument is a question are Hypophora. Example 3, instead of seeking information and giving answer, is just a rhetorical question expressing negation by imposing a dramatic effect. [我到底 要 讲 什么Arg1]? I on earth am going to talk about what ? [最后 我决定 要 讲 教育Arg2]。(Hypophora) Finally, I decided to talk about education . “what am I gonna say? Finally, I decided to talk about education.” 他说 : “ 我 是 三 天 一 小 哭 、 五 天 He said, " I am three days a little cry, five days 一 大 哭 。 " 这样 你 有 比较 健康 吗 ? a lot cry." In this way, you are more healthier? 都 是 悲伤 , 并 不 是 每 一 个 人 ,每 一 次 All are sadness, not everyone, every time 感受 到 悲伤 的 时候 ,都 一定 会 流泪 、 甚至 大哭 。 feel sad 's time, would shed tears、even cry. “He said, "Three times I cry a little, and five times I cry a lot." Is that healthier? Everyone gets sad, but that's not to say that whenever someone feels sad, they necessarily will cry.” In addition, we found a new issue when identifying Hypophora, which is shown in Example 4. In this example, we have a series of questions, rather than a series of assertions or a question-response pair. We attempted to capture the rhetorical links by taking advantage of our current inventory of discourse relations. Here, two implicit relations were annotated for this example, and the senses are Arg2-as-detail and Result+SpeechAct respectively. Therefore, when there are subsequent texts related to a question or a sequence of questions, we would not just annotated them as Hypophora but had to do such analysis as what we did for the examples shown. [情绪 , 它 到底 是 什么Rel1-Arg1 ]? (具体来说) Emotion, it on earth is what? (Specially) [它 是 好还是 不 好Rel1-Arg2,Rel2-Arg1]?(Implicit,Arg2-as-detail) It is good or bad? (所以)[你 会 想要 拥有 它 吗Rel2-Arg2]? (Implicit,Result+SpeechAct) (So) You want to have it ? “What is it exactly? Is it good or bad? Do you want to have them?” Besides, it is widely accepted that the ellipsis of subject or object are frequently seen in Chinese. Then for EntRel, if facing this situation where one of the entities in Arg1 or Arg2 is omitted, we still need to annotate this as EntRel (Example 5). In this following example, we can see in Arg2, the pronoun which means “that”is omitted, but in fact which refers to the phenomenon mentioned in Arg1, so here there is still an EntRel relation between this pair of arguments. [我们会以讽刺的口吻来谈论, 并且会 We in ironic terms talk about, and 加上引号 : “进步”Arg1 ] add quotes: “Progress”. [我想是有原因的, 我们也知道 是 什么 I think there are reasons, we also know are what 原因Arg2]。(EntRel) reasons. 
“We talk about it in ironic terms with little quotes around it:“Progress.”Okay, there are reasons for that, and I think we know what those reasons are.” PDTB and our Annotation Scheme ::: Senses The great improvement in the sense hierarchy in PDTB-3 enables us to capture more senses with additional types and assign the senses more clearly. For example, the senses under the category of Expansion such as level of detail, manner, disjunction and similarity are indispensable to our annotation. Therefore, we nearly adopted the sense hierarchy in PDTB-3, just with few adaptations. On the one hand, we removed the third level sense “Negative condition+SpeechAct”, since it was not used to label anything in the corpus. On the other hand, we added the Level-2 sense “Expansion.Progression”. This type of sense applies when Arg1 and Arg2 are coordinating structure with different emphasis. The first argument is annotated as Arg1 and the second as Arg2. This sense is usually conveyed by such typical connectives as “不 但 (not only)... 而 且 (but also)...”, “甚 至 (even)... 何 况 (let alone)...”,“... 更 (even more)...”(Example 6). [我 去 了 聋人 俱乐部 ,观看 了 聋人 的 I went to deaf clubs, saw the deaf person’s 表演 Arg1]。[我甚至 去 了 田纳西州 纳什维尔的 performances. I even went to the Nashville ’s “ 美国 聋人 小姐 ” 选秀赛Arg2]。(Explicit, Progression.Arg2-as-progr) “the Miss Deaf” America contest. “ I went to deaf clubs. I saw performances of deaf theater and of deaf poetry. I even went to the Miss Deaf America contest in Nashville.” Another issue about sense is the inconsistency when we annotated the implicit relations. zhou2012pdtb did not insert connective for implicit relations, but we did this for further researches with regard to implicit relations. However, we found that in some cases where different connectives can be inserted into the same arguments to express the same relation, the annotators found themselves in a dilemma. We can see that Example 7 and Example 8 respectively insert “so” and “because” into the arguments between which there is a causal relation, but the senses in these two examples would be Cause.Result and Cause.Reason. The scheme we adopted for this is that we only take the connectives that we would insert into account, and the position and sense relations of arguments would depend on the inserted connectives. [“克服 逆境 ” 这一说法 对我 “Overcome the adversity” this phrase for me 根本 不 成立 Arg1],(所以)[别人 让 我 completely not justified, (so) others asked me 就 这一话题 说 几 句 的时候, 我很不自在Arg2]。(Implicit, Cause.Result) to this topic talk about some, I felt uneasy. (因为)[“克服 逆境 ” 这一说法 (Because) “overcome the adversity” this phrase 对 我来说根本 不 成立Arg2], [ 别人 for me completely not justified, (so) others 让 我就 这一话题 说几句的时候,我很 asked me to this topic, talk about some, I felt 不自在Arg1]。(Implicit, Cause.Reason) uneasy. ““overcome the adversity” this phrase never sat right with me, and I always felt uneasy trying to answer people's questions about it.” Annotation Procedure In this section, we describe our annotation process in creating the Chinese TED discourse treebank. To ensure annotation quality, the whole annotation process has three stages: annotators training, annotation, post-annotation. The training process intends to improve the annotators’ annotation ability, while after the formal annotation, the annotated work was carefully checked by the supervisor, and the possible errors and inconsistencies were dealt with through discussions and further study. 
Annotation Procedure ::: Annotator training The annotator team consists of a professor as the supervisor, an experienced annotator and a researcher of PDTB as counselors, two master degree candidates as annotators. Both of the annotators have a certain theoretical foundation of linguistics. To guarantee annotation quality, the annotators were trained through the following steps: firstly, the annotators read the PDTB-3 annotation manual, the PDTB-2 annotation manual and also other related papers carefully; next, the annotators tried to independently annotate same texts, finding out their own uncertainties or problems respectively and discussing these issues together; then, the annotators were asked to create sample annotations on TED talks transcripts for each sense from the top level to the third. They discussed the annotations with the researchers of the team and tried to settle disputes. When sample annotations are created, this part of process is completed; based on the manuals, previous annotation work and also the annotators’ own pre-annotation work, they made a Chinese tutorial on PDTB guidelines, in which major difficulties and perplexities, such as the position and the span of the arguments, the insert of connectives, and the distinction of different categories of relations and senses, are explained clearly in detail by typical samples. This Chinese tutorial is beneficial for those who want to carry out similar Chinese annotation, so we made this useful tutorial available to those who want to carry out similar annotation; finally, to guarantee annotation consistency, the annotators were required to repeat their annotation-discussion process until their annotation results show the Kappa value > 0.8 for each of the indicators for agreement. Annotation Procedure ::: Corpus building At present, our corpus has been released publiclyFOOTREF19. Our corpus consists of two parts with equal number of texts: (1) 8 English TED talks translated into Chinese, just like the talks in the TED-MDB, all of which were originally presented in English and translated into other languages (including German, Lithuanian, Portuguese,Polish, Russian and Turkish) BIBREF4. (2) 8 Chinese TED talks originally presented in Taipei and translated into English. We got the texts by means of extracting Chinese and English subtitles from TED talks videos . Firstly, we just annotated the talks given in English and translated in Chinese. But after considering the possible divergencies between translated texts and the original texts, we did our annotation for the Taipei TED talks, which were delivered in Chinese. The parallel English texts are also being annotated for discourse relations, but they are not ready for carrying out a systematic comparison between them. At the current stage, we annotated 3212 relations for the TED talks transcripts with 55307 words,and the length of each talk (in words) and the number of annotated relations in each talks can be found from Table 2. These TED talks we annotated were prudently selected from dozens of candidate texts. The quality of texts which is principally embodied in content, logic, punctuation and the translation are the major concerns for us. Moreover, when selecting the texts from the Taipei talks, we ruled out those texts which are heavy in dialogues. Some speakers try to interact with the audience, asking the questions, and then commenting on how they have replied. However, what we were annotating was not dialogues. 
In spite of critically picking over the texts, we still spent considerable time on dealing with them before annotation such as inserting punctuation and correcting the translation. Moreover, before annotation, we did word segmentation by using Stanford Segmenter and corrected improper segmentation. While annotating, we assigned the vast majority of the relations a single sense and a small proportion of relations multiple senses. Unlike previous similar Chinese corpora which primarily or just annotated the relations between sentences, we annotated not only discourse relations between sentences but intra-sentential discourse relations as well. To ensure building a high-quality corpus, the annotators regularly discussed their difficulties and confusions with the researcher and the experienced annotator in the whole process of annotation. After discussion, the annotators reached agreement or retained the differences for few ambiguities. Annotation Procedure ::: Agreement study We measured intra-annotator agreement between two annotators in three aspects: relations, senses, arguments. To be specific, the annotators’ consistency in annotating the type of a specific relation or sense and the position and scope of arguments are measured. To assess the consistency of annotations and also eliminate coincidental annotations, we used agreement rates, which is calculated by dividing the number of senses under each category where the annotators annotate consistently by the total number of each kind of sense. And considering the potential impact of unbalanced distribution of senses, we also used the Kappa value. And the final agreement study was carried out for the first 300 relations in our corpus. We obtained high agreement results and Kappa value for the discourse relation type and top-level senses ($\ge {0.9} $ ). However, what we did was more than this, and we also achieved great results on the second-level and third-level senses for the sake of our self-demand for high-quality, finally achieving agreement of 0.85 and Kappa value of 0.83 for these two deeper levels of senses. Table 3 also shows that agreement on argument order is almost 1.0 (kappa = 0.99). This means that the guidelines were sufficiently clear that the annotators rarely had difficulty in deciding the location of Arg1 and Arg2 when the senses are determined. Concerning the scope of arguments, which is seen as the most challenging part in the annotation work BIBREF10, our agreement and Kappa value on this are 0.88 and 0.86 respectively, while the agreement of the scope of arguments depends on whether the scopes of two arguments the anotators annotated are completely the same. Under such strict requirement, our consistency in this respect is still significantly higher than that of other annotation work done before, for we strictly obeyed the rules of “minimality principle” mentioned in the PDTB-3 annotation manual and got a clearer perspective of supplementary information. Therefore, the annotators are better at excluding the information that do not fall within the scope of the discourse relation. It is useful to determine where the annotators disagreed most with each other. The three senses where most disagreement occurred are shown in Table 4. The disagreements were primarily in labelling implicit relations. The highest level of disagreement occurred with Expansion.Conjunction and Expansion.Detail, accounting for 12.5 % among all the inconsistent senses. 
It is because, more often than not, the annotators failed to judge whether the two arguments make the same contribution with respect to that situation or both arguments describing the same has different level of details. The second highest level of disagreement is reflected in Conjunction and Asynchronous, accounting for 9.3 %. Besides, Contrast and Concession are two similar senses, which are usually signaled by the same connectives like “但是”, “而”, “不过”, and all these words can be translated into“but”in English. Hence, the annotators sometimes tend to be inconsistent when distinguishing them. Results In regard to discourse relations, there are 3212 relations, of which 1237 are explicit relations (39%) and 1174 are implicit relation (37%) (Figure 1). The remaining 801 relations include Hypophora, AltLex, EntRel, and NoRel. Among these 4 kinds of relations, what is worth mentioning is AltLex(Alternative Lexicalizations ),which only constitutes 3% but is of tremendous significance, for we are able to discover inter- or intra-sentential relations when there is no explicit expressions but AltLex expressions conveying the relations. but AltLex expressions(eg, 这导致了(this cause), 一个例子是(one example is... ), 原 因 是 (the reason is), etc.). Originally in English, AltLex is supposed to contain both an anaphoric or deictic reference to an actual argument and an indication of the type of sense BIBREF13. While for Chinese, the instances of Altlex do not differ significantly from those annotated in English. To prove this, two examples are given as below (Example 9 and Example 10). From our annotation, we realized that Altlex deserves more attention, for which can effectively help to recogonize types of discourse relations automatically. [“国内 许多被截肢者, 无法使用 in this country, many of the amputees, cannot use 他们的假肢Arg1],[这其中的原因是] [他们由于 their prostheses, the reason was their 假肢接受腔 无法 与残肢 适配 而 prosthetic sockets cannot their leg fit well so that 感到疼痛Arg2].(AltLex, Cause.Reason) felt painful.] “Many of the amputees in the country would not use their prostheses. The reason, I would come to find out, was that their prosthetic sockets were painful because they did not fit well.” 三年级的时候考进秀朗小学的游泳班, in third grade, got in the swimming class at Xiu Lang [ 这个班每天的 游泳 elementary school, this class everyday’s swimming 训练量高达 3000 米 Arg1], 我发现 [这样的训练量 volumm reach 3000 meters, I realized the training load 使][我 无法同时兼顾两种乐器 Arg2]。(AltLex, Cause.Result) make me cannot learn the two instruments at the same time “I got in the swimming class at Xiu Lang elementary school when I was in third grade. We had to swim up to 3000 meters every day. I realized the training load was too much for me to learn the two instruments at the same time.” Obviously, there is approximately the same number of explicit and implicit relations in the corpus. This may indicate that explicit connectives and relations are more likely to present in Chinese spoken texts rather than Chinese written texts. The figures shown by Table 4 illustrate the distributions of class level senses. We make a comparison for the class level senses between our corpus and the CUHK Discourse Treebank for Chinese (CUHK-DTBC). CUHK Discourse Treebank for Chinese is a corpus annotating news reports. Therefore, our comparison with it may shed light on the differences of discourse structures in different genres. According to the statistics of CUHK-DTBC for 400 documents and our corpus, while more than half of the senses is Expansion in CUHK-DTBC, it just represents 37.5% in our corpus. 
In addition, the ranking of the class-level senses is the same in both corpora, although each of the other three senses accounts for a larger share in our corpus than in CUHK-DTBC. The most frequent second-level senses in our corpus are shown in Table 5. We find that 20% of the senses are Cause (including Reason and Result), followed by Conjunction and Concession, each with 13%. The top 10 most frequent senses account for 86% of all annotated senses, which shows that the remaining senses also occur in our corpus. These findings indicate that, compared with other corpora of Chinese shallow discourse relations in which the majority of documents are news reports, our corpus evidently shows a more balanced and varied distribution of both relations and senses, which largely reflects the differences in discourse relations between Chinese written texts and Chinese spoken texts. Conclusions and Future Work In this paper, we describe our scheme and process for annotating shallow discourse relations in PDTB style. In view of the differences between English and Chinese, we made adaptations to the PDTB-3 scheme such as removing AltLexC and adding Progression into our sense hierarchy. To ensure annotation quality, we formulated detailed annotation criteria and quality assurance strategies. After thorough training, we annotated 3212 discourse relations and achieved a satisfactory labelling consistency, with a Kappa value greater than 0.85 for most of the indicators. Finally, we present our annotation results, in which the distribution of discourse relations and senses differs from that of other corpora annotating news reports or newspaper texts: our corpus contains more Contingency, Temporal and Comparison relations instead of being dominated by Expansion. In future work, we plan to 1) expand our corpus by annotating more TED talks or other spoken texts; 2) build a richer and more diverse set of connectives and AltLex expressions; 3) use the corpus to develop a shallow discourse parser for Chinese spoken discourse; and 4) explore automatic approaches to implicit discourse relation recognition. Acknowledgement The present research was supported by the National Natural Science Foundation of China (Grant No. 61861130364) and the Royal Society (London) (NAF$\backslash $R1$\backslash $180122). We would like to thank the anonymous reviewers for their insightful comments.
removing AltLexC and adding Progression into our sense hierarchy
ca26cfcc755f9d0641db0e4d88b4109b903dbb26
ca26cfcc755f9d0641db0e4d88b4109b903dbb26_0
Q: How better are results compared to baseline models? Text: Introduction Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depends not only on the language employed, but also on the credibility and character of the communicator (i.e. ethos) BIBREF0, BIBREF1, BIBREF2; the traits and prior beliefs of the audience BIBREF3, BIBREF4, BIBREF5, BIBREF6; and the pragmatic context in which the argument is presented (i.e. kairos) BIBREF7, BIBREF8. Research in Natural Language Processing (NLP) has only partially corroborated these findings. One very influential line of work, for example, develops computational methods to automatically determine the linguistic characteristics of persuasive arguments BIBREF9, BIBREF10, BIBREF11, but it does so without controlling for the audience, the communicator or the pragmatic context. Very recent work, on the other hand, shows that attributes of both the audience and the communicator constitute important cues for determining argument strength BIBREF12, BIBREF13. They further show that audience and communicator attributes can influence the relative importance of linguistic features for predicting the persuasiveness of an argument. These results confirm previous findings in the social sciences that show a person's perception of an argument can be influenced by his background and personality traits. To the best of our knowledge, however, no NLP studies explicitly investigate the role of kairos — a component of pragmatic context that refers to the context-dependent “timeliness" and “appropriateness" of an argument and its claims within an argumentative discourse — in argument quality prediction. Among the many social science studies of attitude change, the order in which argumentative claims are shared with the audience has been studied extensively: 10.1086/209393, for example, summarize studies showing that the argument-related claims a person is exposed to beforehand can affect his perception of an alternative argument in complex ways. article-3 similarly find that changes in an argument's context can have a big impact on the audience's perception of the argument. Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media BIBREF10, BIBREF14. However, in social media not all posts have to express arguments or stay on topic BIBREF15, and qualitative evaluation of the posts can be influenced by many other factors such as interactions between the individuals BIBREF16. Therefore, it is difficult to measure the effect of argumentative pragmatic context alone in argument quality prediction without the effect of these confounding factors using the datasets and models currently available in this line of research. In this paper, we study the role of kairos on argument quality prediction by examining the individual claims of an argument for their timeliness and appropriateness in the context of a particular line of argument. We define kairos as the sequence of argumentative text (e.g. claims) along a particular line of argumentative reasoning. To start, we present a dataset extracted from kialo.com of over 47,000 claims that are part of a diverse collection of arguments on 741 controversial topics. The structure of the website dictates that each argument must present a supporting or opposing claim for its parent claim, and stay within the topic of the main thesis. 
Rather than being posts on a social media platform, these are community-curated claims. Furthermore, for each presented claim, the audience votes on its impact within the given line of reasoning. Critically then, the dataset includes the argument context for each claim, allowing us to investigate the characteristics associated with impactful arguments. With the dataset in hand, we propose the task of studying the characteristics of impactful claims by (1) taking the argument context into account, (2) studying the extent to which this context is important, and (3) determining the representation of context that is more effective. To the best of our knowledge, ours is the first dataset that includes claims with both impact votes and the corresponding context of the argument. Related Work Recent studies in computational argumentation have mainly focused on the tasks of identifying the structure of the arguments such as argument structure parsing BIBREF17, BIBREF18, and argument component classification BIBREF19, BIBREF20. More recently, there is an increased research interest to develop computational methods that can automatically evaluate qualitative characteristic of arguments, such as their impact and persuasive power BIBREF9, BIBREF10, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. Consistent with findings in the social sciences and psychology, some of the work in NLP has shown that the impact and persuasive power of the arguments are not simply related to the linguistic characteristics of the language, but also on characteristics the source (ethos) BIBREF16 and the audience BIBREF12, BIBREF13. These studies suggest that perception of the arguments can be influenced by the credibility of the source, and the background of the audience. It has also been shown, in social science studies, that kairos, which refers to the “timeliness” and “appropropriateness” of arguments and claims, is important to consider in studies of argument impact and persuasiveness BIBREF7, BIBREF8. One recent study in NLP has investigated the role of argument sequencing in argument persuasion looking at BIBREF14 Change My View, which is a social media platform where users post their views, and challenge other users to present arguments in an attempt to change their them. However, as stated in BIBREF15 many posts on social media platforms either do not express an argument, or diverge from the main topic of conversation. Therefore, it is difficult to measure the effect of pragmatic context in argument impact and persuasion, without confounding factors from using noisy social media data. In contrast, we provide a dataset of claims along with their structured argument path, which only consists of claims and corresponds to a particular line of reasoning for the given controversial topic. This structure enables us to study the characteristics of impactful claims, accounting for the effect of the pragmatic context. Consistent with previous findings in the social sciences, we find that incorporating pragmatic and discourse context is important in computational studies of persuasion, as predictive models that with the context representation outperform models that only incorporate claim-specific linguistic features, in predicting the impact of a claim. 
Such a system that can predict the impact of a claim given an argumentative discourse, for example, could potentially be employed by argument retrieval and generation models which aims to pick or generate the most appropriate possible claim given the discourse. Dataset Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented. Figure FIGREF1 shows a partial argument tree for the argument thesis “Physical torture of prisoners is an acceptable interrogation tool.”. Each node in the argument tree corresponds to a claim, and these argument trees are constructed and edited collaboratively by the users of the platform. Except the thesis, every claim in the argument tree either opposes or supports its parent claim. Each path from the root to leaf nodes corresponds to an argument path which represents a particular line of reasoning on the given controversial topic. Moreover, each claim has impact votes assigned by the users of the platform. The impact votes evaluate how impactful a claim is within its context, which consists of its predecessor claims from the thesis of the tree. For example, claim O1 “It is morally wrong to harm a defenseless person” is an opposing claim for the thesis and it is an impactful claim since most of its impact votes belong to the category of very high impact. However, claim S3 “It is illegitimate for state actors to harm someone without the process” is a supporting claim for its parent O1 and it is a less impactful claim since most of the impact votes belong to the no impact and low impact categories. Distribution of impact votes. The distribution of claims with the given range of number of impact votes are shown in Table TABREF5. There are 19,512 claims in total with 3 or more votes. Out of the claims with 3 or more votes, majority of them have 5 or more votes. We limit our study to the claims with at least 5 votes to have a more reliable assignment for the accumulated impact label for each claim. Impact label statistics. Table TABREF7 shows the distribution of the number of votes for each of the impact categories. The claims have $241,884$ total votes. The majority of the impact votes belong to medium impact category. We observe that users assign more high impact and very high impact votes than low impact and no impact votes respectively. When we restrict the claims to the ones with at least 5 impact votes, we have $213,277$ votes in total. Agreement for the impact votes. To determine the agreement in assigning the impact label for a particular claim, for each claim, we compute the percentage of the votes that are the same as the majority impact vote for that claim. Let $c_{i}$ denote the count of the claims with the class labels C=[no impact, low impact, medium impact, high impact, very high impact] for the impact label $l$ at index $i$. 
For example, for claim S1 in Figure FIGREF1, the agreement score is $100 * \frac{30}{90}\%=33.33\%$ since the majority class (no impact) has 30 votes and there are 90 impact votes in total for this particular claim. We compute the agreement score for the cases where (1) we treat each impact label separately (5-class case) and (2) we combine the classes high impact and very high impact into a one class: impactful and no impact and low impact into a one class: not impactful (3-class case). Table TABREF6 shows the number of claims with the given agreement score thresholds when we include the claims with at least 5 votes. We see that when we combine the low impact and high impact classes, there are more claims with high agreement score. This may imply that distinguishing between no impact-low impact and high impact-very high impact classes is difficult. To decrease the sparsity issue, in our experiments, we use 3-class representation for the impact labels. Moreover, to have a more reliable assignment of impact labels, we consider only the claims with have more than 60% agreement. Context. In an argument tree, the claims from the thesis node (root) to each leaf node, form an argument path. This argument path represents a particular line of reasoning for the given thesis. Similarly, for each claim, all the claims along the path from the thesis to the claim, represent the context for the claim. For example, in Figure FIGREF1, the context for O1 consists of only the thesis, whereas the context for S3 consists of both the thesis and O1 since S3 is provided to support the claim O1 which is an opposing claim for the thesis. The claims are not constructed independently from their context since they are written in consideration with the line of reasoning so far. In most cases, each claim elaborates on the point made by its parent and presents cases to support or oppose the parent claim's points. Similarly, when users evaluate the impact of a claim, they consider if the claim is timely and appropriate given its context. There are cases in the dataset where the same claim has different impact labels, when presented within a different context. Therefore, we claim that it is not sufficient to only study the linguistic characteristic of a claim to determine its impact, but it is also necessary to consider its context in determining the impact. Context length ($\text{C}_{l}$) for a particular claim C is defined by number of claims included in the argument path starting from the thesis until the claim C. For example, in Figure FIGREF1, the context length for O1 and S3 are 1 and 2 respectively. Table TABREF8 shows number of claims with the given range of context length for the claims with more than 5 votes and $60\%$ agreement score. We observe that more than half of these claims have 3 or higher context length. Methodology ::: Hypothesis and Task Description Similar to prior work, our aim is to understand the characteristics of impactful claims in argumentation. However, we hypothesize that the qualitative characteristics of arguments is not independent of the context in which they are presented. To understand the relationship between argument context and the impact of a claim, we aim to incorporate the context along with the claim itself in our predictive models. Prediction task. Given a claim, we want to predict the impact label that is assigned to it by the users: not impactful, medium impact, or impactful. Preprocessing. 
We restrict our study to claims with at least 5 or more votes and greater than $60\%$ agreement, to have a reliable impact label assignment. We have $7,386$ claims in the dataset satisfying these constraints. We see that the impact class impacful is the majority class since around $58\%$ of the claims belong to this category. For our experiments, we split our data to train (70%), validation (15%) and test (15%) sets. Methodology ::: Baseline Models ::: Majority The majority baseline assigns the most common label of the training examples (high impact) to every test example. Methodology ::: Baseline Models ::: SVM with RBF kernel Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim. The features that represent the simple characteristics of the claim's argument tree include the distance and similarity of the claim to the thesis, the similarity of a claim with its parent, and the impact votes of the claim's parent claim. We encode the similarity of a claim to its parent and the thesis claim with the cosine similarity of their tf-idf vectors. The distance and similarity metrics aim to model whether claims which are more similar (i.e. potentially more topically relevant) to their parent claim or the thesis claim, are more impactful. We encode the quality of the parent claim as the number of votes for each impact class, and incorporate it as a feature to understand if it is more likely for a claim to impactful given an impactful parent claim. Linguistic features. To represent each claim, we extracted the linguistic features proposed by BIBREF9 such as tf-idf scores for unigrams and bigrams, ratio of quotation marks, exclamation marks, modal verbs, stop words, type-token ratio, hedging BIBREF29, named entity types, POS n-grams, sentiment BIBREF30 and subjectivity scores BIBREF31, spell-checking, readibility features such as Coleman-Liau BIBREF32, Flesch BIBREF33, argument lexicon features BIBREF34 and surface features such as word lengths, sentence lengths, word types, and number of complex words. Methodology ::: Baseline Models ::: FastText joulin-etal-2017-bag introduced a simple, yet effective baseline for text classification, which they show to be competitive with deep learning classifiers in terms of accuracy. Their method represents a sequence of text as a bag of n-grams, and each n-gram is passed through a look-up table to get its dense vector representation. The overall sequence representation is simply an average over the dense representations of the bag of n-grams, and is fed into a linear classifier to predict the label. We use the code released by joulin-etal-2017-bag to train a classifier for argument impact prediction, based on the claim text. Methodology ::: Baseline Models ::: BiLSTM with Attention Another effective baseline BIBREF35, BIBREF36 for text classification consists of encoding the text sequence using a bidirectional Long Short Term Memory (LSTM) BIBREF37, to get the token representations in context, and then attending BIBREF38 over the tokens to get the sequence representation. For the query vector for attention, we use a learned context vector, similar to yang-etal-2016-hierarchical. We picked our hyperparameters based on performance on the validation set, and report our results for the best set of hyperparameters. 
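As a minimal sketch of the SVM baseline described above, the code below builds the tree-based features (tf-idf cosine similarity of a claim with its parent and with the thesis, the claim's distance from the thesis, and the parent's impact vote counts) and trains an RBF-kernel SVM. It is only illustrative and assumes scikit-learn: the large linguistic feature set from BIBREF9 is omitted, and the `claims` records with their field names are hypothetical stand-ins for the real data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

# Hypothetical examples; each record mimics one claim from an argument tree.
claims = [
    {"text": "Torture produces unreliable information.",
     "parent": "Physical torture of prisoners is an acceptable interrogation tool.",
     "thesis": "Physical torture of prisoners is an acceptable interrogation tool.",
     "distance_to_thesis": 1,
     "parent_votes": [2, 5, 10, 20, 8],   # vote counts for the five impact classes of the parent
     "label": "impactful"},
    {"text": "Some prisoners might deserve harsh treatment.",
     "parent": "It is morally wrong to harm a defenseless person.",
     "thesis": "Physical torture of prisoners is an acceptable interrogation tool.",
     "distance_to_thesis": 2,
     "parent_votes": [1, 2, 3, 15, 25],
     "label": "not impactful"},
]

vectorizer = TfidfVectorizer()
vectorizer.fit([c[k] for c in claims for k in ("text", "parent", "thesis")])

def tree_features(c):
    # tf-idf cosine similarity of the claim with its parent and with the thesis
    v_claim, v_parent, v_thesis = (vectorizer.transform([c[k]]) for k in ("text", "parent", "thesis"))
    return [cosine_similarity(v_claim, v_parent)[0, 0],
            cosine_similarity(v_claim, v_thesis)[0, 0],
            c["distance_to_thesis"]] + c["parent_votes"]

X = np.array([tree_features(c) for c in claims])
y = [c["label"] for c in claims]
clf = SVC(kernel="rbf").fit(X, y)   # RBF-kernel SVM over the tree-based features
```

In the actual baseline, the claim-level linguistic features listed above would be appended to this feature vector before training.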
We initialized our word embeddings with glove vectors BIBREF39 pre-trained on Wikipedia + Gigaword, and used the Adam optimizer BIBREF40 with its default settings. Methodology ::: Fine-tuned BERT model devlin2018bert fine-tuned a pre-trained deep bi-directional transformer language model (which they call BERT), by adding a simple classification layer on top, and achieved state of the art results across a variety of NLP tasks. We employ their pre-trained language models for our task and compare it to our baseline models. For all the architectures described below, we finetune for 10 epochs, with a learning rate of 2e-5. We employ an early stopping procedure based on the model performance on a validation set. Methodology ::: Fine-tuned BERT model ::: Claim with no context In this setting, we attempt to classify the impact of the claim, based on the text of the claim only. We follow the fine-tuning procedure for sequence classification detailed in BIBREF41, and input the claim text as a sequence of tokens preceded by the special [CLS] token and followed by the special [SEP] token. We add a classification layer on top of the BERT encoder, to which we pass the representation of the [CLS] token, and fine-tune this for argument impact prediction. Methodology ::: Fine-tuned BERT model ::: Claim with parent representation In this setting, we use the parent claim's text, in addition to the target claim text, in order to classify the impact of the target claim. We treat this as a sequence pair classification task, and combine both the target claim and parent claim as a single sequence of tokens, separated by the special separator [SEP]. We then follow the same procedure above, for fine-tuning. Methodology ::: Fine-tuned BERT model ::: Incorporating larger context In this setting, we consider incorporating a larger context from the discourse, in order to assess the impact of a claim. In particular, we consider up to four previous claims in the discourse (for a total context length of 5). We attempt to incorporate larger context into the BERT model in three different ways. Flat representation of the path. The first, simple approach is to represent the entire path (claim + context) as a single sequence, where each of the claims is separated by the [SEP] token. BERT was trained on sequence pairs, and therefore the pre-trained encoders only have two segment embeddings BIBREF41. So to fit multiple sequences into this framework, we indicate all tokens of the target claim as belonging to segment A and the tokens for all the claims in the discourse context as belonging to segment B. This way of representing the input, requires no additional changes to the architecture or retraining, and we can just finetune in a similar manner as above. We refer to this representation of the context as a flat representation, and denote the model as $\text{Context}_{f}(i)$, where $i$ indicates the length of the context that is incorporated into the model. Attention over context. Recent work in incorporating argument sequence in predicting persuasiveness BIBREF14 has shown that hierarchical representations are effective in representing context. Similarly, we consider hierarchical representations for representing the discourse. We first encode each claim using the pre-trained BERT model as the claim encoder, and use the representation of the [CLS] token as claim representation. We then employ dot-product attention BIBREF38, to get a weighted representation for the context. 
We use a learned context vector as the query, for computing attention scores, similar to yang-etal-2016-hierarchical. The attention score $\alpha _c$ is computed as shown below: Where $V_c$ is the claim representation that was computed with the BERT encoder as described above, $V_l$ is the learned context vector that is used for computing attention scores, and $D$ is the set of claims in the discourse. After computing the attention scores, the final context representation $v_d$ is computed as follows: We then concatenate the context representation with the target claim representation $[V_d, V_r]$ and pass it to the classification layer to predict the quality. We denote this model as $\text{Context}_{a}(i)$. GRU to encode context Similar to the approach above, we consider a hierarchical representation for representing the context. We compute the claim representations, as detailed above, and we then feed the discourse claims' representations (in sequence) into a bidirectional Gated Recurrent Unit (GRU) BIBREF42, to compute the context representation. We concatenate this with the target claim representation and use this to predict the claim impact. We denote this model as $\text{Context}_{gru}(i)$. Results and Analysis Table TABREF21 shows the macro precision, recall and F1 scores for the baselines as well as the BERT models with and without context representations. We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\%$) than distance from the thesis and linguistic features. Claims with higher impact parents are more likely to be have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline. We find that the BERT model with claim only representation performs significantly better ($p<0.001$) than the baseline models. Incorporating the parent representation only along with the claim representation does not give significant improvement over representing the claim only. However, incorporating the flat representation of the larger context along with the claim representation consistently achieves significantly better ($p<0.001$) performance than the claim representation alone. Similarly, attention representation over the context with the learned query vector achieves significantly better performance then the claim representation only ($p<0.05$). We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\%$). To understand for what kinds of claims the best performing contextual model is more effective, we evaluate the BERT model with flat context representation for claims with context length values 1, 2, 3 and 4 separately. Table TABREF26 shows the F1 score of the BERT model without context and with flat context representation with different lengths of context. 
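Returning to the $\text{Context}_{a}(i)$ model described above, the following is a minimal PyTorch sketch (not the authors' implementation) of attention over BERT claim vectors with a learned query, followed by concatenation with the target claim representation and classification. The hidden size of 768 and the module layout are assumptions; only the overall wiring follows the description.

```python
import torch
import torch.nn as nn

class ContextAttentionClassifier(nn.Module):
    """Attend over context-claim [CLS] vectors with a learned query, then classify the target claim."""
    def __init__(self, hidden=768, num_classes=3):
        super().__init__()
        self.query = nn.Parameter(torch.randn(hidden))      # learned context vector (the attention query)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, claim_vec, context_vecs):
        # claim_vec: (hidden,) -- [CLS] vector of the target claim
        # context_vecs: (num_context_claims, hidden) -- [CLS] vectors of the claims on the argument path
        scores = context_vecs @ self.query                   # dot-product attention scores
        alpha = torch.softmax(scores, dim=0)                 # attention weights over the discourse
        context_rep = (alpha.unsqueeze(1) * context_vecs).sum(dim=0)   # weighted context representation
        combined = torch.cat([context_rep, claim_vec], dim=0)          # concatenate context and claim
        return self.classifier(combined)

# Usage with random stand-ins for the BERT [CLS] vectors:
model = ContextAttentionClassifier()
logits = model(torch.randn(768), torch.randn(3, 768))
```

The $\text{Context}_{gru}(i)$ variant would replace the attention step with a bidirectional GRU (e.g. `nn.GRU(..., bidirectional=True)`) run over the same sequence of claim vectors.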
For the claims with context length 1, adding $\text{Context}_{f}(3)$ and $\text{Context}_{f}(4)$ representation along with the claim achieves significantly better $(p<0.05)$ F1 score than modeling the claim only. Similarly for the claims with context length 3 and 4, $\text{Context}_{f}(4)$ and $\text{Context}_{f}(3)$ perform significantly better than BERT with claim only ($(p<0.05)$ and $(p<0.01)$ respectively). We see that models with larger context are helpful even for claims which have limited context (e.g. $\text{C}_{l}=1$). This may suggest that when we train the models with larger context, they learn how to represent the claims and their context better. Conclusion In this paper, we present a dataset of claims with their corresponding impact votes, and investigate the role of argumentative discourse context in argument impact classification. We experiment with various models to represent the claims and their context and find that incorporating the context information gives significant improvement in predicting argument impact. In our study, we find that flat representation of the context gives the best improvement in the performance and our analysis indicates that the contextual models perform better even for the claims with limited context. Acknowledgements This work was supported in part by NSF grants IIS-1815455 and SES-1741441. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government.
The F1 score of the authors' best model is 55.98, compared to the BiLSTM and FastText baselines, whose F1 scores are only slightly higher than 46.61.
6cdd61ebf84aa742155f4554456cc3233b6ae2bf
6cdd61ebf84aa742155f4554456cc3233b6ae2bf_0
Q: What models that rely only on claim-specific linguistic features are used as baselines? Text: Introduction Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depends not only on the language employed, but also on the credibility and character of the communicator (i.e. ethos) BIBREF0, BIBREF1, BIBREF2; the traits and prior beliefs of the audience BIBREF3, BIBREF4, BIBREF5, BIBREF6; and the pragmatic context in which the argument is presented (i.e. kairos) BIBREF7, BIBREF8. Research in Natural Language Processing (NLP) has only partially corroborated these findings. One very influential line of work, for example, develops computational methods to automatically determine the linguistic characteristics of persuasive arguments BIBREF9, BIBREF10, BIBREF11, but it does so without controlling for the audience, the communicator or the pragmatic context. Very recent work, on the other hand, shows that attributes of both the audience and the communicator constitute important cues for determining argument strength BIBREF12, BIBREF13. They further show that audience and communicator attributes can influence the relative importance of linguistic features for predicting the persuasiveness of an argument. These results confirm previous findings in the social sciences that show a person's perception of an argument can be influenced by his background and personality traits. To the best of our knowledge, however, no NLP studies explicitly investigate the role of kairos — a component of pragmatic context that refers to the context-dependent “timeliness" and “appropriateness" of an argument and its claims within an argumentative discourse — in argument quality prediction. Among the many social science studies of attitude change, the order in which argumentative claims are shared with the audience has been studied extensively: 10.1086/209393, for example, summarize studies showing that the argument-related claims a person is exposed to beforehand can affect his perception of an alternative argument in complex ways. article-3 similarly find that changes in an argument's context can have a big impact on the audience's perception of the argument. Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media BIBREF10, BIBREF14. However, in social media not all posts have to express arguments or stay on topic BIBREF15, and qualitative evaluation of the posts can be influenced by many other factors such as interactions between the individuals BIBREF16. Therefore, it is difficult to measure the effect of argumentative pragmatic context alone in argument quality prediction without the effect of these confounding factors using the datasets and models currently available in this line of research. In this paper, we study the role of kairos on argument quality prediction by examining the individual claims of an argument for their timeliness and appropriateness in the context of a particular line of argument. We define kairos as the sequence of argumentative text (e.g. claims) along a particular line of argumentative reasoning. To start, we present a dataset extracted from kialo.com of over 47,000 claims that are part of a diverse collection of arguments on 741 controversial topics. The structure of the website dictates that each argument must present a supporting or opposing claim for its parent claim, and stay within the topic of the main thesis. 
Rather than being posts on a social media platform, these are community-curated claims. Furthermore, for each presented claim, the audience votes on its impact within the given line of reasoning. Critically then, the dataset includes the argument context for each claim, allowing us to investigate the characteristics associated with impactful arguments. With the dataset in hand, we propose the task of studying the characteristics of impactful claims by (1) taking the argument context into account, (2) studying the extent to which this context is important, and (3) determining the representation of context that is more effective. To the best of our knowledge, ours is the first dataset that includes claims with both impact votes and the corresponding context of the argument. Related Work Recent studies in computational argumentation have mainly focused on the tasks of identifying the structure of the arguments such as argument structure parsing BIBREF17, BIBREF18, and argument component classification BIBREF19, BIBREF20. More recently, there is an increased research interest to develop computational methods that can automatically evaluate qualitative characteristic of arguments, such as their impact and persuasive power BIBREF9, BIBREF10, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. Consistent with findings in the social sciences and psychology, some of the work in NLP has shown that the impact and persuasive power of the arguments are not simply related to the linguistic characteristics of the language, but also on characteristics the source (ethos) BIBREF16 and the audience BIBREF12, BIBREF13. These studies suggest that perception of the arguments can be influenced by the credibility of the source, and the background of the audience. It has also been shown, in social science studies, that kairos, which refers to the “timeliness” and “appropropriateness” of arguments and claims, is important to consider in studies of argument impact and persuasiveness BIBREF7, BIBREF8. One recent study in NLP has investigated the role of argument sequencing in argument persuasion looking at BIBREF14 Change My View, which is a social media platform where users post their views, and challenge other users to present arguments in an attempt to change their them. However, as stated in BIBREF15 many posts on social media platforms either do not express an argument, or diverge from the main topic of conversation. Therefore, it is difficult to measure the effect of pragmatic context in argument impact and persuasion, without confounding factors from using noisy social media data. In contrast, we provide a dataset of claims along with their structured argument path, which only consists of claims and corresponds to a particular line of reasoning for the given controversial topic. This structure enables us to study the characteristics of impactful claims, accounting for the effect of the pragmatic context. Consistent with previous findings in the social sciences, we find that incorporating pragmatic and discourse context is important in computational studies of persuasion, as predictive models that with the context representation outperform models that only incorporate claim-specific linguistic features, in predicting the impact of a claim. 
Such a system that can predict the impact of a claim given an argumentative discourse, for example, could potentially be employed by argument retrieval and generation models which aims to pick or generate the most appropriate possible claim given the discourse. Dataset Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented. Figure FIGREF1 shows a partial argument tree for the argument thesis “Physical torture of prisoners is an acceptable interrogation tool.”. Each node in the argument tree corresponds to a claim, and these argument trees are constructed and edited collaboratively by the users of the platform. Except the thesis, every claim in the argument tree either opposes or supports its parent claim. Each path from the root to leaf nodes corresponds to an argument path which represents a particular line of reasoning on the given controversial topic. Moreover, each claim has impact votes assigned by the users of the platform. The impact votes evaluate how impactful a claim is within its context, which consists of its predecessor claims from the thesis of the tree. For example, claim O1 “It is morally wrong to harm a defenseless person” is an opposing claim for the thesis and it is an impactful claim since most of its impact votes belong to the category of very high impact. However, claim S3 “It is illegitimate for state actors to harm someone without the process” is a supporting claim for its parent O1 and it is a less impactful claim since most of the impact votes belong to the no impact and low impact categories. Distribution of impact votes. The distribution of claims with the given range of number of impact votes are shown in Table TABREF5. There are 19,512 claims in total with 3 or more votes. Out of the claims with 3 or more votes, majority of them have 5 or more votes. We limit our study to the claims with at least 5 votes to have a more reliable assignment for the accumulated impact label for each claim. Impact label statistics. Table TABREF7 shows the distribution of the number of votes for each of the impact categories. The claims have $241,884$ total votes. The majority of the impact votes belong to medium impact category. We observe that users assign more high impact and very high impact votes than low impact and no impact votes respectively. When we restrict the claims to the ones with at least 5 impact votes, we have $213,277$ votes in total. Agreement for the impact votes. To determine the agreement in assigning the impact label for a particular claim, for each claim, we compute the percentage of the votes that are the same as the majority impact vote for that claim. Let $c_{i}$ denote the count of the claims with the class labels C=[no impact, low impact, medium impact, high impact, very high impact] for the impact label $l$ at index $i$. 
For example, for claim S1 in Figure FIGREF1, the agreement score is $100 * \frac{30}{90}\%=33.33\%$ since the majority class (no impact) has 30 votes and there are 90 impact votes in total for this particular claim. We compute the agreement score for the cases where (1) we treat each impact label separately (5-class case) and (2) we combine the classes high impact and very high impact into a one class: impactful and no impact and low impact into a one class: not impactful (3-class case). Table TABREF6 shows the number of claims with the given agreement score thresholds when we include the claims with at least 5 votes. We see that when we combine the low impact and high impact classes, there are more claims with high agreement score. This may imply that distinguishing between no impact-low impact and high impact-very high impact classes is difficult. To decrease the sparsity issue, in our experiments, we use 3-class representation for the impact labels. Moreover, to have a more reliable assignment of impact labels, we consider only the claims with have more than 60% agreement. Context. In an argument tree, the claims from the thesis node (root) to each leaf node, form an argument path. This argument path represents a particular line of reasoning for the given thesis. Similarly, for each claim, all the claims along the path from the thesis to the claim, represent the context for the claim. For example, in Figure FIGREF1, the context for O1 consists of only the thesis, whereas the context for S3 consists of both the thesis and O1 since S3 is provided to support the claim O1 which is an opposing claim for the thesis. The claims are not constructed independently from their context since they are written in consideration with the line of reasoning so far. In most cases, each claim elaborates on the point made by its parent and presents cases to support or oppose the parent claim's points. Similarly, when users evaluate the impact of a claim, they consider if the claim is timely and appropriate given its context. There are cases in the dataset where the same claim has different impact labels, when presented within a different context. Therefore, we claim that it is not sufficient to only study the linguistic characteristic of a claim to determine its impact, but it is also necessary to consider its context in determining the impact. Context length ($\text{C}_{l}$) for a particular claim C is defined by number of claims included in the argument path starting from the thesis until the claim C. For example, in Figure FIGREF1, the context length for O1 and S3 are 1 and 2 respectively. Table TABREF8 shows number of claims with the given range of context length for the claims with more than 5 votes and $60\%$ agreement score. We observe that more than half of these claims have 3 or higher context length. Methodology ::: Hypothesis and Task Description Similar to prior work, our aim is to understand the characteristics of impactful claims in argumentation. However, we hypothesize that the qualitative characteristics of arguments is not independent of the context in which they are presented. To understand the relationship between argument context and the impact of a claim, we aim to incorporate the context along with the claim itself in our predictive models. Prediction task. Given a claim, we want to predict the impact label that is assigned to it by the users: not impactful, medium impact, or impactful. Preprocessing. 
We restrict our study to claims with at least 5 or more votes and greater than $60\%$ agreement, to have a reliable impact label assignment. We have $7,386$ claims in the dataset satisfying these constraints. We see that the impact class impacful is the majority class since around $58\%$ of the claims belong to this category. For our experiments, we split our data to train (70%), validation (15%) and test (15%) sets. Methodology ::: Baseline Models ::: Majority The majority baseline assigns the most common label of the training examples (high impact) to every test example. Methodology ::: Baseline Models ::: SVM with RBF kernel Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim. The features that represent the simple characteristics of the claim's argument tree include the distance and similarity of the claim to the thesis, the similarity of a claim with its parent, and the impact votes of the claim's parent claim. We encode the similarity of a claim to its parent and the thesis claim with the cosine similarity of their tf-idf vectors. The distance and similarity metrics aim to model whether claims which are more similar (i.e. potentially more topically relevant) to their parent claim or the thesis claim, are more impactful. We encode the quality of the parent claim as the number of votes for each impact class, and incorporate it as a feature to understand if it is more likely for a claim to impactful given an impactful parent claim. Linguistic features. To represent each claim, we extracted the linguistic features proposed by BIBREF9 such as tf-idf scores for unigrams and bigrams, ratio of quotation marks, exclamation marks, modal verbs, stop words, type-token ratio, hedging BIBREF29, named entity types, POS n-grams, sentiment BIBREF30 and subjectivity scores BIBREF31, spell-checking, readibility features such as Coleman-Liau BIBREF32, Flesch BIBREF33, argument lexicon features BIBREF34 and surface features such as word lengths, sentence lengths, word types, and number of complex words. Methodology ::: Baseline Models ::: FastText joulin-etal-2017-bag introduced a simple, yet effective baseline for text classification, which they show to be competitive with deep learning classifiers in terms of accuracy. Their method represents a sequence of text as a bag of n-grams, and each n-gram is passed through a look-up table to get its dense vector representation. The overall sequence representation is simply an average over the dense representations of the bag of n-grams, and is fed into a linear classifier to predict the label. We use the code released by joulin-etal-2017-bag to train a classifier for argument impact prediction, based on the claim text. Methodology ::: Baseline Models ::: BiLSTM with Attention Another effective baseline BIBREF35, BIBREF36 for text classification consists of encoding the text sequence using a bidirectional Long Short Term Memory (LSTM) BIBREF37, to get the token representations in context, and then attending BIBREF38 over the tokens to get the sequence representation. For the query vector for attention, we use a learned context vector, similar to yang-etal-2016-hierarchical. We picked our hyperparameters based on performance on the validation set, and report our results for the best set of hyperparameters. 
We initialized our word embeddings with glove vectors BIBREF39 pre-trained on Wikipedia + Gigaword, and used the Adam optimizer BIBREF40 with its default settings. Methodology ::: Fine-tuned BERT model devlin2018bert fine-tuned a pre-trained deep bi-directional transformer language model (which they call BERT), by adding a simple classification layer on top, and achieved state of the art results across a variety of NLP tasks. We employ their pre-trained language models for our task and compare it to our baseline models. For all the architectures described below, we finetune for 10 epochs, with a learning rate of 2e-5. We employ an early stopping procedure based on the model performance on a validation set. Methodology ::: Fine-tuned BERT model ::: Claim with no context In this setting, we attempt to classify the impact of the claim, based on the text of the claim only. We follow the fine-tuning procedure for sequence classification detailed in BIBREF41, and input the claim text as a sequence of tokens preceded by the special [CLS] token and followed by the special [SEP] token. We add a classification layer on top of the BERT encoder, to which we pass the representation of the [CLS] token, and fine-tune this for argument impact prediction. Methodology ::: Fine-tuned BERT model ::: Claim with parent representation In this setting, we use the parent claim's text, in addition to the target claim text, in order to classify the impact of the target claim. We treat this as a sequence pair classification task, and combine both the target claim and parent claim as a single sequence of tokens, separated by the special separator [SEP]. We then follow the same procedure above, for fine-tuning. Methodology ::: Fine-tuned BERT model ::: Incorporating larger context In this setting, we consider incorporating a larger context from the discourse, in order to assess the impact of a claim. In particular, we consider up to four previous claims in the discourse (for a total context length of 5). We attempt to incorporate larger context into the BERT model in three different ways. Flat representation of the path. The first, simple approach is to represent the entire path (claim + context) as a single sequence, where each of the claims is separated by the [SEP] token. BERT was trained on sequence pairs, and therefore the pre-trained encoders only have two segment embeddings BIBREF41. So to fit multiple sequences into this framework, we indicate all tokens of the target claim as belonging to segment A and the tokens for all the claims in the discourse context as belonging to segment B. This way of representing the input, requires no additional changes to the architecture or retraining, and we can just finetune in a similar manner as above. We refer to this representation of the context as a flat representation, and denote the model as $\text{Context}_{f}(i)$, where $i$ indicates the length of the context that is incorporated into the model. Attention over context. Recent work in incorporating argument sequence in predicting persuasiveness BIBREF14 has shown that hierarchical representations are effective in representing context. Similarly, we consider hierarchical representations for representing the discourse. We first encode each claim using the pre-trained BERT model as the claim encoder, and use the representation of the [CLS] token as claim representation. We then employ dot-product attention BIBREF38, to get a weighted representation for the context. 
We use a learned context vector as the query, for computing attention scores, similar to yang-etal-2016-hierarchical. The attention score $\alpha _c$ is computed as shown below: Where $V_c$ is the claim representation that was computed with the BERT encoder as described above, $V_l$ is the learned context vector that is used for computing attention scores, and $D$ is the set of claims in the discourse. After computing the attention scores, the final context representation $v_d$ is computed as follows: We then concatenate the context representation with the target claim representation $[V_d, V_r]$ and pass it to the classification layer to predict the quality. We denote this model as $\text{Context}_{a}(i)$. GRU to encode context Similar to the approach above, we consider a hierarchical representation for representing the context. We compute the claim representations, as detailed above, and we then feed the discourse claims' representations (in sequence) into a bidirectional Gated Recurrent Unit (GRU) BIBREF42, to compute the context representation. We concatenate this with the target claim representation and use this to predict the claim impact. We denote this model as $\text{Context}_{gru}(i)$. Results and Analysis Table TABREF21 shows the macro precision, recall and F1 scores for the baselines as well as the BERT models with and without context representations. We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\%$) than distance from the thesis and linguistic features. Claims with higher impact parents are more likely to be have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline. We find that the BERT model with claim only representation performs significantly better ($p<0.001$) than the baseline models. Incorporating the parent representation only along with the claim representation does not give significant improvement over representing the claim only. However, incorporating the flat representation of the larger context along with the claim representation consistently achieves significantly better ($p<0.001$) performance than the claim representation alone. Similarly, attention representation over the context with the learned query vector achieves significantly better performance then the claim representation only ($p<0.05$). We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\%$). To understand for what kinds of claims the best performing contextual model is more effective, we evaluate the BERT model with flat context representation for claims with context length values 1, 2, 3 and 4 separately. Table TABREF26 shows the F1 score of the BERT model without context and with flat context representation with different lengths of context. 
For the claims with context length 1, adding $\text{Context}_{f}(3)$ and $\text{Context}_{f}(4)$ representation along with the claim achieves significantly better $(p<0.05)$ F1 score than modeling the claim only. Similarly for the claims with context length 3 and 4, $\text{Context}_{f}(4)$ and $\text{Context}_{f}(3)$ perform significantly better than BERT with claim only ($(p<0.05)$ and $(p<0.01)$ respectively). We see that models with larger context are helpful even for claims which have limited context (e.g. $\text{C}_{l}=1$). This may suggest that when we train the models with larger context, they learn how to represent the claims and their context better. Conclusion In this paper, we present a dataset of claims with their corresponding impact votes, and investigate the role of argumentative discourse context in argument impact classification. We experiment with various models to represent the claims and their context and find that incorporating the context information gives significant improvement in predicting argument impact. In our study, we find that flat representation of the context gives the best improvement in the performance and our analysis indicates that the contextual models perform better even for the claims with limited context. Acknowledgements This work was supported in part by NSF grants IIS-1815455 and SES-1741441. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government.
SVM with RBF kernel
8e8097cada29d89ca07166641c725e0f8fed6676
8e8097cada29d89ca07166641c725e0f8fed6676_0
Q: How is pargmative and discourse context added to the dataset? Text: Introduction Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depends not only on the language employed, but also on the credibility and character of the communicator (i.e. ethos) BIBREF0, BIBREF1, BIBREF2; the traits and prior beliefs of the audience BIBREF3, BIBREF4, BIBREF5, BIBREF6; and the pragmatic context in which the argument is presented (i.e. kairos) BIBREF7, BIBREF8. Research in Natural Language Processing (NLP) has only partially corroborated these findings. One very influential line of work, for example, develops computational methods to automatically determine the linguistic characteristics of persuasive arguments BIBREF9, BIBREF10, BIBREF11, but it does so without controlling for the audience, the communicator or the pragmatic context. Very recent work, on the other hand, shows that attributes of both the audience and the communicator constitute important cues for determining argument strength BIBREF12, BIBREF13. They further show that audience and communicator attributes can influence the relative importance of linguistic features for predicting the persuasiveness of an argument. These results confirm previous findings in the social sciences that show a person's perception of an argument can be influenced by his background and personality traits. To the best of our knowledge, however, no NLP studies explicitly investigate the role of kairos — a component of pragmatic context that refers to the context-dependent “timeliness" and “appropriateness" of an argument and its claims within an argumentative discourse — in argument quality prediction. Among the many social science studies of attitude change, the order in which argumentative claims are shared with the audience has been studied extensively: 10.1086/209393, for example, summarize studies showing that the argument-related claims a person is exposed to beforehand can affect his perception of an alternative argument in complex ways. article-3 similarly find that changes in an argument's context can have a big impact on the audience's perception of the argument. Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media BIBREF10, BIBREF14. However, in social media not all posts have to express arguments or stay on topic BIBREF15, and qualitative evaluation of the posts can be influenced by many other factors such as interactions between the individuals BIBREF16. Therefore, it is difficult to measure the effect of argumentative pragmatic context alone in argument quality prediction without the effect of these confounding factors using the datasets and models currently available in this line of research. In this paper, we study the role of kairos on argument quality prediction by examining the individual claims of an argument for their timeliness and appropriateness in the context of a particular line of argument. We define kairos as the sequence of argumentative text (e.g. claims) along a particular line of argumentative reasoning. To start, we present a dataset extracted from kialo.com of over 47,000 claims that are part of a diverse collection of arguments on 741 controversial topics. The structure of the website dictates that each argument must present a supporting or opposing claim for its parent claim, and stay within the topic of the main thesis. 
Rather than being posts on a social media platform, these are community-curated claims. Furthermore, for each presented claim, the audience votes on its impact within the given line of reasoning. Critically then, the dataset includes the argument context for each claim, allowing us to investigate the characteristics associated with impactful arguments. With the dataset in hand, we propose the task of studying the characteristics of impactful claims by (1) taking the argument context into account, (2) studying the extent to which this context is important, and (3) determining the representation of context that is most effective. To the best of our knowledge, ours is the first dataset that includes claims with both impact votes and the corresponding context of the argument. Related Work Recent studies in computational argumentation have mainly focused on the tasks of identifying the structure of the arguments such as argument structure parsing BIBREF17, BIBREF18, and argument component classification BIBREF19, BIBREF20. More recently, there is increased research interest in developing computational methods that can automatically evaluate qualitative characteristics of arguments, such as their impact and persuasive power BIBREF9, BIBREF10, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. Consistent with findings in the social sciences and psychology, some of the work in NLP has shown that the impact and persuasive power of the arguments are not simply related to the linguistic characteristics of the language, but also to characteristics of the source (ethos) BIBREF16 and the audience BIBREF12, BIBREF13. These studies suggest that perception of the arguments can be influenced by the credibility of the source, and the background of the audience. It has also been shown, in social science studies, that kairos, which refers to the “timeliness” and “appropriateness” of arguments and claims, is important to consider in studies of argument impact and persuasiveness BIBREF7, BIBREF8. One recent study in NLP BIBREF14 has investigated the role of argument sequencing in argument persuasion, looking at Change My View, which is a social media platform where users post their views and challenge other users to present arguments in an attempt to change their views. However, as stated in BIBREF15, many posts on social media platforms either do not express an argument, or diverge from the main topic of conversation. Therefore, it is difficult to measure the effect of pragmatic context in argument impact and persuasion, without confounding factors from using noisy social media data. In contrast, we provide a dataset of claims along with their structured argument path, which only consists of claims and corresponds to a particular line of reasoning for the given controversial topic. This structure enables us to study the characteristics of impactful claims, accounting for the effect of the pragmatic context. Consistent with previous findings in the social sciences, we find that incorporating pragmatic and discourse context is important in computational studies of persuasion, as predictive models with the context representation outperform models that only incorporate claim-specific linguistic features in predicting the impact of a claim.
Such a system that can predict the impact of a claim given an argumentative discourse, for example, could potentially be employed by argument retrieval and generation models which aim to pick or generate the most appropriate possible claim given the discourse. Dataset Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented. Figure FIGREF1 shows a partial argument tree for the argument thesis “Physical torture of prisoners is an acceptable interrogation tool.” Each node in the argument tree corresponds to a claim, and these argument trees are constructed and edited collaboratively by the users of the platform. Except for the thesis, every claim in the argument tree either opposes or supports its parent claim. Each path from the root to leaf nodes corresponds to an argument path which represents a particular line of reasoning on the given controversial topic. Moreover, each claim has impact votes assigned by the users of the platform. The impact votes evaluate how impactful a claim is within its context, which consists of its predecessor claims from the thesis of the tree. For example, claim O1 “It is morally wrong to harm a defenseless person” is an opposing claim for the thesis and it is an impactful claim since most of its impact votes belong to the category of very high impact. However, claim S3 “It is illegitimate for state actors to harm someone without the process” is a supporting claim for its parent O1 and it is a less impactful claim since most of the impact votes belong to the no impact and low impact categories. Distribution of impact votes. The distribution of claims with the given range of number of impact votes is shown in Table TABREF5. There are 19,512 claims in total with 3 or more votes. Out of the claims with 3 or more votes, the majority have 5 or more votes. We limit our study to the claims with at least 5 votes to have a more reliable assignment for the accumulated impact label for each claim. Impact label statistics. Table TABREF7 shows the distribution of the number of votes for each of the impact categories. The claims have $241,884$ total votes. The majority of the impact votes belong to the medium impact category. We observe that users assign more high impact and very high impact votes than low impact and no impact votes respectively. When we restrict the claims to the ones with at least 5 impact votes, we have $213,277$ votes in total. Agreement for the impact votes. To determine the agreement in assigning the impact label for a particular claim, for each claim, we compute the percentage of the votes that are the same as the majority impact vote for that claim. Let $c_{i}$ denote the count of the claims with the class labels C=[no impact, low impact, medium impact, high impact, very high impact] for the impact label $l$ at index $i$.
For example, for claim S1 in Figure FIGREF1, the agreement score is $100 \times \frac{30}{90}\% = 33.33\%$ since the majority class (no impact) has 30 votes and there are 90 impact votes in total for this particular claim. We compute the agreement score for the cases where (1) we treat each impact label separately (5-class case) and (2) we combine the classes high impact and very high impact into one class (impactful) and no impact and low impact into one class (not impactful) (3-class case). Table TABREF6 shows the number of claims with the given agreement score thresholds when we include the claims with at least 5 votes. We see that when we combine the low impact and high impact classes, there are more claims with a high agreement score. This may imply that distinguishing between no impact-low impact and high impact-very high impact classes is difficult. To decrease the sparsity issue, in our experiments, we use the 3-class representation for the impact labels. Moreover, to have a more reliable assignment of impact labels, we consider only the claims which have more than 60% agreement. Context. In an argument tree, the claims from the thesis node (root) to each leaf node form an argument path. This argument path represents a particular line of reasoning for the given thesis. Similarly, for each claim, all the claims along the path from the thesis to the claim represent the context for the claim. For example, in Figure FIGREF1, the context for O1 consists of only the thesis, whereas the context for S3 consists of both the thesis and O1 since S3 is provided to support the claim O1 which is an opposing claim for the thesis. The claims are not constructed independently from their context since they are written in consideration of the line of reasoning so far. In most cases, each claim elaborates on the point made by its parent and presents cases to support or oppose the parent claim's points. Similarly, when users evaluate the impact of a claim, they consider if the claim is timely and appropriate given its context. There are cases in the dataset where the same claim has different impact labels when presented within a different context. Therefore, we claim that it is not sufficient to only study the linguistic characteristics of a claim to determine its impact, but it is also necessary to consider its context in determining the impact. Context length ($\text{C}_{l}$) for a particular claim C is defined as the number of claims included in the argument path starting from the thesis until the claim C. For example, in Figure FIGREF1, the context length for O1 and S3 are 1 and 2 respectively. Table TABREF8 shows the number of claims with the given range of context length for the claims with more than 5 votes and $60\%$ agreement score. We observe that more than half of these claims have a context length of 3 or higher. Methodology ::: Hypothesis and Task Description Similar to prior work, our aim is to understand the characteristics of impactful claims in argumentation. However, we hypothesize that the qualitative characteristics of arguments are not independent of the context in which they are presented. To understand the relationship between argument context and the impact of a claim, we aim to incorporate the context along with the claim itself in our predictive models. Prediction task. Given a claim, we want to predict the impact label that is assigned to it by the users: not impactful, medium impact, or impactful.
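As a minimal sketch of how this label aggregation and agreement filtering could look (assuming each claim is stored with its list of raw votes; the function and variable names are illustrative, not taken from the authors' code):

```python
from collections import Counter

# Map the 5 raw impact labels onto the 3-class scheme described above.
THREE_CLASS = {
    "no impact": "not impactful",
    "low impact": "not impactful",
    "medium impact": "medium impact",
    "high impact": "impactful",
    "very high impact": "impactful",
}

def aggregate_claim(votes, min_votes=5, min_agreement=0.60):
    """votes: list of raw impact labels cast for one claim.
    Returns the aggregated 3-class label, or None if the claim is filtered out.
    (Agreement is computed here on the collapsed 3-class labels, an assumption.)"""
    if len(votes) < min_votes:
        return None
    counts = Counter(THREE_CLASS[v] for v in votes)
    label, majority = counts.most_common(1)[0]
    agreement = majority / len(votes)   # share of votes matching the majority class
    return label if agreement > min_agreement else None
```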
Preprocessing. We restrict our study to claims with at least 5 votes and greater than $60\%$ agreement, to have a reliable impact label assignment. We have $7,386$ claims in the dataset satisfying these constraints. We see that the impact class impactful is the majority class since around $58\%$ of the claims belong to this category. For our experiments, we split our data into train (70%), validation (15%) and test (15%) sets. Methodology ::: Baseline Models ::: Majority The majority baseline assigns the most common label of the training examples (high impact) to every test example. Methodology ::: Baseline Models ::: SVM with RBF kernel Similar to BIBREF9, we experiment with an SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim. The features that represent the simple characteristics of the claim's argument tree include the distance and similarity of the claim to the thesis, the similarity of a claim with its parent, and the impact votes of the claim's parent claim. We encode the similarity of a claim to its parent and the thesis claim with the cosine similarity of their tf-idf vectors. The distance and similarity metrics aim to model whether claims which are more similar (i.e. potentially more topically relevant) to their parent claim or the thesis claim are more impactful. We encode the quality of the parent claim as the number of votes for each impact class, and incorporate it as a feature to understand if it is more likely for a claim to be impactful given an impactful parent claim. Linguistic features. To represent each claim, we extracted the linguistic features proposed by BIBREF9 such as tf-idf scores for unigrams and bigrams, ratio of quotation marks, exclamation marks, modal verbs, stop words, type-token ratio, hedging BIBREF29, named entity types, POS n-grams, sentiment BIBREF30 and subjectivity scores BIBREF31, spell-checking, readability features such as Coleman-Liau BIBREF32, Flesch BIBREF33, argument lexicon features BIBREF34 and surface features such as word lengths, sentence lengths, word types, and number of complex words. Methodology ::: Baseline Models ::: FastText joulin-etal-2017-bag introduced a simple, yet effective baseline for text classification, which they show to be competitive with deep learning classifiers in terms of accuracy. Their method represents a sequence of text as a bag of n-grams, and each n-gram is passed through a look-up table to get its dense vector representation. The overall sequence representation is simply an average over the dense representations of the bag of n-grams, and is fed into a linear classifier to predict the label. We use the code released by joulin-etal-2017-bag to train a classifier for argument impact prediction, based on the claim text. Methodology ::: Baseline Models ::: BiLSTM with Attention Another effective baseline BIBREF35, BIBREF36 for text classification consists of encoding the text sequence using a bidirectional Long Short Term Memory (LSTM) BIBREF37 to get the token representations in context, and then attending BIBREF38 over the tokens to get the sequence representation. For the query vector for attention, we use a learned context vector, similar to yang-etal-2016-hierarchical. We picked our hyperparameters based on performance on the validation set, and report our results for the best set of hyperparameters. We initialized our word embeddings with GloVe vectors BIBREF39 pre-trained on Wikipedia + Gigaword, and used the Adam optimizer BIBREF40 with its default settings.
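A rough sketch of the tree-based features for the SVM baseline follows (hedged: the tf-idf cosine similarities to the parent and thesis plus the parent's vote counts are described above, but the library choices and exact feature layout here are assumptions):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

def tree_features(claim, parent, thesis, parent_vote_counts, depth, vectorizer):
    """Argument-tree feature vector for one claim:
    distance to the thesis, similarity to parent/thesis, parent impact votes."""
    tfidf = vectorizer.transform([claim, parent, thesis])
    sim_parent = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    sim_thesis = cosine_similarity(tfidf[0], tfidf[2])[0, 0]
    # parent_vote_counts: e.g. [no, low, medium, high, very high] counts of the parent claim
    return np.array([depth, sim_parent, sim_thesis, *parent_vote_counts])

# vectorizer = TfidfVectorizer().fit(training_claims)
# X = np.vstack([tree_features(...) for each claim]); y = impact labels
# clf = SVC(kernel="rbf").fit(X, y)
```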
Methodology ::: Fine-tuned BERT model devlin2018bert fine-tuned a pre-trained deep bi-directional transformer language model (which they call BERT) by adding a simple classification layer on top, and achieved state-of-the-art results across a variety of NLP tasks. We employ their pre-trained language models for our task and compare them to our baseline models. For all the architectures described below, we fine-tune for 10 epochs with a learning rate of 2e-5. We employ an early stopping procedure based on the model performance on a validation set. Methodology ::: Fine-tuned BERT model ::: Claim with no context In this setting, we attempt to classify the impact of the claim based on the text of the claim only. We follow the fine-tuning procedure for sequence classification detailed in BIBREF41, and input the claim text as a sequence of tokens preceded by the special [CLS] token and followed by the special [SEP] token. We add a classification layer on top of the BERT encoder, to which we pass the representation of the [CLS] token, and fine-tune this for argument impact prediction. Methodology ::: Fine-tuned BERT model ::: Claim with parent representation In this setting, we use the parent claim's text, in addition to the target claim text, in order to classify the impact of the target claim. We treat this as a sequence pair classification task, and combine both the target claim and parent claim as a single sequence of tokens, separated by the special separator [SEP]. We then follow the same procedure as above for fine-tuning. Methodology ::: Fine-tuned BERT model ::: Incorporating larger context In this setting, we consider incorporating a larger context from the discourse, in order to assess the impact of a claim. In particular, we consider up to four previous claims in the discourse (for a total context length of 5). We attempt to incorporate larger context into the BERT model in three different ways. Flat representation of the path. The first, simple approach is to represent the entire path (claim + context) as a single sequence, where each of the claims is separated by the [SEP] token. BERT was trained on sequence pairs, and therefore the pre-trained encoders only have two segment embeddings BIBREF41. So to fit multiple sequences into this framework, we indicate all tokens of the target claim as belonging to segment A and the tokens for all the claims in the discourse context as belonging to segment B. This way of representing the input requires no additional changes to the architecture or retraining, and we can just fine-tune in a similar manner as above. We refer to this representation of the context as a flat representation, and denote the model as $\text{Context}_{f}(i)$, where $i$ indicates the length of the context that is incorporated into the model.
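A hedged sketch of how such a flat input could be assembled with the HuggingFace tokenizer (the library, the ordering of the context claims, and the maximum length are assumptions; the paper does not detail its implementation):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def flat_context_input(target_claim, context_claims, max_len=256):
    """Encode a claim plus its discourse context as one sequence (Context_f(i)).
    Target claim -> segment A; the [SEP]-joined context claims -> segment B."""
    context = " [SEP] ".join(context_claims)  # claims along the argument path (order assumed thesis-first)
    return tokenizer(
        target_claim,
        context,
        truncation=True,
        max_length=max_len,
        padding="max_length",
        return_tensors="pt",
    )

enc = flat_context_input(
    "It is illegitimate for state actors to harm someone without the process",
    ["Physical torture of prisoners is an acceptable interrogation tool.",
     "It is morally wrong to harm a defenseless person"],
)
# enc["input_ids"], enc["token_type_ids"] (segment ids) and enc["attention_mask"]
# can then be fed to a BERT sequence classifier for fine-tuning.
```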
Attention over context. Recent work in incorporating argument sequence in predicting persuasiveness BIBREF14 has shown that hierarchical representations are effective in representing context. Similarly, we consider hierarchical representations for representing the discourse. We first encode each claim using the pre-trained BERT model as the claim encoder, and use the representation of the [CLS] token as the claim representation. We then employ dot-product attention BIBREF38 to get a weighted representation for the context. We use a learned context vector as the query for computing attention scores, similar to yang-etal-2016-hierarchical. The attention score $\alpha _c$ is computed as $\alpha _c = \frac{\exp (V_l^{\top } V_c)}{\sum _{c^{\prime } \in D} \exp (V_l^{\top } V_{c^{\prime }})}$, where $V_c$ is the claim representation that was computed with the BERT encoder as described above, $V_l$ is the learned context vector that is used for computing attention scores, and $D$ is the set of claims in the discourse. After computing the attention scores, the final context representation $v_d$ is computed as the weighted sum $v_d = \sum _{c \in D} \alpha _c V_c$. We then concatenate the context representation with the target claim representation $[V_d, V_r]$ and pass it to the classification layer to predict the quality. We denote this model as $\text{Context}_{a}(i)$. GRU to encode context. Similar to the approach above, we consider a hierarchical representation for representing the context. We compute the claim representations, as detailed above, and we then feed the discourse claims' representations (in sequence) into a bidirectional Gated Recurrent Unit (GRU) BIBREF42 to compute the context representation. We concatenate this with the target claim representation and use this to predict the claim impact. We denote this model as $\text{Context}_{gru}(i)$.
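A compact sketch of the $\text{Context}_{gru}(i)$ idea (assuming the claim vectors have already been extracted from the BERT [CLS] token; the hidden size and the use of PyTorch are illustrative assumptions):

```python
import torch
import torch.nn as nn

class GRUContextClassifier(nn.Module):
    """Bidirectional GRU over the context claims' [CLS] vectors,
    concatenated with the target claim vector, then a linear classifier."""
    def __init__(self, claim_dim=768, hidden=128, num_classes=3):
        super().__init__()
        self.gru = nn.GRU(claim_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(claim_dim + 2 * hidden, num_classes)

    def forward(self, context_vecs, target_vec):
        # context_vecs: (batch, context_len, claim_dim); target_vec: (batch, claim_dim)
        _, h = self.gru(context_vecs)                   # h: (2, batch, hidden)
        context_rep = torch.cat([h[0], h[1]], dim=-1)   # final states of both directions
        return self.classifier(torch.cat([context_rep, target_vec], dim=-1))

model = GRUContextClassifier()
logits = model(torch.randn(4, 3, 768), torch.randn(4, 768))  # e.g. Context_gru(3)
```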
Results and Analysis Table TABREF21 shows the macro precision, recall and F1 scores for the baselines as well as the BERT models with and without context representations. We see that parent quality is a simple yet effective feature, and the SVM model with this feature can achieve a significantly higher ($p<0.001$) F1 score ($46.61\%$) than distance from the thesis and linguistic features. Claims with higher impact parents are more likely to have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and FastText baselines perform better than the SVM with distance from the thesis and linguistic features, they have similar performance to the parent quality baseline. We find that the BERT model with claim only representation performs significantly better ($p<0.001$) than the baseline models. Incorporating the parent representation only along with the claim representation does not give a significant improvement over representing the claim only. However, incorporating the flat representation of the larger context along with the claim representation consistently achieves significantly better ($p<0.001$) performance than the claim representation alone. Similarly, attention representation over the context with the learned query vector achieves significantly better performance than the claim representation only ($p<0.05$). We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\%$). To understand for what kinds of claims the best-performing contextual model is more effective, we evaluate the BERT model with flat context representation for claims with context length values 1, 2, 3 and 4 separately. Table TABREF26 shows the F1 score of the BERT model without context and with flat context representation with different lengths of context. For the claims with context length 1, adding $\text{Context}_{f}(3)$ and $\text{Context}_{f}(4)$ representation along with the claim achieves a significantly better $(p<0.05)$ F1 score than modeling the claim only. Similarly for the claims with context length 3 and 4, $\text{Context}_{f}(4)$ and $\text{Context}_{f}(3)$ perform significantly better than BERT with claim only ($p<0.05$ and $p<0.01$ respectively). We see that models with larger context are helpful even for claims which have limited context (e.g. $\text{C}_{l}=1$). This may suggest that when we train the models with larger context, they learn how to represent the claims and their context better. Conclusion In this paper, we present a dataset of claims with their corresponding impact votes, and investigate the role of argumentative discourse context in argument impact classification. We experiment with various models to represent the claims and their context and find that incorporating the context information gives a significant improvement in predicting argument impact. In our study, we find that the flat representation of the context gives the best improvement in performance and our analysis indicates that the contextual models perform better even for the claims with limited context. Acknowledgements This work was supported in part by NSF grants IIS-1815455 and SES-1741441. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government.
While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument.
951098f0b7169447695b47c142384f278f451a1e
951098f0b7169447695b47c142384f278f451a1e_0
Q: What annotations are available in the dataset?
5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact
07c59824f5e7c5399d15491da3543905cfa5f751
07c59824f5e7c5399d15491da3543905cfa5f751_0
Q: How big is dataset used for training/testing? Text: Introduction Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point-of-view, up to now it has narrowed its efforts on the exploitation of regular numerical features extracted from sensors, data bases or stock exchanges. Unstructured data such as text on the other hand remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text driven approach could be used to make profit on the market. One of the most conventional ways for text representation is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series again. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict if NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was efficient to improve the prediction of stock volatility. More recently Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). Their results showed that indeed performance was better when using probabilities from their alternative than those of the original LDA. Deep learning with its natural ability to work with text through word embeddings has also been used for time series prediction with text. Combined with traditional time series features, the authors of BIBREF8 derived sentiment features from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al. BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently in Rodrigues et al. BIBREF11, short description of events (such as concerts, sports matches, ...) are leveraged through a word embedding and neural networks in addition to more traditional features. 
Their experiments show that including the text can bring an improvement of up to 2% in root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black-boxes. The field of electricity consumption is one where expert knowledge is broad. It is known that the major phenomena driving the load demand are calendar (time of the year, day of the week, ...) and meteorological. For instance, generalized additive models (GAM) BIBREF12 representing the consumption as a sum of functions of the time of the year, temperature and wind speed (among others) typically yield less than 1.5% relative error for the French national electricity demand and 8% for local demand BIBREF13, BIBREF14. Neural networks and their variants, with their ability to extract patterns from heterogeneous types of data, have also obtained state-of-the-art results BIBREF15, BIBREF16, BIBREF17. However, to our knowledge, no exploratory work using text has been conducted yet. Including such data in electricity demand forecasting models would not only contribute to closing the gap with other domains, but also help to understand better which aspects of text are useful, how the encoding of the text influences forecasts and to which extent a prediction algorithm can extract relevant information from unstructured data. Moreover, the major drawback of all the aforementioned approaches is that they require meteorological data that may be difficult to find, unavailable in real time or expensive. Textual sources such as weather reports on the other hand are easy to find, usually available on a daily basis and free. The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather reports, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally fuzzier than numerical information, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is good enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encodings, methods and languages, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction from previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series. The rest of this paper is organized as follows. The following section introduces the two data sets used to conduct our study.
Section 3 presents the different machine learning approaches used and how they were tuned. Section 4 highlights the main results of our study, while section 5 concludes this paper and gives insight into possible future work. Presentation of the data In order to prove the consistency of our work, experiments have been conducted on two data sets, one for France and the other for the UK. In this section, details about the text and time series data are given, as well as the major preprocessing steps. Presentation of the data ::: Time Series Three types of time series are considered in our work: national net electricity consumption (also referred to as load or demand), national temperature and wind speed. The load data sets were retrieved from the websites of the respective grid operators: RTE (Réseau de Transport d'Électricité) for France and National Grid for the UK. For France, the available data ranges from January the 1st 2007 to August the 31st 2018. The default temporal resolution is 30 minutes, but it is averaged to a daily one. For the UK, it is available from January the 1st 2006 to December the 31st 2018 with the same temporal resolution and thus averaging. Due to social factors such as energy policies or new uses of electricity (e.g. Electric Vehicles), the net consumption usually has a long-term trend (fig. FIGREF2). While for France it seems marginal (fig. FIGREF2), there is a strong decreasing trend for the United-Kingdom (fig. FIGREF2). Such a strong non-stationarity of the time series would cause problems for the forecasting process, since the learnt demand levels would differ significantly from the upcoming ones. Therefore, a linear regression was used to approximate the decreasing trend of the net consumption in the UK. It is then subtracted before the training of the methods, and re-added a posteriori for prediction. As for the weather time series, they were extracted from multiple weather stations around France and the UK. The national average is obtained by combining the data from all stations with a weight proportional to the population of the city the station is located in. For France, the stations' data is provided by the French meteorological office, Météo France, while the British ones are scraped from stations of the National Oceanic and Atmospheric Administration (NOAA). Available on the same time span as the consumption, they usually have a 3-hour temporal resolution but are averaged to a daily one as well. Finally, the time series were scaled to the range $[0,1]$ before the training phase, and re-scaled during prediction time.
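A minimal sketch of this detrending and scaling step (assuming the daily UK load is available as a NumPy array; the linear-trend fit and min-max scaling follow the description above, but the file name and code itself are illustrative):

```python
import numpy as np

def fit_linear_trend(y):
    """Least-squares linear trend over time indices 0..n-1."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, deg=1)
    return slope * t + intercept

uk_load = np.loadtxt("uk_daily_load.csv")   # hypothetical file of daily averages
trend = fit_linear_trend(uk_load)
detrended = uk_load - trend                 # trend removed before training

lo, hi = detrended.min(), detrended.max()
scaled = (detrended - lo) / (hi - lo)       # scaled to [0, 1] for the models

# Forecasts on the scaled series are later re-scaled, and the trend value
# for the forecast day is added back a posteriori.
```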
Presentation of the data ::: Text Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. In their raw form, those reports are PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature and wind maps, among others. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span the same period as the corresponding time series and, given their daily nature, yield a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2. As emphasized in many studies, preprocessing of the text can ease the learning of the methods and improve accuracy BIBREF18. Therefore the following steps are applied: removal of non-alphabetic characters, removal of stop-words and lowercasing. While it was often highlighted that word lemmatization and stemming improve results, initial experiments showed it was not the case for our study. This is probably due to the technical vocabulary used in both corpora, pertaining to the field of meteorology. Since the vocabulary is already limited in size, these operations do not yield a significant vocabulary reduction and can even lead to a loss of linguistic meaning. Finally, extremely frequent or rare words may not have high explanatory power and may reduce the different models' accuracy. That is why words appearing fewer than 7 times or in more than 40% of the (learning) corpus are removed as well. Figure FIGREF8 represents the distribution of the document lengths after preprocessing, while table TABREF11 gives descriptive statistics on both corpora. Note that the preprocessing steps do not heavily rely on the considered language: therefore our pipeline is easily adaptable to other languages. Modeling and forecasting framework A major target of our work is to show that the reports contain intrinsic information relevant for time series, and that the predictive results do not heavily depend on the encoding of the text or the machine learning algorithm used. Therefore in this section we present the text encoding approaches, as well as the forecasting methods used with them. Modeling and forecasting framework ::: Numerical Encoding of the Text Machines and algorithms cannot work with raw text directly. Thus one major step when working with text is the choice of its numerical representation. In our work two significantly different encoding approaches are considered. The first one is the TF-IDF approach. It embeds a corpus of $N$ documents and $V$ words into a matrix $X$ of size $N \times V$. As such, every document is represented by a vector of size $V$. For each word $w$ and document $d$ the associated coefficient $x_{d,w}$ represents the frequency of that word in that document, penalized by its overall frequency in the rest of the corpus. Thus very common words will have a low TF-IDF value, whereas specific ones appearing often in a handful of documents will have a large TF-IDF score. The exact formula to calculate the TF-IDF value of word $w$ in document $d$ is: $x_{d,w} = f_{d,w} \times \log \left( \frac{N}{\#\lbrace d: w \in d \rbrace } \right)$, where $f_{d,w}$ is the number of appearances of $w$ in $d$ adjusted by the length of $d$ and $\#\lbrace d: w \in d \rbrace $ is the number of documents in which the word $w$ appears. In our work we considered only individual words, also commonly referred to as 1-grams in the field of natural language processing (NLP). The methodology can be easily extended to $n$-grams (groups of $n$ consecutive words), but initial experiments showed that it did not bring any significant improvement over 1-grams.
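The cleaning and TF-IDF steps above can be sketched as follows; scikit-learn's TfidfVectorizer is used as a convenient stand-in (its exact weighting differs slightly from the formula given in the text), and the min_df/max_df settings mirror the stated frequency filters under the assumption that the 7-occurrence rule is applied at the document level. The names raw_reports and stops are ours.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer

def clean(report, stop_words):
    # keep alphabetic characters only, lowercase, drop stop-words
    tokens = re.sub(r"[^A-Za-z]+", " ", report).lower().split()
    return " ".join(t for t in tokens if t not in stop_words)

# `raw_reports` is the list of extracted daily reports, `stops` a stop-word set (assumptions)
cleaned = [clean(r, stops) for r in raw_reports]

vectorizer = TfidfVectorizer(min_df=7, max_df=0.4)   # frequency filters stated in the text
X = vectorizer.fit_transform(cleaned)                # N x V sparse TF-IDF matrix
vocab = vectorizer.get_feature_names_out()           # the V retained words
```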
The second representation is a neural word embedding. It consists in representing every word in the corpus by a real-valued vector of dimension $q$. Such models are usually obtained by learning a vector representation from word co-occurrences in a very large corpus (typically hundreds of thousands of documents, such as Wikipedia articles for example). The two most popular embeddings are probably Google's Word2Vec BIBREF19 and Stanford's GloVe BIBREF20. In the former, a neural network is trained to predict a word given its context (continuous bag-of-words model), whereas in the latter a matrix factorization scheme on the log co-occurrences of words is applied. In any case, the very nature of the objective function allows the embedding models to learn to translate linguistic similarities into geometric properties in the vector space. For instance the vector $\overrightarrow{king} - \overrightarrow{man} + \overrightarrow{woman}$ is expected to be very close to the vector $\overrightarrow{queen}$. However in our case we want a vector encoding which is tailored for the technical vocabulary of our weather reports and for the subsequent prediction task. This is why we decided to train our own word embedding from scratch during the learning phase of our recurrent or convolutional neural network. Aside from the much more restricted size of our corpora, the major difference with the aforementioned embeddings is that in our case it is obtained by minimizing a squared loss on the prediction. In that framework there is no explicit reason for our representation to display any geometric structure. However as detailed in section SECREF36, our word vectors nonetheless display geometric properties pertaining to the behavior of the time series. Modeling and forecasting framework ::: Machine Learning Algorithms Multiple machine learning algorithms were applied on top of the encoded textual documents. For the TF-IDF representation, the following approaches are applied: random forests (RF), LASSO and multilayer perceptron (MLP) neural networks (NN). We chose these algorithms combined with the TF-IDF representation because of the possibility of interpretation they offer. Indeed, considering the novelty of this work, the understanding of the impact of the words on the forecast is of paramount importance, and as opposed to embeddings, TF-IDF has a natural interpretation. Furthermore the RF and LASSO methods make it possible to interpret marginal effects and analyze the importance of features, and thus to find the words which affect the time series the most. As for the word embedding, recurrent and convolutional neural networks (respectively RNN and CNN) were used with it. MLPs are not used, for they would require concatenating all the vector representations of a sentence beforehand, resulting in a network with too many parameters to be trained correctly with our number of available documents. Recall that we decided to train our own vector representation of words instead of using an already available one. In order to obtain the embedding, the texts are first converted into sequences of integers: each word is given a number ranging from 1 to $V$, where $V$ is the vocabulary size (0 is used for padding or unknown words in the test set). One must then calculate the maximum sequence length $S$, and sentences shorter than $S$ are padded with zeros. During the training process of the network, for each word a $q$-dimensional real-valued vector representation is calculated simultaneously with the rest of the weights of the network. Ergo a sentence of $S$ words is translated into a sequence of $S$ $q$-sized vectors, which is then fed into a recurrent neural unit. For both languages, $q=20$ seemed to yield the best results.
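A minimal sketch of this integer conversion and padding is given below, assuming the Keras preprocessing utilities (the paper uses Keras for its networks, but the exact tokenization code is not given); restricting the tokenizer to the selected vocabulary via num_words is our simplification, and train_texts/test_texts are our names for the cleaned reports.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

V = 52                                   # reduced vocabulary size V* reported later in the paper
tokenizer = Tokenizer(num_words=V + 1)   # index 0 is reserved for padding/unknown words
tokenizer.fit_on_texts(train_texts)      # `train_texts` are the cleaned training reports (assumption)

train_seq = tokenizer.texts_to_sequences(train_texts)
test_seq = tokenizer.texts_to_sequences(test_texts)

S = max(len(s) for s in train_seq)              # maximum sequence length
X_train = pad_sequences(train_seq, maxlen=S)    # zero-padded integer matrices
X_test = pad_sequences(test_seq, maxlen=S)
```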
In the case of recurrent units two main possibilities arise, with LSTM (Long Short-Term Memory) BIBREF21 and GRU (Gated Recurrent Unit) BIBREF22. After a few initial trials, no significant performance differences were noticed between the two types of cells. Therefore GRUs were systematically used for recurrent networks, since their lower number of parameters makes them easier to train and reduces overfitting. The output of the recurrent unit is afterwards linked to a fully connected (also referred to as dense) layer, leading to the final forecast as output. The rectified linear unit (ReLU) activation in dense layers systematically gave the best results, except on the output layer where we used a sigmoid one considering the time series' normalization. In order to tone down overfitting, dropout layers BIBREF23 with probabilities of 0.25 or 0.33 are set in between the layers. Batch normalization BIBREF24 is also used before the GRU since it stabilized training and improved performance. Figure FIGREF14 represents the architecture of our RNN. The word embedding matrix is therefore learnt jointly with the rest of the parameters of the neural network by minimization of the quadratic loss with respect to the true electricity demand. Note that while above we described the case of the RNN, the same procedure is considered for the case of the CNN, with only the recurrent layers replaced by a combination of 1D convolution and pooling ones. As for the optimization algorithms of the neural networks, traditional stochastic gradient descent with momentum or ADAM BIBREF25 together with a quadratic loss are used. All of the previously mentioned methods were coded with Python. The LASSO and RF were implemented using the library Scikit-Learn BIBREF26, while Keras BIBREF27 was used for the neural networks. Modeling and forecasting framework ::: Hyperparameter Tuning While most parameters are trained during the learning optimization process, all methods still involve a certain number of hyperparameters that must be manually set by the user. For instance for random forests it can correspond to the maximum depth of the trees or the fraction of features used at each split step, while for neural networks it can be the number of layers, neurons, the embedding dimension or the activation functions used. This is why the data is split into three sets: The training set, using all data available up to the 31st of December 2013 (2,557 days for France and 2,922 for the UK). It is used to learn the parameters of the algorithms through mathematical optimization. The years 2014 and 2015 serve as validation set (730 days). It is used to tune the hyperparameters of the different approaches. All the data from January the 1st 2016 (974 days for France and 1,096 for the UK) is used as test set, on which the final results are presented. Grid search is applied to find the best combination of values: for each hyperparameter, a range of values is defined, and all the possible combinations are successively tested. The one yielding the lowest RMSE (see section SECREF4) on the validation set is used for the final results on the test one. While relatively straightforward for RFs and the LASSO, the extreme number of possibilities for NNs and their extensive training time compelled us to limit the range of possible architectures. The hyperparameters are tuned per method and per country: ergo the hyperparameters of a given algorithm will be the same for the different time series of a country (e.g. the RNN architecture for temperature and load for France will be the same, but different from the UK one). Finally, before application on the testing set, all the methods are re-trained from scratch using both the training and validation data.
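Putting the pieces of this section together, here is a hedged Keras sketch of the RNN just described (embedding learnt jointly, batch normalization before a GRU, dense ReLU layers with dropout, a sigmoid output and a quadratic loss optimized with ADAM). The embedding dimension q = 20 comes from the text; the numbers of GRU units, dense neurons, epochs and batch size are placeholders rather than the grid-searched values, and V, S, X_train, y_train, X_val, y_val are reused from the sketches above.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, BatchNormalization, GRU, Dense, Dropout

q = 20          # embedding dimension reported in the paper
units = 16      # GRU size: placeholder, tuned by grid search in the paper

model = Sequential([
    Embedding(input_dim=V + 1, output_dim=q, input_length=S),  # embedding learnt jointly
    BatchNormalization(),            # used before the GRU to stabilize training
    GRU(units),
    Dropout(0.25),                   # dropout probabilities of 0.25 or 0.33 in the paper
    Dense(8, activation="relu"),     # dense layer(s) with ReLU activation
    Dense(1, activation="sigmoid"),  # sigmoid output, since the series are scaled to [0, 1]
])

model.compile(optimizer="adam", loss="mse")   # quadratic loss with ADAM
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=32)
```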
Experiments The goal of our experiments is to quantify how close one can get using textual data only when compared to numerical data. However the inputs of the numerical benchmark should hence be comparable to the information contained in the weather reports. Considering they mainly contain calendar (day of the week and month) as well as temperature and wind information, the benchmark of comparison is a random forest trained on four features only: the time of the year (whose value is 0 on January the 1st and 1 on December the 31st with a linear growth in between), the day of the week, the national average temperature and wind speed. The metrics of evaluation are the Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and the $R^2$ coefficient, given by: $\text{MAPE} = \frac{100}{T} \sum _{t=1}^{T} \left| \frac{y_t - \hat{y}_t}{y_t} \right|$, $\text{RMSE} = \sqrt{\frac{1}{T} \sum _{t=1}^{T} (y_t - \hat{y}_t)^2}$, $\text{MAE} = \frac{1}{T} \sum _{t=1}^{T} |y_t - \hat{y}_t|$ and $R^2 = 1 - \frac{\sum _{t=1}^{T} (y_t - \hat{y}_t)^2}{\sum _{t=1}^{T} (y_t - \overline{y})^2}$, where $T$ is the number of test samples, $y_t$ and $\hat{y}_t$ are respectively the ground truth and the prediction for the document of day $t$, and $\overline{y}$ is the empirical average of the time series over the test sample. A known problem with MAPE is that it unreasonably increases the error score for values close to 0. While for the load it isn't an issue at all, it can be for the meteorological time series. Therefore for the temperature, the MAPE is calculated only when the ground truth is above the 5% empirical quantile. Although we aim at achieving the highest accuracy possible, we focus on the interpretability of our models as well. Experiments ::: Feature selection Many words are obviously irrelevant to the time series in our texts. For instance the day of the week, while playing a significant role for the load demand, is useless for temperature or wind. Such words make the training harder and may decrease the accuracy of the prediction. Therefore a feature selection procedure similar to BIBREF28 is applied to select a subset of useful features for the different algorithms, and for each type of time series. Random forests are naturally able to calculate feature importance through the calculation of the error increase in the out-of-bag (OOB) samples. Therefore the following process is applied to select a subset of $V^*$ relevant words to keep: an RF is trained on the whole training & validation set, so that the OOB feature importance can be calculated. The features are then successively added to the RF in decreasing order of feature importance. This process is repeated $B=10$ times to tone down the randomness. The number $V^*$ is then set to the number of features giving the highest median OOB $R^2$ value. The results of this procedure for the French data are represented in figure FIGREF24. The best median $R^2$ is achieved for $V^* = 52$, although one could argue that not much gain is obtained after 36 words. The results are very similar for the UK data set, thus for the sake of simplicity the same value $V^* = 52$ is used. Note that the same subset of words is used for all the different forecasting models, which could be improved in further work using other selection criteria (e.g. mutual information, see BIBREF29). An example of normalized feature importance is given in figure FIGREF32.
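The selection loop described above can be sketched as follows, reusing the TF-IDF matrix X, the target y and the word list vocab from earlier. Note that scikit-learn's feature_importances_ is impurity-based rather than the OOB permutation importance the authors describe, so it is only a stand-in, while oob_score_ does give the OOB $R^2$; the function below is a sketch of the procedure, not the authors' exact code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def select_vocabulary(X, y, vocab, B=10, max_words=100):
    """Return the words kept by the median-OOB-R^2 criterion."""
    X = np.asarray(X.todense()) if hasattr(X, "todense") else np.asarray(X)
    scores = np.zeros((B, max_words))
    for b in range(B):
        rf = RandomForestRegressor(n_estimators=500, oob_score=True, n_jobs=-1)
        rf.fit(X, y)
        order = np.argsort(rf.feature_importances_)[::-1]    # impurity-based stand-in
        for k in range(1, max_words + 1):
            rf_k = RandomForestRegressor(n_estimators=500, oob_score=True, n_jobs=-1)
            rf_k.fit(X[:, order[:k]], y)
            scores[b, k - 1] = rf_k.oob_score_               # OOB R^2 with the k best words
    v_star = int(np.argmax(np.median(scores, axis=0))) + 1   # V*: best median OOB R^2
    return [vocab[i] for i in order[:v_star]]
```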
Experiments ::: Main results Note that most of the considered algorithms involve randomness during the training phase, with the subsampling in the RFs or the gradient descent in the NNs for instance. In order to tone it down and to increase the consistency of our results, the different models are run $B=10$ times. The results presented hereafter correspond to the average and standard deviation over those runs. The RF model denoted as "sel" is the one with the reduced number of features, whereas the other RF uses the full vocabulary. We also considered an aggregated forecaster (abridged Agg), consisting of the average of the two best individual ones in terms of RMSE. All the neural network methods have a reduced vocabulary size $V^*$. The results for the French and UK data are respectively given by tables TABREF26 and TABREF27. Our empirical results show that for the electricity consumption prediction task, the order of magnitude of the relative error is around 5%, independently of the language, encoding and machine learning method, thus proving the intrinsic value of the information contained in the textual documents for this time series. As expected, all text-based methods perform worse than when using explicitly numerical input features. Indeed, despite containing relevant information, the text is always fuzzier and less precise than an explicit value for the temperature or the time of the year for instance. Again the aim of this work is not to beat traditional methods with text, but to quantify how close one can come to traditional approaches when using text exclusively. As such, achieving less than 5% of MAPE was nonetheless deemed impressive by expert electricity forecasters. Feature selection brings significant improvement in the French case, although it does not yield any improvement in the English one. The reason for this is currently unknown. Nevertheless the feature selection procedure also helps the NNs by dramatically reducing the vocabulary size, and without it the training of the networks was bound to fail. While the errors across methods are roughly comparable and highlight the valuable information contained within the reports, the best method nonetheless fluctuates between languages. Indeed in the French case there is a hegemony of the NNs, with the embedding RNN edging the MLP TF-IDF one. However for the UK data set the RFs yield significantly better results on the test set than the NNs. This inversion of performance of the algorithms is possibly due to a change in the way the reports were written by the Met Office after August 2017, since the results of the MLP and RNN on the validation set (not shown here) were satisfactory and better than both RFs. For the two languages both the CNN and the LASSO yielded poor results. For the former, it is because despite grid search no satisfactory architecture was found, whereas the latter is a linear approach and was used more for interpretation purposes than strong performance. Finally the naive aggregation of the two best experts always yields improvement, especially for the French case where the two different encodings are combined. This emphasizes the specificity of the two representations, leading to different types of errors. An example of comparison between ground truth and forecast for the case of electricity consumption is given for the French language in fig. FIGREF29, while another for temperature may be found in the appendix, figure FIGREF51. The sudden "spikes" in the forecast are due to the presence of winter-related words in a summer report. This happens when such words are used in comparisons, such as "The flood will be as severe as in January" in a June report, and is a limit of our approach.
Finally, the usual residual $\hat{\varepsilon }_t = y_t - \hat{y}_t$ analysis procedures were applied: Kolmogorov normality test, QQ-plot comparison to Gaussian quantiles, residual/fit comparison, etc. While not thoroughly Gaussian, the residuals were nonetheless close to normality and displayed satisfactory properties, such as being generally independent from the fitted and ground truth values. Excerpts of this analysis for France are given in figure FIGREF52 of the appendix. The results for the temperature and wind series are given in the appendix. Considering that they have a more stochastic behavior and are thus more difficult to predict, the order of magnitude of the errors differs (the MAPE being around 15% for temperature for instance) but globally the same observations can be made. Experiments ::: Interpretability of the models While accuracy is the most relevant metric to assess forecasts, interpretability of the models is of paramount importance, especially in the field of professional electricity load forecasting and considering the novelty of our work. Therefore in this section we discuss the properties of the RF and LASSO models using the TF-IDF encoding scheme, as well as the RNN word embedding. Experiments ::: Interpretability of the models ::: TF-IDF representation One significant advantage of the TF-IDF encoding when combined with random forests or the LASSO is that it is possible to interpret the behavior of the models. For instance, figure FIGREF32 represents the 20 most important features (in the RF OOB sense) for both data sets when regressing over electricity demand data. As one can see, the random forest naturally extracts calendar information contained in the weather reports, since months or week-end days are among the most important ones. For the former, this is due to the periodic behavior of electricity consumption, which is higher in winter and lower in summer. This is also why characteristic phenomena of summer and winter, such as "thunderstorms", "snow" or "freezing", also have a high feature importance. The fact that August has a much more important role than July also concurs with expert knowledge, especially for France: indeed it is the month when most people go on vacation, and thus when the load drops the most. As for the week-end names, it is due to the significantly different consumer behavior during Saturdays and especially Sundays, when most of the businesses are closed and people are usually at home. Therefore the relevant words selected by the random forest are almost all in agreement with expert knowledge. We also performed the analysis of the relevant words for the LASSO. In order to do that, we examined the words $w$ with the largest associated coefficients $\beta _w$ (in absolute value) in the regression. Since the TF-IDF matrix has positive coefficients, it is possible to interpret the sign of the coefficient $\beta _w$ as its impact on the time series. For instance if $\beta _w > 0$ then the presence of the word $w$ causes a rise in the time series (respectively if $\beta _w < 0$, it entails a decline). The results are plotted in fig. FIGREF35 for the UK.
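A small sketch of this coefficient inspection is given below, reusing the TF-IDF matrix X, the scaled load y and the word list vocab defined earlier; the regularization strength is a placeholder, since the paper tunes it by grid search on the validation set.

```python
import numpy as np
from sklearn.linear_model import Lasso

lasso = Lasso(alpha=1e-4)     # placeholder penalty; tuned on the validation set in the paper
lasso.fit(X, y)

top = np.argsort(np.abs(lasso.coef_))[::-1][:20]     # 20 largest |beta_w|
for i in top:
    effect = "raises" if lasso.coef_[i] > 0 else "lowers"
    print(f"{vocab[i]:>15s}  beta = {lasso.coef_[i]:+.5f}  ({effect} the load)")
```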
As one can see, the winter-related words have positive coefficients and thus increase the load demand as expected, whereas the summer-related ones decrease it. The value of the coefficients also reflects the impact on the load demand. For example January and February have the highest and very similar values, which concurs with the similarity between the months. Sunday has a much more negative coefficient than Saturday, since the demand significantly drops during the last day of the week. The important words also globally match between the LASSO and the RF, which is a proof of the consistency of our results (this is further explored afterwards in figure FIGREF43). Although not presented here, the results are almost identical for the French load, with approximately the same order of relevance. The important words logically vary depending on the considered time series, but are always coherent. For instance for the wind one, terms such as "gales", "windy" or "strong" have the highest positive coefficients, as seen in the appendix figure FIGREF53. Those results show that a text-based approach not only extracts the relevant information by itself, but may eventually be used to understand which phenomena are relevant to explain the behavior of a time series, and to what extent. Experiments ::: Interpretability of the models ::: Vector embedding representation Word vector embeddings such as Word2Vec and GloVe are known for their vectorial properties translating linguistic ones. However considering the objective function of our problem, there was no obvious reason for such attributes to appear in our own. Nevertheless for both languages we conducted an analysis of the geometric properties of our embedding matrix. We investigated the distances between word vectors, the relevant metric being the cosine distance given by: $d_{\cos }(\overrightarrow{w_1}, \overrightarrow{w_2}) = 1 - \frac{\overrightarrow{w_1} \cdot \overrightarrow{w_2}}{\Vert \overrightarrow{w_1} \Vert _2 \, \Vert \overrightarrow{w_2} \Vert _2}$, where $\overrightarrow{w_1}$ and $\overrightarrow{w_2}$ are given word vectors. Thus a cosine distance lower than 1 means similarity between word vectors, whereas a distance greater than 1 corresponds to opposition. The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were grouped together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The last two rows correspond to words we deemed important to check the distance with (an antagonistic one or a relevant one not in the top 9, for instance).
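Such a nearest-word table can be produced from the trained network with a few lines, as sketched below; the assumptions are that the first layer of the Keras model from the earlier sketch is the embedding and that word_index maps each kept word to its integer id (e.g. a restriction of tokenizer.word_index), both of which are our naming choices rather than the authors'.

```python
import numpy as np

W = model.layers[0].get_weights()[0]     # (V + 1) x q embedding matrix; row 0 is the padding index

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def closest_words(word, k=9):
    ref = W[word_index[word]]
    dists = {w: cosine_distance(ref, W[i]) for w, i in word_index.items()}
    return sorted(dists.items(), key=lambda kv: kv[1])[: k + 1]   # the word itself comes first

print(closest_words("january"))   # winter-related terms are expected to top the list
```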
The results of the experiments are very similar for both languages again. Indeed, the words are globally embedded in the vector space by topic: winter-related words such as "January" ("janvier"), "February" ("février"), "snow" ("neige"), "freezing" ("glacial") are close to each other and almost opposite to summer-related ones such as "July" ("juillet"), "August" ("août"), "hot" ("chaud"). For both cases the week days Monday ("lundi") to Friday ("vendredi") are grouped very closely to each other, while significantly separated from the week-end ones, "Saturday" ("samedi") and "Sunday" ("dimanche"). Despite these observations, a few seemingly unrelated words enter the lists of top 10, especially for the English case (such as "pressure" or "dusk" for "February"). In fact the French language embedding seems of better quality, which is perhaps linked to the greater average length of the French reports. This issue could probably be addressed with more data. Another observation made is that the importance of a word $w$ seems related to its Euclidean norm $\Vert \overrightarrow{w} \Vert _2$ in the embedding space. For both languages the list of the 20 words with the largest norm is given in fig. FIGREF40. As one can see, it globally matches the selected ones from the RF or the LASSO (especially for the French language), although the order is quite different. This is further supported by the Venn diagram of common words among the top 50 ones for each word selection method, represented in figure FIGREF43 for France. Therefore this observation could also be used as a feature selection procedure for the RNN or CNN in further work. In order to achieve a global view of the embeddings, the t-SNE algorithm BIBREF30 is applied to project an embedding matrix into a 2-dimensional space, for both languages. The observations for the few aforementioned words are confirmed by this representation, as plotted in figure FIGREF44. Thematic clusters can be observed, roughly corresponding to winter, summer, week days and week-end days for both languages. Globally summer and winter seem opposed, although one should keep in mind that the t-SNE representation does not preserve the cosine distance. The clusters of the French embedding appear much more compact than the UK ones, confirming the observations made when explicitly calculating the cosine distances.
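As an illustration of the norm-based ranking and the t-SNE projection mentioned above, here is a short sketch reusing the embedding matrix W and the word_index mapping from the previous snippet; the TSNE parameters are left at their scikit-learn defaults rather than whatever the authors used.

```python
import numpy as np
from sklearn.manifold import TSNE

words = list(word_index.keys())
vectors = np.stack([W[word_index[w]] for w in words])

# rank words by the Euclidean norm of their embedding vector
norms = np.linalg.norm(vectors, axis=1)
print("20 largest-norm words:", [words[i] for i in np.argsort(norms)[::-1][:20]])

# 2-D projection of the embedding for a global view of the thematic clusters
coords = TSNE(n_components=2).fit_transform(vectors)   # one (x, y) point per word
```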
Conclusion In this study, a novel pipeline to predict three types of time series using exclusively a textual source was proposed. Making use of publicly available daily weather reports, we were able to predict the electricity consumption with less than 5% of MAPE for both France and the United Kingdom. Moreover our average national temperature and wind speed predictions displayed sufficient accuracy to be used to replace missing data, or as a first approximation in traditional models in case of unavailability of meteorological features. The texts were encoded numerically using either TF-IDF or our own neural word embedding. A plethora of machine learning algorithms such as random forests or neural networks were applied on top of those representations. Our results were consistent across language, numerical representation of the text and prediction algorithm, proving the intrinsic value of the textual sources for the three considered time series. Contrary to previous works in the field of textual data for time series forecasting, we went in depth and quantified the impact of words on the variations of the series. As such we saw that all the algorithms naturally extract calendar and meteorological information from the texts, and that words impact the time series in the expected way (e.g. winter words increase the consumption and summer ones decrease it). Despite being trained on a regular quadratic loss, our neural word embedding spontaneously builds geometric properties. Not only does the norm of a word vector reflect its significance, but the words are also grouped by topic, with for example winter, summer or day-of-the-week clusters. Note that this study was a preliminary work on the use of textual information for time series prediction, especially for electricity demand. The long-term goal is to include multiple sources of textual information to improve the accuracy of state-of-the-art methods, or to build a text-based forecaster which can be used to increase the diversity in a set of experts for electricity consumption BIBREF31. However due to the redundancy between the information in the considered weather reports and meteorological features, it may be necessary to consider alternative textual sources. The use of social media such as Facebook, Twitter or Instagram may give interesting insight and will therefore be investigated in future work. Additional results for the prediction tasks on temperature and wind speed can be found in tables TABREF47 to TABREF50. An example of forecast for the French temperature is given in figure FIGREF51. While not strictly normally distributed, the residuals for the French electricity demand display an acceptable behavior. This also holds true for the British consumption and both temperature time series, but is of lesser quality for the wind one. For the UK wind LASSO regression, the words with the highest coefficients $\beta _w$ are indeed related to strong wind phenomena, whereas antagonistic ones such as "fog" or "mist" have strongly negative ones as expected (fig. FIGREF53). For both languages we represented the evolution of the (normalized) losses for the problem of load regression in fig. FIGREF54. The shape is a typical one, with the validation loss slightly above the training one. The slightly erratic behavior of the former is possibly due to a lack of data to train the embeddings. The cosine distances for three other major words and for both corpora have been calculated as well. The results are given in tables TABREF57 and TABREF58. For both languages, the three summer months are grouped together, and so are the two week-end days. However again the results are less clear for the English language. They are especially mediocre for "hot", considering that only "warm" seems truly relevant and that "August" for instance is quite far away. For the French language, instead of "hot" the distances to "thunderstorms" were calculated. The results are quite satisfactory, with "orageux"/"orageuse" ("thundery") coming in the first two places and related meteorological phenomena ("cumulus" and "grêle", meaning "hail") relatively close as well. For the French case, Saturday and Sunday are very close to summer-related words. This observation probably highlights the fact that the RNN groups load-increasing and load-decreasing words in opposite parts of the embedding space.
4,261 days for France and 4,748 for the UK
Q: Is there any example where geometric property is visible for context similarity between words? Text: Introduction Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point-of-view, up to now it has narrowed its efforts on the exploitation of regular numerical features extracted from sensors, data bases or stock exchanges. Unstructured data such as text on the other hand remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text driven approach could be used to make profit on the market. One of the most conventional ways for text representation is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series again. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict if NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was efficient to improve the prediction of stock volatility. More recently Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). Their results showed that indeed performance was better when using probabilities from their alternative than those of the original LDA. Deep learning with its natural ability to work with text through word embeddings has also been used for time series prediction with text. Combined with traditional time series features, the authors of BIBREF8 derived sentiment features from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al. BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently in Rodrigues et al. BIBREF11, short description of events (such as concerts, sports matches, ...) 
are leveraged through a word embedding and neural networks in addition to more traditional features. Their experiments show that including the text can bring an improvement of up to 2% of root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black-boxes. The field of electricity consumption is one where expert knowledge is broad. It is known that the major phenomena driving the load demand are calendar (time of the year, day of the week, ...) and meteorological. For instance generalized additive models (GAM) BIBREF12 representing the consumption as a sum of functions of the time of the year, temperature and wind speed (among others) typically yield less than 1.5% of relative error for French national electricity demand and 8% for local one BIBREF13, BIBREF14. Neural networks and their variants, with their ability to extract patterns from heterogeneous types of data have also obtained state-of-the-art results BIBREF15, BIBREF16, BIBREF17. However to our knowledge no exploratory work using text has been conducted yet. Including such data in electricity demand forecasting models would not only contribute to close the gap with other domains, but also help to understand better which aspects of text are useful, how the encoding of the text influences forecasts and to which extend a prediction algorithm can extract relevant information from unstructured data. Moreover the major drawback of all the aforementioned approaches is that they require meteorological data that may be difficult to find, unavailable in real time or expensive. Textual sources such as weather reports on the other hand are easy to find, usually available on a daily basis and free. The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series. The rest of this paper is organized as follows. 
The following section introduces the two data sets used to conduct our study. Section 3 presents the different machine learning approaches used and how they were tuned. Section 4 highlights the main results of our study, while section 5 concludes this paper and gives insight on future possible work. Presentation of the data In order to prove the consistency of our work, experiments have been conducted on two data sets, one for France and the other for the UK. In this section details about the text and time series data are given, as well as the major preprocessing steps. Presentation of the data ::: Time Series Three types of time series are considered in our work: national net electricity consumption (also referred as load or demand), national temperature and wind speed. The load data sets were retrieved on the websites of the respective grid operators, respectively RTE (Réseau et Transport d'Électricité) for France and National Grid for the UK. For France, the available data ranges from January the 1st 2007 to August the 31st 2018. The default temporal resolution is 30 minutes, but it is averaged to a daily one. For the UK, it is available from January the 1st 2006 to December the 31st 2018 with the same temporal resolution and thus averaging. Due to social factors such as energy policies or new usages of electricity (e.g. Electric Vehicles), the net consumption usually has a long-term trend (fig. FIGREF2). While for France it seems marginal (fig. FIGREF2), there is a strong decreasing trend for the United-Kingdom (fig. FIGREF2). Such a strong non-stationarity of the time series would cause problems for the forecasting process, since the learnt demand levels would differ significantly from the upcoming ones. Therefore a linear regression was used to approximate the decreasing trend of the net consumption in the UK. It is then subtracted before the training of the methods, and then re-added a posteriori for prediction. As for the weather time series, they were extracted from multiple weather stations around France and the UK. The national average is obtained by combining the data from all stations with a weight proportional to the city population the station is located in. For France the stations' data is provided by the French meteorological office, Météo France, while the British ones are scrapped from stations of the National Oceanic and Atmospheric Administration (NOAA). Available on the same time span as the consumption, they usually have a 3 hours temporal resolution but are averaged to a daily one as well. Finally the time series were scaled to the range $[0,1]$ before the training phase, and re-scaled during prediction time. Presentation of the data ::: Text Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. Under their raw shape, those reports take the form of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively. 
An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2. As emphasized in many studies, preprocessing of the text can ease the learning of the methods and improve accuracy BIBREF18. Therefore the following steps are applied: removal of non-alphabetic characters, removal of stop-words and lowercasing. While it was often highlighted that word lemmatization and stemming improve results, initial experiments showed it was not the case for our study. This is probably due to the technical vocabulary used in both corpora pertaining to the field of meteorology. Already limited in size, the aforementioned preprocessing operations do not yield a significant vocabulary size reduction and can even lead to a loss of linguistic meaning. Finally, extremely frequent or rare words may not have high explanatory power and may reduce the different models' accuracy. That is why words appearing less than 7 times or in more than 40% of the (learning) corpus are removed as well. Figure FIGREF8 represents the distribution of the document lengths after preprocessing, while table TABREF11 gives descriptive statistics on both corpora. Note that the preprocessing steps do not heavily rely on the considered language: therefore our pipeline is easily adaptable for other languages. Modeling and forecasting framework A major target of our work is to show the reports contain an intrinsic information relevant for time series, and that the predictive results do not heavily depend on the encoding of the text or the machine learning algorithm used. Therefore in this section we present the text encoding approaches, as well as the forecasting methods used with them. Modeling and forecasting framework ::: Numerical Encoding of the Text Machines and algorithms cannot work with raw text directly. Thus one major step when working with text is the choice of its numerical representation. In our work two significantly different encoding approaches are considered. The first one is the TF-IDF approach. It embeds a corpus of $N$ documents and $V$ words into a matrix $X$ of size $N \times V$. As such, every document is represented by a vector of size $V$. For each word $w$ and document $d$ the associated coefficient $x_{d,w}$ represents the frequency of that word in that document, penalized by its overall frequency in the rest of the corpus. Thus very common words will have a low TF-IDF value, whereas specific ones which will appear often in a handful of documents will have a large TF-IDF score. The exact formula to calculate the TF-IDF value of word $w$ in document $d$ is: where $f_{d,w}$ is the number of appearances of $w$ in $d$ adjusted by the length of $d$ and $\#\lbrace d: w \in d \rbrace $ is the number of documents in which the word $w$ appears. In our work we considered only individual words, also commonly referred as 1-grams in the field of natural language processing (NLP). The methodology can be easily extended to $n$-grams (groups of $n$ consecutive words), but initial experiments showed that it did not bring any significant improvement over 1-grams. The second representation is a neural word embedding. It consists in representing every word in the corpus by a real-valued vector of dimension $q$. Such models are usually obtained by learning a vector representation from word co-occurrences in a very large corpus (typically hundred thousands of documents, such as Wikipedia articles for example). 
The two most popular embeddings are probably Google's Word2Vec BIBREF19 and Standford's GloVe BIBREF20. In the former, a neural network is trained to predict a word given its context (continuous bag of word model), whereas in the latter a matrix factorization scheme on the log co-occurences of words is applied. In any case, the very nature of the objective function allows the embedding models to learn to translate linguistic similarities into geometric properties in the vector space. For instance the vector $\overrightarrow{king} - \overrightarrow{man} + \overrightarrow{woman}$ is expected to be very close to the vector $\overrightarrow{queen}$. However in our case we want a vector encoding which is tailored for the technical vocabulary of our weather reports and for the subsequent prediction task. This is why we decided to train our own word embedding from scratch during the learning phase of our recurrent or convolutional neural network. Aside from the much more restricted size of our corpora, the major difference with the aforementioned embeddings is that in our case it is obtained by minimizing a squared loss on the prediction. In that framework there is no explicit reason for our representation to display any geometric structure. However as detailed in section SECREF36, our word vectors nonetheless display geometric properties pertaining to the behavior of the time series. Modeling and forecasting framework ::: Machine Learning Algorithms Multiple machine learning algorithms were applied on top of the encoded textual documents. For the TF-IDF representation, the following approaches are applied: random forests (RF), LASSO and multilayer perceptron (MLP) neural networks (NN). We chose these algorithms combined to the TF-IDF representation due to the possibility of interpretation they give. Indeed, considering the novelty of this work, the understanding of the impact of the words on the forecast is of paramount importance, and as opposed to embeddings, TF-IDF has a natural interpretation. Furthermore the RF and LASSO methods give the possibility to interpret marginal effects and analyze the importance of features, and thus to find the words which affect the time series the most. As for the word embedding, recurrent or convolutional neural networks (respectively RNN and CNN) were used with them. MLPs are not used, for they would require to concatenate all the vector representations of a sentence together beforehand and result in a network with too many parameters to be trained correctly with our number of available documents. Recall that we decided to train our own vector representation of words instead of using an already available one. In order to obtain the embedding, the texts are first converted into a sequence of integers: each word is given a number ranging from 1 to $V$, where $V$ is the vocabulary size (0 is used for padding or unknown words in the test set). One must then calculate the maximum sequence length $S$, and sentences of length shorter than $S$ are then padded by zeros. During the training process of the network, for each word a $q$ dimensional real-valued vector representation is calculated simultaneously to the rest of the weights of the network. Ergo a sentence of $S$ words is translated into a sequence of $S$ $q$-sized vectors, which is then fed into a recurrent neural unit. For both languages, $q=20$ seemed to yield the best results. 
In the case of recurrent units two main possibilities arise, with LSTM (Long Short-Term Memory) BIBREF21 and GRU (Gated Recurrent Unit) BIBREF22. After a few initial trials, no significant performance differences were noticed between the two types of cells. Therefore GRU were systematically used for recurrent networks, since their lower amount of parameters makes them easier to train and reduces overfitting. The output of the recurrent unit is afterwards linked to a fully connected (also referred as dense) layer, leading to the final forecast as output. The rectified linear unit (ReLU) activation in dense layers systematically gave the best results, except on the output layer where we used a sigmoid one considering the time series' normalization. In order to tone down overfitting, dropout layers BIBREF23 with probabilities of 0.25 or 0.33 are set in between the layers. Batch normalization BIBREF24 is also used before the GRU since it stabilized training and improved performance. Figure FIGREF14 represents the architecture of our RNN. The word embedding matrix is therefore learnt jointly with the rest of the parameters of the neural network by minimization of the quadratic loss with respect to the true electricity demand. Note that while above we described the case of the RNN, the same procedure is considered for the case of the CNN, with only the recurrent layers replaced by a combination of 1D convolution and pooling ones. As for the optimization algorithms of the neural networks, traditional stochastic gradient descent with momentum or ADAM BIBREF25 together with a quadratic loss are used. All of the previously mentioned methods were coded with Python. The LASSO and RF were implemented using the library Scikit Learn BIBREF26, while Keras BIBREF27 was used for the neural networks. Modeling and forecasting framework ::: Hyperparameter Tuning While most parameters are trained during the learning optimization process, all methods still involve a certain number of hyperparameters that must be manually set by the user. For instance for random forests it can correspond to the maximum depth of the trees or the fraction of features used at each split step, while for neural networks it can be the number of layers, neurons, the embedding dimension or the activation functions used. This is why the data is split into three sets: The training set, using all data available up to the 31st of December 2013 (2,557 days for France and 2,922 for the UK). It is used to learn the parameters of the algorithms through mathematical optimization. The years 2014 and 2015 serve as validation set (730 days). It is used to tune the hyperparameters of the different approaches. All the data from January the 1st 2016 (974 days for France and 1,096 for the UK) is used as test set, on which the final results are presented. Grid search is applied to find the best combination of values: for each hyperparameter, a range of values is defined, and all the possible combinations are successively tested. The one yielding the lowest RMSE (see section SECREF4) on the validation set is used for the final results on the test one. While relatively straightforward for RFs and the LASSO, the extreme number of possibilities for NNs and their extensive training time compelled us to limit the range of architectures possible. The hyperparameters are tuned per method and per country: ergo the hyperparameters of a given algorithm will be the same for the different time series of a country (e.g. 
the RNN architecture for temperature and load for France will be the same, but different from the UK one). Finally before application on the testing set, all the methods are re-trained from scratch using both the training and validation data. Experiments The goal of our experiments is to quantify how close one can get using textual data only when compared to numerical data. However the inputs of the numerical benchmark should be hence comparable to the information contained in the weather reports. Considering they mainly contain calendar (day of the week and month) as well as temperature and wind information, the benchmark of comparison is a random forest trained on four features only: the time of the year (whose value is 0 on January the 1st and 1 on December the 31st with a linear growth in between), the day of the week, the national average temperature and wind speed. The metrics of evaluation are the Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and the $R^2$ coefficient given by: where $T$ is the number of test samples, $y_t$ and $\hat{y}_t$ are respectively the ground truth and the prediction for the document of day $t$, and $\overline{y}$ is the empirical average of the time series over the test sample. A known problem with MAPE is that it unreasonably increases the error score for values close to 0. While for the load it isn't an issue at all, it can be for the meteorological time series. Therefore for the temperature, the MAPE is calculated only when the ground truth is above the 5% empirical quantile. Although we aim at achieving the highest accuracy possible, we focus on the interpretability of our models as well. Experiments ::: Feature selection Many words are obviously irrelevant to the time series in our texts. For instance the day of the week, while playing a significant role for the load demand, is useless for temperature or wind. Such words make the training harder and may decrease the accuracy of the prediction. Therefore a feature selection procedure similar to BIBREF28 is applied to select a subset of useful features for the different algorithms, and for each type of time series. Random forests are naturally able to calculate feature importance through the calculation of error increase in the out-of-bag (OOB) samples. Therefore the following process is applied to select a subset of $V^*$ relevant words to keep: A RF is trained on the whole training & validation set. The OOB feature importance can thus be calculated. The features are then successively added to the RF in decreasing order of feature importance. This process is repeated $B=10$ times to tone down the randomness. The number $V^*$ is then set to the number of features giving the highest median OOB $R^2$ value. The results of this procedure for the French data is represented in figure FIGREF24. The best median $R^2$ is achieved for $V^* = 52$, although one could argue that not much gain is obtained after 36 words. The results are very similar for the UK data set, thus for the sake of simplicity the same value $V^* = 52$ is used. Note that the same subset of words is used for all the different forecasting models, which could be improved in further work using other selection criteria (e.g. mutual information, see BIBREF29). An example of normalized feature importance is given in figure. FIGREF32. 
Experiments ::: Main results Note that most of the considered algorithms involve randomness during the training phase, with the subsampling in the RFs or the gradient descent in the NNs for instance. In order to tone it down and to increase the consistency of our results, the different models are run $B=10$ times. The results presented hereafter correspond to the average and standard-deviation on those runs. The RF model denoted as "sel" is the one with the reduced number of features, whereas the other RF uses the full vocabulary. We also considered an aggregated forecaster (abridged Agg), consisting of the average of the two best individual ones in terms of RMSE. All the neural network methods have a reduced vocabulary size $V^*$. The results for the French and UK data are respectively given by tables TABREF26 and TABREF27. Our empirical results show that for the electricity consumption prediction task, the order of magnitude of the relative error is around 5%, independently of the language, encoding and machine learning method, thus proving the intrinsic value of the information contained in the textual documents for this time series. As expected, all text based methods perform poorer than when using explicitly numerical input features. Indeed, despite containing relevant information, the text is always more fuzzy and less precise than an explicit value for the temperature or the time of the year for instance. Again the aim of this work is not to beat traditional methods with text, but quantifying how close one can come to traditional approaches when using text exclusively. As such achieving less than 5% of MAPE was nonetheless deemed impressive by expert electricity forecasters. Feature selection brings significant improvement in the French case, although it does not yield any improvement in the English one. The reason for this is currently unknown. Nevertheless the feature selection procedure also helps the NNs by dramatically reducing the vocabulary size, and without it the training of the networks was bound to fail. While the errors accross methods are roughly comparable and highlight the valuable information contained within the reports, the best method nonetheless fluctuates between languages. Indeed in the French case there is a hegemony of the NNs, with the embedding RNN edging the MLP TF-IDF one. However for the UK data set the RFs yield significantly better results on the test set than the NNs. This inversion of performance of the algorithms is possibly due to a change in the way the reports were written by the Met Office after August 2017, since the results of the MLP and RNN on the validation set (not shown here) were satisfactory and better than both RFs. For the two languages both the CNN and the LASSO yielded poor results. For the former, it is because despite grid search no satisfactory architecture was found, whereas the latter is a linear approach and was used more for interpretation purposes than strong performance. Finally the naive aggregation of the two best experts always yields improvement, especially for the French case where the two different encodings are combined. This emphasises the specificity of the two representations leading to different types of errors. An example of comparison between ground truth and forecast for the case of electricity consumption is given for the French language with fig. FIGREF29, while another for temperature may be found in the appendix FIGREF51. 
The sudden "spikes" in the forecast are due to the presence of winter related words in a summer report. This happens when such words are used in comparisons, such as "The flood will be as severe as in January" in a June report, and is a limit of our approach. Finally, the usual residual $\hat{\varepsilon }_t = y_t - \hat{y}_t$ analysis procedures were applied: Kolmogorov normality test, QQ-plot comparison to Gaussian quantiles, residual/fit comparison... While not strictly Gaussian, the residuals were close to normality nonetheless and displayed satisfactory properties such as being generally independent of the fitted and ground truth values. Excerpts of this analysis for France are given in figure FIGREF52 of the appendix. The results for the temperature and wind series are given in the appendix. Considering that they have a more stochastic behavior and are thus more difficult to predict, the order of magnitude of the errors differs (the MAPE being around 15% for temperature for instance) but globally the same observations can be made. Experiments ::: Interpretability of the models While accuracy is the most relevant metric to assess forecasts, interpretability of the models is of paramount importance, especially in the field of professional electricity load forecasting and considering the novelty of our work. Therefore in this section we discuss the properties of the RF and LASSO models using the TF-IDF encoding scheme, as well as the RNN word embedding. Experiments ::: Interpretability of the models ::: TF-IDF representation One significant advantage of the TF-IDF encoding when combined with random forests or the LASSO is that it is possible to interpret the behavior of the models. For instance, figure FIGREF32 represents the 20 most important features (in the RF OOB sense) for both data sets when regressing over electricity demand data. As one can see, the random forest naturally extracts calendar information contained in the weather reports, since months or week-end days are among the most important ones. For the former, this is due to the periodic behavior of electricity consumption, which is higher in winter and lower in summer. This is also why characteristic phenomena of summer and winter, such as "thunderstorms", "snow" or "freezing", also have a high feature importance. The fact that August has a much more important role than July also concurs with expert knowledge, especially for France: indeed it is the month when most people go on vacation, and thus when the load drops the most. As for the week-end names, it is due to the significantly different consumer behavior during Saturdays and especially Sundays, when most of the businesses are closed and people are usually at home. Therefore the relevant words selected by the random forest are almost all in agreement with expert knowledge. We also performed the analysis of the relevant words for the LASSO. In order to do that, we examined the words $w$ with the largest associated coefficients $\beta _w$ (in absolute value) in the regression. Since the TF-IDF matrix has positive coefficients, it is possible to interpret the sign of the coefficient $\beta _w$ as its impact on the time series. For instance if $\beta _w > 0$ then the presence of the word $w$ causes a rise in the time series (respectively if $\beta _w < 0$, it entails a decline). The results are plotted in fig. FIGREF35 for the UK.
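A minimal sketch of how such a coefficient ranking can be produced is given below, assuming a TF-IDF matrix X_tfidf, the corresponding load series y and the list words of column names; the penalty level alpha is an assumption and would be tuned on the validation set in practice.

import numpy as np
from sklearn.linear_model import Lasso

lasso = Lasso(alpha=1e-4)                  # hypothetical penalty level
lasso.fit(X_tfidf, y)
beta = lasso.coef_
top = np.argsort(np.abs(beta))[::-1][:20]  # 20 largest coefficients in absolute value
for i in top:
    effect = "raises" if beta[i] > 0 else "lowers"
    print(f"{words[i]:>15s}  beta={beta[i]:+.4f}  ({effect} the series)")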
As one can see, the winter related words have positive coefficients, and thus increase the load demand as expected, whereas the summer related ones decrease it. The value of the coefficients also reflects the impact on the load demand. For example January and February have the highest and very similar values, which concurs with the similarity between the months. Sunday has a much more negative coefficient than Saturday, since the demand significantly drops during the last day of the week. The important words also globally match between the LASSO and the RF, which attests to the consistency of our results (this is further explored afterwards in figure FIGREF43). Although not presented here, the results are almost identical for the French load, with approximately the same order of relevancy. The important words logically vary depending on the considered time series, but are always coherent. For instance for the wind one, terms such as "gales", "windy" or "strong" have the highest positive coefficients, as seen in the appendix figure FIGREF53. Those results show that a text based approach not only extracts the relevant information by itself, but may eventually be used to understand which phenomena are relevant to explain the behavior of a time series, and to what extent. Experiments ::: Interpretability of the models ::: Vector embedding representation Word vector embeddings such as Word2Vec and GloVe are known for their vectorial properties translating linguistic ones. However considering the objective function of our problem, there was no obvious reason for such attributes to appear in our own. Nevertheless for both languages we conducted an analysis of the geometric properties of our embedding matrix. We investigated the distances between word vectors, the relevant metric being the cosine distance given by $d_{\cos }(\overrightarrow{w_1}, \overrightarrow{w_2}) = 1 - \frac{\langle \overrightarrow{w_1}, \overrightarrow{w_2} \rangle }{\Vert \overrightarrow{w_1} \Vert _2 \, \Vert \overrightarrow{w_2} \Vert _2}$, where $\overrightarrow{w_1}$ and $\overrightarrow{w_2}$ are given word vectors. Thus a cosine distance lower than 1 means similarity between word vectors, whereas a value greater than 1 corresponds to opposition. The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The last two rows correspond to words we deemed important to check the distance with (an antagonistic one or a relevant one not in the top 9, for instance). The results of the experiments are very similar for both languages again.
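The sketch below illustrates this nearest-word analysis, together with the ranking of words by Euclidean norm discussed further down; E is a hypothetical learnt embedding matrix of shape (V, q) and vocab the corresponding list of words, and the same matrix can be fed to scikit-learn's TSNE for the two-dimensional view used later.

import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def closest_words(E, vocab, reference, k=9):
    # distances from the reference word to every word of the vocabulary
    ref = E[vocab.index(reference)]
    dists = np.array([cosine_distance(ref, E[i]) for i in range(len(vocab))])
    order = np.argsort(dists)[1:k + 1]     # index 0 is the reference itself (distance 0)
    return [(vocab[i], round(float(dists[i]), 3)) for i in order]

def words_by_norm(E, vocab, top=20):
    # rank words by the Euclidean norm of their embedding vector
    norms = np.linalg.norm(E, axis=1)
    order = np.argsort(norms)[::-1][:top]
    return [(vocab[i], round(float(norms[i]), 3)) for i in order]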
Indeed, the words are globally embedded in the vector space by topic: winter related words such as "January" ("janvier"), "February" ("février"), "snow" ("neige"), "freezing" ("glacial") are close to each other and almost opposite to summer related ones such as "July" ("juillet"), "August" ("août"), "hot" ("chaud"). For both cases the week days Monday ("lundi") to Friday ("vendredi") are grouped very closely to each other, while significantly separated from the week-end ones "Saturday" ("samedi") and "Sunday" ("dimanche"). Despite these observations, a few seemingly unrelated words enter the lists of top 10, especially for the English case (such as "pressure" or "dusk" for "February"). In fact the French language embedding seems of better quality, which is perhaps linked to the greater length of the French reports on average. This issue could probably be addressed with more data. Another observation made is that the importance of a word $w$ seems related to its Euclidean norm in the embedding space, $\Vert \overrightarrow{w} \Vert _2$. For both languages the list of the 20 words with the largest norm is given in fig. FIGREF40. As one can see, it globally matches the selected ones from the RF or the LASSO (especially for the French language), although the order is quite different. This is further supported by the Venn diagram of common words among the top 50 ones for each word selection method represented in figure FIGREF43 for France. Therefore this observation could also be used as a feature selection procedure for the RNN or CNN in further work. In order to achieve a global view of the embeddings, the t-SNE algorithm BIBREF30 is applied to project an embedding matrix into a 2 dimensional space, for both languages. The observations for the few aforementioned words are confirmed by this representation, as plotted in figure FIGREF44. Thematic clusters can be observed, roughly corresponding to winter, summer, week-days, week-end days for both languages. Globally summer and winter seem opposed, although one should keep in mind that the t-SNE representation does not preserve the cosine distance. The clusters of the French embedding appear much more compact than the UK one, supporting the observations made when explicitly calculating the cosine distances. Conclusion In this study, a novel pipeline to predict three types of time series using exclusively a textual source was proposed. Making use of publicly available daily weather reports, we were able to predict the electricity consumption with less than 5% MAPE for both France and the United-Kingdom. Moreover our average national temperature and wind speed predictions displayed sufficient accuracy to be used to replace missing data or as a first approximation in traditional models in case of unavailability of meteorological features. The texts were encoded numerically using either TF-IDF or our own neural word embedding. A plethora of machine learning algorithms such as random forests or neural networks were applied on top of those representations. Our results were consistent over language, numerical representation of the text and prediction algorithm, proving the intrinsic value of the textual sources for the three considered time series. Contrary to previous works in the field of textual data for time series forecasting, we went in depth and quantified the impact of words on the variations of the series.
As such we saw that all the algorithms naturally extract calendar and meteorological information from the texts, and that words impact the time series in the expected way (e.g. winter words increase the consumption and summer ones decrease it). Despite being trained on a regular quadratic loss, our neural word embedding spontaneously builds geometric properties. Not only does the norm of a word vector reflect its significance, but the words are also grouped by topic, with for example winter, summer or day-of-the-week clusters. Note that this study was a preliminary work on the use of textual information for time series prediction, especially for electricity demand. The long-term goal is to include multiple sources of textual information to improve the accuracy of state-of-the-art methods or to build a text based forecaster which can be used to increase the diversity in a set of experts for electricity consumption BIBREF31. However, due to the redundancy between the information in the considered weather reports and the meteorological features, it may be necessary to consider alternative textual sources. The use of social media such as Facebook, Twitter or Instagram may give interesting insights and will therefore be investigated in future work. Additional results for the prediction tasks on temperature and wind speed can be found in tables TABREF47 to TABREF50. An example of a forecast for the French temperature is given in figure FIGREF51. While not strictly normally distributed, the residuals for the French electricity demand display an acceptable behavior. This also holds true for the British consumption and both temperature time series, but is of lesser quality for the wind one. For the UK wind LASSO regression, the words with the highest coefficients $\beta _w$ are indeed related to strong wind phenomena, whereas antagonistic ones such as "fog" or "mist" have strongly negative ones as expected (fig. FIGREF53). For both languages we represented the evolution of the (normalized) losses for the problem of load regression in fig. FIGREF54. Their shape is a typical one, with the validation loss slightly above the training one. The slightly erratic behavior of the former is possibly due to a lack of data to train the embeddings. The cosine distances for three other major words and for both corpora have been calculated as well. The results are given in tables TABREF57 and TABREF58. For both languages, the three summer months are grouped together, and so are the two week-end days. However again the results are less clear for the English language. They are especially mediocre for "hot", considering that only "warm" seems truly relevant and that "August" for instance is quite far away. For the French language, instead of "hot" the distances to "thunderstorms" were calculated. The results are quite satisfactory, with "orageux"/"orageuse" ("thundery") coming in the first two places and related meteorological phenomena ("cumulus" and "grêle", meaning "hail") relatively close as well. For the French case, Saturday and Sunday are very close to summer related words. This observation probably highlights the fact that the RNN groups load increasing and decreasing words in opposite parts of the embedding space.
Yes
728a55c0f628f2133306b6bd88af00eb54017b12
728a55c0f628f2133306b6bd88af00eb54017b12_0
Q: What geometric properties do embeddings display? Text: Introduction Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point-of-view, up to now it has narrowed its efforts on the exploitation of regular numerical features extracted from sensors, data bases or stock exchanges. Unstructured data such as text on the other hand remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text driven approach could be used to make profit on the market. One of the most conventional ways for text representation is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series again. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict if NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was efficient to improve the prediction of stock volatility. More recently Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). Their results showed that indeed performance was better when using probabilities from their alternative than those of the original LDA. Deep learning with its natural ability to work with text through word embeddings has also been used for time series prediction with text. Combined with traditional time series features, the authors of BIBREF8 derived sentiment features from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al. BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently in Rodrigues et al. BIBREF11, short description of events (such as concerts, sports matches, ...) are leveraged through a word embedding and neural networks in addition to more traditional features. 
Their experiments show that including the text can bring an improvement of up to 2% of root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black-boxes. The field of electricity consumption is one where expert knowledge is broad. It is known that the major phenomena driving the load demand are calendar (time of the year, day of the week, ...) and meteorological. For instance generalized additive models (GAM) BIBREF12 representing the consumption as a sum of functions of the time of the year, temperature and wind speed (among others) typically yield less than 1.5% of relative error for French national electricity demand and 8% for local one BIBREF13, BIBREF14. Neural networks and their variants, with their ability to extract patterns from heterogeneous types of data have also obtained state-of-the-art results BIBREF15, BIBREF16, BIBREF17. However to our knowledge no exploratory work using text has been conducted yet. Including such data in electricity demand forecasting models would not only contribute to close the gap with other domains, but also help to understand better which aspects of text are useful, how the encoding of the text influences forecasts and to which extend a prediction algorithm can extract relevant information from unstructured data. Moreover the major drawback of all the aforementioned approaches is that they require meteorological data that may be difficult to find, unavailable in real time or expensive. Textual sources such as weather reports on the other hand are easy to find, usually available on a daily basis and free. The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series. The rest of this paper is organized as follows. The following section introduces the two data sets used to conduct our study. 
Section 3 presents the different machine learning approaches used and how they were tuned. Section 4 highlights the main results of our study, while section 5 concludes this paper and gives insight on future possible work. Presentation of the data In order to prove the consistency of our work, experiments have been conducted on two data sets, one for France and the other for the UK. In this section details about the text and time series data are given, as well as the major preprocessing steps. Presentation of the data ::: Time Series Three types of time series are considered in our work: national net electricity consumption (also referred as load or demand), national temperature and wind speed. The load data sets were retrieved on the websites of the respective grid operators, respectively RTE (Réseau et Transport d'Électricité) for France and National Grid for the UK. For France, the available data ranges from January the 1st 2007 to August the 31st 2018. The default temporal resolution is 30 minutes, but it is averaged to a daily one. For the UK, it is available from January the 1st 2006 to December the 31st 2018 with the same temporal resolution and thus averaging. Due to social factors such as energy policies or new usages of electricity (e.g. Electric Vehicles), the net consumption usually has a long-term trend (fig. FIGREF2). While for France it seems marginal (fig. FIGREF2), there is a strong decreasing trend for the United-Kingdom (fig. FIGREF2). Such a strong non-stationarity of the time series would cause problems for the forecasting process, since the learnt demand levels would differ significantly from the upcoming ones. Therefore a linear regression was used to approximate the decreasing trend of the net consumption in the UK. It is then subtracted before the training of the methods, and then re-added a posteriori for prediction. As for the weather time series, they were extracted from multiple weather stations around France and the UK. The national average is obtained by combining the data from all stations with a weight proportional to the city population the station is located in. For France the stations' data is provided by the French meteorological office, Météo France, while the British ones are scrapped from stations of the National Oceanic and Atmospheric Administration (NOAA). Available on the same time span as the consumption, they usually have a 3 hours temporal resolution but are averaged to a daily one as well. Finally the time series were scaled to the range $[0,1]$ before the training phase, and re-scaled during prediction time. Presentation of the data ::: Text Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. Under their raw shape, those reports take the form of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. 
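Before turning to the processing of the text itself, here is a short sketch of the time-series preprocessing described earlier in this section, namely the linear detrending of the UK consumption and the scaling to [0,1]; the arrays t_train and y_train (integer time index and daily load on the training period) are hypothetical names.

import numpy as np

# fit a linear trend on the training period and remove it
slope, intercept = np.polyfit(t_train, y_train, deg=1)
trend = lambda t: slope * np.asarray(t) + intercept
y_detrended = y_train - trend(t_train)

# scale to [0, 1]; the transformation is inverted at prediction time
y_min, y_max = y_detrended.min(), y_detrended.max()
y_scaled = (y_detrended - y_min) / (y_max - y_min)

# a forecast y_hat_scaled for day t_test is mapped back with:
# y_hat = y_hat_scaled * (y_max - y_min) + y_min + trend(t_test)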
The relevant text was extracted from the PDF documents using the Python library PyPDF2. As emphasized in many studies, preprocessing of the text can ease the learning of the methods and improve accuracy BIBREF18. Therefore the following steps are applied: removal of non-alphabetic characters, removal of stop-words and lowercasing. While it was often highlighted that word lemmatization and stemming improve results, initial experiments showed it was not the case for our study. This is probably due to the technical vocabulary used in both corpora, pertaining to the field of meteorology. The vocabulary being already limited in size, these operations do not yield a significant vocabulary size reduction and can even lead to a loss of linguistic meaning. Finally, extremely frequent or rare words may not have high explanatory power and may reduce the different models' accuracy. That is why words appearing fewer than 7 times or in more than 40% of the (learning) corpus are removed as well. Figure FIGREF8 represents the distribution of the document lengths after preprocessing, while table TABREF11 gives descriptive statistics on both corpora. Note that the preprocessing steps do not heavily rely on the considered language: therefore our pipeline is easily adaptable for other languages. Modeling and forecasting framework A major target of our work is to show that the reports contain intrinsic information relevant for time series, and that the predictive results do not heavily depend on the encoding of the text or the machine learning algorithm used. Therefore in this section we present the text encoding approaches, as well as the forecasting methods used with them. Modeling and forecasting framework ::: Numerical Encoding of the Text Machines and algorithms cannot work with raw text directly. Thus one major step when working with text is the choice of its numerical representation. In our work two significantly different encoding approaches are considered. The first one is the TF-IDF approach. It embeds a corpus of $N$ documents and $V$ words into a matrix $X$ of size $N \times V$. As such, every document is represented by a vector of size $V$. For each word $w$ and document $d$ the associated coefficient $x_{d,w}$ represents the frequency of that word in that document, penalized by its overall frequency in the rest of the corpus. Thus very common words will have a low TF-IDF value, whereas specific ones which appear often in a handful of documents will have a large TF-IDF score. The exact formula to calculate the TF-IDF value of word $w$ in document $d$ is $x_{d,w} = f_{d,w} \times \log \left( \frac{N}{\#\lbrace d: w \in d \rbrace } \right)$, where $f_{d,w}$ is the number of appearances of $w$ in $d$ adjusted by the length of $d$ and $\#\lbrace d: w \in d \rbrace $ is the number of documents in which the word $w$ appears. In our work we considered only individual words, also commonly referred to as 1-grams in the field of natural language processing (NLP). The methodology can be easily extended to $n$-grams (groups of $n$ consecutive words), but initial experiments showed that it did not bring any significant improvement over 1-grams. The second representation is a neural word embedding. It consists in representing every word in the corpus by a real-valued vector of dimension $q$. Such models are usually obtained by learning a vector representation from word co-occurrences in a very large corpus (typically hundreds of thousands of documents, such as Wikipedia articles for example). The two most popular embeddings are probably Google's Word2Vec BIBREF19 and Stanford's GloVe BIBREF20.
In the former, a neural network is trained to predict a word given its context (continuous bag of word model), whereas in the latter a matrix factorization scheme on the log co-occurences of words is applied. In any case, the very nature of the objective function allows the embedding models to learn to translate linguistic similarities into geometric properties in the vector space. For instance the vector $\overrightarrow{king} - \overrightarrow{man} + \overrightarrow{woman}$ is expected to be very close to the vector $\overrightarrow{queen}$. However in our case we want a vector encoding which is tailored for the technical vocabulary of our weather reports and for the subsequent prediction task. This is why we decided to train our own word embedding from scratch during the learning phase of our recurrent or convolutional neural network. Aside from the much more restricted size of our corpora, the major difference with the aforementioned embeddings is that in our case it is obtained by minimizing a squared loss on the prediction. In that framework there is no explicit reason for our representation to display any geometric structure. However as detailed in section SECREF36, our word vectors nonetheless display geometric properties pertaining to the behavior of the time series. Modeling and forecasting framework ::: Machine Learning Algorithms Multiple machine learning algorithms were applied on top of the encoded textual documents. For the TF-IDF representation, the following approaches are applied: random forests (RF), LASSO and multilayer perceptron (MLP) neural networks (NN). We chose these algorithms combined to the TF-IDF representation due to the possibility of interpretation they give. Indeed, considering the novelty of this work, the understanding of the impact of the words on the forecast is of paramount importance, and as opposed to embeddings, TF-IDF has a natural interpretation. Furthermore the RF and LASSO methods give the possibility to interpret marginal effects and analyze the importance of features, and thus to find the words which affect the time series the most. As for the word embedding, recurrent or convolutional neural networks (respectively RNN and CNN) were used with them. MLPs are not used, for they would require to concatenate all the vector representations of a sentence together beforehand and result in a network with too many parameters to be trained correctly with our number of available documents. Recall that we decided to train our own vector representation of words instead of using an already available one. In order to obtain the embedding, the texts are first converted into a sequence of integers: each word is given a number ranging from 1 to $V$, where $V$ is the vocabulary size (0 is used for padding or unknown words in the test set). One must then calculate the maximum sequence length $S$, and sentences of length shorter than $S$ are then padded by zeros. During the training process of the network, for each word a $q$ dimensional real-valued vector representation is calculated simultaneously to the rest of the weights of the network. Ergo a sentence of $S$ words is translated into a sequence of $S$ $q$-sized vectors, which is then fed into a recurrent neural unit. For both languages, $q=20$ seemed to yield the best results. In the case of recurrent units two main possibilities arise, with LSTM (Long Short-Term Memory) BIBREF21 and GRU (Gated Recurrent Unit) BIBREF22. 
After a few initial trials, no significant performance differences were noticed between the two types of cells. Therefore GRU were systematically used for recurrent networks, since their lower amount of parameters makes them easier to train and reduces overfitting. The output of the recurrent unit is afterwards linked to a fully connected (also referred as dense) layer, leading to the final forecast as output. The rectified linear unit (ReLU) activation in dense layers systematically gave the best results, except on the output layer where we used a sigmoid one considering the time series' normalization. In order to tone down overfitting, dropout layers BIBREF23 with probabilities of 0.25 or 0.33 are set in between the layers. Batch normalization BIBREF24 is also used before the GRU since it stabilized training and improved performance. Figure FIGREF14 represents the architecture of our RNN. The word embedding matrix is therefore learnt jointly with the rest of the parameters of the neural network by minimization of the quadratic loss with respect to the true electricity demand. Note that while above we described the case of the RNN, the same procedure is considered for the case of the CNN, with only the recurrent layers replaced by a combination of 1D convolution and pooling ones. As for the optimization algorithms of the neural networks, traditional stochastic gradient descent with momentum or ADAM BIBREF25 together with a quadratic loss are used. All of the previously mentioned methods were coded with Python. The LASSO and RF were implemented using the library Scikit Learn BIBREF26, while Keras BIBREF27 was used for the neural networks. Modeling and forecasting framework ::: Hyperparameter Tuning While most parameters are trained during the learning optimization process, all methods still involve a certain number of hyperparameters that must be manually set by the user. For instance for random forests it can correspond to the maximum depth of the trees or the fraction of features used at each split step, while for neural networks it can be the number of layers, neurons, the embedding dimension or the activation functions used. This is why the data is split into three sets: The training set, using all data available up to the 31st of December 2013 (2,557 days for France and 2,922 for the UK). It is used to learn the parameters of the algorithms through mathematical optimization. The years 2014 and 2015 serve as validation set (730 days). It is used to tune the hyperparameters of the different approaches. All the data from January the 1st 2016 (974 days for France and 1,096 for the UK) is used as test set, on which the final results are presented. Grid search is applied to find the best combination of values: for each hyperparameter, a range of values is defined, and all the possible combinations are successively tested. The one yielding the lowest RMSE (see section SECREF4) on the validation set is used for the final results on the test one. While relatively straightforward for RFs and the LASSO, the extreme number of possibilities for NNs and their extensive training time compelled us to limit the range of architectures possible. The hyperparameters are tuned per method and per country: ergo the hyperparameters of a given algorithm will be the same for the different time series of a country (e.g. the RNN architecture for temperature and load for France will be the same, but different from the UK one). 
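A minimal Keras sketch of the recurrent architecture described above (embedding of dimension q=20, batch normalization before the GRU, dropout and dense layers, sigmoid output for the [0,1]-scaled target, quadratic loss with the Adam optimizer) might look as follows; the numbers of GRU units and dense neurons are assumptions, since in our setting such quantities are chosen by grid search on the validation set.

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, BatchNormalization, GRU, Dropout, Dense

def build_rnn(vocab_size, q=20, gru_units=32, dense_units=16):
    model = Sequential([
        Embedding(input_dim=vocab_size + 1, output_dim=q),  # index 0 is reserved for padding
        BatchNormalization(),                               # placed before the GRU
        GRU(gru_units),
        Dropout(0.25),
        Dense(dense_units, activation="relu"),
        Dropout(0.25),
        Dense(1, activation="sigmoid"),                     # the series are scaled to [0, 1]
    ])
    model.compile(optimizer="adam", loss="mse")             # quadratic loss
    return model

The model is then fitted on the padded integer sequences of length S, e.g. build_rnn(vocab_size=52) followed by a call to model.fit on the scaled load.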
Finally before application on the testing set, all the methods are re-trained from scratch using both the training and validation data. Experiments The goal of our experiments is to quantify how close one can get using textual data only when compared to numerical data. However the inputs of the numerical benchmark should be hence comparable to the information contained in the weather reports. Considering they mainly contain calendar (day of the week and month) as well as temperature and wind information, the benchmark of comparison is a random forest trained on four features only: the time of the year (whose value is 0 on January the 1st and 1 on December the 31st with a linear growth in between), the day of the week, the national average temperature and wind speed. The metrics of evaluation are the Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and the $R^2$ coefficient given by: where $T$ is the number of test samples, $y_t$ and $\hat{y}_t$ are respectively the ground truth and the prediction for the document of day $t$, and $\overline{y}$ is the empirical average of the time series over the test sample. A known problem with MAPE is that it unreasonably increases the error score for values close to 0. While for the load it isn't an issue at all, it can be for the meteorological time series. Therefore for the temperature, the MAPE is calculated only when the ground truth is above the 5% empirical quantile. Although we aim at achieving the highest accuracy possible, we focus on the interpretability of our models as well. Experiments ::: Feature selection Many words are obviously irrelevant to the time series in our texts. For instance the day of the week, while playing a significant role for the load demand, is useless for temperature or wind. Such words make the training harder and may decrease the accuracy of the prediction. Therefore a feature selection procedure similar to BIBREF28 is applied to select a subset of useful features for the different algorithms, and for each type of time series. Random forests are naturally able to calculate feature importance through the calculation of error increase in the out-of-bag (OOB) samples. Therefore the following process is applied to select a subset of $V^*$ relevant words to keep: A RF is trained on the whole training & validation set. The OOB feature importance can thus be calculated. The features are then successively added to the RF in decreasing order of feature importance. This process is repeated $B=10$ times to tone down the randomness. The number $V^*$ is then set to the number of features giving the highest median OOB $R^2$ value. The results of this procedure for the French data is represented in figure FIGREF24. The best median $R^2$ is achieved for $V^* = 52$, although one could argue that not much gain is obtained after 36 words. The results are very similar for the UK data set, thus for the sake of simplicity the same value $V^* = 52$ is used. Note that the same subset of words is used for all the different forecasting models, which could be improved in further work using other selection criteria (e.g. mutual information, see BIBREF29). An example of normalized feature importance is given in figure. FIGREF32. Experiments ::: Main results Note that most of the considered algorithms involve randomness during the training phase, with the subsampling in the RFs or the gradient descent in the NNs for instance. 
In order to tone it down and to increase the consistency of our results, the different models are run $B=10$ times. The results presented hereafter correspond to the average and standard-deviation on those runs. The RF model denoted as "sel" is the one with the reduced number of features, whereas the other RF uses the full vocabulary. We also considered an aggregated forecaster (abridged Agg), consisting of the average of the two best individual ones in terms of RMSE. All the neural network methods have a reduced vocabulary size $V^*$. The results for the French and UK data are respectively given by tables TABREF26 and TABREF27. Our empirical results show that for the electricity consumption prediction task, the order of magnitude of the relative error is around 5%, independently of the language, encoding and machine learning method, thus proving the intrinsic value of the information contained in the textual documents for this time series. As expected, all text based methods perform poorer than when using explicitly numerical input features. Indeed, despite containing relevant information, the text is always more fuzzy and less precise than an explicit value for the temperature or the time of the year for instance. Again the aim of this work is not to beat traditional methods with text, but quantifying how close one can come to traditional approaches when using text exclusively. As such achieving less than 5% of MAPE was nonetheless deemed impressive by expert electricity forecasters. Feature selection brings significant improvement in the French case, although it does not yield any improvement in the English one. The reason for this is currently unknown. Nevertheless the feature selection procedure also helps the NNs by dramatically reducing the vocabulary size, and without it the training of the networks was bound to fail. While the errors accross methods are roughly comparable and highlight the valuable information contained within the reports, the best method nonetheless fluctuates between languages. Indeed in the French case there is a hegemony of the NNs, with the embedding RNN edging the MLP TF-IDF one. However for the UK data set the RFs yield significantly better results on the test set than the NNs. This inversion of performance of the algorithms is possibly due to a change in the way the reports were written by the Met Office after August 2017, since the results of the MLP and RNN on the validation set (not shown here) were satisfactory and better than both RFs. For the two languages both the CNN and the LASSO yielded poor results. For the former, it is because despite grid search no satisfactory architecture was found, whereas the latter is a linear approach and was used more for interpretation purposes than strong performance. Finally the naive aggregation of the two best experts always yields improvement, especially for the French case where the two different encodings are combined. This emphasises the specificity of the two representations leading to different types of errors. An example of comparison between ground truth and forecast for the case of electricity consumption is given for the French language with fig. FIGREF29, while another for temperature may be found in the appendix FIGREF51. The sudden "spikes" in the forecast are due to the presence of winter related words in a summer report. This is the case when used in comparisons, such as "The flood will be as severe as in January" in a June report and is a limit of our approach. 
Finally, the usual residual $\hat{\varepsilon }_t = y_t - \hat{y}_t$ analyses procedures were applied: Kolmogorov normality test, QQplots comparaison to gaussian quantiles, residual/fit comparison... While not thoroughly gaussian, the residuals were close to normality nonetheless and displayed satisfactory properties such as being generally independent from the fitted and ground truth values. Excerpts of this analysis for France are given in figure FIGREF52 of the appendix. The results for the temperature and wind series are given in appendix. Considering that they have a more stochastic behavior and are thus more difficult to predict, the order of magnitude of the errors differ (the MAPE being around 15% for temperature for instance) but globally the same observations can be made. Experiments ::: Interpretability of the models While accuracy is the most relevant metric to assess forecasts, interpretability of the models is of paramount importance, especially in the field of professional electricity load forecasting and considering the novelty of our work. Therefore in this section we discuss the properties of the RF and LASSO models using the TF-IDF encoding scheme, as well as the RNN word embedding. Experiments ::: Interpretability of the models ::: TF-IDF representation One significant advantage of the TF-IDF encoding when combined with random forests or the LASSO is that it is possible to interpret the behavior of the models. For instance, figure FIGREF32 represents the 20 most important features (in the RF OOB sense) for both data sets when regressing over electricity demand data. As one can see, the random forest naturally extracts calendar information contained in the weather reports, since months or week-end days are among the most important ones. For the former, this is due to the periodic behavior of electricity consumption, which is higher in winter and lower in summer. This is also why characteristic phenomena of summer and winter, such as "thunderstorms", "snow" or "freezing" also have a high feature importance. The fact that August has a much more important role than July also concurs with expert knowledge, especially for France: indeed it is the month when most people go on vacations, and thus when the load drops the most. As for the week-end names, it is due to the significantly different consumer behavior during Saturdays and especially Sundays when most of the businesses are closed and people are usually at home. Therefore the relevant words selected by the random forest are almost all in agreement with expert knowledge. We also performed the analysis of the relevant words for the LASSO. In order to do that, we examined the words $w$ with the largest associated coefficients $\beta _w$ (in absolute value) in the regression. Since the TF-IDF matrix has positive coefficients, it is possible to interpret the sign of the coefficient $\beta _w$ as its impact on the time series. For instance if $\beta _w > 0$ then the presence of the word $w$ causes a rise the time series (respectively if $\beta _w < 0$, it entails a decline). The results are plotted fig. FIGREF35 for the the UK. As one can see, the winter related words have positive coefficients, and thus increase the load demand as expected whereas the summer related ones decrease it. The value of the coefficients also reflects the impact on the load demand. For example January and February have the highest and very similar values, which concurs with the similarity between the months. 
Sunday has a much more negative coefficient than Saturday, since the demand significantly drops during the last day of the week. The important words also globally match between the LASSO and the RF, which is a proof of the consistency of our results (this is further explored afterwards in figure FIGREF43). Although not presented here, the results are almost identical for the French load, with approximately the same order of relevancy. The important words logically vary in function of the considered time series, but are always coherent. For instance for the wind one, terms such as "gales", "windy" or "strong" have the highest positive coefficients, as seen in the appendix figure FIGREF53. Those results show that a text based approach not only extracts the relevant information by itself, but it may eventually be used to understand which phenomena are relevant to explain the behavior of a time series, and to which extend. Experiments ::: Interpretability of the models ::: Vector embedding representation Word vector embeddings such as Word2Vec and GloVe are known for their vectorial properties translating linguistic ones. However considering the objective function of our problem, there was no obvious reason for such attributes to appear in our own. Nevertheless for both languages we conducted an analysis of the geometric properties of our embedding matrix. We investigated the distances between word vectors, the relevant metric being the cosine distance given by: where $\overrightarrow{w_1}$ and $\overrightarrow{w_2}$ are given word vectors. Thus a cosine distance lower than 1 means similarity between word vectors, whereas a greater than 1 corresponds to opposition. The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance). The results of the experiments are very similar for both languages again. Indeed, the words are globally embedded in the vector space by topic: winter related words such as "January" ("janvier"), "February" ("février"), "snow" ("neige"), "freezing" ("glacial") are close to each other and almost opposite to summer related ones such as "July" ("juillet"), "August" ("août"), "hot" ("chaud"). 
For both cases the week days Monday ("lundi") to Friday ("vendredi") are grouped very closely to each other, while significantly separated from the week-end ones "Saturday" ("samedi") and "Sunday" ("dimanche"). Despite these observations, a few seemingly unrelated words enter the lists of top 10, especially for the English case (such as "pressure" or "dusk" for "February"). In fact the French language embedding seems of better quality, which is perhaps linked to the longer length of the French reports in average. This issue could probably be addressed with more data. Another observation made is that the importance of a word $w$ seems related to its euclidean norm in the embedding space ${\overrightarrow{w}}_2$. For both languages the list of the 20 words with the largest norm is given fig. FIGREF40. As one can see, it globally matches the selected ones from the RF or the LASSO (especially for the French language), although the order is quite different. This is further supported by the Venn diagram of common words among the top 50 ones for each word selection method represented in figure FIGREF43 for France. Therefore this observation could also be used as feature selection procedure for the RNN or CNN in further work. In order to achieve a global view of the embeddings, the t-SNE algorithm BIBREF30 is applied to project an embedding matrix into a 2 dimensional space, for both languages. The observations for the few aforementioned words are confirmed by this representation, as plotted in figure FIGREF44. Thematic clusters can be observed, roughly corresponding to winter, summer, week-days, week-end days for both languages. Globally summer and winter seem opposed, although one should keep in mind that the t-SNE representation does not preserve the cosine distance. The clusters of the French embedding appear much more compact than the UK one, comforting the observations made when explicitly calculating the cosine distances. Conclusion In this study, a novel pipeline to predict three types of time series using exclusively a textual source was proposed. Making use of publicly available daily weather reports, we were able to predict the electricity consumption with less than 5% of MAPE for both France and the United-Kingdom. Moreover our average national temperature and wind speed predictions displayed sufficient accuracy to be used to replace missing data or as first approximation in traditional models in case of unavailability of meteorological features. The texts were encoded numerically using either TF-IDF or our own neural word embedding. A plethora of machine learning algorithms such as random forests or neural networks were applied on top of those representations. Our results were consistent over language, numerical representation of the text and prediction algorithm, proving the intrinsic value of the textual sources for the three considered time series. Contrarily to previous works in the field of textual data for time series forecasting, we went in depth and quantified the impact of words on the variations of the series. As such we saw that all the algorithms naturally extract calendar and meteorological information from the texts, and that words impact the time series in the expected way (e.g. winter words increase the consumption and summer ones decrease it). Despite being trained on a regular quadratic loss, our neural word embedding spontaneously builds geometric properties. 
Not only does the norm of a word vector reflect its significance, but the words are also grouped by topic with for example winter, summer or day of the week clusters. Note that this study was a preliminary work on the use of textual information for time series prediction, especially electricity demand one. The long-term goal is to include multiple sources of textual information to improve the accuracy of state-of-the-art methods or to build a text based forecaster which can be used to increase the diversity in a set of experts for electricity consumption BIBREF31. However due to the redundancy of the information of the considered weather reports with meteorological features, it may be necessary to consider alternative textual sources. The use of social media such as Facebook, Twitter or Instagram may give interesting insight and will therefore be investigated in future work. Additional results for the prediction tasks on temperature and wind speed can be found in tables TABREF47 to TABREF50. An example of forecast for the French temperature is given in figure FIGREF51. While not strictly normally distributed, the residuals for the French electricity demand display an acceptable behavior. This holds also true for the British consumption, and both temperature time series, but is of lesser quality for the wind one. The the UK wind LASSO regression, the words with the highest coefficients $\beta _w$ are indeed related to strong wind phenomena, whereas antagonistic ones such as "fog" or "mist" have strongly negative ones as expected (fig. FIGREF53). For both languages we represented the evolution of the (normalized) losses for the problem of load regression in fig. FIGREF54. The aspect is a typical one, with the validation loss slightly above the training one. The slightly erratic behavior of the former one is possibly due to a lack of data to train the embeddings. The cosine distances for three other major words and for both corpora have been calculated as well. The results are given in tables TABREF57 and TABREF58. For both languages, the three summer months are grouped together, and so are the two week-end days. However again the results are less clear for the English language. They are especially mediocre for "hot", considering that only "warm" seems truly relevant and that "August" for instance is quite far away. For the French language instead of "hot" the distances to "thunderstorms" were calculated. The results are quite satisfactory, with "orageux"/"orageuse" ("thundery") coming in the two first places and related meteorological phenomena ("cumulus" and "grêle", meaning "hail") relatively close as well. For the French case, Saturday and Sunday are very close to summer related words. This observation probably highlights the fact that the RNN groups load increasing and decreasing words in opposite parts of the embedding space.
Winter and summer words formed two separate clusters. Week day and week-end day words also formed separate clusters.
d5498d16e8350c9785782b57b1e5a82212dbdaad
d5498d16e8350c9785782b57b1e5a82212dbdaad_0
Q: How accurate is model trained on text exclusively? Text: Introduction Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point-of-view, up to now it has narrowed its efforts on the exploitation of regular numerical features extracted from sensors, data bases or stock exchanges. Unstructured data such as text on the other hand remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text driven approach could be used to make profit on the market. One of the most conventional ways for text representation is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series again. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict if NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was efficient to improve the prediction of stock volatility. More recently Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). Their results showed that indeed performance was better when using probabilities from their alternative than those of the original LDA. Deep learning with its natural ability to work with text through word embeddings has also been used for time series prediction with text. Combined with traditional time series features, the authors of BIBREF8 derived sentiment features from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al. BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently in Rodrigues et al. BIBREF11, short description of events (such as concerts, sports matches, ...) are leveraged through a word embedding and neural networks in addition to more traditional features. 
Their experiments show that including the text can bring an improvement of up to 2% of root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black-boxes. The field of electricity consumption is one where expert knowledge is broad. It is known that the major phenomena driving the load demand are calendar (time of the year, day of the week, ...) and meteorological. For instance generalized additive models (GAM) BIBREF12 representing the consumption as a sum of functions of the time of the year, temperature and wind speed (among others) typically yield less than 1.5% of relative error for French national electricity demand and 8% for local one BIBREF13, BIBREF14. Neural networks and their variants, with their ability to extract patterns from heterogeneous types of data have also obtained state-of-the-art results BIBREF15, BIBREF16, BIBREF17. However to our knowledge no exploratory work using text has been conducted yet. Including such data in electricity demand forecasting models would not only contribute to close the gap with other domains, but also help to understand better which aspects of text are useful, how the encoding of the text influences forecasts and to which extend a prediction algorithm can extract relevant information from unstructured data. Moreover the major drawback of all the aforementioned approaches is that they require meteorological data that may be difficult to find, unavailable in real time or expensive. Textual sources such as weather reports on the other hand are easy to find, usually available on a daily basis and free. The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series. The rest of this paper is organized as follows. The following section introduces the two data sets used to conduct our study. 
Section 3 presents the different machine learning approaches used and how they were tuned. Section 4 highlights the main results of our study, while section 5 concludes this paper and gives insight on future possible work. Presentation of the data In order to prove the consistency of our work, experiments have been conducted on two data sets, one for France and the other for the UK. In this section details about the text and time series data are given, as well as the major preprocessing steps. Presentation of the data ::: Time Series Three types of time series are considered in our work: national net electricity consumption (also referred as load or demand), national temperature and wind speed. The load data sets were retrieved on the websites of the respective grid operators, respectively RTE (Réseau et Transport d'Électricité) for France and National Grid for the UK. For France, the available data ranges from January the 1st 2007 to August the 31st 2018. The default temporal resolution is 30 minutes, but it is averaged to a daily one. For the UK, it is available from January the 1st 2006 to December the 31st 2018 with the same temporal resolution and thus averaging. Due to social factors such as energy policies or new usages of electricity (e.g. Electric Vehicles), the net consumption usually has a long-term trend (fig. FIGREF2). While for France it seems marginal (fig. FIGREF2), there is a strong decreasing trend for the United-Kingdom (fig. FIGREF2). Such a strong non-stationarity of the time series would cause problems for the forecasting process, since the learnt demand levels would differ significantly from the upcoming ones. Therefore a linear regression was used to approximate the decreasing trend of the net consumption in the UK. It is then subtracted before the training of the methods, and then re-added a posteriori for prediction. As for the weather time series, they were extracted from multiple weather stations around France and the UK. The national average is obtained by combining the data from all stations with a weight proportional to the city population the station is located in. For France the stations' data is provided by the French meteorological office, Météo France, while the British ones are scrapped from stations of the National Oceanic and Atmospheric Administration (NOAA). Available on the same time span as the consumption, they usually have a 3 hours temporal resolution but are averaged to a daily one as well. Finally the time series were scaled to the range $[0,1]$ before the training phase, and re-scaled during prediction time. Presentation of the data ::: Text Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. Under their raw shape, those reports take the form of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. 
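To make the preparation of the load series concrete, here is a minimal sketch of the treatment described above, assuming a daily pandas Series; the function and variable names are illustrative and not taken from the authors' code. It shows the linear detrending applied to the UK demand (with the trend re-added at prediction time) together with the [0, 1] scaling applied to every series (for France only the scaling step is needed).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import MinMaxScaler

def detrend_and_scale(load: pd.Series):
    """Remove a linear trend from a daily load series and scale it to [0, 1]."""
    t = np.arange(len(load)).reshape(-1, 1)            # day index as the only regressor
    trend_model = LinearRegression().fit(t, load.values)
    detrended = load.values - trend_model.predict(t)   # subtract the long-term trend

    scaler = MinMaxScaler()                            # scales to [0, 1] by default
    scaled = scaler.fit_transform(detrended.reshape(-1, 1)).ravel()
    return scaled, trend_model, scaler

def invert(pred_scaled, future_t, trend_model, scaler):
    """Undo the scaling and re-add the trend for predictions at day indices future_t."""
    pred = scaler.inverse_transform(np.asarray(pred_scaled).reshape(-1, 1)).ravel()
    return pred + trend_model.predict(np.asarray(future_t).reshape(-1, 1))
```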
The relevant text was extracted from the PDF documents using the Python library PyPDF2. As emphasized in many studies, preprocessing of the text can ease the learning of the methods and improve accuracy BIBREF18. Therefore the following steps are applied: removal of non-alphabetic characters, removal of stop-words and lowercasing. While it was often highlighted that word lemmatization and stemming improve results, initial experiments showed it was not the case for our study. This is probably due to the technical vocabulary used in both corpora pertaining to the field of meteorology. Already limited in size, the aforementioned preprocessing operations do not yield a significant vocabulary size reduction and can even lead to a loss of linguistic meaning. Finally, extremely frequent or rare words may not have high explanatory power and may reduce the different models' accuracy. That is why words appearing less than 7 times or in more than 40% of the (learning) corpus are removed as well. Figure FIGREF8 represents the distribution of the document lengths after preprocessing, while table TABREF11 gives descriptive statistics on both corpora. Note that the preprocessing steps do not heavily rely on the considered language: therefore our pipeline is easily adaptable for other languages. Modeling and forecasting framework A major target of our work is to show the reports contain an intrinsic information relevant for time series, and that the predictive results do not heavily depend on the encoding of the text or the machine learning algorithm used. Therefore in this section we present the text encoding approaches, as well as the forecasting methods used with them. Modeling and forecasting framework ::: Numerical Encoding of the Text Machines and algorithms cannot work with raw text directly. Thus one major step when working with text is the choice of its numerical representation. In our work two significantly different encoding approaches are considered. The first one is the TF-IDF approach. It embeds a corpus of $N$ documents and $V$ words into a matrix $X$ of size $N \times V$. As such, every document is represented by a vector of size $V$. For each word $w$ and document $d$ the associated coefficient $x_{d,w}$ represents the frequency of that word in that document, penalized by its overall frequency in the rest of the corpus. Thus very common words will have a low TF-IDF value, whereas specific ones which will appear often in a handful of documents will have a large TF-IDF score. The exact formula to calculate the TF-IDF value of word $w$ in document $d$ is: where $f_{d,w}$ is the number of appearances of $w$ in $d$ adjusted by the length of $d$ and $\#\lbrace d: w \in d \rbrace $ is the number of documents in which the word $w$ appears. In our work we considered only individual words, also commonly referred as 1-grams in the field of natural language processing (NLP). The methodology can be easily extended to $n$-grams (groups of $n$ consecutive words), but initial experiments showed that it did not bring any significant improvement over 1-grams. The second representation is a neural word embedding. It consists in representing every word in the corpus by a real-valued vector of dimension $q$. Such models are usually obtained by learning a vector representation from word co-occurrences in a very large corpus (typically hundred thousands of documents, such as Wikipedia articles for example). The two most popular embeddings are probably Google's Word2Vec BIBREF19 and Standford's GloVe BIBREF20. 
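The text cleaning and TF-IDF encoding described above can be approximated with scikit-learn as in the sketch below; `raw_reports` is assumed to be the list of extracted report strings, and the frequency thresholds are mapped onto the vectorizer's `min_df`/`max_df` arguments. Note that scikit-learn counts documents rather than raw occurrences for `min_df`, and its TF-IDF weighting differs slightly from the formula given above, so this only illustrates the pipeline.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer

raw_reports = load_reports()  # assumed loader: list of report strings extracted with PyPDF2

def clean(report: str) -> str:
    # keep alphabetic characters only and lowercase, as described above
    return re.sub(r"[^a-zA-Z\s]", " ", report).lower()

corpus = [clean(doc) for doc in raw_reports]

vectorizer = TfidfVectorizer(
    stop_words="english",   # an explicit French stop-word list would be needed for Meteo France
    min_df=7,               # drop rare words (scikit-learn counts documents, an approximation)
    max_df=0.40,            # drop words present in more than 40% of the corpus
)
X = vectorizer.fit_transform(corpus)   # N x V sparse TF-IDF matrix
```

As for the two pretrained embeddings mentioned above: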
In the former, a neural network is trained to predict a word given its context (continuous bag of word model), whereas in the latter a matrix factorization scheme on the log co-occurences of words is applied. In any case, the very nature of the objective function allows the embedding models to learn to translate linguistic similarities into geometric properties in the vector space. For instance the vector $\overrightarrow{king} - \overrightarrow{man} + \overrightarrow{woman}$ is expected to be very close to the vector $\overrightarrow{queen}$. However in our case we want a vector encoding which is tailored for the technical vocabulary of our weather reports and for the subsequent prediction task. This is why we decided to train our own word embedding from scratch during the learning phase of our recurrent or convolutional neural network. Aside from the much more restricted size of our corpora, the major difference with the aforementioned embeddings is that in our case it is obtained by minimizing a squared loss on the prediction. In that framework there is no explicit reason for our representation to display any geometric structure. However as detailed in section SECREF36, our word vectors nonetheless display geometric properties pertaining to the behavior of the time series. Modeling and forecasting framework ::: Machine Learning Algorithms Multiple machine learning algorithms were applied on top of the encoded textual documents. For the TF-IDF representation, the following approaches are applied: random forests (RF), LASSO and multilayer perceptron (MLP) neural networks (NN). We chose these algorithms combined to the TF-IDF representation due to the possibility of interpretation they give. Indeed, considering the novelty of this work, the understanding of the impact of the words on the forecast is of paramount importance, and as opposed to embeddings, TF-IDF has a natural interpretation. Furthermore the RF and LASSO methods give the possibility to interpret marginal effects and analyze the importance of features, and thus to find the words which affect the time series the most. As for the word embedding, recurrent or convolutional neural networks (respectively RNN and CNN) were used with them. MLPs are not used, for they would require to concatenate all the vector representations of a sentence together beforehand and result in a network with too many parameters to be trained correctly with our number of available documents. Recall that we decided to train our own vector representation of words instead of using an already available one. In order to obtain the embedding, the texts are first converted into a sequence of integers: each word is given a number ranging from 1 to $V$, where $V$ is the vocabulary size (0 is used for padding or unknown words in the test set). One must then calculate the maximum sequence length $S$, and sentences of length shorter than $S$ are then padded by zeros. During the training process of the network, for each word a $q$ dimensional real-valued vector representation is calculated simultaneously to the rest of the weights of the network. Ergo a sentence of $S$ words is translated into a sequence of $S$ $q$-sized vectors, which is then fed into a recurrent neural unit. For both languages, $q=20$ seemed to yield the best results. In the case of recurrent units two main possibilities arise, with LSTM (Long Short-Term Memory) BIBREF21 and GRU (Gated Recurrent Unit) BIBREF22. 
After a few initial trials, no significant performance differences were noticed between the two types of cells. Therefore GRU were systematically used for recurrent networks, since their lower amount of parameters makes them easier to train and reduces overfitting. The output of the recurrent unit is afterwards linked to a fully connected (also referred as dense) layer, leading to the final forecast as output. The rectified linear unit (ReLU) activation in dense layers systematically gave the best results, except on the output layer where we used a sigmoid one considering the time series' normalization. In order to tone down overfitting, dropout layers BIBREF23 with probabilities of 0.25 or 0.33 are set in between the layers. Batch normalization BIBREF24 is also used before the GRU since it stabilized training and improved performance. Figure FIGREF14 represents the architecture of our RNN. The word embedding matrix is therefore learnt jointly with the rest of the parameters of the neural network by minimization of the quadratic loss with respect to the true electricity demand. Note that while above we described the case of the RNN, the same procedure is considered for the case of the CNN, with only the recurrent layers replaced by a combination of 1D convolution and pooling ones. As for the optimization algorithms of the neural networks, traditional stochastic gradient descent with momentum or ADAM BIBREF25 together with a quadratic loss are used. All of the previously mentioned methods were coded with Python. The LASSO and RF were implemented using the library Scikit Learn BIBREF26, while Keras BIBREF27 was used for the neural networks. Modeling and forecasting framework ::: Hyperparameter Tuning While most parameters are trained during the learning optimization process, all methods still involve a certain number of hyperparameters that must be manually set by the user. For instance for random forests it can correspond to the maximum depth of the trees or the fraction of features used at each split step, while for neural networks it can be the number of layers, neurons, the embedding dimension or the activation functions used. This is why the data is split into three sets: The training set, using all data available up to the 31st of December 2013 (2,557 days for France and 2,922 for the UK). It is used to learn the parameters of the algorithms through mathematical optimization. The years 2014 and 2015 serve as validation set (730 days). It is used to tune the hyperparameters of the different approaches. All the data from January the 1st 2016 (974 days for France and 1,096 for the UK) is used as test set, on which the final results are presented. Grid search is applied to find the best combination of values: for each hyperparameter, a range of values is defined, and all the possible combinations are successively tested. The one yielding the lowest RMSE (see section SECREF4) on the validation set is used for the final results on the test one. While relatively straightforward for RFs and the LASSO, the extreme number of possibilities for NNs and their extensive training time compelled us to limit the range of architectures possible. The hyperparameters are tuned per method and per country: ergo the hyperparameters of a given algorithm will be the same for the different time series of a country (e.g. the RNN architecture for temperature and load for France will be the same, but different from the UK one). 
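A condensed Keras sketch of the recurrent model described above is given below: the q = 20 embedding learnt jointly with the network, batch normalization before the GRU, dropout between layers, a ReLU dense layer and a sigmoid output trained with a quadratic loss and Adam. The layer widths and vocabulary size are placeholders, since the actual values come from the grid search just described.

```python
from tensorflow.keras import layers, models, optimizers

V = 52   # vocabulary size fed to the network (placeholder; index 0 is kept for padding)
q = 20   # embedding dimension reported above

model = models.Sequential([
    layers.Embedding(input_dim=V + 1, output_dim=q),   # embedding learnt with the network
    layers.BatchNormalization(),                       # batch normalization before the GRU
    layers.GRU(32),                                    # recurrent unit (width is a placeholder)
    layers.Dropout(0.25),
    layers.Dense(16, activation="relu"),               # dense layer with ReLU (placeholder width)
    layers.Dropout(0.33),
    layers.Dense(1, activation="sigmoid"),             # series are scaled to [0, 1]
])
model.compile(loss="mse", optimizer=optimizers.Adam())  # quadratic loss, Adam optimizer
```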
Finally before application on the testing set, all the methods are re-trained from scratch using both the training and validation data. Experiments The goal of our experiments is to quantify how close one can get using textual data only when compared to numerical data. However the inputs of the numerical benchmark should be hence comparable to the information contained in the weather reports. Considering they mainly contain calendar (day of the week and month) as well as temperature and wind information, the benchmark of comparison is a random forest trained on four features only: the time of the year (whose value is 0 on January the 1st and 1 on December the 31st with a linear growth in between), the day of the week, the national average temperature and wind speed. The metrics of evaluation are the Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and the $R^2$ coefficient given by: where $T$ is the number of test samples, $y_t$ and $\hat{y}_t$ are respectively the ground truth and the prediction for the document of day $t$, and $\overline{y}$ is the empirical average of the time series over the test sample. A known problem with MAPE is that it unreasonably increases the error score for values close to 0. While for the load it isn't an issue at all, it can be for the meteorological time series. Therefore for the temperature, the MAPE is calculated only when the ground truth is above the 5% empirical quantile. Although we aim at achieving the highest accuracy possible, we focus on the interpretability of our models as well. Experiments ::: Feature selection Many words are obviously irrelevant to the time series in our texts. For instance the day of the week, while playing a significant role for the load demand, is useless for temperature or wind. Such words make the training harder and may decrease the accuracy of the prediction. Therefore a feature selection procedure similar to BIBREF28 is applied to select a subset of useful features for the different algorithms, and for each type of time series. Random forests are naturally able to calculate feature importance through the calculation of error increase in the out-of-bag (OOB) samples. Therefore the following process is applied to select a subset of $V^*$ relevant words to keep: A RF is trained on the whole training & validation set. The OOB feature importance can thus be calculated. The features are then successively added to the RF in decreasing order of feature importance. This process is repeated $B=10$ times to tone down the randomness. The number $V^*$ is then set to the number of features giving the highest median OOB $R^2$ value. The results of this procedure for the French data is represented in figure FIGREF24. The best median $R^2$ is achieved for $V^* = 52$, although one could argue that not much gain is obtained after 36 words. The results are very similar for the UK data set, thus for the sake of simplicity the same value $V^* = 52$ is used. Note that the same subset of words is used for all the different forecasting models, which could be improved in further work using other selection criteria (e.g. mutual information, see BIBREF29). An example of normalized feature importance is given in figure. FIGREF32. Experiments ::: Main results Note that most of the considered algorithms involve randomness during the training phase, with the subsampling in the RFs or the gradient descent in the NNs for instance. 
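The evaluation metrics defined above translate directly into a few lines of NumPy; the sketch below (with illustrative names) also includes the 5% quantile filter used for the temperature MAPE.

```python
import numpy as np

def metrics(y_true, y_pred, mape_quantile=None):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mae = np.mean(np.abs(y_true - y_pred))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

    # For the temperature, the MAPE is computed only where the ground truth is
    # above its 5% empirical quantile, to avoid the blow-up near zero.
    mask = np.ones_like(y_true, dtype=bool)
    if mape_quantile is not None:
        mask = y_true > np.quantile(y_true, mape_quantile)
    mape = 100.0 * np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask]))
    return {"MAPE": mape, "RMSE": rmse, "MAE": mae, "R2": r2}

# e.g. metrics(load_true, load_pred) for the demand,
#      metrics(temp_true, temp_pred, mape_quantile=0.05) for the temperature
```

As noted above, most of the training procedures are stochastic, which also has to be accounted for when reporting these scores.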
In order to tone it down and to increase the consistency of our results, the different models are run $B=10$ times. The results presented hereafter correspond to the average and standard-deviation on those runs. The RF model denoted as "sel" is the one with the reduced number of features, whereas the other RF uses the full vocabulary. We also considered an aggregated forecaster (abridged Agg), consisting of the average of the two best individual ones in terms of RMSE. All the neural network methods have a reduced vocabulary size $V^*$. The results for the French and UK data are respectively given by tables TABREF26 and TABREF27. Our empirical results show that for the electricity consumption prediction task, the order of magnitude of the relative error is around 5%, independently of the language, encoding and machine learning method, thus proving the intrinsic value of the information contained in the textual documents for this time series. As expected, all text based methods perform poorer than when using explicitly numerical input features. Indeed, despite containing relevant information, the text is always more fuzzy and less precise than an explicit value for the temperature or the time of the year for instance. Again the aim of this work is not to beat traditional methods with text, but quantifying how close one can come to traditional approaches when using text exclusively. As such achieving less than 5% of MAPE was nonetheless deemed impressive by expert electricity forecasters. Feature selection brings significant improvement in the French case, although it does not yield any improvement in the English one. The reason for this is currently unknown. Nevertheless the feature selection procedure also helps the NNs by dramatically reducing the vocabulary size, and without it the training of the networks was bound to fail. While the errors accross methods are roughly comparable and highlight the valuable information contained within the reports, the best method nonetheless fluctuates between languages. Indeed in the French case there is a hegemony of the NNs, with the embedding RNN edging the MLP TF-IDF one. However for the UK data set the RFs yield significantly better results on the test set than the NNs. This inversion of performance of the algorithms is possibly due to a change in the way the reports were written by the Met Office after August 2017, since the results of the MLP and RNN on the validation set (not shown here) were satisfactory and better than both RFs. For the two languages both the CNN and the LASSO yielded poor results. For the former, it is because despite grid search no satisfactory architecture was found, whereas the latter is a linear approach and was used more for interpretation purposes than strong performance. Finally the naive aggregation of the two best experts always yields improvement, especially for the French case where the two different encodings are combined. This emphasises the specificity of the two representations leading to different types of errors. An example of comparison between ground truth and forecast for the case of electricity consumption is given for the French language with fig. FIGREF29, while another for temperature may be found in the appendix FIGREF51. The sudden "spikes" in the forecast are due to the presence of winter related words in a summer report. This is the case when used in comparisons, such as "The flood will be as severe as in January" in a June report and is a limit of our approach. 
Finally, the usual residual $\hat{\varepsilon }_t = y_t - \hat{y}_t$ analyses procedures were applied: Kolmogorov normality test, QQplots comparaison to gaussian quantiles, residual/fit comparison... While not thoroughly gaussian, the residuals were close to normality nonetheless and displayed satisfactory properties such as being generally independent from the fitted and ground truth values. Excerpts of this analysis for France are given in figure FIGREF52 of the appendix. The results for the temperature and wind series are given in appendix. Considering that they have a more stochastic behavior and are thus more difficult to predict, the order of magnitude of the errors differ (the MAPE being around 15% for temperature for instance) but globally the same observations can be made. Experiments ::: Interpretability of the models While accuracy is the most relevant metric to assess forecasts, interpretability of the models is of paramount importance, especially in the field of professional electricity load forecasting and considering the novelty of our work. Therefore in this section we discuss the properties of the RF and LASSO models using the TF-IDF encoding scheme, as well as the RNN word embedding. Experiments ::: Interpretability of the models ::: TF-IDF representation One significant advantage of the TF-IDF encoding when combined with random forests or the LASSO is that it is possible to interpret the behavior of the models. For instance, figure FIGREF32 represents the 20 most important features (in the RF OOB sense) for both data sets when regressing over electricity demand data. As one can see, the random forest naturally extracts calendar information contained in the weather reports, since months or week-end days are among the most important ones. For the former, this is due to the periodic behavior of electricity consumption, which is higher in winter and lower in summer. This is also why characteristic phenomena of summer and winter, such as "thunderstorms", "snow" or "freezing" also have a high feature importance. The fact that August has a much more important role than July also concurs with expert knowledge, especially for France: indeed it is the month when most people go on vacations, and thus when the load drops the most. As for the week-end names, it is due to the significantly different consumer behavior during Saturdays and especially Sundays when most of the businesses are closed and people are usually at home. Therefore the relevant words selected by the random forest are almost all in agreement with expert knowledge. We also performed the analysis of the relevant words for the LASSO. In order to do that, we examined the words $w$ with the largest associated coefficients $\beta _w$ (in absolute value) in the regression. Since the TF-IDF matrix has positive coefficients, it is possible to interpret the sign of the coefficient $\beta _w$ as its impact on the time series. For instance if $\beta _w > 0$ then the presence of the word $w$ causes a rise the time series (respectively if $\beta _w < 0$, it entails a decline). The results are plotted fig. FIGREF35 for the the UK. As one can see, the winter related words have positive coefficients, and thus increase the load demand as expected whereas the summer related ones decrease it. The value of the coefficients also reflects the impact on the load demand. For example January and February have the highest and very similar values, which concurs with the similarity between the months. 
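Both interpretation procedures discussed above reduce to reading model attributes once the TF-IDF matrix is available. A hedged scikit-learn sketch, assuming the TF-IDF matrix `X` and `vectorizer` from the earlier sketch and a target series `y` aligned with the documents (all names illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

words = np.array(vectorizer.get_feature_names_out())   # vocabulary of the TF-IDF matrix

# Random forest word importance (note: scikit-learn's feature_importances_ are
# impurity-based; the OOB error increase used here is closer to
# sklearn.inspection.permutation_importance).
rf = RandomForestRegressor(n_estimators=500, oob_score=True).fit(X, y)
top_rf = words[np.argsort(rf.feature_importances_)[::-1][:20]]

# LASSO: the sign of a coefficient tells whether a word pushes the series up or down.
lasso = Lasso(alpha=1e-3).fit(X, y)                     # alpha is a placeholder
order = np.argsort(np.abs(lasso.coef_))[::-1][:20]
for w, beta in zip(words[order], lasso.coef_[order]):
    print(f"{w:>15s}  beta = {beta:+.3f}")              # beta > 0: raises the load
```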
Sunday has a much more negative coefficient than Saturday, since the demand significantly drops during the last day of the week. The important words also globally match between the LASSO and the RF, which is a proof of the consistency of our results (this is further explored afterwards in figure FIGREF43). Although not presented here, the results are almost identical for the French load, with approximately the same order of relevancy. The important words logically vary in function of the considered time series, but are always coherent. For instance for the wind one, terms such as "gales", "windy" or "strong" have the highest positive coefficients, as seen in the appendix figure FIGREF53. Those results show that a text based approach not only extracts the relevant information by itself, but it may eventually be used to understand which phenomena are relevant to explain the behavior of a time series, and to which extend. Experiments ::: Interpretability of the models ::: Vector embedding representation Word vector embeddings such as Word2Vec and GloVe are known for their vectorial properties translating linguistic ones. However considering the objective function of our problem, there was no obvious reason for such attributes to appear in our own. Nevertheless for both languages we conducted an analysis of the geometric properties of our embedding matrix. We investigated the distances between word vectors, the relevant metric being the cosine distance given by: where $\overrightarrow{w_1}$ and $\overrightarrow{w_2}$ are given word vectors. Thus a cosine distance lower than 1 means similarity between word vectors, whereas a greater than 1 corresponds to opposition. The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance). The results of the experiments are very similar for both languages again. Indeed, the words are globally embedded in the vector space by topic: winter related words such as "January" ("janvier"), "February" ("février"), "snow" ("neige"), "freezing" ("glacial") are close to each other and almost opposite to summer related ones such as "July" ("juillet"), "August" ("août"), "hot" ("chaud"). 
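The cosine distance defined above can be computed directly from the learnt embedding matrix; in the sketch below, `embedding` is assumed to be the V x q weight matrix extracted from the trained network and `vocab` the corresponding word list (illustrative names).

```python
import numpy as np

def cosine_distance(w1: str, w2: str, embedding: np.ndarray, vocab: list) -> float:
    v1, v2 = embedding[vocab.index(w1)], embedding[vocab.index(w2)]
    return 1.0 - float(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

def closest_words(w: str, embedding: np.ndarray, vocab: list, k: int = 9):
    d = [(other, cosine_distance(w, other, embedding, vocab))
         for other in vocab if other != w]
    return sorted(d, key=lambda t: t[1])[:k]   # the k nearest words, as in the tables

# closest_words("january", embedding, vocab) reproduces the kind of ranking reported
# above (the values of course depend on the trained embedding).
```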
For both cases the week days Monday ("lundi") to Friday ("vendredi") are grouped very closely to each other, while significantly separated from the week-end ones "Saturday" ("samedi") and "Sunday" ("dimanche"). Despite these observations, a few seemingly unrelated words enter the lists of top 10, especially for the English case (such as "pressure" or "dusk" for "February"). In fact the French language embedding seems of better quality, which is perhaps linked to the longer length of the French reports in average. This issue could probably be addressed with more data. Another observation made is that the importance of a word $w$ seems related to its euclidean norm in the embedding space ${\overrightarrow{w}}_2$. For both languages the list of the 20 words with the largest norm is given fig. FIGREF40. As one can see, it globally matches the selected ones from the RF or the LASSO (especially for the French language), although the order is quite different. This is further supported by the Venn diagram of common words among the top 50 ones for each word selection method represented in figure FIGREF43 for France. Therefore this observation could also be used as feature selection procedure for the RNN or CNN in further work. In order to achieve a global view of the embeddings, the t-SNE algorithm BIBREF30 is applied to project an embedding matrix into a 2 dimensional space, for both languages. The observations for the few aforementioned words are confirmed by this representation, as plotted in figure FIGREF44. Thematic clusters can be observed, roughly corresponding to winter, summer, week-days, week-end days for both languages. Globally summer and winter seem opposed, although one should keep in mind that the t-SNE representation does not preserve the cosine distance. The clusters of the French embedding appear much more compact than the UK one, comforting the observations made when explicitly calculating the cosine distances. Conclusion In this study, a novel pipeline to predict three types of time series using exclusively a textual source was proposed. Making use of publicly available daily weather reports, we were able to predict the electricity consumption with less than 5% of MAPE for both France and the United-Kingdom. Moreover our average national temperature and wind speed predictions displayed sufficient accuracy to be used to replace missing data or as first approximation in traditional models in case of unavailability of meteorological features. The texts were encoded numerically using either TF-IDF or our own neural word embedding. A plethora of machine learning algorithms such as random forests or neural networks were applied on top of those representations. Our results were consistent over language, numerical representation of the text and prediction algorithm, proving the intrinsic value of the textual sources for the three considered time series. Contrarily to previous works in the field of textual data for time series forecasting, we went in depth and quantified the impact of words on the variations of the series. As such we saw that all the algorithms naturally extract calendar and meteorological information from the texts, and that words impact the time series in the expected way (e.g. winter words increase the consumption and summer ones decrease it). Despite being trained on a regular quadratic loss, our neural word embedding spontaneously builds geometric properties. 
Not only does the norm of a word vector reflect its significance, but the words are also grouped by topic with for example winter, summer or day of the week clusters. Note that this study was a preliminary work on the use of textual information for time series prediction, especially electricity demand one. The long-term goal is to include multiple sources of textual information to improve the accuracy of state-of-the-art methods or to build a text based forecaster which can be used to increase the diversity in a set of experts for electricity consumption BIBREF31. However due to the redundancy of the information of the considered weather reports with meteorological features, it may be necessary to consider alternative textual sources. The use of social media such as Facebook, Twitter or Instagram may give interesting insight and will therefore be investigated in future work. Additional results for the prediction tasks on temperature and wind speed can be found in tables TABREF47 to TABREF50. An example of forecast for the French temperature is given in figure FIGREF51. While not strictly normally distributed, the residuals for the French electricity demand display an acceptable behavior. This holds also true for the British consumption, and both temperature time series, but is of lesser quality for the wind one. The the UK wind LASSO regression, the words with the highest coefficients $\beta _w$ are indeed related to strong wind phenomena, whereas antagonistic ones such as "fog" or "mist" have strongly negative ones as expected (fig. FIGREF53). For both languages we represented the evolution of the (normalized) losses for the problem of load regression in fig. FIGREF54. The aspect is a typical one, with the validation loss slightly above the training one. The slightly erratic behavior of the former one is possibly due to a lack of data to train the embeddings. The cosine distances for three other major words and for both corpora have been calculated as well. The results are given in tables TABREF57 and TABREF58. For both languages, the three summer months are grouped together, and so are the two week-end days. However again the results are less clear for the English language. They are especially mediocre for "hot", considering that only "warm" seems truly relevant and that "August" for instance is quite far away. For the French language instead of "hot" the distances to "thunderstorms" were calculated. The results are quite satisfactory, with "orageux"/"orageuse" ("thundery") coming in the two first places and related meteorological phenomena ("cumulus" and "grêle", meaning "hail") relatively close as well. For the French case, Saturday and Sunday are very close to summer related words. This observation probably highlights the fact that the RNN groups load increasing and decreasing words in opposite parts of the embedding space.
Relative error is less than 5%
3e839783d8a4f2fe50ece4a9b476546f0842b193
3e839783d8a4f2fe50ece4a9b476546f0842b193_0
Q: What was their result on Stance Sentiment Emotion Corpus? Text: Introduction The emergence of social media sites with limited character constraint has ushered in a new style of communication. Twitter users within 280 characters per tweet share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thought and opinion. This makes them an interesting and important area of study. Tweets are not only important for an individual but also for the companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. The public opinion is particularly interesting for political parties as it gives them an idea of voter's inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1. Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiments (positive, negative, neutral). Emotion analysis focuses on the extraction of predefined emotion from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment. Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis. In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. 
Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7. The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which builds a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single task system of sentiment analysis. Related Work A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised of Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported the results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed a lexicon based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three phase joint multi-task training. BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed a DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these independently, one at a time. Our proposed system differs from the above works in the sense that none of these works addresses the problem of sentiment and emotion analysis concurrently. Our empirical analysis shows that performance of sentiment analysis is boosted significantly when this is jointly performed with emotion analysis. This may be because of the fine-grained characteristics of emotion analysis that provides useful evidences for sentiment analysis. 
Proposed Methodology We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: BiLSTM based word encoder Recurrent Neural Networks (RNN) are a class of networks which take sequential input and computes a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them good for handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value which hinders the learning process. Long Short Term Memory (LSTM) BIBREF11, a variant of RNN solves this problem by the gating mechanisms. The input, forget and output gates control the information flow. BiLSTM is a special type of LSTM which takes into account the output of two LSTMs - one working in the forward direction and one working in the backward direction. The presence of contextual information for both past and future helps the BiLSTM to make an informed decision. The concatenation of a hidden state vectors $\overrightarrow{h_t}$ of the forward LSTM and $\overleftarrow{h_t}$ of the backward LSTM at any time step t provides the complete information. Therefore, the output of the BiLSTM at any time step t is $h_t$ = [$\overrightarrow{h_t}$, $\overleftarrow{h_t}$]. The output of the BiLSTM is shared between the main task (Sentiment Analysis) and the auxiliary task (Emotion Analysis). Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Word Attention The word level attention (primary attention) mechanism gives the model a flexibility to represent each word for each task differently. This improves the word representation as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in a similar context. It provides a word expansion list for words based on their contextual similarity. We use the top-4 words for each word as their candidate terms. We only use the top-4 words for each word as we observed that the expansion list with more words started to contain the antonyms of the current word which empirically reduced the system performance. 
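Before the attention over the DT candidates is spelled out, note that the shared encoder itself is a standard bidirectional LSTM whose forward and backward states are concatenated at every time step. A minimal TensorFlow/Keras sketch of this shared block is given below; the 300-dimensional state size follows the implementation details reported later, everything else is illustrative.

```python
from tensorflow.keras import layers

# Shared word encoder: h_t = [forward h_t ; backward h_t], one vector per word.
def shared_encoder(word_embeddings):
    """word_embeddings: tensor of shape (batch, sentence_length, embedding_dim)."""
    return layers.Bidirectional(
        layers.LSTM(300, return_sequences=True),   # 300-dimensional states per direction
        merge_mode="concat",                       # h_t is the concatenation of both
    )(word_embeddings)                             # output shape: (batch, length, 600)
```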
Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get V($x_t$) candidate terms for each input $x_t$ with $v_i$ being the embedding for each term (Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having the index $i$ $\in $ V($x_t$): where $W_w$ and $b_{w}$ are jointly learned parameters. Each embedding of the candidate term is weighted with the attention score $\alpha _{ti}$ and then summed up. This produces $m_{t}$, the representation for the current input $x_{t}$ obtained from the Distributional Thesaurus using the candidate terms. Finally, $m_{t}$ and $h_{t}$ are concatenated to get $\widehat{h_{t}}$, the final output of the primary attention mechanism. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Sentence Attention The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of words' importance and their contextual relevance. This helps the model to build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation with their attention coefficient and summing them over. The attention coefficient $\alpha _t$ for each word vector representation and the sentence representation $\widehat{H}$ are calculated as: where $W_s$ and $b_{s}$ are parameters to be learned. $\widehat{H}$ denotes the sentence representation for sentiment analysis. Similarly, we calculate $\bar{H}$ which represents the sentence for emotion classification. The system has the flexibility to compute different representations for sentiment and emotion analysis both. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Final Output The final outputs for both sentiment and emotion analysis are computed by feeding $\widehat{H}$ and $\bar{H}$ to two different one-layer feed forward neural networks. For our task, the feed forward network for sentiment analysis has two output units, whereas the feed forward network for emotion analysis has eight output nodes performing multi-label classification. Proposed Methodology ::: Distributional Thesaurus Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which produces a list of words in decreasing order of their similarity for each word. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice awesome and superb. DT offers the primary attention mechanism external knowledge in the form of candidate terms. It assists the system to perform better when presented with unseen words during testing as the unseen words could have been a part of the DT expansion list. For example, the system may not come across the word superb during training but it can appear in the test set. Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. 
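A NumPy sketch of this word-level (primary) attention is given below. Since the exact form of the score built from W_w and b_w is left implicit above, the bilinear scoring of each candidate embedding against the hidden state used here is an assumption, as are all names.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def primary_attention(h_t, candidates, W_w, b_w):
    """One possible form of the word-level attention described above.

    h_t        : BiLSTM state for the current word, shape (d,)
    candidates : embeddings of the top-4 DT candidate terms, shape (4, q)
    W_w, b_w   : learnt parameters of shape (d, q) and (d,); the scoring below
                 (candidate projected and compared to h_t) is an assumption.
    """
    scores = np.array([h_t @ (W_w @ v_i + b_w) for v_i in candidates])
    alpha = softmax(scores)                           # attention over the 4 candidates
    m_t = (alpha[:, None] * candidates).sum(axis=0)   # weighted DT representation m_t
    return np.concatenate([m_t, h_t])                 # primary-attention output for the word
```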
This fact is established by our evaluation results as the model performs better when the DT expansion and primary attentions are a part of the final multi-task system. Proposed Methodology ::: Word Embeddings Word embeddings represent words in a low-dimensional numerical form. They are useful for solving many NLP problems. We use the pre-trained 300 dimensional Google Word2Vec BIBREF32 embeddings. The word embedding for each word in the sentence is fed to the BiLSTM network to get the current hidden state. Moreover, the primary attention mechanism is also applied to the word embeddings of the candidate terms for the current word. Datasets, Experiments and Analysis In this section we present the details of the datasets used for the experiments, results that we obtain and the necessary analysis. Datasets, Experiments and Analysis ::: Datasets We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap between the unavailability of a corpus with sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 task 6 and SSEC which are used for sentiment and emotion analysis, respectively. Datasets, Experiments and Analysis ::: Preprocessing The SemEval 2016 task 6 corpus contains tweets from Twitter. Since the tweets are derived from an environment with the constraint on the number of characters, there is an inherent problem of word concatenation, contractions and use of hashtags. Example: #BeautifulDay, we've, etc. Usernames and URLs do not impart any sentiment and emotion information (e.g. @John). We use the Python package ekphrasis BIBREF33 for handling these situations. Ekphrasis helps to split the concatenated words into individual words and expand the contractions. For example, #BeautifulDay to # Beautiful Day and we've to we have. We replace usernames with $<$user$>$, number with $<number>$ and URLs with $<$url$>$ token. Datasets, Experiments and Analysis ::: Implementation Details We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. 
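A plausible sketch of the sentence-level (secondary) attention under these dimensions is shown below; the additive form with a learnt 150-dimensional context vector is an assumption, since the exact score built from W_s and b_s is left implicit above. One such layer is applied per task to produce the sentiment and emotion sentence representations.

```python
import numpy as np

def sentence_attention(H, W_s, b_s, u_ctx):
    """One plausible form of the sentence-level (secondary) attention.

    H     : primary-attention outputs for the words of a sentence, shape (T, d)
    W_s   : (150, d) projection and b_s : (150,) bias (the 150-dimensional context
            size reported above); the additive scoring against a learnt context
            vector u_ctx of shape (150,) is an assumption.
    """
    u = np.tanh(H @ W_s.T + b_s)             # (T, 150) projected word representations
    scores = u @ u_ctx                        # (T,) relevance of each word
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()               # attention coefficients alpha_t
    return (alpha[:, None] * H).sum(axis=0)   # weighted sum = sentence representation
```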
ReLU BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for sentiment analysis BIBREF7, and precision, recall and F1-score were used as the evaluation metrics for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis. Datasets, Experiments and Analysis ::: Results and Analysis We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves on the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of the evaluation in Table TABREF18. The primary attention mechanism plays a key role in the overall system, as it improves the score of both sentiment and emotion analysis in both the single-task and the multi-task systems. The use of primary attention improves the performance of the single-task systems for sentiment and emotion analysis by 2.21 and 1.72 points, respectively. Similarly, when sentiment and emotion analysis are jointly performed, the primary attention mechanism improves the score by 0.93 and 2.42 points for the sentiment and emotion tasks, respectively. To further measure the usefulness of the primary attention mechanism and the Distributional Thesaurus, we remove them from the systems S2, E2, and M2 to get the systems S1, E1, and M1. In all the cases, the performance drops with the removal of the primary attention mechanism. This is clearly illustrated in Figure FIGREF21. These observations indicate that the primary attention mechanism is an important component of the two-layered multi-task attention-based network for sentiment analysis. We also perform a t-test BIBREF40 to compute the statistical significance of the results obtained from the final two-layered multi-task system M2 for sentiment analysis by calculating the p-values, and observe that the performance gain over M1 is significant with p-value = 0.001495. Similarly, we perform the statistical significance test for each emotion class. The p-values for anger, anticipation, fear, disgust, joy, sadness, surprise and trust are 0.000002, 0.000143, 0.00403, 0.000015, 0.004607, 0.069, 0.000001 and 0.000001, respectively. These results provide a good indication of statistical significance. Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 on the sentiment dataset. BIBREF7 used a feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval, improving on it by 3.2 F-score points for sentiment analysis. We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is shown in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-score points over the existing state-of-the-art system for emotion analysis.
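The significance tests reported above can be reproduced by comparing the scores of the repeated runs of the two systems; a sketch with SciPy is given below. The score arrays are placeholders, and since the paper does not state whether the runs are paired, an unpaired two-sample test is shown.

```python
from scipy import stats

# F-scores of the repeated runs for the two systems (placeholder values only).
m2_scores = [82.1, 81.7, 82.4, 81.9, 82.0, 82.3, 81.8, 82.2, 82.0, 82.1]
m1_scores = [81.1, 80.9, 81.3, 81.0, 81.2, 80.8, 81.1, 81.0, 81.2, 80.9]

t_stat, p_value = stats.ttest_ind(m2_scores, m1_scores)
# Use stats.ttest_rel instead if the runs are paired (e.g. identical seeds/splits).
print(f"t = {t_stat:.3f}, p = {p_value:.6f}")   # a small p-value indicates a significant gain
```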
Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. the worst result for the surprise class). This could be attributed to data scarcity and a very low agreement between the annotators for the emotion surprise. Experimental results indicate that the multi-task system, which uses the fine-grained information from emotion analysis, helps to boost the performance of sentiment analysis. The system M1 comprises the system S1 performing the main task (sentiment analysis) with E1 undertaking the auxiliary task (emotion analysis). Similarly, the system M2 is made up of S2 and E2, where S2 performs the main task (sentiment analysis) and E2 handles the auxiliary task (emotion analysis). We observe that in both situations the auxiliary task, i.e. emotional information, increases the performance of the main task, i.e. sentiment analysis, when the two are jointly performed. Experimental results help us to establish the fact that emotion analysis benefits sentiment analysis. The implicit sentiment attached to the emotion words assists the multi-task system. Emotions such as joy and trust are inherently associated with a positive sentiment, whereas anger, disgust, fear and sadness bear a negative sentiment. Figure FIGREF21 illustrates the performance of the various models for sentiment analysis. A concrete example that illustrates the utility of emotion analysis in sentiment analysis is shown below. @realMessi he is a real sportsman and deserves to be the skipper. The gold labels for the example are the anticipation, joy and trust emotions with a positive sentiment. Our system S2 (single-task system for sentiment analysis with primary and secondary attention) incorrectly labeled this example with a negative sentiment, and the E2 system (single-task system with both primary and secondary attention for emotion analysis) tagged it with anticipation and joy only. However, M2, i.e. the multi-task system for joint sentiment and emotion analysis, correctly classified the sentiment as positive and assigned all the correct emotion tags. It predicted the trust emotion tag in addition to anticipation and joy (which were predicted earlier by E2). This helped M2 to correctly identify the positive sentiment of the example. The presence of emotional information helped the system to alter its sentiment decision (negative by S2) as it had a better understanding of the text. A sentiment does not always directly invoke a particular emotion, and a sentiment can be associated with more than one emotion. However, emotions like joy and trust are mostly associated with positive sentiment, whereas anger, disgust and sadness are particularly associated with negative sentiment. This might be the reason why the extra sentiment information does not help the multi-task system for emotion analysis and hence the decreased performance for emotion analysis in the multi-task setting. Datasets, Experiments and Analysis ::: Error Analysis We perform a quantitative error analysis for both sentiment and emotion for the M2 model. Table TABREF23 shows the confusion matrix for sentiment analysis, and separate tables report the confusion matrices for anger, anticipation, fear, disgust, joy, sadness, surprise and trust. We observe from these confusion matrices that the system fails to label many instances with the emotion surprise. 
This may be because this particular class is the most underrepresented in the training set. A similar trend can also be observed for the emotions fear and trust in their respective confusion matrices. These three emotions have the smallest share of training instances, making the system less confident about them. Moreover, we closely analyze the outputs to understand the kinds of errors that our proposed model makes. We observe that the system faces difficulties at times and wrongly predicts the sentiment class in the following scenarios: $\bullet $ Often, real-world phrases/sentences contain emotions of a conflicting nature. This conflicting nature of emotions is not directly evident from the surface form and is left unsaid, as it is implicitly understood by humans. The system gets confused when presented with such instances. Text: When you become a father you realize that you are not the most important person in the room anymore... Your child is! Actual Sentiment: positive Actual Emotion: anticipation, joy, surprise, trust Predicted Sentiment: negative Predicted Emotion: anger, anticipation, sadness The realization of not being the most important person in a room invokes the anger, anticipation and sadness emotions, and a negative sentiment. However, it is a natural feeling of overwhelmingly positive sentiment when you understand that your own child is the most significant part of your life. $\bullet $ Occasionally, the system focuses on the less significant part of the sentence. Due to this, the system might miss crucial information that can influence and even change the final sentiment or emotion. This sometimes leads to the incorrect prediction of the overall sentiment and emotion. Text: I've been called many things, quitter is not one of them... Actual Sentiment: positive Actual Emotion: anticipation, joy, trust Predicted Sentiment: negative Predicted Emotion: anticipation, sadness Here, the system focuses on the first part of the sentence, where the speaker was called many things, which denotes a negative sentiment. Hence, the system predicts a negative sentiment and the anticipation and sadness emotions. However, the speaker in the second part uplifts the overall tone by justifying that s/he has never been called a quitter. This changes the negative sentiment to a positive sentiment and the overall emotion. Conclusion In this paper, we have presented a novel two-layered multi-task attention-based neural network which performs sentiment analysis through emotion analysis. The primary attention mechanism of the two-layered multi-task system relies on a Distributional Thesaurus, which acts as a source of external knowledge. The system hierarchically builds the final representation from the word level to the sentence level. This provides a working insight into the system and its ability to handle unseen words. Evaluation on the benchmark dataset suggests an improvement of 3.2 F-score points for sentiment analysis and an overall performance boost of 5 F-score points for emotion analysis over the existing state-of-the-art systems. The system empirically establishes the fact that emotion analysis is both useful and relevant to sentiment analysis. The proposed system does not rely on any language-dependent features or lexicons. This makes it extensible to other languages as well. In the future, we would like to extend the two-layered multi-task attention-based neural network to other languages. 
Acknowledgements Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).
F1 score of 66.66%
2869d19e54fb554fcf1d6888e526135803bb7d75
2869d19e54fb554fcf1d6888e526135803bb7d75_0
Q: What performance did they obtain on the SemEval dataset? Text: Introduction The emergence of social media sites with limited character constraint has ushered in a new style of communication. Twitter users within 280 characters per tweet share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thought and opinion. This makes them an interesting and important area of study. Tweets are not only important for an individual but also for the companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. The public opinion is particularly interesting for political parties as it gives them an idea of voter's inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1. Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiments (positive, negative, neutral). Emotion analysis focuses on the extraction of predefined emotion from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment. Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis. In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. 
Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7. The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which builds a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single task system of sentiment analysis. Related Work A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised of Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported the results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed a lexicon based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three phase joint multi-task training. BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed a DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these independently, one at a time. Our proposed system differs from the above works in the sense that none of these works addresses the problem of sentiment and emotion analysis concurrently. Our empirical analysis shows that performance of sentiment analysis is boosted significantly when this is jointly performed with emotion analysis. This may be because of the fine-grained characteristics of emotion analysis that provides useful evidences for sentiment analysis. 
Proposed Methodology We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: BiLSTM based word encoder Recurrent Neural Networks (RNN) are a class of networks which take sequential input and computes a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them good for handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value which hinders the learning process. Long Short Term Memory (LSTM) BIBREF11, a variant of RNN solves this problem by the gating mechanisms. The input, forget and output gates control the information flow. BiLSTM is a special type of LSTM which takes into account the output of two LSTMs - one working in the forward direction and one working in the backward direction. The presence of contextual information for both past and future helps the BiLSTM to make an informed decision. The concatenation of a hidden state vectors $\overrightarrow{h_t}$ of the forward LSTM and $\overleftarrow{h_t}$ of the backward LSTM at any time step t provides the complete information. Therefore, the output of the BiLSTM at any time step t is $h_t$ = [$\overrightarrow{h_t}$, $\overleftarrow{h_t}$]. The output of the BiLSTM is shared between the main task (Sentiment Analysis) and the auxiliary task (Emotion Analysis). Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Word Attention The word level attention (primary attention) mechanism gives the model a flexibility to represent each word for each task differently. This improves the word representation as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in a similar context. It provides a word expansion list for words based on their contextual similarity. We use the top-4 words for each word as their candidate terms. We only use the top-4 words for each word as we observed that the expansion list with more words started to contain the antonyms of the current word which empirically reduced the system performance. 
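As a minimal sketch of the shared BiLSTM word encoder described above, the following TensorFlow/Keras snippet builds an encoder whose per-word output is the concatenation of the forward and backward hidden states. The layer sizes follow the dimensions stated in the implementation details, but the function and variable names, as well as the vocabulary size, are illustrative and not taken from the paper.

```python
import tensorflow as tf

def build_shared_encoder(vocab_size, embedding_dim=300, lstm_units=300):
    """Shared BiLSTM encoder: h_t = [forward h_t ; backward h_t]."""
    tokens = tf.keras.Input(shape=(None,), dtype="int32", name="token_ids")
    # Word embeddings (the paper initializes these from Google Word2Vec).
    embedded = tf.keras.layers.Embedding(vocab_size, embedding_dim)(tokens)
    # Concatenating both directions gives a 600-dimensional state per word,
    # which is then shared by the sentiment and emotion subsystems.
    hidden = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(lstm_units, return_sequences=True)
    )(embedded)
    return tf.keras.Model(tokens, hidden, name="shared_bilstm_encoder")

encoder = build_shared_encoder(vocab_size=20000)  # vocabulary size is made up
encoder.summary()
```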
Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get V($x_t$) candidate terms for each input $x_t$ with $v_i$ being the embedding for each term (Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having the index $i$ $\in $ V($x_t$): where $W_w$ and $b_{w}$ are jointly learned parameters. Each embedding of the candidate term is weighted with the attention score $\alpha _{ti}$ and then summed up. This produces $m_{t}$, the representation for the current input $x_{t}$ obtained from the Distributional Thesaurus using the candidate terms. Finally, $m_{t}$ and $h_{t}$ are concatenated to get $\widehat{h_{t}}$, the final output of the primary attention mechanism. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Sentence Attention The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of words' importance and their contextual relevance. This helps the model to build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation with their attention coefficient and summing them over. The attention coefficient $\alpha _t$ for each word vector representation and the sentence representation $\widehat{H}$ are calculated as: where $W_s$ and $b_{s}$ are parameters to be learned. $\widehat{H}$ denotes the sentence representation for sentiment analysis. Similarly, we calculate $\bar{H}$ which represents the sentence for emotion classification. The system has the flexibility to compute different representations for sentiment and emotion analysis both. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Final Output The final outputs for both sentiment and emotion analysis are computed by feeding $\widehat{H}$ and $\bar{H}$ to two different one-layer feed forward neural networks. For our task, the feed forward network for sentiment analysis has two output units, whereas the feed forward network for emotion analysis has eight output nodes performing multi-label classification. Proposed Methodology ::: Distributional Thesaurus Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which produces a list of words in decreasing order of their similarity for each word. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice awesome and superb. DT offers the primary attention mechanism external knowledge in the form of candidate terms. It assists the system to perform better when presented with unseen words during testing as the unseen words could have been a part of the DT expansion list. For example, the system may not come across the word superb during training but it can appear in the test set. Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. 
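To make the two attention levels concrete, here is a small NumPy sketch of the primary (word-level) attention over the DT candidate terms and the secondary (sentence-level) attention over the word representations. The exact scoring equations are not reproduced in this excerpt, so the tanh-projection form used below, along with all shapes and variable names, is an assumption for illustration only.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def primary_attention(candidate_embs, W_w, b_w, u_w):
    """Word-level attention over the top-4 DT candidate terms.
    Returns m_t, the attention-weighted candidate representation.
    The scoring form tanh(v W_w + b_w) . u_w is assumed, not taken from
    the paper, which only names the learned parameters W_w and b_w."""
    scores = np.tanh(candidate_embs @ W_w + b_w) @ u_w   # (num_candidates,)
    alpha = softmax(scores)
    return alpha @ candidate_embs                        # (emb_dim,)

def sentence_attention(word_reprs, W_s, b_s, u_s):
    """Sentence-level attention over per-word representations [m_t ; h_t]."""
    scores = np.tanh(word_reprs @ W_s + b_s) @ u_s       # (seq_len,)
    alpha = softmax(scores)
    return alpha @ word_reprs                            # (repr_dim,)

# Toy shapes: 4 DT candidates with 300-d embeddings; 10 words with 900-d
# concatenated representations (300-d m_t plus 600-d BiLSTM state).
rng = np.random.default_rng(0)
m_t = primary_attention(rng.normal(size=(4, 300)),
                        rng.normal(size=(300, 150)), np.zeros(150),
                        rng.normal(size=150))
H = sentence_attention(rng.normal(size=(10, 900)),
                       rng.normal(size=(900, 150)), np.zeros(150),
                       rng.normal(size=150))
print(m_t.shape, H.shape)  # (300,) (900,)
```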
This fact is established by our evaluation results as the model performs better when the DT expansion and primary attentions are a part of the final multi-task system. Proposed Methodology ::: Word Embeddings Word embeddings represent words in a low-dimensional numerical form. They are useful for solving many NLP problems. We use the pre-trained 300 dimensional Google Word2Vec BIBREF32 embeddings. The word embedding for each word in the sentence is fed to the BiLSTM network to get the current hidden state. Moreover, the primary attention mechanism is also applied to the word embeddings of the candidate terms for the current word. Datasets, Experiments and Analysis In this section we present the details of the datasets used for the experiments, results that we obtain and the necessary analysis. Datasets, Experiments and Analysis ::: Datasets We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap between the unavailability of a corpus with sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 task 6 and SSEC which are used for sentiment and emotion analysis, respectively. Datasets, Experiments and Analysis ::: Preprocessing The SemEval 2016 task 6 corpus contains tweets from Twitter. Since the tweets are derived from an environment with the constraint on the number of characters, there is an inherent problem of word concatenation, contractions and use of hashtags. Example: #BeautifulDay, we've, etc. Usernames and URLs do not impart any sentiment and emotion information (e.g. @John). We use the Python package ekphrasis BIBREF33 for handling these situations. Ekphrasis helps to split the concatenated words into individual words and expand the contractions. For example, #BeautifulDay to # Beautiful Day and we've to we have. We replace usernames with $<$user$>$, number with $<number>$ and URLs with $<$url$>$ token. Datasets, Experiments and Analysis ::: Implementation Details We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. 
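A simplified stand-in for the tweet normalization described in the preprocessing subsection above is shown below. The paper uses the ekphrasis package for hashtag segmentation and contraction expansion; the regex-only sketch here covers just the $<$user$>$, $<$number$>$ and $<$url$>$ replacements and is not the ekphrasis API.

```python
import re

def normalize_tweet(text):
    """Replace URLs, usernames and numbers with placeholder tokens.
    Hashtag segmentation and contraction expansion (done with ekphrasis
    in the paper) are intentionally omitted from this sketch."""
    text = re.sub(r"https?://\S+|www\.\S+", "<url>", text)
    text = re.sub(r"@\w+", "<user>", text)
    text = re.sub(r"\b\d+(?:\.\d+)?\b", "<number>", text)
    return text

print(normalize_tweet("@John check https://example.com 100 times better #BeautifulDay"))
# -> <user> check <url> <number> times better #BeautifulDay
```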
Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis. Datasets, Experiments and Analysis ::: Results and Analysis We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18. The primary attention mechanism plays a key role in the overall system as it improves the score of both sentiment and emotion analysis in both single task as well as multi-task systems. The use of primary attention improves the performance of single task systems for sentiment and emotion analysis by 2.21 and 1.72 points, respectively.Similarly, when sentiment and emotion analysis are jointly performed the primary attention mechanism improves the score by 0.93 and 2.42 points for sentiment and emotion task, respectively. To further measure the usefulness of the primary attention mechanism and the Distributional Thesaurus, we remove it from the systems S2, E2, and M2 to get the systems S1, E1, and M1. In all the cases, with the removal of primary attention mechanism, the performance drops. This is clearly illustrated in Figure FIGREF21. These observations indicate that the primary attention mechanism is an important component of the two-layered multi-task attention based network for sentiment analysis. We also perform t-test BIBREF40 for computing statistical significance of the obtained results from the final two-layered multi-task system M2 for sentiment analysis by calculating the p-values and observe that the performance gain over M1 is significant with p-value = 0.001495. Similarly, we perform the statistical significance test for each emotion class. The p-values for anger, anticipation, fear, disgust, joy, sadness, surprise and trust are 0.000002, 0.000143, 0.00403, 0.000015, 0.004607, 0.069, 0.000001 and 0.000001, respectively. These results provide a good indication of statistical significance. Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis. We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. 
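The significance testing mentioned above could be illustrated as follows. The excerpt does not state which t-test variant (BIBREF40) was used, so the paired test over per-run F-scores below, with made-up numbers, is only an assumption about how such a comparison might be set up.

```python
from scipy import stats

# Hypothetical per-run sentiment F-scores for M1 and M2 (five runs each);
# the actual run-level values are not reported in the paper.
m1_scores = [78.4, 79.1, 78.8, 79.5, 78.9]
m2_scores = [81.7, 82.3, 81.9, 82.4, 82.2]

# Paired t-test across runs; the specific test used in the paper may differ.
t_stat, p_value = stats.ttest_rel(m2_scores, m1_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.6f}")
```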
Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise. Experimental results indicate that the multi-task system which uses fine-grained information of emotion analysis helps to boost the performance of sentiment analysis. The system M1 comprises of the system S1 performing the main task (sentiment analysis) with E1 undertaking the auxiliary task (emotion analysis). Similarly, the system M2 is made up of S2 and E2 where S2 performs the main task (sentiment analysis) and E2 commits to the auxiliary task (emotion analysis). We observe that in both the situations, the auxiliary task, i.e. emotional information increases the performance of the main task, i.e. sentiment analysis when these two are jointly performed. Experimental results help us to establish the fact that emotion analysis benefits sentiment analysis. The implicit sentiment attached to the emotion words assists the multi-task system. Emotion such as joy and trust are inherently associated with a positive sentiment whereas, anger, disgust, fear and sadness bear a negative sentiment. Figure FIGREF21 illustrates the performance of various models for sentiment analysis. As a concrete example which justifies the utility of emotion analysis in sentiment analysis is shown below. @realMessi he is a real sportsman and deserves to be the skipper. The gold labels for the example are anticipation, joy and trust emotion with a positive sentiment. Our system S2 (single task system for sentiment analysis with primary and secondary attention) had incorrectly labeled this example with a negative sentiment and the E2 system (single task system with both primary and secondary attention for emotion analysis) had tagged it with anticipation and joy only. However, M2 i.e. the multi-task system for joint sentiment and emotion analysis had correctly classified the sentiment as positive and assigned all the correct emotion tags. It predicted the trust emotion tag, in addition to anticipation and joy (which were predicted earlier by E2). This helped M2 to correctly identify the positive sentiment of the example. The presence of emotional information helped the system to alter its sentiment decision (negative by S2) as it had better understanding of the text. A sentiment directly does not invoke a particular emotion always and a sentiment can be associated with more than one emotion. However, emotions like joy and trust are associated with positive sentiment mostly whereas, anger, disgust and sadness are associated with negative sentiment particularly. This might be the reason of the extra sentiment information not helping the multi-task system for emotion analysis and hence, a decreased performance for emotion analysis in the multi-task setting. Datasets, Experiments and Analysis ::: Error Analysis We perform quantitative error analysis for both sentiment and emotion for the M2 model. Table TABREF23 shows the confusion matrix for sentiment analysis. anger,anticipation,fear,disgust,joy,sadness,surprise,trust consist of the confusion matrices for anger, anticipation, fear, disgust, joy, sadness, surprise and trust. We observe from Table TABREF23 that the system fails to label many instances with the emotion surprise. 
This may be due to the reason that this particular class is the most underrepresented in the training set. A similar trend can also be observed for the emotion fear and trust in Table TABREF23 and Table TABREF23, respectively. These three emotions have the least share of training instances, making the system less confident towards these emotions. Moreover, we closely analyze the outputs to understand the kind of errors that our proposed model faces. We observe that the system faces difficulties at times and wrongly predicts the sentiment class in the following scenarios: $\bullet $ Often real-world phrases/sentences have emotions of conflicting nature. These conflicting nature of emotions are directly not evident from the surface form and are left unsaid as these are implicitly understood by humans. The system gets confused when presented with such instances. Text: When you become a father you realize that you are not the most important person in the room anymore... Your child is! Actual Sentiment: positive Actual Emotion: anticipation, joy, surprise, trust Predicted Sentiment: negative Predicted Emotion: anger, anticipation, sadness The realization of not being the most important person in a room invokes anger, anticipation and sadness emotions, and a negative sentiment. However, it is a natural feeling of overwhelmingly positive sentiment when you understand that your own child is the most significant part of your life. $\bullet $ Occasionally, the system focuses on the less significant part of the sentences. Due to this the system might miss crucial information which can influence and even change the final sentiment or emotion. This sometimes lead to the incorrect prediction of the overall sentiment and emotion. Text: I've been called many things, quitter is not one of them... Actual Sentiment: positive Actual Emotion: anticipation, joy, trust Predicted Sentiment: negative Predicted Emotion: anticipation, sadness Here, the system focuses on the first part of the sentence where the speaker was called many things which denotes a negative sentiment. Hence, the system predicts a negative sentiment and, anticipation and sadness emotions. However, the speaker in the second part uplifts the overall tone by justifying that s/he has never been called a quitter. This changes the negative sentiment to a positive sentiment and the overall emotion. Conclusion In this paper, we have presented a novel two-layered multi-task attention based neural network which performs sentiment analysis through emotion analysis. The primary attention mechanism of the two-layered multi-task system relies on Distributional Thesaurus which acts as a source of external knowledge. The system hierarchically builds the final representation from the word level to the sentence level. This provides a working insight to the system and its ability to handle the unseen words. Evaluation on the benchmark dataset suggests an improvement of 3.2 F-score point for sentiment analysis and an overall performance boost of 5 F-score points for emotion analysis over the existing state-of-the-art systems. The system empirically establishes the fact that emotion analysis is both useful and relevant to sentiment analysis. The proposed system does not rely on any language dependent features or lexicons. This makes it extensible to other languages as well. In future, we would like to extend the two-layered multi-task attention based neural network to other languages. 
Acknowledgements Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).
F1 score of 82.10%
894c086a2cbfe64aa094c1edabbb1932a3d7c38a
894c086a2cbfe64aa094c1edabbb1932a3d7c38a_0
Q: What are the state-of-the-art systems? Text: Introduction The emergence of social media sites with limited character constraint has ushered in a new style of communication. Twitter users within 280 characters per tweet share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thought and opinion. This makes them an interesting and important area of study. Tweets are not only important for an individual but also for the companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. The public opinion is particularly interesting for political parties as it gives them an idea of voter's inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1. Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiments (positive, negative, neutral). Emotion analysis focuses on the extraction of predefined emotion from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment. Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis. In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. 
Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7. The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which builds a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single task system of sentiment analysis. Related Work A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised of Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported the results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed a lexicon based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three phase joint multi-task training. BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed a DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these independently, one at a time. Our proposed system differs from the above works in the sense that none of these works addresses the problem of sentiment and emotion analysis concurrently. Our empirical analysis shows that performance of sentiment analysis is boosted significantly when this is jointly performed with emotion analysis. This may be because of the fine-grained characteristics of emotion analysis that provides useful evidences for sentiment analysis. 
Proposed Methodology We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: BiLSTM based word encoder Recurrent Neural Networks (RNN) are a class of networks which take sequential input and computes a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them good for handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value which hinders the learning process. Long Short Term Memory (LSTM) BIBREF11, a variant of RNN solves this problem by the gating mechanisms. The input, forget and output gates control the information flow. BiLSTM is a special type of LSTM which takes into account the output of two LSTMs - one working in the forward direction and one working in the backward direction. The presence of contextual information for both past and future helps the BiLSTM to make an informed decision. The concatenation of a hidden state vectors $\overrightarrow{h_t}$ of the forward LSTM and $\overleftarrow{h_t}$ of the backward LSTM at any time step t provides the complete information. Therefore, the output of the BiLSTM at any time step t is $h_t$ = [$\overrightarrow{h_t}$, $\overleftarrow{h_t}$]. The output of the BiLSTM is shared between the main task (Sentiment Analysis) and the auxiliary task (Emotion Analysis). Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Word Attention The word level attention (primary attention) mechanism gives the model a flexibility to represent each word for each task differently. This improves the word representation as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in a similar context. It provides a word expansion list for words based on their contextual similarity. We use the top-4 words for each word as their candidate terms. We only use the top-4 words for each word as we observed that the expansion list with more words started to contain the antonyms of the current word which empirically reduced the system performance. 
Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get V($x_t$) candidate terms for each input $x_t$ with $v_i$ being the embedding for each term (Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having the index $i$ $\in $ V($x_t$): where $W_w$ and $b_{w}$ are jointly learned parameters. Each embedding of the candidate term is weighted with the attention score $\alpha _{ti}$ and then summed up. This produces $m_{t}$, the representation for the current input $x_{t}$ obtained from the Distributional Thesaurus using the candidate terms. Finally, $m_{t}$ and $h_{t}$ are concatenated to get $\widehat{h_{t}}$, the final output of the primary attention mechanism. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Sentence Attention The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of words' importance and their contextual relevance. This helps the model to build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation with their attention coefficient and summing them over. The attention coefficient $\alpha _t$ for each word vector representation and the sentence representation $\widehat{H}$ are calculated as: where $W_s$ and $b_{s}$ are parameters to be learned. $\widehat{H}$ denotes the sentence representation for sentiment analysis. Similarly, we calculate $\bar{H}$ which represents the sentence for emotion classification. The system has the flexibility to compute different representations for sentiment and emotion analysis both. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Final Output The final outputs for both sentiment and emotion analysis are computed by feeding $\widehat{H}$ and $\bar{H}$ to two different one-layer feed forward neural networks. For our task, the feed forward network for sentiment analysis has two output units, whereas the feed forward network for emotion analysis has eight output nodes performing multi-label classification. Proposed Methodology ::: Distributional Thesaurus Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which produces a list of words in decreasing order of their similarity for each word. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice awesome and superb. DT offers the primary attention mechanism external knowledge in the form of candidate terms. It assists the system to perform better when presented with unseen words during testing as the unseen words could have been a part of the DT expansion list. For example, the system may not come across the word superb during training but it can appear in the test set. Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. 
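A minimal sketch of the two task-specific output heads described in the Final Output subsection above is given below, assuming a TensorFlow/Keras implementation. The sigmoid outputs and sigmoid cross-entropy loss follow the implementation details reported later in this section; the 900-dimensional sentence representation (300-d DT vector plus 600-d BiLSTM state) is an inferred, illustrative size.

```python
import tensorflow as tf

def build_task_heads(sent_repr_dim=900):
    """One-layer feed-forward heads: 2 sigmoid units for sentiment and
    8 sigmoid units for multi-label emotion classification."""
    sentiment_repr = tf.keras.Input(shape=(sent_repr_dim,), name="sentiment_repr")
    emotion_repr = tf.keras.Input(shape=(sent_repr_dim,), name="emotion_repr")
    sentiment_out = tf.keras.layers.Dense(2, activation="sigmoid",
                                          name="sentiment")(sentiment_repr)
    emotion_out = tf.keras.layers.Dense(8, activation="sigmoid",
                                        name="emotion")(emotion_repr)
    model = tf.keras.Model([sentiment_repr, emotion_repr],
                           [sentiment_out, emotion_out])
    # Sigmoid cross-entropy on both outputs, trained jointly with Adam.
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.BinaryCrossentropy())
    return model

heads = build_task_heads()
heads.summary()
```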
This fact is established by our evaluation results as the model performs better when the DT expansion and primary attentions are a part of the final multi-task system. Proposed Methodology ::: Word Embeddings Word embeddings represent words in a low-dimensional numerical form. They are useful for solving many NLP problems. We use the pre-trained 300 dimensional Google Word2Vec BIBREF32 embeddings. The word embedding for each word in the sentence is fed to the BiLSTM network to get the current hidden state. Moreover, the primary attention mechanism is also applied to the word embeddings of the candidate terms for the current word. Datasets, Experiments and Analysis In this section we present the details of the datasets used for the experiments, results that we obtain and the necessary analysis. Datasets, Experiments and Analysis ::: Datasets We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap between the unavailability of a corpus with sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 task 6 and SSEC which are used for sentiment and emotion analysis, respectively. Datasets, Experiments and Analysis ::: Preprocessing The SemEval 2016 task 6 corpus contains tweets from Twitter. Since the tweets are derived from an environment with the constraint on the number of characters, there is an inherent problem of word concatenation, contractions and use of hashtags. Example: #BeautifulDay, we've, etc. Usernames and URLs do not impart any sentiment and emotion information (e.g. @John). We use the Python package ekphrasis BIBREF33 for handling these situations. Ekphrasis helps to split the concatenated words into individual words and expand the contractions. For example, #BeautifulDay to # Beautiful Day and we've to we have. We replace usernames with $<$user$>$, number with $<number>$ and URLs with $<$url$>$ token. Datasets, Experiments and Analysis ::: Implementation Details We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. 
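To make the Distributional Thesaurus expansion step described above concrete, here is a toy lookup sketch. The real DT is a large precomputed resource; the small dictionary and function name below are purely illustrative.

```python
# Toy stand-in for a Distributional Thesaurus: each word maps to a list of
# similar words ranked by contextual similarity.
TOY_DT = {
    "good": ["great", "nice", "awesome", "superb", "decent", "fine"],
    "happy": ["glad", "pleased", "delighted", "thrilled", "content"],
}

def candidate_terms(word, dt=TOY_DT, top_k=4):
    """Return the top-k DT expansions used as candidate terms for a word;
    words absent from the thesaurus simply yield no candidates."""
    return dt.get(word.lower(), [])[:top_k]

print(candidate_terms("good"))   # ['great', 'nice', 'awesome', 'superb']
print(candidate_terms("unseen")) # []
```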
Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis. Datasets, Experiments and Analysis ::: Results and Analysis We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18. The primary attention mechanism plays a key role in the overall system as it improves the score of both sentiment and emotion analysis in both single task as well as multi-task systems. The use of primary attention improves the performance of single task systems for sentiment and emotion analysis by 2.21 and 1.72 points, respectively.Similarly, when sentiment and emotion analysis are jointly performed the primary attention mechanism improves the score by 0.93 and 2.42 points for sentiment and emotion task, respectively. To further measure the usefulness of the primary attention mechanism and the Distributional Thesaurus, we remove it from the systems S2, E2, and M2 to get the systems S1, E1, and M1. In all the cases, with the removal of primary attention mechanism, the performance drops. This is clearly illustrated in Figure FIGREF21. These observations indicate that the primary attention mechanism is an important component of the two-layered multi-task attention based network for sentiment analysis. We also perform t-test BIBREF40 for computing statistical significance of the obtained results from the final two-layered multi-task system M2 for sentiment analysis by calculating the p-values and observe that the performance gain over M1 is significant with p-value = 0.001495. Similarly, we perform the statistical significance test for each emotion class. The p-values for anger, anticipation, fear, disgust, joy, sadness, surprise and trust are 0.000002, 0.000143, 0.00403, 0.000015, 0.004607, 0.069, 0.000001 and 0.000001, respectively. These results provide a good indication of statistical significance. Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis. We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. 
Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise. Experimental results indicate that the multi-task system which uses fine-grained information of emotion analysis helps to boost the performance of sentiment analysis. The system M1 comprises of the system S1 performing the main task (sentiment analysis) with E1 undertaking the auxiliary task (emotion analysis). Similarly, the system M2 is made up of S2 and E2 where S2 performs the main task (sentiment analysis) and E2 commits to the auxiliary task (emotion analysis). We observe that in both the situations, the auxiliary task, i.e. emotional information increases the performance of the main task, i.e. sentiment analysis when these two are jointly performed. Experimental results help us to establish the fact that emotion analysis benefits sentiment analysis. The implicit sentiment attached to the emotion words assists the multi-task system. Emotion such as joy and trust are inherently associated with a positive sentiment whereas, anger, disgust, fear and sadness bear a negative sentiment. Figure FIGREF21 illustrates the performance of various models for sentiment analysis. As a concrete example which justifies the utility of emotion analysis in sentiment analysis is shown below. @realMessi he is a real sportsman and deserves to be the skipper. The gold labels for the example are anticipation, joy and trust emotion with a positive sentiment. Our system S2 (single task system for sentiment analysis with primary and secondary attention) had incorrectly labeled this example with a negative sentiment and the E2 system (single task system with both primary and secondary attention for emotion analysis) had tagged it with anticipation and joy only. However, M2 i.e. the multi-task system for joint sentiment and emotion analysis had correctly classified the sentiment as positive and assigned all the correct emotion tags. It predicted the trust emotion tag, in addition to anticipation and joy (which were predicted earlier by E2). This helped M2 to correctly identify the positive sentiment of the example. The presence of emotional information helped the system to alter its sentiment decision (negative by S2) as it had better understanding of the text. A sentiment directly does not invoke a particular emotion always and a sentiment can be associated with more than one emotion. However, emotions like joy and trust are associated with positive sentiment mostly whereas, anger, disgust and sadness are associated with negative sentiment particularly. This might be the reason of the extra sentiment information not helping the multi-task system for emotion analysis and hence, a decreased performance for emotion analysis in the multi-task setting. Datasets, Experiments and Analysis ::: Error Analysis We perform quantitative error analysis for both sentiment and emotion for the M2 model. Table TABREF23 shows the confusion matrix for sentiment analysis. anger,anticipation,fear,disgust,joy,sadness,surprise,trust consist of the confusion matrices for anger, anticipation, fear, disgust, joy, sadness, surprise and trust. We observe from Table TABREF23 that the system fails to label many instances with the emotion surprise. 
The system's failure on surprise may be because this particular class is the most underrepresented in the training set. A similar trend can also be observed for the emotions fear and trust in their respective confusion matrices. These three emotions have the smallest share of training instances, making the system less confident about them. Moreover, we closely analyze the outputs to understand the kinds of errors that our proposed model makes. We observe that the system faces difficulties at times and wrongly predicts the sentiment class in the following scenarios: $\bullet $ Real-world phrases/sentences often carry emotions of a conflicting nature. This conflict is not directly evident from the surface form and is left unsaid, as it is implicitly understood by humans. The system gets confused when presented with such instances. Text: When you become a father you realize that you are not the most important person in the room anymore... Your child is! Actual Sentiment: positive Actual Emotion: anticipation, joy, surprise, trust Predicted Sentiment: negative Predicted Emotion: anger, anticipation, sadness The realization of not being the most important person in a room invokes the anger, anticipation and sadness emotions, and a negative sentiment. However, it is a natural feeling of overwhelmingly positive sentiment when you understand that your own child is the most significant part of your life. $\bullet $ Occasionally, the system focuses on a less significant part of the sentence. Because of this, the system might miss crucial information that can influence and even change the final sentiment or emotion. This sometimes leads to an incorrect prediction of the overall sentiment and emotion. Text: I've been called many things, quitter is not one of them... Actual Sentiment: positive Actual Emotion: anticipation, joy, trust Predicted Sentiment: negative Predicted Emotion: anticipation, sadness Here, the system focuses on the first part of the sentence, where the speaker was called many things, which denotes a negative sentiment. Hence, the system predicts a negative sentiment along with the anticipation and sadness emotions. However, the speaker in the second part uplifts the overall tone by clarifying that s/he has never been called a quitter. This changes the negative sentiment to a positive sentiment and alters the overall emotion. Conclusion In this paper, we have presented a novel two-layered multi-task attention based neural network which performs sentiment analysis through emotion analysis. The primary attention mechanism of the two-layered multi-task system relies on a Distributional Thesaurus, which acts as a source of external knowledge. The system builds the final representation hierarchically, from the word level to the sentence level. This provides a working insight into the system and its ability to handle unseen words. Evaluation on the benchmark datasets suggests an improvement of 3.2 F-score points for sentiment analysis and an overall performance boost of 5 F-score points for emotion analysis over the existing state-of-the-art systems. The system empirically establishes that emotion analysis is both useful and relevant to sentiment analysis. The proposed system does not rely on any language-dependent features or lexicons, which makes it extensible to other languages as well. In the future, we would like to extend the two-layered multi-task attention based neural network to other languages.
Acknowledgements Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).
For sentiment analysis: UWB, INF-UFRGS-OPINION-MINING, LitisMind, pkudblab and SVM + n-grams + sentiment; for emotion analysis: MaxEnt, SVM, LSTM, BiLSTM and CNN
722e9b6f55971b4c48a60f7a9fe37372f5bf3742
722e9b6f55971b4c48a60f7a9fe37372f5bf3742_0
Q: How is multi-tasking performed?
The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. Each of the shared representations is then fed to the primary attention mechanism.
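The answer above summarizes the architecture in words; the following is a highly simplified, hypothetical Keras sketch of such a hard-parameter-sharing setup — a shared BiLSTM encoder with one attention-pooled head per task. It deliberately omits the Distributional Thesaurus based primary attention and collapses the two attention levels into a single generic attention pooling per task; the layer sizes (300-dimensional LSTM states, dropout 0.6, Adam, sigmoid outputs) follow the implementation details quoted in the paper, while all function and variable names are invented for illustration.

```python
import tensorflow as tf

def attention_pool(seq):
    # Score every timestep, normalize over time, and take the weighted sum.
    scores = tf.keras.layers.Dense(1)(seq)              # (batch, T, 1)
    weights = tf.keras.layers.Softmax(axis=1)(scores)   # attention coefficients
    return tf.reduce_sum(weights * seq, axis=1)         # (batch, 2 * units)

def build_multitask_model(vocab_size, max_len, emb_dim=300, units=300):
    tokens = tf.keras.layers.Input(shape=(max_len,), dtype="int32")
    emb = tf.keras.layers.Embedding(vocab_size, emb_dim)(tokens)

    # Shared BiLSTM word encoder used by both tasks.
    shared = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(units, return_sequences=True))(emb)
    shared = tf.keras.layers.Dropout(0.6)(shared)

    # Task-specific attention pooling and output heads.
    sent_vec = attention_pool(shared)
    emo_vec = attention_pool(shared)
    sentiment = tf.keras.layers.Dense(2, activation="sigmoid", name="sentiment")(sent_vec)
    emotion = tf.keras.layers.Dense(8, activation="sigmoid", name="emotion")(emo_vec)

    model = tf.keras.Model(tokens, [sentiment, emotion])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss={"sentiment": "binary_crossentropy",
                        "emotion": "binary_crossentropy"})
    return model

model = build_multitask_model(vocab_size=20000, max_len=50)
```

Joint training on both losses is what lets the emotion signal act as the auxiliary task for sentiment, as described in the answer above.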
9c2f306044b3d1b3b7fdd05d1c046e887796dd7a
9c2f306044b3d1b3b7fdd05d1c046e887796dd7a_0
Q: What are the datasets used for training?
SemEval 2016 Task 6 BIBREF7, Stance Sentiment Emotion Corpus (SSEC) BIBREF15
3d99bc8ab2f36d4742e408f211bec154bc6696f7
3d99bc8ab2f36d4742e408f211bec154bc6696f7_0
Q: How many parameters does the model have? Text: Introduction The emergence of social media sites with limited character constraint has ushered in a new style of communication. Twitter users within 280 characters per tweet share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thought and opinion. This makes them an interesting and important area of study. Tweets are not only important for an individual but also for the companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. The public opinion is particularly interesting for political parties as it gives them an idea of voter's inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1. Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiments (positive, negative, neutral). Emotion analysis focuses on the extraction of predefined emotion from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment. Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis. In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. 
Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7. The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which builds a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single task system of sentiment analysis. Related Work A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised of Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported the results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed a lexicon based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three phase joint multi-task training. BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed a DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these independently, one at a time. Our proposed system differs from the above works in the sense that none of these works addresses the problem of sentiment and emotion analysis concurrently. Our empirical analysis shows that performance of sentiment analysis is boosted significantly when this is jointly performed with emotion analysis. This may be because of the fine-grained characteristics of emotion analysis that provides useful evidences for sentiment analysis. 
Proposed Methodology We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: BiLSTM based word encoder Recurrent Neural Networks (RNN) are a class of networks which take sequential input and computes a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them good for handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value which hinders the learning process. Long Short Term Memory (LSTM) BIBREF11, a variant of RNN solves this problem by the gating mechanisms. The input, forget and output gates control the information flow. BiLSTM is a special type of LSTM which takes into account the output of two LSTMs - one working in the forward direction and one working in the backward direction. The presence of contextual information for both past and future helps the BiLSTM to make an informed decision. The concatenation of a hidden state vectors $\overrightarrow{h_t}$ of the forward LSTM and $\overleftarrow{h_t}$ of the backward LSTM at any time step t provides the complete information. Therefore, the output of the BiLSTM at any time step t is $h_t$ = [$\overrightarrow{h_t}$, $\overleftarrow{h_t}$]. The output of the BiLSTM is shared between the main task (Sentiment Analysis) and the auxiliary task (Emotion Analysis). Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Word Attention The word level attention (primary attention) mechanism gives the model a flexibility to represent each word for each task differently. This improves the word representation as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in a similar context. It provides a word expansion list for words based on their contextual similarity. We use the top-4 words for each word as their candidate terms. We only use the top-4 words for each word as we observed that the expansion list with more words started to contain the antonyms of the current word which empirically reduced the system performance. 
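The paper states only that the model is built with TensorFlow; as a minimal sketch of the shared BiLSTM word encoder described above, the Keras snippet below (with assumed layer names and an assumed maximum sentence length) shows how the concatenated forward/backward states h_t can be produced and shared between the two tasks.

```python
import tensorflow as tf

MAX_LEN, EMB_DIM, HIDDEN = 50, 300, 300   # 300-d states as reported; MAX_LEN is assumed

# Input: pre-trained word embeddings for one sentence (padded to MAX_LEN tokens).
tokens = tf.keras.Input(shape=(MAX_LEN, EMB_DIM))
h = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(HIDDEN, return_sequences=True),
    merge_mode="concat")(tokens)            # h_t = [forward h_t ; backward h_t]

shared_encoder = tf.keras.Model(tokens, h)  # shared by the sentiment and emotion subsystems
print(shared_encoder.output_shape)          # (None, 50, 600)
```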
Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get V($x_t$) candidate terms for each input $x_t$ with $v_i$ being the embedding for each term (Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having the index $i$ $\in $ V($x_t$): where $W_w$ and $b_{w}$ are jointly learned parameters. Each embedding of the candidate term is weighted with the attention score $\alpha _{ti}$ and then summed up. This produces $m_{t}$, the representation for the current input $x_{t}$ obtained from the Distributional Thesaurus using the candidate terms. Finally, $m_{t}$ and $h_{t}$ are concatenated to get $\widehat{h_{t}}$, the final output of the primary attention mechanism. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Sentence Attention The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of words' importance and their contextual relevance. This helps the model to build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation with their attention coefficient and summing them over. The attention coefficient $\alpha _t$ for each word vector representation and the sentence representation $\widehat{H}$ are calculated as: where $W_s$ and $b_{s}$ are parameters to be learned. $\widehat{H}$ denotes the sentence representation for sentiment analysis. Similarly, we calculate $\bar{H}$ which represents the sentence for emotion classification. The system has the flexibility to compute different representations for sentiment and emotion analysis both. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Final Output The final outputs for both sentiment and emotion analysis are computed by feeding $\widehat{H}$ and $\bar{H}$ to two different one-layer feed forward neural networks. For our task, the feed forward network for sentiment analysis has two output units, whereas the feed forward network for emotion analysis has eight output nodes performing multi-label classification. Proposed Methodology ::: Distributional Thesaurus Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which produces a list of words in decreasing order of their similarity for each word. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice awesome and superb. DT offers the primary attention mechanism external knowledge in the form of candidate terms. It assists the system to perform better when presented with unseen words during testing as the unseen words could have been a part of the DT expansion list. For example, the system may not come across the word superb during training but it can appear in the test set. Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. 
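The description above gives the parameters (W_w, b_w at the word level and W_s, b_s at the sentence level) but not the exact scoring function, so the NumPy sketch below assumes a simple additive tanh score; it is meant only to make the two-level weighting concrete, not to reproduce the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def primary_attention(h_t, candidates, W_w, b_w):
    """h_t: (2d,) BiLSTM state; candidates: (k, e) embeddings of the DT candidate terms."""
    scores = np.tanh(candidates @ W_w + b_w)   # assumed scoring form
    alpha = softmax(scores)                    # attention over the k candidate terms
    m_t = alpha @ candidates                   # DT-based representation of the word
    return np.concatenate([m_t, h_t])          # \hat{h}_t = [m_t ; h_t]

def sentence_attention(H, W_s, b_s):
    """H: (T, d') per-word outputs of the primary attention for one sentence."""
    alpha = softmax(np.tanh(H @ W_s + b_s))    # one coefficient per word
    return alpha @ H                           # weighted sum = sentence representation

rng = np.random.default_rng(0)
h_t, cands = rng.standard_normal(600), rng.standard_normal((4, 300))
print(primary_attention(h_t, cands, rng.standard_normal(300), 0.0).shape)   # (900,)
```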
This fact is established by our evaluation results, as the model performs better when the DT expansion and primary attention are part of the final multi-task system. Proposed Methodology ::: Word Embeddings Word embeddings represent words in a low-dimensional numerical form. They are useful for solving many NLP problems. We use the pre-trained 300-dimensional Google Word2Vec BIBREF32 embeddings. The word embedding for each word in the sentence is fed to the BiLSTM network to get the current hidden state. Moreover, the primary attention mechanism is also applied to the word embeddings of the candidate terms for the current word. Datasets, Experiments and Analysis In this section, we present the details of the datasets used for the experiments, the results that we obtain and the necessary analysis. Datasets, Experiments and Analysis ::: Datasets We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and the Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap caused by the unavailability of a corpus with both sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet can belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 Task 6 and SSEC, which are used for sentiment and emotion analysis, respectively. Datasets, Experiments and Analysis ::: Preprocessing The SemEval 2016 Task 6 corpus contains tweets from Twitter. Since the tweets are derived from an environment with a constraint on the number of characters, there is an inherent problem of word concatenation, contractions and the use of hashtags. Example: #BeautifulDay, we've, etc. Usernames and URLs do not impart any sentiment and emotion information (e.g., @John). We use the Python package ekphrasis BIBREF33 for handling these situations. Ekphrasis helps to split the concatenated words into individual words and expand the contractions. For example, #BeautifulDay becomes # Beautiful Day and we've becomes we have. We replace usernames with the $<$user$>$ token, numbers with the $<$number$>$ token and URLs with the $<$url$>$ token. Datasets, Experiments and Analysis ::: Implementation Details We implement our model in Python using TensorFlow on a single GPU. We experiment with six different BiLSTM-based architectures. The first three architectures correspond to BiLSTM-based systems without primary attention, i.e., only with secondary attention, for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and the multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers drawn from a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional.
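As a simplified stand-in for the ekphrasis-based preprocessing described above, the regex sketch below shows only the username/number/URL replacements; hashtag segmentation and contraction expansion are handled by ekphrasis in the actual pipeline, and the example tweet is invented for illustration.

```python
import re

def normalize_tweet(tweet):
    tweet = re.sub(r"https?://\S+", "<url>", tweet)         # URLs    -> <url>
    tweet = re.sub(r"@\w+", "<user>", tweet)                 # handles -> <user>
    tweet = re.sub(r"\b\d+(?:\.\d+)?\b", "<number>", tweet)  # numbers -> <number>
    return tweet

print(normalize_tweet("@John scored 3 goals today https://t.co/abc #BeautifulDay"))
# -> '<user> scored <number> goals today <url> #BeautifulDay'
```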
Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis. Datasets, Experiments and Analysis ::: Results and Analysis We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18. The primary attention mechanism plays a key role in the overall system as it improves the score of both sentiment and emotion analysis in both single task as well as multi-task systems. The use of primary attention improves the performance of single task systems for sentiment and emotion analysis by 2.21 and 1.72 points, respectively.Similarly, when sentiment and emotion analysis are jointly performed the primary attention mechanism improves the score by 0.93 and 2.42 points for sentiment and emotion task, respectively. To further measure the usefulness of the primary attention mechanism and the Distributional Thesaurus, we remove it from the systems S2, E2, and M2 to get the systems S1, E1, and M1. In all the cases, with the removal of primary attention mechanism, the performance drops. This is clearly illustrated in Figure FIGREF21. These observations indicate that the primary attention mechanism is an important component of the two-layered multi-task attention based network for sentiment analysis. We also perform t-test BIBREF40 for computing statistical significance of the obtained results from the final two-layered multi-task system M2 for sentiment analysis by calculating the p-values and observe that the performance gain over M1 is significant with p-value = 0.001495. Similarly, we perform the statistical significance test for each emotion class. The p-values for anger, anticipation, fear, disgust, joy, sadness, surprise and trust are 0.000002, 0.000143, 0.00403, 0.000015, 0.004607, 0.069, 0.000001 and 0.000001, respectively. These results provide a good indication of statistical significance. Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis. We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. 
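A minimal sketch of the training objective and F1-based evaluation described above, under the assumption that the two sigmoid cross-entropy losses are simply summed (the text does not specify how the task losses are combined); the toy logits and the 0.5 decision threshold are likewise assumptions.

```python
import tensorflow as tf
from sklearn.metrics import f1_score

# Toy batch: 2 sentiment output units and 8 emotion output units (multi-label).
sent_logits = tf.constant([[2.0, -1.0], [-0.5, 1.5]])
sent_labels = tf.constant([[1.0, 0.0], [0.0, 1.0]])
emo_logits = tf.random.normal((2, 8))
emo_labels = tf.cast(tf.random.uniform((2, 8)) > 0.5, tf.float32)

# Sigmoid cross-entropy on both heads; summing them is an assumed combination.
loss = (tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=sent_labels,
                                                               logits=sent_logits))
        + tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=emo_labels,
                                                                 logits=emo_logits)))

emo_pred = (tf.sigmoid(emo_logits).numpy() > 0.5).astype(int)   # assumed 0.5 threshold
print(float(loss), f1_score(emo_labels.numpy(), emo_pred, average="micro"))
```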
Unanswerable
9219eef636ddb020b9d394868959325562410f83
9219eef636ddb020b9d394868959325562410f83_0
Q: What is the previous state-of-the-art model?
BIBREF7, BIBREF39, BIBREF37, LitisMind, Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN
ff83eea2df9976c1a01482818340871b17ad4f8c
ff83eea2df9976c1a01482818340871b17ad4f8c_0
Q: What is the previous state-of-the-art performance? Text: Introduction The emergence of social media sites with limited character constraint has ushered in a new style of communication. Twitter users within 280 characters per tweet share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thought and opinion. This makes them an interesting and important area of study. Tweets are not only important for an individual but also for the companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. The public opinion is particularly interesting for political parties as it gives them an idea of voter's inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1. Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiments (positive, negative, neutral). Emotion analysis focuses on the extraction of predefined emotion from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment. Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis. In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. 
Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7. The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which build a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single task system of sentiment analysis. Related Work A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported the results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed a lexicon based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three phase joint multi-task training. BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed a DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these independently, one at a time. Our proposed system differs from the above works in the sense that none of these works addresses the problem of sentiment and emotion analysis concurrently. Our empirical analysis shows that the performance of sentiment analysis is boosted significantly when it is jointly performed with emotion analysis. This may be because the fine-grained characteristics of emotion analysis provide useful evidence for sentiment analysis. 
Proposed Methodology We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for the emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: BiLSTM based word encoder Recurrent Neural Networks (RNN) are a class of networks which take sequential input and compute a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them good for handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value, which hinders the learning process. Long Short Term Memory (LSTM) BIBREF11, a variant of RNN, solves this problem through its gating mechanisms. The input, forget and output gates control the information flow. BiLSTM is a special type of LSTM which takes into account the output of two LSTMs - one working in the forward direction and one working in the backward direction. The presence of contextual information for both past and future helps the BiLSTM to make an informed decision. The concatenation of the hidden state vectors $\overrightarrow{h_t}$ of the forward LSTM and $\overleftarrow{h_t}$ of the backward LSTM at any time step t provides the complete information. Therefore, the output of the BiLSTM at any time step t is $h_t$ = [$\overrightarrow{h_t}$, $\overleftarrow{h_t}$]. The output of the BiLSTM is shared between the main task (Sentiment Analysis) and the auxiliary task (Emotion Analysis). Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Word Attention The word level attention (primary attention) mechanism gives the model the flexibility to represent each word for each task differently. This improves the word representation as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in a similar context. It provides a word expansion list for words based on their contextual similarity. We use the top-4 words for each word as its candidate terms. We restrict the expansion to the top-4 words because we observed that longer expansion lists started to contain antonyms of the current word, which empirically reduced the system performance. 
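A minimal sketch of this candidate-term expansion, assuming the Distributional Thesaurus has already been loaded into a word-to-neighbours dictionary (the dictionary contents and names below are illustrative, not taken from the paper's resources):

# Hypothetical top-4 DT lookup feeding the primary attention.
dt_expansions = {
    "good": ["great", "nice", "awesome", "superb", "decent"],
    # ... loaded from a Distributional Thesaurus resource
}

def candidate_terms(token, k=4):
    """Return the top-k DT neighbours of `token` (empty list if unseen)."""
    return dt_expansions.get(token.lower(), [])[:k]

tokens = ["the", "movie", "was", "good"]
candidates = [candidate_terms(t) for t in tokens]  # up to 4 candidate terms per token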
Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get V($x_t$) candidate terms for each input $x_t$, with $v_i$ being the embedding for each term (the Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having the index $i \in $ V($x_t$): where $W_w$ and $b_{w}$ are jointly learned parameters. Each embedding of the candidate term is weighted with the attention score $\alpha _{ti}$ and then summed up. This produces $m_{t}$, the representation for the current input $x_{t}$ obtained from the Distributional Thesaurus using the candidate terms. Finally, $m_{t}$ and $h_{t}$ are concatenated to get $\widehat{h_{t}}$, the final output of the primary attention mechanism. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Sentence Attention The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of the words' importance and their contextual relevance. This helps the model to build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation with its attention coefficient and summing them over. The attention coefficient $\alpha _t$ for each word vector representation and the sentence representation $\widehat{H}$ are calculated as: where $W_s$ and $b_{s}$ are parameters to be learned. $\widehat{H}$ denotes the sentence representation for sentiment analysis. Similarly, we calculate $\bar{H}$ which represents the sentence for emotion classification. The system has the flexibility to compute different representations for both sentiment and emotion analysis. Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Final Output The final outputs for both sentiment and emotion analysis are computed by feeding $\widehat{H}$ and $\bar{H}$ to two different one-layer feed forward neural networks. For our task, the feed forward network for sentiment analysis has two output units, whereas the feed forward network for emotion analysis has eight output nodes performing multi-label classification. Proposed Methodology ::: Distributional Thesaurus Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which produces a list of words in decreasing order of their similarity for each word. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice, awesome and superb. DT offers the primary attention mechanism external knowledge in the form of candidate terms. It assists the system to perform better when presented with unseen words during testing, as the unseen words could have been a part of the DT expansion list. For example, the system may not come across the word superb during training but it can appear in the test set. Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. 
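The attention equations referenced above are not reproduced in this text, so the NumPy sketch below uses one plausible scoring form purely to illustrate the two mechanisms; W_w, b_w, W_s and b_s stand in for the jointly learned parameters mentioned in the text, and all shapes are illustrative assumptions.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def primary_attention(h_t, cand_emb, W_w, b_w):
    """h_t: (2d,) BiLSTM state for the current word; cand_emb: (k, e) embeddings of
    its DT candidate terms. Scores each candidate against h_t (illustrative bilinear
    scoring), builds the DT-based representation m_t and returns [h_t; m_t]."""
    scores = cand_emb @ W_w @ h_t + b_w      # (k,)
    alpha = softmax(scores)                  # attention coefficients alpha_ti
    m_t = alpha @ cand_emb                   # weighted sum of candidate embeddings
    return np.concatenate([h_t, m_t])        # \widehat{h_t}

def sentence_attention(H_hat, W_s, b_s):
    """H_hat: (T, d') primary-attention outputs for a sentence; returns the
    attention-weighted sentence representation (\widehat{H} or \bar{H})."""
    alpha = softmax(H_hat @ W_s + b_s)       # one coefficient per word
    return alpha @ H_hat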
This fact is established by our evaluation results, as the model performs better when the DT expansion and primary attention are part of the final multi-task system. Proposed Methodology ::: Word Embeddings Word embeddings represent words in a low-dimensional numerical form. They are useful for solving many NLP problems. We use the pre-trained 300-dimensional Google Word2Vec BIBREF32 embeddings. The word embedding for each word in the sentence is fed to the BiLSTM network to get the current hidden state. Moreover, the primary attention mechanism is also applied to the word embeddings of the candidate terms for the current word. Datasets, Experiments and Analysis In this section we present the details of the datasets used for the experiments, the results that we obtain and the necessary analysis. Datasets, Experiments and Analysis ::: Datasets We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and the Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus addresses the unavailability of a corpus with both sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 Task 6 and SSEC, which are used for sentiment and emotion analysis, respectively. Datasets, Experiments and Analysis ::: Preprocessing The SemEval 2016 Task 6 corpus contains tweets from Twitter. Since the tweets are derived from an environment with a constraint on the number of characters, there is an inherent problem of word concatenation, contractions and use of hashtags. Example: #BeautifulDay, we've, etc. Usernames and URLs do not impart any sentiment and emotion information (e.g. @John). We use the Python package ekphrasis BIBREF33 for handling these situations. Ekphrasis helps to split the concatenated words into individual words and expand the contractions. For example, #BeautifulDay to # Beautiful Day and we've to we have. We replace usernames with a $<$user$>$ token, numbers with a $<$number$>$ token and URLs with a $<$url$>$ token. Datasets, Experiments and Analysis ::: Implementation Details We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The first three architectures correspond to BiLSTM based systems without primary attention, i.e. only with secondary attention, for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and the multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers from a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. 
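As a rough, hand-rolled illustration of the preprocessing just described (this is not the actual ekphrasis pipeline used in the paper; contraction expansion is omitted and the regexes are simplified):

import re

def preprocess(tweet):
    """Replace URLs, usernames and numbers with placeholder tokens and naively
    split simple CamelCase hashtags (#BeautifulDay -> beautiful day)."""
    tweet = re.sub(r"https?://\S+", "<url>", tweet)
    tweet = re.sub(r"@\w+", "<user>", tweet)
    tweet = re.sub(r"#([A-Za-z][a-z]+(?:[A-Z][a-z]+)*)",
                   lambda m: re.sub(r"(?<!^)(?=[A-Z])", " ", m.group(1)), tweet)
    tweet = re.sub(r"\b\d+(\.\d+)?\b", "<number>", tweet)
    return tweet.lower()

print(preprocess("@John what a #BeautifulDay, we scored 3 goals! https://t.co/x"))
# -> "<user> what a beautiful day, we scored <number> goals! <url>"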
ReLU BIBREF36 was used as the activation function for the hidden layers, whereas sigmoid was used as the activation function in the output layer. Sigmoid cross-entropy was used as the loss function. F1-score was reported for sentiment analysis BIBREF7, and precision, recall and F1-score were used as the evaluation metrics for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis. Datasets, Experiments and Analysis ::: Results and Analysis We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the evaluation results in Table TABREF18. The primary attention mechanism plays a key role in the overall system as it improves the score of both sentiment and emotion analysis in both the single-task and the multi-task systems. The use of primary attention improves the performance of the single-task systems for sentiment and emotion analysis by 2.21 and 1.72 points, respectively. Similarly, when sentiment and emotion analysis are jointly performed, the primary attention mechanism improves the score by 0.93 and 2.42 points for the sentiment and emotion tasks, respectively. To further measure the usefulness of the primary attention mechanism and the Distributional Thesaurus, we remove them from the systems S2, E2, and M2 to get the systems S1, E1, and M1. In all cases, the performance drops with the removal of the primary attention mechanism. This is clearly illustrated in Figure FIGREF21. These observations indicate that the primary attention mechanism is an important component of the two-layered multi-task attention based network for sentiment analysis. We also perform a t-test BIBREF40 to compute the statistical significance of the results obtained from the final two-layered multi-task system M2 for sentiment analysis by calculating the p-values, and observe that the performance gain over M1 is significant with p-value = 0.001495. Similarly, we perform the statistical significance test for each emotion class. The p-values for anger, anticipation, fear, disgust, joy, sadness, surprise and trust are 0.000002, 0.000143, 0.00403, 0.000015, 0.004607, 0.069, 0.000001 and 0.000001, respectively. These results provide a good indication of statistical significance. Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 Task 6 by 3.2 F-score points for sentiment analysis. We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-score points over the existing state-of-the-art system for emotion analysis. 
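For concreteness, the shared BiLSTM encoder and the two sigmoid output heads described in the implementation details above could be wired up roughly as follows. This is a minimal tf.keras sketch, not the authors' implementation: the two attention layers are replaced by plain max pooling, and every name or size not stated in the text (sequence length, vocabulary size) is an illustrative assumption.

import tensorflow as tf

MAX_LEN, EMB_DIM, VOCAB = 50, 300, 20000   # illustrative sizes

tokens = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
emb = tf.keras.layers.Embedding(VOCAB, EMB_DIM)(tokens)        # would hold the Word2Vec weights
shared = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(300, return_sequences=True))(emb)     # shared BiLSTM encoder

# The paper applies primary and secondary attention here; this sketch
# substitutes plain max pooling for brevity.
sent_repr = tf.keras.layers.GlobalMaxPooling1D()(shared)
emo_repr = tf.keras.layers.GlobalMaxPooling1D()(shared)

sentiment = tf.keras.layers.Dense(2, activation="sigmoid", name="sentiment")(sent_repr)
emotion = tf.keras.layers.Dense(8, activation="sigmoid", name="emotion")(emo_repr)

model = tf.keras.Model(tokens, [sentiment, emotion])
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss={"sentiment": "binary_crossentropy",        # sigmoid cross-entropy
                    "emotion": "binary_crossentropy"})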
Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise. Experimental results indicate that the multi-task system which uses fine-grained information of emotion analysis helps to boost the performance of sentiment analysis. The system M1 comprises the system S1 performing the main task (sentiment analysis) with E1 undertaking the auxiliary task (emotion analysis). Similarly, the system M2 is made up of S2 and E2, where S2 performs the main task (sentiment analysis) and E2 handles the auxiliary task (emotion analysis). We observe that in both situations the auxiliary task, i.e. emotion information, increases the performance of the main task, i.e. sentiment analysis, when the two are jointly performed. Experimental results help us to establish the fact that emotion analysis benefits sentiment analysis. The implicit sentiment attached to the emotion words assists the multi-task system. Emotions such as joy and trust are inherently associated with a positive sentiment, whereas anger, disgust, fear and sadness bear a negative sentiment. Figure FIGREF21 illustrates the performance of various models for sentiment analysis. A concrete example which illustrates the utility of emotion analysis in sentiment analysis is shown below. @realMessi he is a real sportsman and deserves to be the skipper. The gold labels for the example are the anticipation, joy and trust emotions with a positive sentiment. Our system S2 (single task system for sentiment analysis with primary and secondary attention) had incorrectly labeled this example with a negative sentiment, and the E2 system (single task system with both primary and secondary attention for emotion analysis) had tagged it with anticipation and joy only. However, M2, i.e. the multi-task system for joint sentiment and emotion analysis, correctly classified the sentiment as positive and assigned all the correct emotion tags. It predicted the trust emotion tag, in addition to anticipation and joy (which were predicted earlier by E2). This helped M2 to correctly identify the positive sentiment of the example. The presence of emotional information helped the system to alter its sentiment decision (negative by S2) as it had a better understanding of the text. A sentiment does not always directly invoke a particular emotion, and a sentiment can be associated with more than one emotion. However, emotions like joy and trust are mostly associated with a positive sentiment, whereas anger, disgust and sadness are particularly associated with a negative sentiment. This might be why the extra sentiment information does not help the multi-task system for emotion analysis and hence why the performance for emotion analysis decreases in the multi-task setting. Datasets, Experiments and Analysis ::: Error Analysis We perform quantitative error analysis for both sentiment and emotion for the M2 model. Table TABREF23 shows the confusion matrix for sentiment analysis, and separate tables show the confusion matrices for anger, anticipation, fear, disgust, joy, sadness, surprise and trust. We observe that the system fails to label many instances with the emotion surprise. 
This may be because this particular class is the most underrepresented in the training set. A similar trend can also be observed for the emotions fear and trust in their respective confusion matrices. These three emotions have the least share of training instances, making the system less confident towards these emotions. Moreover, we closely analyze the outputs to understand the kind of errors that our proposed model faces. We observe that the system faces difficulties at times and wrongly predicts the sentiment class in the following scenarios: $\bullet $ Often real-world phrases/sentences have emotions of conflicting nature. This conflicting nature of the emotions is not directly evident from the surface form and is left unsaid, as it is implicitly understood by humans. The system gets confused when presented with such instances. Text: When you become a father you realize that you are not the most important person in the room anymore... Your child is! Actual Sentiment: positive Actual Emotion: anticipation, joy, surprise, trust Predicted Sentiment: negative Predicted Emotion: anger, anticipation, sadness The realization of not being the most important person in a room invokes anger, anticipation and sadness emotions, and a negative sentiment. However, it is a natural feeling of overwhelmingly positive sentiment when you understand that your own child is the most significant part of your life. $\bullet $ Occasionally, the system focuses on the less significant part of the sentences. Due to this, the system might miss crucial information which can influence and even change the final sentiment or emotion. This sometimes leads to an incorrect prediction of the overall sentiment and emotion. Text: I've been called many things, quitter is not one of them... Actual Sentiment: positive Actual Emotion: anticipation, joy, trust Predicted Sentiment: negative Predicted Emotion: anticipation, sadness Here, the system focuses on the first part of the sentence where the speaker was called many things, which denotes a negative sentiment. Hence, the system predicts a negative sentiment along with the anticipation and sadness emotions. However, the speaker in the second part uplifts the overall tone by justifying that s/he has never been called a quitter. This changes the negative sentiment to a positive sentiment and alters the overall emotion. Conclusion In this paper, we have presented a novel two-layered multi-task attention based neural network which performs sentiment analysis through emotion analysis. The primary attention mechanism of the two-layered multi-task system relies on a Distributional Thesaurus which acts as a source of external knowledge. The system hierarchically builds the final representation from the word level to the sentence level. This provides insight into the working of the system and its ability to handle unseen words. Evaluation on the benchmark dataset suggests an improvement of 3.2 F-score points for sentiment analysis and an overall performance boost of 5 F-score points for emotion analysis over the existing state-of-the-art systems. The system empirically establishes the fact that emotion analysis is both useful and relevant to sentiment analysis. The proposed system does not rely on any language-dependent features or lexicons. This makes it extensible to other languages as well. In the future, we would like to extend the two-layered multi-task attention based neural network to other languages. 
Acknowledgements Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).
Unanswerable
0ee20a3a343e1e251b74a804e9aa1393d17b46d6
0ee20a3a343e1e251b74a804e9aa1393d17b46d6_0
Q: How can the classifier facilitate the annotation task for human annotators? Text: Introduction Digital media enables fast sharing of information, including various forms of false or deceptive information. Hence, besides bringing the obvious advantage of broadening information access for everyone, digital media can also be misused for campaigns that spread disinformation about specific events, or campaigns that are targeted at specific individuals or governments. Disinformation, in this case, refers to intentionally misleading content BIBREF0. A prominent case of a disinformation campaign is the efforts of the Russian government to control information during the Russia-Ukraine crisis BIBREF1. One of the most important events during the crisis was the crash of Malaysian Airlines (MH17) flight on July 17, 2014. The plane crashed on its way from Amsterdam to Kuala Lumpur over Ukrainian territory, causing the death of 298 civilians. The event immediately led to the circulation of competing narratives about who was responsible for the crash (see Section SECREF2), with the two most prominent narratives being that the plane was either shot down by the Ukrainian military, or by Russian separatists in Ukraine supported by the Russian government BIBREF2. The latter theory was confirmed by findings of an international investigation team. In this work, information that opposes these findings by promoting other theories about the crash is considered disinformation. When studying disinformation, however, it is important to acknowledge that our fact checkers (in this case the international investigation team) may be wrong, which is why we focus on both of the narratives in our study. MH17 is a highly important case in the context of international relations, because the tragedy has not only increased Western political pressure against Russia, but may also continue putting the government's global image at stake. In 2020, at least four individuals connected to the Russian separatist movement will face murder charges for their involvement in the MH17 crash BIBREF3, which is why one can expect the waves of disinformation about MH17 to continue spreading. The purpose of this work is to develop an approach that may help both practitioners and scholars of political science, international relations and political communication to detect and measure the scope of MH17-related disinformation. Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prevents the analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from non-credible sources as misleading (e.g. misinformation, disinformation or fake news). This method enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. 
However, the approach fails to address an important issue: Not all content from non-credible sources is necessarily misleading or false, and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11. In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter. Introduction ::: MH17 Related (Dis-)Information Flow on Twitter We focus our classification efforts on a Twitter dataset introduced in BIBREF4, that was collected to investigate the flow of MH17-related information on Twitter, focusing on the question who is distributing (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role. To arrive at this conclusion, the authors manually labeled a subset of the tweets in the dataset with pro-Russian/pro-Ukrainian frames and built a retweet network, which has Twitter users as nodes and edges between two nodes if a retweet occurred between the two associated users. An edge was considered as polarized (either pro-Russian or pro-Ukrainian) if at least one retweet between the two users connected by the edge was pro-Russian/pro-Ukrainian. Then, the amount of polarized edges between users with different profiles (e.g. citizen, journalist, state organ) was computed. Labeling more data via automatic classification (or computer-assisted annotation) of tweets could serve an analysis such as the one presented in BIBREF4 in two ways. First, more edges could be labeled. Second, edges could be labeled with higher precision, i.e. by taking more of the tweets contained in the edge into account. For example, one could decide to only label an edge as polarized if at least half of the retweets between the users were pro-Ukrainian/pro-Russian. Introduction ::: Contributions We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result, as we find that data augmentation in this particular case does not improve classification results. 
While our best neural classifier clearly outperforms a hashtag-based baseline, generating high quality predictions for the pro-Russian class is difficult: In order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available. Competing Narratives about the MH17 Crash We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane came down was controlled by pro-Russian separatists. Right after the crash, two main narratives were propagated: Western media claimed that the plane was shot down by pro-Russian separatists, whereas the Russian government claimed that the Ukrainian military was responsible. Two organisations were tasked with investigating the causes of the crash, the Dutch Safety Board (DSB) and the Dutch-led joint investigation team (JIT). Their final reports were released in October 2015 and September 2016, respectively, and conclude that the plane had been shot down by a missile launched by a BUK surface-to-air system. The BUK was stationed in an area controlled by pro-Russian separatists when the missile was launched, and had been transported there from Russia and returned to Russia after the incident. These findings are still denied by the Russian government. There are several other crash-related reports that are frequently mentioned throughout the dataset. One is a report by Almaz-Antey, the Russian company that manufactured the BUK, which rejects the DSB findings based on a mismatch of technical evidence. Several reports backing up the Dutch findings were released by the investigative journalism website Bellingcat. The crash also sparked the circulation of several alternative theories, many of them promoted in Russian media BIBREF2, e.g. that the plane was downed by Ukrainian SU25 military jets, that the plane attack was meant to hit Putin’s plane that was allegedly traveling the same route earlier that day, and that the bodies found in the plane had already been dead before the crash. Dataset For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016. BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. 
A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia or any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility for the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total amount of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider as duplicate a tweet with text that is identical to an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates. Classification Models For our classification experiments, we compare three classifiers, a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN). Classification Models ::: Hashtag-Based Baseline Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as $\mathrm {pmi}(hs, c) = \log \frac{P(hs, c)}{P(hs)P(c)}$. We then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet. Tweets without hashtags (5% of the tweets in the development set) or with multiple hashtags leading to conflicting predictions (5% of the tweets in the development set) are labeled randomly. We refer to this baseline as hs_pmi. Classification Models ::: Logistic Regression Classifier As a non-neural baseline we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia. Classification Models ::: Convolutional Neural Network Classifier As the neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer and a ReLU activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4. Experimental Setup We evaluate the classification models using 10-fold cross validation, i.e. we produce 10 different datasplits by randomly sampling 60% of the data for training, 20% for development and 20% for testing. For each fold, we train each of the models described in Section SECREF4 on the training set and measure performance on the test set. 
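A compact sketch of such a hashtag pmi baseline (a simplified, illustrative variant: the class names are placeholders, the handling of conflicting hashtags is omitted, and probabilities are plain maximum-likelihood estimates over the training tweets):

import math, random
from collections import Counter

def train_pmi(tweets, labels):
    """tweets: list of token lists; labels: parallel list of class labels.
    Returns a dict mapping (hashtag, class) to its pmi score."""
    n = float(len(tweets))
    hs_count, cls_count, joint = Counter(), Counter(labels), Counter()
    for toks, y in zip(tweets, labels):
        tags = {t for t in toks if t.startswith("#")}
        hs_count.update(tags)
        joint.update((t, y) for t in tags)
    return {(h, c): math.log((joint[(h, c)] / n) / ((hs_count[h] / n) * (cls_count[c] / n)))
            for (h, c) in joint}

def predict(toks, pmi, classes=("neutral", "pro_russian", "pro_ukrainian")):
    """Predict the class with the highest pmi score over the tweet's hashtags;
    tweets without (known) hashtags are labeled randomly."""
    tags = [t for t in toks if t.startswith("#")]
    scored = [(pmi[(t, c)], c) for t in tags for c in classes if (t, c) in pmi]
    return max(scored)[1] if scored else random.choice(list(classes))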
For the CNN and LogReg models, we upsample the training examples such that each class has as many instances as the largest class (Neutral). The final reported scores are averages over the 10 splits. Experimental Setup ::: Tweet Preprocessing Before embedding the tweets, we replace urls, retweet syntax (RT @user_name: ) and @mentions (@user_name) by placeholders. We lowercase all text and tokenize sentences using the StanfordNLP pipeline BIBREF31. If a tweet contains multiple sentences, these are concatenated. Finally, we remove all tokens that contain non-alphanumeric symbols (except for dashes and hashtags) and strip the hashtags from each token, in order to increase the number of words that are represented by a pre-trained word embedding. Experimental Setup ::: Evaluation Metrics We report performance as F1-score, the harmonic mean of precision and recall. As the class distribution is highly skewed and we are mainly interested in accurately classifying the classes with low support (pro-Russian and pro-Ukrainian), we report macro-averages over the classes. In addition to F1-scores, we report the area under the precision-recall curve (AUC). We compute an AUC score for each class by converting the classification task into a one-vs-all classification task. Results The results of our classification experiments are presented in Table TABREF21. Figure FIGREF22 shows the per-class precision-recall curves for the LogReg and CNN models as well as the confusion matrices between classes. Results ::: Comparison Between Models We observe that the hashtag baseline performs poorly and does not improve over the random baseline. The CNN classifier outperforms the baselines as well as the LogReg model. It shows the highest improvement over the LogReg for the pro-Russian class. Looking at the confusion matrices, we observe that for the LogReg model, the fraction of True Positives is equal between the pro-Russian and the pro-Ukrainian class. The CNN model produces a higher amount of correct predictions for the pro-Ukrainian than for the pro-Russian class. The absolute number of pro-Russian True Positives is lower for the CNN, but so, in return, is the number of misclassifications between the pro-Russian and pro-Ukrainian classes. Results ::: Per-Class Performance With respect to the per-class performance, we observe a similar trend across models, which is that the models perform best for the neutral class, whereas performance is lower for the pro-Ukrainian and pro-Russian classes. All models perform worst on the pro-Russian class, which might be due to the fact that it is the class with the fewest instances in the dataset. Considering these results, we conclude that the CNN is the best performing model and also the classifier that best serves our goals, as we want to produce accurate predictions for the pro-Russian and pro-Ukrainian classes without confusing between them. Even though the CNN can improve over the other models, the classification performance for the pro-Russian and pro-Ukrainian classes is rather low. One obvious reason for this might be the small amount of training data, in particular for the pro-Russian class. In the following, we briefly report a negative result on an attempt to combat the data sparseness with cross-lingual transfer. We then perform an error analysis on the CNN classifications to shed light on the difficulties of the task. 
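The macro-averaged F1 and per-class one-vs-all precision-recall AUC described above could be computed roughly as follows, using scikit-learn and average precision as the usual approximation of the area under the precision-recall curve (the class-name strings are illustrative):

import numpy as np
from sklearn.metrics import f1_score, average_precision_score
from sklearn.preprocessing import label_binarize

classes = ["neutral", "pro_russian", "pro_ukrainian"]

def evaluate(y_true, y_pred, y_scores):
    """y_true/y_pred: lists of class labels; y_scores: (n, 3) predicted class probabilities.
    Returns the macro F1 and a one-vs-all precision-recall AUC per class."""
    macro_f1 = f1_score(y_true, y_pred, labels=classes, average="macro")
    y_bin = label_binarize(y_true, classes=classes)   # (n, 3) one-vs-all targets
    pr_auc = {c: average_precision_score(y_bin[:, i], np.asarray(y_scores)[:, i])
              for i, c in enumerate(classes)}
    return macro_f1, pr_auc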
Data Augmentation Experiments using Cross-Lingual Transfer The annotations in the MH17 dataset are highly imbalanced, with as few as 512 annotated examples for the pro-Russian class. As the annotated examples were sampled from the dataset at random, we assume that there are only few tweets with pro-Russian stance in the dataset. This observation is in line with studies that showed that the amount of disinformation on Twitter is in fact small BIBREF6, BIBREF8. In order to find more pro-Russian training examples, we turn to a resource that we expect to contain large amounts of pro-Russian (dis)information. The Elections integrity dataset was released by Twitter in 2018 and contains the tweets and account information for 3,841 accounts that are believed to be Russian trolls financed by the Russian government. While most tweets posted after late 2014 are in English language and focus on topics around the US elections, the earlier tweets in the dataset are primarily in Russian language and focus on the Ukraine crisis BIBREF33. One feature of the dataset observed by BIBREF33 is that several hashtags show high peakedness BIBREF34, i.e. they are posted with high frequency but only during short intervals, while others persist over time. We find two hashtags in the Elections integrity dataset with high peakedness that were exclusively posted within 2 days after the MH17 crash and that seem to be pro-Russian in the context of responsibility for the MH17 crash: #КиевСкажиПравду (Kiev, tell the truth) and #Киевсбилбоинг (Kiev shot down the Boeing). We collect all tweets with these two hashtags, resulting in 9,809 Russian tweets that we try to use as additional training data for the pro-Russian class in the MH17 dataset. We experiment with cross-lingual transfer by embedding tweets via aligned English and Russian word embeddings. However, so far results for the cross-lingual models do not improve over the CNN model trained only on English data. This might be because the additional Russian tweets convey a general pro-Russian frame rather than specifically talking about the crash, but this needs further investigation. Error Analysis In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). 
We can identify three main cases for which the model produces an error: (I) the correct class can be directly inferred from the text content easily, even without background knowledge; (II) the correct class can be inferred from the text content, given that event-specific knowledge is provided; (III) the correct class can be inferred from the text content if the text is interpreted correctly. For the pro-Russian False Positives, we find that 42% of the errors are category I and category II errors, respectively, and 15% are category III errors. For the pro-Ukrainian False Positives, we find 48% category I errors, 33% category II errors and 13% category III errors. Table TABREF28 presents examples for each of the error categories in both sets, which we will discuss in the following. Error Analysis ::: Category I Errors Category I errors could easily be classified by humans following the annotation guidelines (see Section SECREF3). One difficulty can be seen in example f). Even though no background knowledge is needed to interpret the content, interpretation is difficult because of the convoluted syntax of the tweet. For the other examples it is unclear why the model would have difficulties with classifying them. Error Analysis ::: Category II Errors Category II errors can only be classified with event-specific background knowledge. Examples g), i) and k) relate to the theory that a Ukrainian SU25 fighter jet shot down the plane in air. Correct interpretation of these tweets depends on knowledge about the SU25 fighter jet. In order to correctly interpret example j) as pro-Russian, it has to be known that the bellingcat report is pro-Ukrainian. Example l) relates to the theory that the shoot down was a false flag operation run by Western countries and the bodies in the plane were already dead before the crash. In order to correctly interpret example m), the identity of Kolomoisky has to be known. He is an anti-separatist Ukrainian billionaire, hence his involvement points to the Ukrainian government being responsible for the crash. Error Analysis ::: Category III Errors Category III errors occur for examples that can only be classified by correctly interpreting the tweet authors' intention. Interpretation is difficult due to phenomena such as irony as in examples n) and o). While the irony is indicated in example n) through the use of the hashtag #LOL, there is no explicit indication in example o). Interpretation of example q) is conditioned on world knowledge as well as the understanding of the speaker's beliefs. Example r) is pro-Russian as it questions the validity of the assumption AC360 is making, but we only know that because we know that the assumption is absurd. Example s) requires evaluating that the speaker thinks people on site are trusted more than people at home. From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter-specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be addressed by applying reasoning BIBREF36 or irony detection methods BIBREF37. Integrating Automatic Predictions into the Retweet Network Finally, we apply the CNN classifier to label new edges in BIBREF4's retweet network, which is shown in Figure FIGREF35. The retweet network is a graph that contains users as nodes and an edge between two users if the users are retweeting each other. 
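A small sketch of how such a retweet network could be assembled with networkx (the tweet fields used here are illustrative and not the dataset's actual schema):

import networkx as nx

def build_retweet_network(tweets):
    """tweets: iterable of dicts with illustrative fields 'user',
    'retweeted_user' (None for original tweets) and 'tweet_id'."""
    g = nx.Graph()
    for t in tweets:
        if t.get("retweeted_user"):
            u, v = t["user"], t["retweeted_user"]
            g.add_edge(u, v)
            # keep the tweets behind each edge so edges can later be labeled as polarized
            g[u][v].setdefault("tweet_ids", []).append(t["tweet_id"])
    return g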
In order to track the flow of polarized information, BIBREF4 label an edge as polarized if at least one tweet contained in the edge was manually annotated as pro-Russian or pro-Ukrainian. While the network shows a clear polarization, only a small subset of the edges present in the network are labeled (see Table TABREF38). Automatic polarity prediction of tweets can help the analysis in two ways. Either, we can label a previously unlabeled edge, or we can verify/confirm the manual labeling of an edge, by labeling additional tweets that are comprised in the edge. Integrating Automatic Predictions into the Retweet Network ::: Predicting Polarized Edges In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class). Conclusion In this work, we investigated the usefulness of text classifiers to detect pro-Russian and pro-Ukrainian framing in tweets related to the MH17 crash, and to which extent classifier predictions can be relied on for producing high quality annotations. From our classification experiments, we conclude that the real-world applicability of text classifiers for labeling polarized tweets in a retweet network is restricted to pre-filtering tweets for manual annotation. However, if used as a filter, the classifier can significantly speed up the annotation process, making large-scale content analysis more feasible. Acknowledgements We thank the anonymous reviewers for their helpful comments. The research was carried out as part of the ‘Digital Disinformation’ project, which was directed by Rebecca Adler-Nissen and funded by the Carlsberg Foundation (project number CF16-0012).
quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets
f0e8f045e2e33a2129e67fb32f356242db1dc280
f0e8f045e2e33a2129e67fb32f356242db1dc280_0
Q: What recommendations are made to improve the performance in future? Text: Introduction Digital media enables fast sharing of information, including various forms of false or deceptive information. Hence, besides bringing the obvious advantage of broadening information access for everyone, digital media can also be misused for campaigns that spread disinformation about specific events, or campaigns that are targeted at specific individuals or governments. Disinformation, in this case, refers to intentionally misleading content BIBREF0. A prominent case of a disinformation campaign is the efforts of the Russian government to control information during the Russia-Ukraine crisis BIBREF1. One of the most important events during the crisis was the crash of Malaysian Airlines (MH17) flight on July 17, 2014. The plane crashed on its way from Amsterdam to Kuala Lumpur over Ukrainian territory, causing the death of 298 civilians. The event immediately led to the circulation of competing narratives about who was responsible for the crash (see Section SECREF2), with the two most prominent narratives being that the plane was either shot down by the Ukrainian military, or by Russian separatists in Ukraine supported by the Russian government BIBREF2. The latter theory was confirmed by findings of an international investigation team. In this work, information that opposes these findings by promoting other theories about the crash is considered disinformation. When studying disinformation, however, it is important to acknowledge that our fact checkers (in this case the international investigation team) may be wrong, which is why we focus on both of the narratives in our study. MH17 is a highly important case in the context of international relations, because the tragedy has not only increased Western political pressure against Russia, but may also continue putting the government's global image at stake. In 2020, at least four individuals connected to the Russian separatist movement will face murder charges for their involvement in the MH17 crash BIBREF3, which is why one can expect the waves of disinformation about MH17 to continue spreading. The purpose of this work is to develop an approach that may help both practitioners and scholars of political science, international relations and political communication to detect and measure the scope of MH17-related disinformation. Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prevents the analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from non-credible sources as misleading (e.g. misinformation, disinformation or fake news). This method enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. 
However, the approach fails to address an important issue: Not all content from non-credible sources is necessarily misleading or false, and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11. In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter. Introduction ::: MH17 Related (Dis-)Information Flow on Twitter We focus our classification efforts on a Twitter dataset introduced in BIBREF4, that was collected to investigate the flow of MH17-related information on Twitter, focusing on the question who is distributing (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role. To arrive at this conclusion, the authors manually labeled a subset of the tweets in the dataset with pro-Russian/pro-Ukrainian frames and built a retweet network, which has Twitter users as nodes and edges between two nodes if a retweet occurred between the two associated users. An edge was considered as polarized (either pro-Russian or pro-Ukrainian) if at least one retweet between the two users connected by the edge was pro-Russian/pro-Ukrainian. Then, the amount of polarized edges between users with different profiles (e.g. citizen, journalist, state organ) was computed. Labeling more data via automatic classification (or computer-assisted annotation) of tweets could serve an analysis such as the one presented in BIBREF4 in two ways. First, more edges could be labeled. Second, edges could be labeled with higher precision, i.e. by taking more of the tweets contained in the edge into account. For example, one could decide to only label an edge as polarized if at least half of the retweets between the users were pro-Ukrainian/pro-Russian. Introduction ::: Contributions We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result, as we find that data augmentation in this particular case does not improve classification results. 
While our best neural classifier clearly outperforms a hashtag-based baseline, generating high quality predictions for the pro-Russian class is difficult: In order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available. Competing Narratives about the MH17 Crash We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane fell down was controlled by pro-Russian separatists. Right after the crash, two main narratives were propagated: Western media claimed that the plane was shot down by pro-Russian separatists, whereas the Russian government claimed that the Ukrainian military was responsible. Two organisations were tasked with investigating the causes of the crash, the Dutch Safety Board (DSB) and the Dutch-led joint investigation team (JIT). Their final reports were released in October 2015 and September 2016, respectively, and conclude that the plane had been shot down by a missile launched by a BUK surface-to-air system. The BUK was stationed in an area controlled by pro-Russian separatists when the missile was launched, and had been transported there from Russia and returned to Russia after the incident. These findings are denied by the Russian government until now. There are several other crash-related reports that are frequently mentioned throughout the dataset. One is a report by Almaz-Antey, the Russian company that manufactured the BUK, which rejects the DSB findings based on mismatch of technical evidence. Several reports backing up the Dutch findings were released by the investigative journalism website Bellingcat. The crash also sparked the circulation of several alternative theories, many of them promoted in Russian media BIBREF2, e.g. that the plane was downed by Ukrainian SU25 military jets, that the plane attack was meant to hit Putin’s plane that was allegedly traveling the same route earlier that day, and that the bodies found in the plane had already been dead before the crash. Dataset For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016. BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. 
A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia nor any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility for the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total number of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider a tweet a duplicate if its text is identical to that of an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates. Classification Models For our classification experiments, we compare three classifiers: a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN). Classification Models ::: Hashtag-Based Baseline Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as $\mathrm{pmi}(hs, c) = \log \frac{p(hs, c)}{p(hs)\, p(c)}$. We then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet. Tweets without a hashtag (5% of the tweets in the development set) or with multiple hashtags leading to conflicting predictions (5% of the tweets in the development set) are labeled randomly. We refer to this baseline as hs_pmi. Classification Models ::: Logistic Regression Classifier As a non-neural baseline we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia. Classification Models ::: Convolutional Neural Network Classifier As the neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer, a ReLU activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4. Experimental Setup We evaluate the classification models using 10-fold cross validation, i.e. we produce 10 different datasplits by randomly sampling 60% of the data for training, 20% for development and 20% for testing. For each fold, we train each of the models described in Section SECREF4 on the training set and measure performance on the test set. 
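To make the CNN description above concrete, the following is a minimal, hypothetical Keras sketch of a model with this configuration (one 1d convolutional layer with 100 filters of width 4, a ReLU activation, one max pooling layer, softmax over the three classes). It is not the authors' implementation: the vocabulary size, sequence length, use of global max pooling and the frozen embedding layer are assumptions, and the embedding matrix would in practice be filled with the pre-trained fasttext vectors.

```python
import numpy as np
from tensorflow.keras import layers, models  # assuming TensorFlow/Keras 2.x

# Hypothetical sizes; real values depend on the tweet vocabulary and on the
# 300-dimensional fasttext Wikipedia embeddings used in the paper.
VOCAB_SIZE, EMB_DIM, MAX_LEN, NUM_CLASSES = 20000, 300, 50, 3
embedding_matrix = np.zeros((VOCAB_SIZE, EMB_DIM))  # would be filled with fasttext vectors

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMB_DIM, weights=[embedding_matrix],
                     input_length=MAX_LEN, trainable=False),
    layers.Conv1D(filters=100, kernel_size=4, activation="relu"),  # one conv layer, ReLU
    layers.GlobalMaxPooling1D(),  # "one max pooling layer"; global pooling is an assumption
    layers.Dense(NUM_CLASSES, activation="softmax"),  # pro-Russian / pro-Ukrainian / neutral
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```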
For the CNN and LogReg models, we upsample the training examples such that each class has as many instances as the largest class (Neutral). The final reported scores are averages over the 10 splits. Experimental Setup ::: Tweet Preprocessing Before embedding the tweets, we replace urls, retweet syntax (RT @user_name: ) and @mentions (@user_name) by placeholders. We lowercase all text and tokenize sentences using the StandfordNLP pipeline BIBREF31. If a tweet contains multiple sentences, these are concatenated. Finally, we remove all tokens that contain non-alphanumeric symbols (except for dashes and hashtags) and strip the hashtags from each token, in order to increase the number of words that are represented by a pre-trained word embedding. Experimental Setup ::: Evaluation Metrics We report performance as F1-scores, which is the harmonic mean between precision and recall. As the class distribution is highly skewed and we are mainly interested in accurately classifying the classes with low support (pro-Russian and pro-Ukrainian), we report macro-averages over the classes. In addition to F1-scores, we report the area under the precision-recall curve (AUC). We compute an AUC score for each class by converting the classification task into a one-vs-all classification task. Results The results of our classification experiments are presented in Table TABREF21. Figure FIGREF22 shows the per-class precision-recall curves for the LogReg and CNN models as well as the confusion matrices between classes. Results ::: Comparison Between Models We observe that the hashtag baseline performs poorly and does not improve over the random baseline. The CNN classifier outperforms the baselines as well as the LogReg model. It shows the highest improvement over the LogReg for the pro-Russian class. Looking at the confusion matrices, we observe that for the LogReg model, the fraction of True Positives is equal between the pro-Russian and the pro-Ukrainian class. The CNN model produces a higher amount of correct predictions for the pro-Ukrainian than for the pro-Russian class. The absolute number of pro-Russian True Positives is lower for the CNN, but so is in return the amount of misclassifications between the pro-Russian and pro-Ukrainian class. Results ::: Per-Class Performance With respect to the per class performance, we observe a similar trend across models, which is that the models perform best for the neutral class, whereas performance is lower for the pro-Ukrainian and pro-Russian classes. All models perform worst on the pro-Russian class, which might be due to the fact that it is the class with the fewest instances in the dataset. Considering these results, we conclude that the CNN is the best performing model and also the classifier that best serves our goals, as we want to produce accurate predictions for the pro-Russian and pro-Ukrainian class without confusing between them. Even though the CNN can improve over the other models, the classification performance for the pro-Russian and pro-Ukrainian class is rather low. One obvious reason for this might be the small amount of training data, in particular for the pro-Russian class. In the following, we briefly report a negative result on an attempt to combat the data sparseness with cross-lingual transfer. We then perform an error analysis on the CNN classifications to shed light on the difficulties of the task. 
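As an illustration of the preprocessing steps described above (placeholders for URLs, retweet syntax and @mentions, lowercasing, token filtering and hashtag stripping), here is a rough, hypothetical Python sketch. The placeholder strings and regular expressions are assumptions, and whitespace splitting stands in for the StanfordNLP tokenizer used in the paper.

```python
import re

URL_RE = re.compile(r"https?://\S+")
RT_RE = re.compile(r"^RT @\w+:\s*")
MENTION_RE = re.compile(r"@\w+")
# keep tokens made of alphanumerics, underscores, dashes and hashtags only
TOKEN_RE = re.compile(r"^[\w#-]+$")
PLACEHOLDERS = {"<url>", "<user>", "<retweet>"}  # hypothetical placeholder strings

def preprocess(tweet):
    text = RT_RE.sub("<retweet> ", tweet)
    text = URL_RE.sub("<url>", text)
    text = MENTION_RE.sub("<user>", text)
    text = text.lower()
    # the paper tokenizes with the StanfordNLP pipeline; whitespace split is a stand-in
    tokens = [t for t in text.split() if t in PLACEHOLDERS or TOKEN_RE.match(t)]
    return [t.lstrip("#") for t in tokens]  # strip the hashtag symbol from each token

# e.g. preprocess("RT @user: MH17 report https://t.co/x #MH17")
# -> ['<retweet>', 'mh17', 'report', '<url>', 'mh17']
```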
Data Augmentation Experiments using Cross-Lingual Transfer The annotations in the MH17 dataset are highly imbalanced, with as few as 512 annotated examples for the pro-Russian class. As the annotated examples were sampled from the dataset at random, we assume that there are only few tweets with pro-Russian stance in the dataset. This observation is in line with studies that showed that the amount of disinformation on Twitter is in fact small BIBREF6, BIBREF8. In order to find more pro-Russian training examples, we turn to a resource that we expect to contain large amounts of pro-Russian (dis)information. The Elections integrity dataset was released by Twitter in 2018 and contains the tweets and account information for 3,841 accounts that are believed to be Russian trolls financed by the Russian government. While most tweets posted after late 2014 are in English language and focus on topics around the US elections, the earlier tweets in the dataset are primarily in Russian language and focus on the Ukraine crisis BIBREF33. One feature of the dataset observed by BIBREF33 is that several hashtags show high peakedness BIBREF34, i.e. they are posted with high frequency but only during short intervals, while others are persistent during time. We find two hashtags in the Elections integrity dataset with high peakedness that were exclusively posted within 2 days after the MH17 crash and that seem to be pro-Russian in the context of responsibility for the MH17 crash: russian #КиевСкажиПравду (Kiew tell the truth) and russian #Киевсбилбоинг (Kiew made the plane go down). We collect all tweets with these two hashtags, resulting in 9,809 Russian tweets that we try to use as additional training data for the pro-Russian class in the MH17 dataset. We experiment with cross-lingual transfer by embedding tweets via aligned English and Russian word embeddings. However, so far results for the cross-lingual models do not improve over the CNN model trained on only English data. This might be due to the fact that the additional Russian tweets rather contain a general pro-Russian frame than specifically talking about the crash, but needs further investigation. Error Analysis In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). 
We can identify three main cases for which the model produces an error: (I) the correct class can be directly inferred from the text content easily, even without background knowledge; (II) the correct class can be inferred from the text content, given that event-specific knowledge is provided; (III) the correct class can be inferred from the text content if the text is interpreted correctly. For the pro-Russian False Positives, we find that 42% of the errors are category I errors, 42% are category II errors, and 15% are category III errors. For the pro-Ukrainian False Positives, we find 48% category I errors, 33% category II errors and 13% category III errors. Table TABREF28 presents examples for each of the error categories in both sets, which we will discuss in the following. Error Analysis ::: Category I Errors Category I errors could easily be classified by humans following the annotation guidelines (see Section SECREF3). One difficulty can be seen in example f). Even though no background knowledge is needed to interpret the content, interpretation is difficult because of the convoluted syntax of the tweet. For the other examples it is unclear why the model would have difficulties with classifying them. Error Analysis ::: Category II Errors Category II errors can only be classified with event-specific background knowledge. Examples g), i) and k) relate to the theory that a Ukrainian SU25 fighter jet shot down the plane in the air. Correct interpretation of these tweets depends on knowledge about the SU25 fighter jet. In order to correctly interpret example j) as pro-Russian, it has to be known that the Bellingcat report is pro-Ukrainian. Example l) relates to the theory that the shoot-down was a false flag operation run by Western countries and the bodies in the plane were already dead before the crash. In order to correctly interpret example m), the identity of Kolomoisky has to be known. He is an anti-separatist Ukrainian billionaire, hence his involvement points to the Ukrainian government being responsible for the crash. Error Analysis ::: Category III Errors Category III errors occur for examples that can only be classified by correctly interpreting the tweet authors' intention. Interpretation is difficult due to phenomena such as irony, as in examples n) and o). While the irony is indicated in example n) through the use of the hashtag #LOL, there is no explicit indication in example o). Interpretation of example q) is conditioned on world knowledge as well as the understanding of the speaker's beliefs. Example r) is pro-Russian as it questions the validity of the assumption AC360 is making, but we only know that because we know that the assumption is absurd. Example s) requires evaluating that the speaker thinks people on site are trusted more than people at home. From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter-specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37. Integrating Automatic Predictions into the Retweet Network Finally, we apply the CNN classifier to label new edges in BIBREF4's retweet network, which is shown in Figure FIGREF35. The retweet network is a graph that contains users as nodes and an edge between two users if the users are retweeting each other. 
In order to track the flow of polarized information, BIBREF4 label an edge as polarized if at least one tweet contained in the edge was manually annotated as pro-Russian or pro-Ukrainian. While the network shows a clear polarization, only a small subset of the edges present in the network are labeled (see Table TABREF38). Automatic polarity prediction of tweets can help the analysis in two ways. Either, we can label a previously unlabeled edge, or we can verify/confirm the manual labeling of an edge, by labeling additional tweets that are comprised in the edge. Integrating Automatic Predictions into the Retweet Network ::: Predicting Polarized Edges In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class). Conclusion In this work, we investigated the usefulness of text classifiers to detect pro-Russian and pro-Ukrainian framing in tweets related to the MH17 crash, and to which extent classifier predictions can be relied on for producing high quality annotations. From our classification experiments, we conclude that the real-world applicability of text classifiers for labeling polarized tweets in a retweet network is restricted to pre-filtering tweets for manual annotation. However, if used as a filter, the classifier can significantly speed up the annotation process, making large-scale content analysis more feasible. Acknowledgements We thank the anonymous reviewers for their helpful comments. The research was carried out as part of the ‘Digital Disinformation’ project, which was directed by Rebecca Adler-Nissen and funded by the Carlsberg Foundation (project number CF16-0012).
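The thresholding step mentioned above for predicting polarized edges (choosing a probability threshold so that the classifier reaches 80% precision on the test splits) can be sketched with scikit-learn's precision-recall curve. This is only an illustration under assumed inputs; how thresholds are aggregated across the 10 splits is not specified here.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def threshold_for_precision(y_true_binary, y_scores, target_precision=0.80):
    """Pick a probability threshold at which precision reaches the target
    in a one-vs-all setting (e.g. pro-Russian vs. rest)."""
    precision, recall, thresholds = precision_recall_curve(y_true_binary, y_scores)
    # precision/recall have one more entry than thresholds; drop the final point
    candidates = np.where(precision[:-1] >= target_precision)[0]
    if candidates.size == 0:
        return None, 0.0  # the target precision is not reachable on this split
    i = candidates[0]
    return thresholds[i], recall[i]  # e.g. roughly 0.80 precision at 0.23 recall in the paper
```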
applying reasoning BIBREF36 or irony detection methods BIBREF37
b6c235d5986914b380c084d9535a7b01310c0278
b6c235d5986914b380c084d9535a7b01310c0278_0
Q: What type of errors do the classifiers use? Text: Introduction Digital media enables fast sharing of information, including various forms of false or deceptive information. Hence, besides bringing the obvious advantage of broadening information access for everyone, digital media can also be misused for campaigns that spread disinformation about specific events, or campaigns that are targeted at specific individuals or governments. Disinformation, in this case, refers to intentionally misleading content BIBREF0. A prominent case of a disinformation campaign are the efforts of the Russian government to control information during the Russia-Ukraine crisis BIBREF1. One of the most important events during the crisis was the crash of Malaysian Airlines (MH17) flight on July 17, 2014. The plane crashed on its way from Amsterdam to Kuala Lumpur over Ukrainian territory, causing the death of 298 civilians. The event immediately led to the circulation of competing narratives about who was responsible for the crash (see Section SECREF2), with the two most prominent narratives being that the plane was either shot down by the Ukrainian military, or by Russian separatists in Ukraine supported by the Russian government BIBREF2. The latter theory was confirmed by findings of an international investigation team. In this work, information that opposes these findings by promoting other theories about the crash is considered disinformation. When studying disinformation, however, it is important to acknowledge that our fact checkers (in this case the international investigation team) may be wrong, which is why we focus on both of the narratives in our study. MH17 is a highly important case in the context of international relations, because the tragedy has not only increased Western, political pressure against Russia, but may also continue putting the government's global image at stake. In 2020, at least four individuals connected to the Russian separatist movement will face murder charges for their involvement in the MH17 crash BIBREF3, which is why one can expect the waves of disinformation about MH17 to continue spreading. The purpose of this work is to develop an approach that may help both practitioners and scholars of political science, international relations and political communication to detect and measure the scope of MH17-related disinformation. Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prohibits analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from uncredible sources as misleading (e.g. misinformation, disinformation or fake news). This methods enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. 
However, the approach fails to address an important issue: Not all content from uncredible sources is necessarily misleading or false and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11. In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter. Introduction ::: MH17 Related (Dis-)Information Flow on Twitter We focus our classification efforts on a Twitter dataset introduced in BIBREF4, that was collected to investigate the flow of MH17-related information on Twitter, focusing on the question who is distributing (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role. To arrive at this conclusion, the authors manually labeled a subset of the tweets in the dataset with pro-Russian/pro-Ukrainian frames and build a retweet network, which has Twitter users as nodes and edges between two nodes if a retweet occurred between the two associated users. An edge was considered as polarized (either pro-Russian or pro-Ukrainian), if at least one retweet between the two users connected by the edge was pro-Russian/pro-Ukrainian. Then, the amount of polarized edges between users with different profiles (e.g. citizen, journalist, state organ) was computed. Labeling more data via automatic classification (or computer-assisted annotation) of tweets could serve an analysis as the one presented in BIBREF4 in two ways. First, more edges could be labeled. Second, edges could be labeled with higher precision, i.e. by taking more tweets comprised by the edge into account. For example, one could decide to only label an edge as polarized if at least half of the retweets between the users were pro-Ukrainian/pro-Russian. Introduction ::: Contributions We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result as we find that data augmentation in this particular case does not improve classification results. 
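The retweet network and the polarized-edge criterion described above (users as nodes, one edge per retweeting pair, an edge counted as polarized if at least one retweet on it carries a pro-Russian or pro-Ukrainian frame) can be sketched as follows. This is a hypothetical networkx sketch; the input format of user pairs with frame labels is an assumption, not the authors' data structure.

```python
import networkx as nx

POLARIZED = {"pro_russian", "pro_ukrainian"}

def build_retweet_network(retweets):
    """retweets: iterable of (retweeting_user, original_author, frame_label) triples,
    where frame_label is 'pro_russian', 'pro_ukrainian', 'neutral' or None (unlabeled)."""
    g = nx.Graph()
    for user_a, user_b, label in retweets:
        if not g.has_edge(user_a, user_b):
            g.add_edge(user_a, user_b, labels=set())
        if label is not None:
            g[user_a][user_b]["labels"].add(label)
    return g

def polarized_edges(g):
    # an edge is polarized if at least one retweet on it carries a polarized frame
    return [(u, v) for u, v, d in g.edges(data=True) if d["labels"] & POLARIZED]
```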
While our best neural classifier clearly outperforms a hashtag-based baseline, generating high quality predictions for the pro-Russian class is difficult: In order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available. Competing Narratives about the MH17 Crash We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane fell down was controlled by pro-Russian separatists. Right after the crash, two main narratives were propagated: Western media claimed that the plane was shot down by pro-Russian separatists, whereas the Russian government claimed that the Ukrainian military was responsible. Two organisations were tasked with investigating the causes of the crash, the Dutch Safety Board (DSB) and the Dutch-led joint investigation team (JIT). Their final reports were released in October 2015 and September 2016, respectively, and conclude that the plane had been shot down by a missile launched by a BUK surface-to-air system. The BUK was stationed in an area controlled by pro-Russian separatists when the missile was launched, and had been transported there from Russia and returned to Russia after the incident. These findings are denied by the Russian government until now. There are several other crash-related reports that are frequently mentioned throughout the dataset. One is a report by Almaz-Antey, the Russian company that manufactured the BUK, which rejects the DSB findings based on mismatch of technical evidence. Several reports backing up the Dutch findings were released by the investigative journalism website Bellingcat. The crash also sparked the circulation of several alternative theories, many of them promoted in Russian media BIBREF2, e.g. that the plane was downed by Ukrainian SU25 military jets, that the plane attack was meant to hit Putin’s plane that was allegedly traveling the same route earlier that day, and that the bodies found in the plane had already been dead before the crash. Dataset For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016. BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. 
A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia nor any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility for the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total number of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider a tweet a duplicate if its text is identical to that of an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates. Classification Models For our classification experiments, we compare three classifiers: a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN). Classification Models ::: Hashtag-Based Baseline Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as $\mathrm{pmi}(hs, c) = \log \frac{p(hs, c)}{p(hs)\, p(c)}$. We then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet. Tweets without a hashtag (5% of the tweets in the development set) or with multiple hashtags leading to conflicting predictions (5% of the tweets in the development set) are labeled randomly. We refer to this baseline as hs_pmi. Classification Models ::: Logistic Regression Classifier As a non-neural baseline we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia. Classification Models ::: Convolutional Neural Network Classifier As the neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer, a ReLU activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4. Experimental Setup We evaluate the classification models using 10-fold cross validation, i.e. we produce 10 different datasplits by randomly sampling 60% of the data for training, 20% for development and 20% for testing. For each fold, we train each of the models described in Section SECREF4 on the training set and measure performance on the test set. 
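A minimal sketch of the hs_pmi baseline described above: pmi scores are estimated from the annotated training tweets, each hashtag votes for its highest-scoring class, and tweets without hashtags or with conflicting hashtag votes are labeled randomly. Function names, the input format and the handling of unseen hashtags are assumptions.

```python
import math
import random
from collections import Counter

CLASSES = ["pro_russian", "pro_ukrainian", "neutral"]

def pmi_scores(train_tweets):
    """train_tweets: list of (hashtags, label) pairs; returns {(hashtag, class): pmi}."""
    n = len(train_tweets)
    hs_counts, cls_counts, joint = Counter(), Counter(), Counter()
    for hashtags, label in train_tweets:
        cls_counts[label] += 1
        for hs in set(hashtags):
            hs_counts[hs] += 1
            joint[(hs, label)] += 1
    return {(hs, c): math.log((n_joint / n) / ((hs_counts[hs] / n) * (cls_counts[c] / n)))
            for (hs, c), n_joint in joint.items()}

def predict_hs_pmi(hashtags, scores):
    votes = []
    for hs in hashtags:
        cand = {c: scores[(hs, c)] for c in CLASSES if (hs, c) in scores}
        if cand:
            votes.append(max(cand, key=cand.get))
    if len(set(votes)) != 1:           # no hashtag, only unseen hashtags, or conflicting votes
        return random.choice(CLASSES)  # labeled randomly, as described above
    return votes[0]
```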
For the CNN and LogReg models, we upsample the training examples such that each class has as many instances as the largest class (Neutral). The final reported scores are averages over the 10 splits. Experimental Setup ::: Tweet Preprocessing Before embedding the tweets, we replace urls, retweet syntax (RT @user_name: ) and @mentions (@user_name) by placeholders. We lowercase all text and tokenize sentences using the StandfordNLP pipeline BIBREF31. If a tweet contains multiple sentences, these are concatenated. Finally, we remove all tokens that contain non-alphanumeric symbols (except for dashes and hashtags) and strip the hashtags from each token, in order to increase the number of words that are represented by a pre-trained word embedding. Experimental Setup ::: Evaluation Metrics We report performance as F1-scores, which is the harmonic mean between precision and recall. As the class distribution is highly skewed and we are mainly interested in accurately classifying the classes with low support (pro-Russian and pro-Ukrainian), we report macro-averages over the classes. In addition to F1-scores, we report the area under the precision-recall curve (AUC). We compute an AUC score for each class by converting the classification task into a one-vs-all classification task. Results The results of our classification experiments are presented in Table TABREF21. Figure FIGREF22 shows the per-class precision-recall curves for the LogReg and CNN models as well as the confusion matrices between classes. Results ::: Comparison Between Models We observe that the hashtag baseline performs poorly and does not improve over the random baseline. The CNN classifier outperforms the baselines as well as the LogReg model. It shows the highest improvement over the LogReg for the pro-Russian class. Looking at the confusion matrices, we observe that for the LogReg model, the fraction of True Positives is equal between the pro-Russian and the pro-Ukrainian class. The CNN model produces a higher amount of correct predictions for the pro-Ukrainian than for the pro-Russian class. The absolute number of pro-Russian True Positives is lower for the CNN, but so is in return the amount of misclassifications between the pro-Russian and pro-Ukrainian class. Results ::: Per-Class Performance With respect to the per class performance, we observe a similar trend across models, which is that the models perform best for the neutral class, whereas performance is lower for the pro-Ukrainian and pro-Russian classes. All models perform worst on the pro-Russian class, which might be due to the fact that it is the class with the fewest instances in the dataset. Considering these results, we conclude that the CNN is the best performing model and also the classifier that best serves our goals, as we want to produce accurate predictions for the pro-Russian and pro-Ukrainian class without confusing between them. Even though the CNN can improve over the other models, the classification performance for the pro-Russian and pro-Ukrainian class is rather low. One obvious reason for this might be the small amount of training data, in particular for the pro-Russian class. In the following, we briefly report a negative result on an attempt to combat the data sparseness with cross-lingual transfer. We then perform an error analysis on the CNN classifications to shed light on the difficulties of the task. 
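The evaluation described above (macro-averaged F1 plus a per-class area under the precision-recall curve in a one-vs-all setting) can be sketched with scikit-learn as follows; using average precision as the summary of the precision-recall curve is an assumption, since the paper does not state which estimator it uses.

```python
import numpy as np
from sklearn.metrics import f1_score, average_precision_score

def evaluate(y_true, y_pred, y_prob, n_classes=3):
    """y_true, y_pred: arrays of class indices; y_prob: (n_samples, n_classes) probabilities."""
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    pr_auc = {}
    for c in range(n_classes):
        y_binary = (np.asarray(y_true) == c).astype(int)  # one-vs-all conversion
        # average precision as a summary of the precision-recall curve (an assumption)
        pr_auc[c] = average_precision_score(y_binary, np.asarray(y_prob)[:, c])
    return macro_f1, pr_auc
```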
Data Augmentation Experiments using Cross-Lingual Transfer The annotations in the MH17 dataset are highly imbalanced, with as few as 512 annotated examples for the pro-Russian class. As the annotated examples were sampled from the dataset at random, we assume that there are only few tweets with pro-Russian stance in the dataset. This observation is in line with studies that showed that the amount of disinformation on Twitter is in fact small BIBREF6, BIBREF8. In order to find more pro-Russian training examples, we turn to a resource that we expect to contain large amounts of pro-Russian (dis)information. The Elections integrity dataset was released by Twitter in 2018 and contains the tweets and account information for 3,841 accounts that are believed to be Russian trolls financed by the Russian government. While most tweets posted after late 2014 are in English language and focus on topics around the US elections, the earlier tweets in the dataset are primarily in Russian language and focus on the Ukraine crisis BIBREF33. One feature of the dataset observed by BIBREF33 is that several hashtags show high peakedness BIBREF34, i.e. they are posted with high frequency but only during short intervals, while others are persistent during time. We find two hashtags in the Elections integrity dataset with high peakedness that were exclusively posted within 2 days after the MH17 crash and that seem to be pro-Russian in the context of responsibility for the MH17 crash: russian #КиевСкажиПравду (Kiew tell the truth) and russian #Киевсбилбоинг (Kiew made the plane go down). We collect all tweets with these two hashtags, resulting in 9,809 Russian tweets that we try to use as additional training data for the pro-Russian class in the MH17 dataset. We experiment with cross-lingual transfer by embedding tweets via aligned English and Russian word embeddings. However, so far results for the cross-lingual models do not improve over the CNN model trained on only English data. This might be due to the fact that the additional Russian tweets rather contain a general pro-Russian frame than specifically talking about the crash, but needs further investigation. Error Analysis In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). 
We can identify three main cases for which the model produces an error: (I) the correct class can be directly inferred from the text content easily, even without background knowledge; (II) the correct class can be inferred from the text content, given that event-specific knowledge is provided; (III) the correct class can be inferred from the text content if the text is interpreted correctly. For the pro-Russian False Positives, we find that 42% of the errors are category I errors, 42% are category II errors, and 15% are category III errors. For the pro-Ukrainian False Positives, we find 48% category I errors, 33% category II errors and 13% category III errors. Table TABREF28 presents examples for each of the error categories in both sets, which we will discuss in the following. Error Analysis ::: Category I Errors Category I errors could easily be classified by humans following the annotation guidelines (see Section SECREF3). One difficulty can be seen in example f). Even though no background knowledge is needed to interpret the content, interpretation is difficult because of the convoluted syntax of the tweet. For the other examples it is unclear why the model would have difficulties with classifying them. Error Analysis ::: Category II Errors Category II errors can only be classified with event-specific background knowledge. Examples g), i) and k) relate to the theory that a Ukrainian SU25 fighter jet shot down the plane in the air. Correct interpretation of these tweets depends on knowledge about the SU25 fighter jet. In order to correctly interpret example j) as pro-Russian, it has to be known that the Bellingcat report is pro-Ukrainian. Example l) relates to the theory that the shoot-down was a false flag operation run by Western countries and the bodies in the plane were already dead before the crash. In order to correctly interpret example m), the identity of Kolomoisky has to be known. He is an anti-separatist Ukrainian billionaire, hence his involvement points to the Ukrainian government being responsible for the crash. Error Analysis ::: Category III Errors Category III errors occur for examples that can only be classified by correctly interpreting the tweet authors' intention. Interpretation is difficult due to phenomena such as irony, as in examples n) and o). While the irony is indicated in example n) through the use of the hashtag #LOL, there is no explicit indication in example o). Interpretation of example q) is conditioned on world knowledge as well as the understanding of the speaker's beliefs. Example r) is pro-Russian as it questions the validity of the assumption AC360 is making, but we only know that because we know that the assumption is absurd. Example s) requires evaluating that the speaker thinks people on site are trusted more than people at home. From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter-specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37. Integrating Automatic Predictions into the Retweet Network Finally, we apply the CNN classifier to label new edges in BIBREF4's retweet network, which is shown in Figure FIGREF35. The retweet network is a graph that contains users as nodes and an edge between two users if the users are retweeting each other. 
In order to track the flow of polarized information, BIBREF4 label an edge as polarized if at least one tweet contained in the edge was manually annotated as pro-Russian or pro-Ukrainian. While the network shows a clear polarization, only a small subset of the edges present in the network are labeled (see Table TABREF38). Automatic polarity prediction of tweets can help the analysis in two ways. Either, we can label a previously unlabeled edge, or we can verify/confirm the manual labeling of an edge, by labeling additional tweets that are comprised in the edge. Integrating Automatic Predictions into the Retweet Network ::: Predicting Polarized Edges In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class). Conclusion In this work, we investigated the usefulness of text classifiers to detect pro-Russian and pro-Ukrainian framing in tweets related to the MH17 crash, and to which extent classifier predictions can be relied on for producing high quality annotations. From our classification experiments, we conclude that the real-world applicability of text classifiers for labeling polarized tweets in a retweet network is restricted to pre-filtering tweets for manual annotation. However, if used as a filter, the classifier can significantly speed up the annotation process, making large-scale content analysis more feasible. Acknowledgements We thank the anonymous reviewers for their helpful comments. The research was carried out as part of the ‘Digital Disinformation’ project, which was directed by Rebecca Adler-Nissen and funded by the Carlsberg Foundation (project number CF16-0012).
correct class can be directly inferred from the text content easily, even without background knowledge; correct class can be inferred from the text content, given that event-specific knowledge is provided; correct class can be inferred from the text content if the text is interpreted correctly
e9b1e8e575809f7b80b1125305cfa76ae4f5bdfb
e9b1e8e575809f7b80b1125305cfa76ae4f5bdfb_0
Q: What neural classifiers are used? Text: Introduction Digital media enables fast sharing of information, including various forms of false or deceptive information. Hence, besides bringing the obvious advantage of broadening information access for everyone, digital media can also be misused for campaigns that spread disinformation about specific events, or campaigns that are targeted at specific individuals or governments. Disinformation, in this case, refers to intentionally misleading content BIBREF0. A prominent case of a disinformation campaign are the efforts of the Russian government to control information during the Russia-Ukraine crisis BIBREF1. One of the most important events during the crisis was the crash of Malaysian Airlines (MH17) flight on July 17, 2014. The plane crashed on its way from Amsterdam to Kuala Lumpur over Ukrainian territory, causing the death of 298 civilians. The event immediately led to the circulation of competing narratives about who was responsible for the crash (see Section SECREF2), with the two most prominent narratives being that the plane was either shot down by the Ukrainian military, or by Russian separatists in Ukraine supported by the Russian government BIBREF2. The latter theory was confirmed by findings of an international investigation team. In this work, information that opposes these findings by promoting other theories about the crash is considered disinformation. When studying disinformation, however, it is important to acknowledge that our fact checkers (in this case the international investigation team) may be wrong, which is why we focus on both of the narratives in our study. MH17 is a highly important case in the context of international relations, because the tragedy has not only increased Western, political pressure against Russia, but may also continue putting the government's global image at stake. In 2020, at least four individuals connected to the Russian separatist movement will face murder charges for their involvement in the MH17 crash BIBREF3, which is why one can expect the waves of disinformation about MH17 to continue spreading. The purpose of this work is to develop an approach that may help both practitioners and scholars of political science, international relations and political communication to detect and measure the scope of MH17-related disinformation. Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prohibits analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from uncredible sources as misleading (e.g. misinformation, disinformation or fake news). This methods enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. 
However, the approach fails to address an important issue: Not all content from uncredible sources is necessarily misleading or false and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11. In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter. Introduction ::: MH17 Related (Dis-)Information Flow on Twitter We focus our classification efforts on a Twitter dataset introduced in BIBREF4, that was collected to investigate the flow of MH17-related information on Twitter, focusing on the question who is distributing (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role. To arrive at this conclusion, the authors manually labeled a subset of the tweets in the dataset with pro-Russian/pro-Ukrainian frames and build a retweet network, which has Twitter users as nodes and edges between two nodes if a retweet occurred between the two associated users. An edge was considered as polarized (either pro-Russian or pro-Ukrainian), if at least one retweet between the two users connected by the edge was pro-Russian/pro-Ukrainian. Then, the amount of polarized edges between users with different profiles (e.g. citizen, journalist, state organ) was computed. Labeling more data via automatic classification (or computer-assisted annotation) of tweets could serve an analysis as the one presented in BIBREF4 in two ways. First, more edges could be labeled. Second, edges could be labeled with higher precision, i.e. by taking more tweets comprised by the edge into account. For example, one could decide to only label an edge as polarized if at least half of the retweets between the users were pro-Ukrainian/pro-Russian. Introduction ::: Contributions We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result as we find that data augmentation in this particular case does not improve classification results. 
While our best neural classifier clearly outperforms a hashtag-based baseline, generating high quality predictions for the pro-Russian class is difficult: In order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available. Competing Narratives about the MH17 Crash We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane fell down was controlled by pro-Russian separatists. Right after the crash, two main narratives were propagated: Western media claimed that the plane was shot down by pro-Russian separatists, whereas the Russian government claimed that the Ukrainian military was responsible. Two organisations were tasked with investigating the causes of the crash, the Dutch Safety Board (DSB) and the Dutch-led joint investigation team (JIT). Their final reports were released in October 2015 and September 2016, respectively, and conclude that the plane had been shot down by a missile launched by a BUK surface-to-air system. The BUK was stationed in an area controlled by pro-Russian separatists when the missile was launched, and had been transported there from Russia and returned to Russia after the incident. These findings are denied by the Russian government until now. There are several other crash-related reports that are frequently mentioned throughout the dataset. One is a report by Almaz-Antey, the Russian company that manufactured the BUK, which rejects the DSB findings based on mismatch of technical evidence. Several reports backing up the Dutch findings were released by the investigative journalism website Bellingcat. The crash also sparked the circulation of several alternative theories, many of them promoted in Russian media BIBREF2, e.g. that the plane was downed by Ukrainian SU25 military jets, that the plane attack was meant to hit Putin’s plane that was allegedly traveling the same route earlier that day, and that the bodies found in the plane had already been dead before the crash. Dataset For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016. BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. 
A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia nor any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility for the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total number of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider a tweet a duplicate if its text is identical to that of an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates. Classification Models For our classification experiments, we compare three classifiers: a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN). Classification Models ::: Hashtag-Based Baseline Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as $\mathrm{pmi}(hs, c) = \log \frac{p(hs, c)}{p(hs)\, p(c)}$. We then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet. Tweets without a hashtag (5% of the tweets in the development set) or with multiple hashtags leading to conflicting predictions (5% of the tweets in the development set) are labeled randomly. We refer to this baseline as hs_pmi. Classification Models ::: Logistic Regression Classifier As a non-neural baseline we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia. Classification Models ::: Convolutional Neural Network Classifier As the neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer, a ReLU activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4. Experimental Setup We evaluate the classification models using 10-fold cross validation, i.e. we produce 10 different datasplits by randomly sampling 60% of the data for training, 20% for development and 20% for testing. For each fold, we train each of the models described in Section SECREF4 on the training set and measure performance on the test set. 
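A minimal sketch of the logistic regression baseline described above, which represents a tweet as the average of its pre-trained fasttext word vectors. The handling of out-of-vocabulary tokens and the hyperparameters are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tweet_vector(tokens, embeddings, dim=300):
    """Average of the pre-trained fasttext vectors of all tokens in the tweet.
    Tokens without a vector are skipped (an assumption, not stated in the paper)."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_logreg(train_tokens, train_labels, embeddings):
    # train_tokens: list of token lists, train_labels: class indices (placeholders)
    features = np.vstack([tweet_vector(toks, embeddings) for toks in train_tokens])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, train_labels)
    return clf
```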
For the CNN and LogReg models, we upsample the training examples such that each class has as many instances as the largest class (Neutral). The final reported scores are averages over the 10 splits. Experimental Setup ::: Tweet Preprocessing Before embedding the tweets, we replace URLs, retweet syntax (RT @user_name: ) and @mentions (@user_name) by placeholders. We lowercase all text and tokenize sentences using the StanfordNLP pipeline BIBREF31. If a tweet contains multiple sentences, these are concatenated. Finally, we remove all tokens that contain non-alphanumeric symbols (except for dashes and hashtags) and strip the hashtags from each token, in order to increase the number of words that are represented by a pre-trained word embedding. Experimental Setup ::: Evaluation Metrics We report performance as F1-scores, i.e. the harmonic mean of precision and recall. As the class distribution is highly skewed and we are mainly interested in accurately classifying the classes with low support (pro-Russian and pro-Ukrainian), we report macro-averages over the classes. In addition to F1-scores, we report the area under the precision-recall curve (AUC). We compute an AUC score for each class by converting the classification task into a one-vs-all classification task. Results The results of our classification experiments are presented in Table TABREF21. Figure FIGREF22 shows the per-class precision-recall curves for the LogReg and CNN models as well as the confusion matrices between classes. Results ::: Comparison Between Models We observe that the hashtag baseline performs poorly and does not improve over the random baseline. The CNN classifier outperforms the baselines as well as the LogReg model. It shows the highest improvement over the LogReg for the pro-Russian class. Looking at the confusion matrices, we observe that for the LogReg model, the fraction of True Positives is equal between the pro-Russian and the pro-Ukrainian class. The CNN model produces a higher number of correct predictions for the pro-Ukrainian than for the pro-Russian class. The absolute number of pro-Russian True Positives is lower for the CNN, but so is the number of misclassifications between the pro-Russian and pro-Ukrainian classes. Results ::: Per-Class Performance With respect to the per-class performance, we observe a similar trend across models, which is that the models perform best for the neutral class, whereas performance is lower for the pro-Ukrainian and pro-Russian classes. All models perform worst on the pro-Russian class, which might be due to the fact that it is the class with the fewest instances in the dataset. Considering these results, we conclude that the CNN is the best performing model and also the classifier that best serves our goals, as we want to produce accurate predictions for the pro-Russian and pro-Ukrainian class without confusing the two. Even though the CNN can improve over the other models, the classification performance for the pro-Russian and pro-Ukrainian class is rather low. One obvious reason for this might be the small amount of training data, in particular for the pro-Russian class. In the following, we briefly report a negative result on an attempt to combat the data sparseness with cross-lingual transfer. We then perform an error analysis on the CNN classifications to shed light on the difficulties of the task.
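As a concrete illustration of the experimental setup described above, the class-balancing upsampling step might be implemented as follows. The random-duplication strategy and the fixed seed are our assumptions; the paper only states that the smaller classes are upsampled to the size of the largest class.

```python
import random

def upsample(tweets, labels, seed=0):
    """Duplicate examples of the smaller classes until every class has as many
    training instances as the largest class (here: the neutral class)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(tweets, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        out_x.extend(xs + [rng.choice(xs) for _ in range(target - len(xs))])
        out_y.extend([y] * target)
    return out_x, out_y
```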
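The tweet preprocessing steps can likewise be sketched with a few regular expressions. The placeholder strings and the whitespace tokenizer are illustrative stand-ins; the paper tokenizes with the StanfordNLP pipeline instead.

```python
import re

URL_RE = re.compile(r"https?://\S+")
RT_RE = re.compile(r"\bRT @\w+:")
MENTION_RE = re.compile(r"@\w+")
TOKEN_RE = re.compile(r"^[a-z0-9#-]+$")  # alphanumeric plus dashes and hashtags

def preprocess_tweet(text):
    """Replace URLs, retweet syntax and @mentions by placeholders, lowercase,
    tokenize, drop tokens with other non-alphanumeric symbols and strip '#'."""
    text = URL_RE.sub("<url>", text)
    text = RT_RE.sub("<retweet>", text)
    text = MENTION_RE.sub("<user>", text)
    tokens = []
    for tok in text.lower().split():  # stand-in for the StanfordNLP tokenizer
        if tok in ("<url>", "<retweet>", "<user>") or TOKEN_RE.match(tok):
            tokens.append(tok.lstrip("#"))  # stripping '#' keeps more words in the embedding vocabulary
    return tokens
```

For example, `preprocess_tweet("RT @user: Ukraine shot down #MH17 http://t.co/x")` would yield `['<retweet>', 'ukraine', 'shot', 'down', 'mh17', '<url>']` under these assumptions.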
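The evaluation metrics (macro-averaged F1 and the per-class area under the precision-recall curve, computed one-vs-all) can be obtained with scikit-learn roughly as follows; the class names and the shape of the score matrix are assumptions.

```python
import numpy as np
from sklearn.metrics import f1_score, precision_recall_curve, auc

def evaluate(y_true, y_pred, y_scores, classes=("neutral", "pro-russian", "pro-ukrainian")):
    """Return macro F1 over all classes and one-vs-all PR-AUC per class.

    `y_scores` is an (n_samples, n_classes) array of predicted class probabilities,
    with columns in the order given by `classes`.
    """
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    pr_auc = {}
    for i, c in enumerate(classes):
        binary_true = np.asarray([1 if y == c else 0 for y in y_true])
        precision, recall, _ = precision_recall_curve(binary_true, y_scores[:, i])
        pr_auc[c] = auc(recall, precision)
    return macro_f1, pr_auc
```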
Data Augmentation Experiments using Cross-Lingual Transfer The annotations in the MH17 dataset are highly imbalanced, with as few as 512 annotated examples for the pro-Russian class. As the annotated examples were sampled from the dataset at random, we assume that there are only few tweets with pro-Russian stance in the dataset. This observation is in line with studies that showed that the amount of disinformation on Twitter is in fact small BIBREF6, BIBREF8. In order to find more pro-Russian training examples, we turn to a resource that we expect to contain large amounts of pro-Russian (dis)information. The Elections integrity dataset was released by Twitter in 2018 and contains the tweets and account information for 3,841 accounts that are believed to be Russian trolls financed by the Russian government. While most tweets posted after late 2014 are in English and focus on topics around the US elections, the earlier tweets in the dataset are primarily in Russian and focus on the Ukraine crisis BIBREF33. One feature of the dataset observed by BIBREF33 is that several hashtags show high peakedness BIBREF34, i.e. they are posted with high frequency but only during short intervals, while others persist over time. We find two hashtags in the Elections integrity dataset with high peakedness that were exclusively posted within 2 days after the MH17 crash and that seem to be pro-Russian in the context of responsibility for the MH17 crash: #КиевСкажиПравду (Kiev tell the truth) and #Киевсбилбоинг (Kiev made the plane go down). We collect all tweets with these two hashtags, resulting in 9,809 Russian tweets that we try to use as additional training data for the pro-Russian class in the MH17 dataset. We experiment with cross-lingual transfer by embedding tweets via aligned English and Russian word embeddings. However, so far results for the cross-lingual models do not improve over the CNN model trained on only English data. This might be because the additional Russian tweets rather express a general pro-Russian frame than talk specifically about the crash, but this needs further investigation. Error Analysis In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high-precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives).
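A minimal sketch of the cross-lingual augmentation idea is given below: each tweet is embedded as the average of aligned monolingual vectors, so English training tweets and the additional Russian tweets live in a shared space. The use of publicly available aligned English-Russian fastText vectors and the reuse of the averaged-embedding representation are our assumptions; the paper only states that tweets were embedded via aligned English and Russian word embeddings.

```python
import numpy as np

def embed_tweet(tokens, aligned_vectors, dim=300):
    """Average aligned word vectors; `aligned_vectors` maps word -> vector in the shared space."""
    vecs = [aligned_vectors[t] for t in tokens if t in aligned_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def build_augmented_training_set(en_tweets, en_labels, ru_tweets, en_vecs, ru_vecs):
    """Combine English training tweets with the Russian hashtag-collected tweets,
    which are all treated as additional pro-Russian examples."""
    X = [embed_tweet(t, en_vecs) for t in en_tweets] + \
        [embed_tweet(t, ru_vecs) for t in ru_tweets]
    y = list(en_labels) + ["pro-russian"] * len(ru_tweets)
    return np.stack(X), np.asarray(y)
```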
We can identify three main cases for which the model produces an error: (I) the correct class can be directly and easily inferred from the text content, even without background knowledge; (II) the correct class can be inferred from the text content, given that event-specific knowledge is provided; (III) the correct class can be inferred from the text content if the text is interpreted correctly. For the pro-Russian False Positives, we find that category I and category II errors each account for 42% of the errors, and category III errors for 15%. For the pro-Ukrainian False Positives, we find 48% category I errors, 33% category II errors and 13% category III errors. Table TABREF28 presents examples for each of the error categories in both sets which we will discuss in the following. Error Analysis ::: Category I Errors Category I errors could easily be classified by humans following the annotation guidelines (see Section SECREF3). One difficulty can be seen in example f). Even though no background knowledge is needed to interpret the content, interpretation is difficult because of the convoluted syntax of the tweet. For the other examples it is unclear why the model would have difficulties with classifying them. Error Analysis ::: Category II Errors Category II errors can only be classified with event-specific background knowledge. Examples g), i) and k) relate to the theory that a Ukrainian SU25 fighter jet shot down the plane in air. Correct interpretation of these tweets depends on knowledge about the SU25 fighter jet. In order to correctly interpret example j) as pro-Russian, it has to be known that the Bellingcat report is pro-Ukrainian. Example l) relates to the theory that the shoot-down was a false flag operation run by Western countries and the bodies in the plane were already dead before the crash. In order to correctly interpret example m), the identity of Kolomoisky has to be known. He is an anti-separatist Ukrainian billionaire, hence his involvement points to the Ukrainian government being responsible for the crash. Error Analysis ::: Category III Errors Category III errors occur for examples that can only be classified by correctly interpreting the tweet authors' intention. Interpretation is difficult due to phenomena such as irony as in examples n) and o). While the irony is indicated in example n) through the use of the hashtag #LOL, there is no explicit indication in example o). Interpretation of example q) is conditioned on world knowledge as well as the understanding of the speaker's beliefs. Example r) is pro-Russian as it questions the validity of the assumption AC360 is making, but we only know that because we know that the assumption is absurd. Example s) requires recognizing that the speaker thinks people on site are trusted more than people at home. From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter-specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be addressed by applying reasoning BIBREF36 or irony detection methods BIBREF37. Integrating Automatic Predictions into the Retweet Network Finally, we apply the CNN classifier to label new edges in BIBREF4's retweet network, which is shown in Figure FIGREF35. The retweet network is a graph that contains users as nodes and an edge between two users if the users are retweeting each other.
In order to track the flow of polarized information, BIBREF4 label an edge as polarized if at least one tweet contained in the edge was manually annotated as pro-Russian or pro-Ukrainian. While the network shows a clear polarization, only a small subset of the edges present in the network are labeled (see Table TABREF38). Automatic polarity prediction of tweets can help the analysis in two ways. Either we can label a previously unlabeled edge, or we can verify/confirm the manual labeling of an edge by labeling additional tweets contained in the edge. Integrating Automatic Predictions into the Retweet Network ::: Predicting Polarized Edges In order to get high-precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the number of polarized edges we can predict at this precision level. Upon manual inspection, however, we find that the quality of the predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence, even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class). Conclusion In this work, we investigated the usefulness of text classifiers to detect pro-Russian and pro-Ukrainian framing in tweets related to the MH17 crash, and to which extent classifier predictions can be relied on for producing high-quality annotations. From our classification experiments, we conclude that the real-world applicability of text classifiers for labeling polarized tweets in a retweet network is restricted to pre-filtering tweets for manual annotation. However, if used as a filter, the classifier can significantly speed up the annotation process, making large-scale content analysis more feasible. Acknowledgements We thank the anonymous reviewers for their helpful comments. The research was carried out as part of the ‘Digital Disinformation’ project, which was directed by Rebecca Adler-Nissen and funded by the Carlsberg Foundation (project number CF16-0012).
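The precision-oriented thresholding described above can be sketched as follows. The function below picks, for one class, the lowest probability threshold whose development-set precision reaches the target, and then applies it to unlabeled tweets; function and variable names are ours, not the paper's.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def threshold_for_precision(dev_true_binary, dev_scores, target_precision=0.8):
    """Lowest score threshold whose precision on the dev set reaches the target."""
    precision, recall, thresholds = precision_recall_curve(dev_true_binary, dev_scores)
    # precision has one more entry than thresholds; align by dropping the final point
    for p, t in zip(precision[:-1], thresholds):
        if p >= target_precision:
            return t
    return None  # the target precision is never reached on the dev set

def high_precision_predictions(unlabeled_scores, threshold):
    """Indices of unlabeled tweets predicted for the class at the chosen precision level."""
    return np.where(np.asarray(unlabeled_scores) >= threshold)[0]
```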
convolutional neural network (CNN) BIBREF29
1e4450e23ec81fdd59821055f998fd9db0398b16
1e4450e23ec81fdd59821055f998fd9db0398b16_0
Q: What hashtags does the hashtag-based baseline use?
Unanswerable
02ce4c288df14a90a210cb39973c6ac0fb4cec59
02ce4c288df14a90a210cb39973c6ac0fb4cec59_0
Q: What languages are included in the dataset?
However, the approach fails to address an important issue: Not all content from uncredible sources is necessarily misleading or false and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11. In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter. Introduction ::: MH17 Related (Dis-)Information Flow on Twitter We focus our classification efforts on a Twitter dataset introduced in BIBREF4, that was collected to investigate the flow of MH17-related information on Twitter, focusing on the question who is distributing (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role. To arrive at this conclusion, the authors manually labeled a subset of the tweets in the dataset with pro-Russian/pro-Ukrainian frames and build a retweet network, which has Twitter users as nodes and edges between two nodes if a retweet occurred between the two associated users. An edge was considered as polarized (either pro-Russian or pro-Ukrainian), if at least one retweet between the two users connected by the edge was pro-Russian/pro-Ukrainian. Then, the amount of polarized edges between users with different profiles (e.g. citizen, journalist, state organ) was computed. Labeling more data via automatic classification (or computer-assisted annotation) of tweets could serve an analysis as the one presented in BIBREF4 in two ways. First, more edges could be labeled. Second, edges could be labeled with higher precision, i.e. by taking more tweets comprised by the edge into account. For example, one could decide to only label an edge as polarized if at least half of the retweets between the users were pro-Ukrainian/pro-Russian. Introduction ::: Contributions We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result as we find that data augmentation in this particular case does not improve classification results. 
While our best neural classifier clearly outperforms a hashtag-based baseline, generating high quality predictions for the pro-Russian class is difficult: In order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available. Competing Narratives about the MH17 Crash We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane fell down was controlled by pro-Russian separatists. Right after the crash, two main narratives were propagated: Western media claimed that the plane was shot down by pro-Russian separatists, whereas the Russian government claimed that the Ukrainian military was responsible. Two organisations were tasked with investigating the causes of the crash, the Dutch Safety Board (DSB) and the Dutch-led joint investigation team (JIT). Their final reports were released in October 2015 and September 2016, respectively, and conclude that the plane had been shot down by a missile launched by a BUK surface-to-air system. The BUK was stationed in an area controlled by pro-Russian separatists when the missile was launched, and had been transported there from Russia and returned to Russia after the incident. These findings are denied by the Russian government until now. There are several other crash-related reports that are frequently mentioned throughout the dataset. One is a report by Almaz-Antey, the Russian company that manufactured the BUK, which rejects the DSB findings based on mismatch of technical evidence. Several reports backing up the Dutch findings were released by the investigative journalism website Bellingcat. The crash also sparked the circulation of several alternative theories, many of them promoted in Russian media BIBREF2, e.g. that the plane was downed by Ukrainian SU25 military jets, that the plane attack was meant to hit Putin’s plane that was allegedly traveling the same route earlier that day, and that the bodies found in the plane had already been dead before the crash. Dataset For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016. BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. 
A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia or any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility to the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence the it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total amount of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider as duplicate a tweet with text that is identical to an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates. Classification Models For our classification experiments, we compare three classifiers, a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN). Classification Models ::: Hashtag-Based Baseline Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as We then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet. Tweets without hashtag (5% of the tweets in the development set) or with multiple hashtags leading to conflicting predictions (5% of the tweets in the development set) are labeled randomly. We refer to to this baseline as hs_pmi. Classification Models ::: Logistic Regression Classifier As non-neural baseline we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia. Classification Models ::: Convolutional Neural Network Classifier As neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer and a relu activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4. Experimental Setup We evaluate the classification models using 10-fold cross validation, i.e. we produce 10 different datasplits by randomly sampling 60% of the data for training, 20% for development and 20% for testing. For each fold, we train each of the models described in Section SECREF4 on the training set and measure performance on the test set. 
For the CNN and LogReg models, we upsample the training examples such that each class has as many instances as the largest class (Neutral). The final reported scores are averages over the 10 splits. Experimental Setup ::: Tweet Preprocessing Before embedding the tweets, we replace urls, retweet syntax (RT @user_name: ) and @mentions (@user_name) by placeholders. We lowercase all text and tokenize sentences using the StandfordNLP pipeline BIBREF31. If a tweet contains multiple sentences, these are concatenated. Finally, we remove all tokens that contain non-alphanumeric symbols (except for dashes and hashtags) and strip the hashtags from each token, in order to increase the number of words that are represented by a pre-trained word embedding. Experimental Setup ::: Evaluation Metrics We report performance as F1-scores, which is the harmonic mean between precision and recall. As the class distribution is highly skewed and we are mainly interested in accurately classifying the classes with low support (pro-Russian and pro-Ukrainian), we report macro-averages over the classes. In addition to F1-scores, we report the area under the precision-recall curve (AUC). We compute an AUC score for each class by converting the classification task into a one-vs-all classification task. Results The results of our classification experiments are presented in Table TABREF21. Figure FIGREF22 shows the per-class precision-recall curves for the LogReg and CNN models as well as the confusion matrices between classes. Results ::: Comparison Between Models We observe that the hashtag baseline performs poorly and does not improve over the random baseline. The CNN classifier outperforms the baselines as well as the LogReg model. It shows the highest improvement over the LogReg for the pro-Russian class. Looking at the confusion matrices, we observe that for the LogReg model, the fraction of True Positives is equal between the pro-Russian and the pro-Ukrainian class. The CNN model produces a higher amount of correct predictions for the pro-Ukrainian than for the pro-Russian class. The absolute number of pro-Russian True Positives is lower for the CNN, but so is in return the amount of misclassifications between the pro-Russian and pro-Ukrainian class. Results ::: Per-Class Performance With respect to the per class performance, we observe a similar trend across models, which is that the models perform best for the neutral class, whereas performance is lower for the pro-Ukrainian and pro-Russian classes. All models perform worst on the pro-Russian class, which might be due to the fact that it is the class with the fewest instances in the dataset. Considering these results, we conclude that the CNN is the best performing model and also the classifier that best serves our goals, as we want to produce accurate predictions for the pro-Russian and pro-Ukrainian class without confusing between them. Even though the CNN can improve over the other models, the classification performance for the pro-Russian and pro-Ukrainian class is rather low. One obvious reason for this might be the small amount of training data, in particular for the pro-Russian class. In the following, we briefly report a negative result on an attempt to combat the data sparseness with cross-lingual transfer. We then perform an error analysis on the CNN classifications to shed light on the difficulties of the task. 
Data Augmentation Experiments using Cross-Lingual Transfer The annotations in the MH17 dataset are highly imbalanced, with as few as 512 annotated examples for the pro-Russian class. As the annotated examples were sampled from the dataset at random, we assume that there are only few tweets with pro-Russian stance in the dataset. This observation is in line with studies that showed that the amount of disinformation on Twitter is in fact small BIBREF6, BIBREF8. In order to find more pro-Russian training examples, we turn to a resource that we expect to contain large amounts of pro-Russian (dis)information. The Elections integrity dataset was released by Twitter in 2018 and contains the tweets and account information for 3,841 accounts that are believed to be Russian trolls financed by the Russian government. While most tweets posted after late 2014 are in English language and focus on topics around the US elections, the earlier tweets in the dataset are primarily in Russian language and focus on the Ukraine crisis BIBREF33. One feature of the dataset observed by BIBREF33 is that several hashtags show high peakedness BIBREF34, i.e. they are posted with high frequency but only during short intervals, while others are persistent during time. We find two hashtags in the Elections integrity dataset with high peakedness that were exclusively posted within 2 days after the MH17 crash and that seem to be pro-Russian in the context of responsibility for the MH17 crash: russian #КиевСкажиПравду (Kiew tell the truth) and russian #Киевсбилбоинг (Kiew made the plane go down). We collect all tweets with these two hashtags, resulting in 9,809 Russian tweets that we try to use as additional training data for the pro-Russian class in the MH17 dataset. We experiment with cross-lingual transfer by embedding tweets via aligned English and Russian word embeddings. However, so far results for the cross-lingual models do not improve over the CNN model trained on only English data. This might be due to the fact that the additional Russian tweets rather contain a general pro-Russian frame than specifically talking about the crash, but needs further investigation. Error Analysis In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). 
We can identify three main cases for which the model produces an error: (I) the correct class can be directly inferred from the text content easily, even without background knowledge; (II) the correct class can be inferred from the text content, given that event-specific knowledge is provided; (III) the correct class can be inferred from the text content if the text is interpreted correctly. For the pro-Russian False Positives, we find that category I and category II errors each account for 42% of the errors, and category III errors for 15%. For the pro-Ukrainian False Positives, we find 48% category I errors, 33% category II errors and 13% category III errors. Table TABREF28 presents examples for each of the error categories in both sets, which we will discuss in the following. Error Analysis ::: Category I Errors Category I errors could easily be classified by humans following the annotation guidelines (see Section SECREF3). One difficulty can be seen in example f). Even though no background knowledge is needed to interpret the content, interpretation is difficult because of the convoluted syntax of the tweet. For the other examples it is unclear why the model would have difficulties with classifying them. Error Analysis ::: Category II Errors Category II errors can only be classified with event-specific background knowledge. Examples g), i) and k) relate to the theory that a Ukrainian SU25 fighter jet shot down the plane in the air. Correct interpretation of these tweets depends on knowledge about the SU25 fighter jet. In order to correctly interpret example j) as pro-Russian, it has to be known that the Bellingcat report is pro-Ukrainian. Example l) relates to the theory that the shoot-down was a false flag operation run by Western countries and that the bodies in the plane were already dead before the crash. In order to correctly interpret example m), the identity of Kolomoisky has to be known. He is an anti-separatist Ukrainian billionaire, hence his involvement points to the Ukrainian government being responsible for the crash. Error Analysis ::: Category III Errors Category III errors occur for examples that can only be classified by correctly interpreting the tweet authors' intention. Interpretation is difficult due to phenomena such as irony as in examples n) and o). While the irony is indicated in example n) through the use of the hashtag #LOL, there is no explicit indication in example o). Interpretation of example q) is conditioned on world knowledge as well as the understanding of the speaker's beliefs. Example r) is pro-Russian as it questions the validity of the assumption AC360 is making, but we only know that because we know that the assumption is absurd. Example s) requires evaluating that the speaker thinks people on site are trusted more than people at home. From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter-specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37. Integrating Automatic Predictions into the Retweet Network Finally, we apply the CNN classifier to label new edges in BIBREF4's retweet network, which is shown in Figure FIGREF35. The retweet network is a graph that contains users as nodes and an edge between two users if the users are retweeting each other.
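A retweet network of this kind could be assembled with networkx as in the sketch below; the retweets input and its field names are hypothetical, and the edge rule simply mirrors the description above (an edge is polarized if at least one tweet on it carries a pro-Russian or pro-Ukrainian label).

import networkx as nx

def build_retweet_network(retweets):
    # retweets: iterable of dicts with 'retweeter', 'original_author' and 'label' fields
    g = nx.Graph()
    for rt in retweets:
        u, v = rt["retweeter"], rt["original_author"]
        if g.has_edge(u, v):
            g[u][v]["labels"].append(rt["label"])
        else:
            g.add_edge(u, v, labels=[rt["label"]])
    # an edge is polarized if at least one of its tweets is pro-Russian or pro-Ukrainian
    for u, v, data in g.edges(data=True):
        data["polarized"] = any(l in ("pro-Russian", "pro-Ukrainian") for l in data["labels"])
    return g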
In order to track the flow of polarized information, BIBREF4 label an edge as polarized if at least one tweet contained in the edge was manually annotated as pro-Russian or pro-Ukrainian. While the network shows a clear polarization, only a small subset of the edges present in the network are labeled (see Table TABREF38). Automatic polarity prediction of tweets can help the analysis in two ways. Either, we can label a previously unlabeled edge, or we can verify/confirm the manual labeling of an edge, by labeling additional tweets that are comprised in the edge. Integrating Automatic Predictions into the Retweet Network ::: Predicting Polarized Edges In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class). Conclusion In this work, we investigated the usefulness of text classifiers to detect pro-Russian and pro-Ukrainian framing in tweets related to the MH17 crash, and to which extent classifier predictions can be relied on for producing high quality annotations. From our classification experiments, we conclude that the real-world applicability of text classifiers for labeling polarized tweets in a retweet network is restricted to pre-filtering tweets for manual annotation. However, if used as a filter, the classifier can significantly speed up the annotation process, making large-scale content analysis more feasible. Acknowledgements We thank the anonymous reviewers for their helpful comments. The research was carried out as part of the ‘Digital Disinformation’ project, which was directed by Rebecca Adler-Nissen and funded by the Carlsberg Foundation (project number CF16-0012).
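The precision-targeted thresholding used in the edge-prediction step above can be sketched as follows; gold and probs are hypothetical arrays of binary gold labels and predicted probabilities for one class on a test split, and the 80% target is the value quoted in the text.

import numpy as np
from sklearn.metrics import precision_recall_curve

def threshold_for_precision(gold, probs, target_precision=0.8):
    precision, recall, thresholds = precision_recall_curve(gold, probs)
    # precision and recall have one more entry than thresholds; drop the final point
    ok = np.where(precision[:-1] >= target_precision)[0]
    if len(ok) == 0:
        return None, 0.0
    i = ok[0]   # smallest threshold whose precision already reaches the target
    return thresholds[i], recall[i]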
English
60726d9792d301d5ff8e37fbb31d5104a520dea3
60726d9792d301d5ff8e37fbb31d5104a520dea3_0
Q: What dataset is used for this study? Text: Introduction Digital media enables fast sharing of information, including various forms of false or deceptive information. Hence, besides bringing the obvious advantage of broadening information access for everyone, digital media can also be misused for campaigns that spread disinformation about specific events, or campaigns that are targeted at specific individuals or governments. Disinformation, in this case, refers to intentionally misleading content BIBREF0. A prominent case of a disinformation campaign are the efforts of the Russian government to control information during the Russia-Ukraine crisis BIBREF1. One of the most important events during the crisis was the crash of Malaysian Airlines (MH17) flight on July 17, 2014. The plane crashed on its way from Amsterdam to Kuala Lumpur over Ukrainian territory, causing the death of 298 civilians. The event immediately led to the circulation of competing narratives about who was responsible for the crash (see Section SECREF2), with the two most prominent narratives being that the plane was either shot down by the Ukrainian military, or by Russian separatists in Ukraine supported by the Russian government BIBREF2. The latter theory was confirmed by findings of an international investigation team. In this work, information that opposes these findings by promoting other theories about the crash is considered disinformation. When studying disinformation, however, it is important to acknowledge that our fact checkers (in this case the international investigation team) may be wrong, which is why we focus on both of the narratives in our study. MH17 is a highly important case in the context of international relations, because the tragedy has not only increased Western, political pressure against Russia, but may also continue putting the government's global image at stake. In 2020, at least four individuals connected to the Russian separatist movement will face murder charges for their involvement in the MH17 crash BIBREF3, which is why one can expect the waves of disinformation about MH17 to continue spreading. The purpose of this work is to develop an approach that may help both practitioners and scholars of political science, international relations and political communication to detect and measure the scope of MH17-related disinformation. Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prohibits analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from uncredible sources as misleading (e.g. misinformation, disinformation or fake news). This methods enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. 
However, the approach fails to address an important issue: Not all content from uncredible sources is necessarily misleading or false and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11. In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter. Introduction ::: MH17 Related (Dis-)Information Flow on Twitter We focus our classification efforts on a Twitter dataset introduced in BIBREF4, that was collected to investigate the flow of MH17-related information on Twitter, focusing on the question who is distributing (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role. To arrive at this conclusion, the authors manually labeled a subset of the tweets in the dataset with pro-Russian/pro-Ukrainian frames and build a retweet network, which has Twitter users as nodes and edges between two nodes if a retweet occurred between the two associated users. An edge was considered as polarized (either pro-Russian or pro-Ukrainian), if at least one retweet between the two users connected by the edge was pro-Russian/pro-Ukrainian. Then, the amount of polarized edges between users with different profiles (e.g. citizen, journalist, state organ) was computed. Labeling more data via automatic classification (or computer-assisted annotation) of tweets could serve an analysis as the one presented in BIBREF4 in two ways. First, more edges could be labeled. Second, edges could be labeled with higher precision, i.e. by taking more tweets comprised by the edge into account. For example, one could decide to only label an edge as polarized if at least half of the retweets between the users were pro-Ukrainian/pro-Russian. Introduction ::: Contributions We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result as we find that data augmentation in this particular case does not improve classification results. 
While our best neural classifier clearly outperforms a hashtag-based baseline, generating high quality predictions for the pro-Russian class is difficult: In order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available. Competing Narratives about the MH17 Crash We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane fell down was controlled by pro-Russian separatists. Right after the crash, two main narratives were propagated: Western media claimed that the plane was shot down by pro-Russian separatists, whereas the Russian government claimed that the Ukrainian military was responsible. Two organisations were tasked with investigating the causes of the crash, the Dutch Safety Board (DSB) and the Dutch-led joint investigation team (JIT). Their final reports were released in October 2015 and September 2016, respectively, and conclude that the plane had been shot down by a missile launched by a BUK surface-to-air system. The BUK was stationed in an area controlled by pro-Russian separatists when the missile was launched, and had been transported there from Russia and returned to Russia after the incident. These findings are denied by the Russian government until now. There are several other crash-related reports that are frequently mentioned throughout the dataset. One is a report by Almaz-Antey, the Russian company that manufactured the BUK, which rejects the DSB findings based on mismatch of technical evidence. Several reports backing up the Dutch findings were released by the investigative journalism website Bellingcat. The crash also sparked the circulation of several alternative theories, many of them promoted in Russian media BIBREF2, e.g. that the plane was downed by Ukrainian SU25 military jets, that the plane attack was meant to hit Putin’s plane that was allegedly traveling the same route earlier that day, and that the bodies found in the plane had already been dead before the crash. Dataset For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016. BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. 
A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia or any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility for the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total amount of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider as duplicate a tweet with text that is identical to an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates. Classification Models For our classification experiments, we compare three classifiers: a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN). Classification Models ::: Hashtag-Based Baseline Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as $\mathrm{pmi}(hs, c) = \log \frac{p(hs, c)}{p(hs)\, p(c)}$. We then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet. Tweets without hashtag (5% of the tweets in the development set) or with multiple hashtags leading to conflicting predictions (5% of the tweets in the development set) are labeled randomly. We refer to this baseline as hs_pmi. Classification Models ::: Logistic Regression Classifier As a non-neural baseline, we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia. Classification Models ::: Convolutional Neural Network Classifier As the neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer and a ReLU activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4. Experimental Setup We evaluate the classification models using 10-fold cross validation, i.e. we produce 10 different data splits by randomly sampling 60% of the data for training, 20% for development and 20% for testing. For each fold, we train each of the models described in Section SECREF4 on the training set and measure performance on the test set.
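A minimal sketch of the pmi-based hashtag baseline described above is given below; the input format (a list of hashtags and a class label per tweet) is an assumption made to keep the example self-contained.

import math
import random
from collections import Counter

def train_pmi(hashtag_lists, labels):
    # hashtag_lists: one list of hashtags per tweet; labels: the tweet's class
    classes = sorted(set(labels))
    joint, hs_counts, c_counts, n = Counter(), Counter(), Counter(), 0
    for hashtags, c in zip(hashtag_lists, labels):
        for hs in hashtags:
            joint[(hs, c)] += 1
            hs_counts[hs] += 1
            c_counts[c] += 1
            n += 1
    pmi = {(hs, c): math.log((cnt / n) / ((hs_counts[hs] / n) * (c_counts[c] / n)))
           for (hs, c), cnt in joint.items()}
    return pmi, classes

def predict_pmi(hashtags, pmi, classes):
    # score each class by the best pmi of any hashtag contained in the tweet
    scores = {c: max((pmi.get((hs, c), float("-inf")) for hs in hashtags), default=float("-inf"))
              for c in classes}
    best = max(scores, key=scores.get)
    # tweets without (informative) hashtags are labeled randomly, as described above
    return best if scores[best] > float("-inf") else random.choice(classes)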
For the CNN and LogReg models, we upsample the training examples such that each class has as many instances as the largest class (Neutral). The final reported scores are averages over the 10 splits. Experimental Setup ::: Tweet Preprocessing Before embedding the tweets, we replace urls, retweet syntax (RT @user_name: ) and @mentions (@user_name) by placeholders. We lowercase all text and tokenize sentences using the StandfordNLP pipeline BIBREF31. If a tweet contains multiple sentences, these are concatenated. Finally, we remove all tokens that contain non-alphanumeric symbols (except for dashes and hashtags) and strip the hashtags from each token, in order to increase the number of words that are represented by a pre-trained word embedding. Experimental Setup ::: Evaluation Metrics We report performance as F1-scores, which is the harmonic mean between precision and recall. As the class distribution is highly skewed and we are mainly interested in accurately classifying the classes with low support (pro-Russian and pro-Ukrainian), we report macro-averages over the classes. In addition to F1-scores, we report the area under the precision-recall curve (AUC). We compute an AUC score for each class by converting the classification task into a one-vs-all classification task. Results The results of our classification experiments are presented in Table TABREF21. Figure FIGREF22 shows the per-class precision-recall curves for the LogReg and CNN models as well as the confusion matrices between classes. Results ::: Comparison Between Models We observe that the hashtag baseline performs poorly and does not improve over the random baseline. The CNN classifier outperforms the baselines as well as the LogReg model. It shows the highest improvement over the LogReg for the pro-Russian class. Looking at the confusion matrices, we observe that for the LogReg model, the fraction of True Positives is equal between the pro-Russian and the pro-Ukrainian class. The CNN model produces a higher amount of correct predictions for the pro-Ukrainian than for the pro-Russian class. The absolute number of pro-Russian True Positives is lower for the CNN, but so is in return the amount of misclassifications between the pro-Russian and pro-Ukrainian class. Results ::: Per-Class Performance With respect to the per class performance, we observe a similar trend across models, which is that the models perform best for the neutral class, whereas performance is lower for the pro-Ukrainian and pro-Russian classes. All models perform worst on the pro-Russian class, which might be due to the fact that it is the class with the fewest instances in the dataset. Considering these results, we conclude that the CNN is the best performing model and also the classifier that best serves our goals, as we want to produce accurate predictions for the pro-Russian and pro-Ukrainian class without confusing between them. Even though the CNN can improve over the other models, the classification performance for the pro-Russian and pro-Ukrainian class is rather low. One obvious reason for this might be the small amount of training data, in particular for the pro-Russian class. In the following, we briefly report a negative result on an attempt to combat the data sparseness with cross-lingual transfer. We then perform an error analysis on the CNN classifications to shed light on the difficulties of the task. 
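A minimal sketch of the preprocessing described above (placeholders for URLs, retweet syntax and @mentions, lowercasing, and stripping hashtag symbols) might look as follows; the regular expressions and the whitespace tokenizer are stand-ins, not the StanfordNLP pipeline actually used.

import re

def preprocess_tweet(text):
    # replace urls, retweet syntax (RT @user_name: ) and @mentions by placeholders
    text = re.sub(r"http\S+", "<url>", text)
    text = re.sub(r"RT @\w+:", "<retweet>", text)
    text = re.sub(r"@\w+", "<user>", text)
    tokens = text.lower().split()              # stand-in for the StanfordNLP tokenizer
    cleaned = []
    for tok in tokens:
        tok = tok.lstrip("#")                  # strip the hashtag so the word can match an embedding
        if tok and (tok.startswith("<") or re.fullmatch(r"[\w-]+", tok)):
            cleaned.append(tok)                # keep alphanumeric tokens (dashes allowed)
    return cleaned

print(preprocess_tweet("RT @user: Unbelievable #MH17 claims http://t.co/x1"))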
Data Augmentation Experiments using Cross-Lingual Transfer The annotations in the MH17 dataset are highly imbalanced, with as few as 512 annotated examples for the pro-Russian class. As the annotated examples were sampled from the dataset at random, we assume that there are only few tweets with pro-Russian stance in the dataset. This observation is in line with studies that showed that the amount of disinformation on Twitter is in fact small BIBREF6, BIBREF8. In order to find more pro-Russian training examples, we turn to a resource that we expect to contain large amounts of pro-Russian (dis)information. The Elections integrity dataset was released by Twitter in 2018 and contains the tweets and account information for 3,841 accounts that are believed to be Russian trolls financed by the Russian government. While most tweets posted after late 2014 are in English language and focus on topics around the US elections, the earlier tweets in the dataset are primarily in Russian language and focus on the Ukraine crisis BIBREF33. One feature of the dataset observed by BIBREF33 is that several hashtags show high peakedness BIBREF34, i.e. they are posted with high frequency but only during short intervals, while others are persistent during time. We find two hashtags in the Elections integrity dataset with high peakedness that were exclusively posted within 2 days after the MH17 crash and that seem to be pro-Russian in the context of responsibility for the MH17 crash: russian #КиевСкажиПравду (Kiew tell the truth) and russian #Киевсбилбоинг (Kiew made the plane go down). We collect all tweets with these two hashtags, resulting in 9,809 Russian tweets that we try to use as additional training data for the pro-Russian class in the MH17 dataset. We experiment with cross-lingual transfer by embedding tweets via aligned English and Russian word embeddings. However, so far results for the cross-lingual models do not improve over the CNN model trained on only English data. This might be due to the fact that the additional Russian tweets rather contain a general pro-Russian frame than specifically talking about the crash, but needs further investigation. Error Analysis In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). 
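One way to set up the cross-lingual transfer mentioned above is to embed both English and Russian tweets as averages of aligned fastText word vectors so that they share one input space; the sketch below assumes aligned vectors in fastText's plain-text .vec format and is not necessarily the exact procedure used in the experiments.

import numpy as np

def load_vec(path, limit=200000):
    # fastText text format: first line "<vocab_size> <dim>", then "word v1 ... vd"
    vecs = {}
    with open(path, encoding="utf-8") as f:
        next(f)
        for i, line in enumerate(f):
            if i >= limit:
                break
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

def embed_tweet(tokens, vecs, dim=300):
    known = [vecs[t] for t in tokens if t in vecs]
    return np.mean(known, axis=0) if known else np.zeros(dim, dtype=np.float32)

# Tweets from both languages embedded with their respective aligned vectors can then be
# fed to one classifier, with the Russian tweets as extra pro-Russian training examples.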
We can identify three main cases for which the model produces an error: the correct class can be directly inferred from the text content easily, even without background knowledge the correct class can be inferred from the text content, given that event-specific knowledge is provided the correct class can be inferred from the text content if the text is interpreted correctly For the pro-Russian False Positives, we find that 42% of the errors are category I and II errors, respectively, and 15% of category III. For the pro-Ukrainian False Positives, we find 48% category I errors, 33% category II errors and and 13% category III errors. Table TABREF28 presents examples for each of the error categories in both sets which we will discuss in the following. Error Analysis ::: Category I Errors Category I errors could easily be classified by humans following the annotation guidelines (see Section SECREF3). One difficulty can be seen in example f). Even though no background knowledge is needed to interpret the content, interpretation is difficult because of the convoluted syntax of the tweet. For the other examples it is unclear why the model would have difficulties with classifying them. Error Analysis ::: Category II Errors Category II errors can only be classified with event-specific background knowledge. Examples g), i) and k) relate to the theory that a Ukrainian SU25 fighter jet shot down the plane in air. Correct interpretation of these tweets depends on knowledge about the SU25 fighter jet. In order to correctly interpret example j) as pro-Russian, it has to be known that the bellingcat report is pro-Ukrainian. Example l) relates to the theory that the shoot down was a false flag operation run by Western countries and the bodies in the plane were already dead before the crash. In order to correctly interpret example m), the identity of Kolomoisky has to be known. He is an anti-separatist Ukrainian billionaire, hence his involvement points to the Ukrainian government being responsible for the crash. Error Analysis ::: Category III Errors Category III errors occur for examples that can only be classified by correctly interpreting the tweet authors' intention. Interpretation is difficult due to phenomena such as irony as in examples n) and o). While the irony is indicated in example n) through the use of the hashtag #LOL, there is no explicit indication in example o). Interpretation of example q) is conditioned on world knowledge as well as the understanding of the speakers beliefs. Example r) is pro-Russian as it questions the validity of the assumption AC360 is making, but we only know that because we know that the assumption is absurd. Example s) requires to evaluate that the speaker thinks people on site are trusted more than people at home. From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37. Integrating Automatic Predictions into the Retweet Network Finally, we apply the CNN classifier to label new edges in BIBREF4's retweet network, which is shown in Figure FIGREF35. The retweet network is a graph that contains users as nodes and an edge between two users if the users are retweeting each other. 
In order to track the flow of polarized information, BIBREF4 label an edge as polarized if at least one tweet contained in the edge was manually annotated as pro-Russian or pro-Ukrainian. While the network shows a clear polarization, only a small subset of the edges present in the network are labeled (see Table TABREF38). Automatic polarity prediction of tweets can help the analysis in two ways. Either, we can label a previously unlabeled edge, or we can verify/confirm the manual labeling of an edge, by labeling additional tweets that are comprised in the edge. Integrating Automatic Predictions into the Retweet Network ::: Predicting Polarized Edges In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class). Conclusion In this work, we investigated the usefulness of text classifiers to detect pro-Russian and pro-Ukrainian framing in tweets related to the MH17 crash, and to which extent classifier predictions can be relied on for producing high quality annotations. From our classification experiments, we conclude that the real-world applicability of text classifiers for labeling polarized tweets in a retweet network is restricted to pre-filtering tweets for manual annotation. However, if used as a filter, the classifier can significantly speed up the annotation process, making large-scale content analysis more feasible. Acknowledgements We thank the anonymous reviewers for their helpful comments. The research was carried out as part of the ‘Digital Disinformation’ project, which was directed by Rebecca Adler-Nissen and funded by the Carlsberg Foundation (project number CF16-0012).
MH17 Twitter dataset
e39d90b8d959697d9780eddce3a343e60543be65
e39d90b8d959697d9780eddce3a343e60543be65_0
Q: What proxies for data annotation were used in previous datasets? Text: Introduction Digital media enables fast sharing of information, including various forms of false or deceptive information. Hence, besides bringing the obvious advantage of broadening information access for everyone, digital media can also be misused for campaigns that spread disinformation about specific events, or campaigns that are targeted at specific individuals or governments. Disinformation, in this case, refers to intentionally misleading content BIBREF0. A prominent case of a disinformation campaign are the efforts of the Russian government to control information during the Russia-Ukraine crisis BIBREF1. One of the most important events during the crisis was the crash of Malaysian Airlines (MH17) flight on July 17, 2014. The plane crashed on its way from Amsterdam to Kuala Lumpur over Ukrainian territory, causing the death of 298 civilians. The event immediately led to the circulation of competing narratives about who was responsible for the crash (see Section SECREF2), with the two most prominent narratives being that the plane was either shot down by the Ukrainian military, or by Russian separatists in Ukraine supported by the Russian government BIBREF2. The latter theory was confirmed by findings of an international investigation team. In this work, information that opposes these findings by promoting other theories about the crash is considered disinformation. When studying disinformation, however, it is important to acknowledge that our fact checkers (in this case the international investigation team) may be wrong, which is why we focus on both of the narratives in our study. MH17 is a highly important case in the context of international relations, because the tragedy has not only increased Western, political pressure against Russia, but may also continue putting the government's global image at stake. In 2020, at least four individuals connected to the Russian separatist movement will face murder charges for their involvement in the MH17 crash BIBREF3, which is why one can expect the waves of disinformation about MH17 to continue spreading. The purpose of this work is to develop an approach that may help both practitioners and scholars of political science, international relations and political communication to detect and measure the scope of MH17-related disinformation. Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prohibits analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from uncredible sources as misleading (e.g. misinformation, disinformation or fake news). This methods enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. 
However, the approach fails to address an important issue: Not all content from uncredible sources is necessarily misleading or false and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11. In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter. Introduction ::: MH17 Related (Dis-)Information Flow on Twitter We focus our classification efforts on a Twitter dataset introduced in BIBREF4, that was collected to investigate the flow of MH17-related information on Twitter, focusing on the question who is distributing (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role. To arrive at this conclusion, the authors manually labeled a subset of the tweets in the dataset with pro-Russian/pro-Ukrainian frames and build a retweet network, which has Twitter users as nodes and edges between two nodes if a retweet occurred between the two associated users. An edge was considered as polarized (either pro-Russian or pro-Ukrainian), if at least one retweet between the two users connected by the edge was pro-Russian/pro-Ukrainian. Then, the amount of polarized edges between users with different profiles (e.g. citizen, journalist, state organ) was computed. Labeling more data via automatic classification (or computer-assisted annotation) of tweets could serve an analysis as the one presented in BIBREF4 in two ways. First, more edges could be labeled. Second, edges could be labeled with higher precision, i.e. by taking more tweets comprised by the edge into account. For example, one could decide to only label an edge as polarized if at least half of the retweets between the users were pro-Ukrainian/pro-Russian. Introduction ::: Contributions We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result as we find that data augmentation in this particular case does not improve classification results. 
While our best neural classifier clearly outperforms a hashtag-based baseline, generating high quality predictions for the pro-Russian class is difficult: In order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available. Competing Narratives about the MH17 Crash We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane fell down was controlled by pro-Russian separatists. Right after the crash, two main narratives were propagated: Western media claimed that the plane was shot down by pro-Russian separatists, whereas the Russian government claimed that the Ukrainian military was responsible. Two organisations were tasked with investigating the causes of the crash, the Dutch Safety Board (DSB) and the Dutch-led joint investigation team (JIT). Their final reports were released in October 2015 and September 2016, respectively, and conclude that the plane had been shot down by a missile launched by a BUK surface-to-air system. The BUK was stationed in an area controlled by pro-Russian separatists when the missile was launched, and had been transported there from Russia and returned to Russia after the incident. These findings are denied by the Russian government until now. There are several other crash-related reports that are frequently mentioned throughout the dataset. One is a report by Almaz-Antey, the Russian company that manufactured the BUK, which rejects the DSB findings based on mismatch of technical evidence. Several reports backing up the Dutch findings were released by the investigative journalism website Bellingcat. The crash also sparked the circulation of several alternative theories, many of them promoted in Russian media BIBREF2, e.g. that the plane was downed by Ukrainian SU25 military jets, that the plane attack was meant to hit Putin’s plane that was allegedly traveling the same route earlier that day, and that the bodies found in the plane had already been dead before the crash. Dataset For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016. BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. 
A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia or any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility to the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence the it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total amount of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider as duplicate a tweet with text that is identical to an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates. Classification Models For our classification experiments, we compare three classifiers, a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN). Classification Models ::: Hashtag-Based Baseline Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as We then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet. Tweets without hashtag (5% of the tweets in the development set) or with multiple hashtags leading to conflicting predictions (5% of the tweets in the development set) are labeled randomly. We refer to to this baseline as hs_pmi. Classification Models ::: Logistic Regression Classifier As non-neural baseline we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia. Classification Models ::: Convolutional Neural Network Classifier As neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer and a relu activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4. Experimental Setup We evaluate the classification models using 10-fold cross validation, i.e. we produce 10 different datasplits by randomly sampling 60% of the data for training, 20% for development and 20% for testing. For each fold, we train each of the models described in Section SECREF4 on the training set and measure performance on the test set. 
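The CNN classifier described above (one 1d convolution with 100 filters of size 4, ReLU and max pooling over time) can be sketched in PyTorch as below; PyTorch itself and the simple linear classification head are assumptions, since the text does not specify the framework or the output layer.

import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, emb_dim=300, num_filters=100, filter_size=4, num_classes=3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=filter_size)
        self.relu = nn.ReLU()
        self.out = nn.Linear(num_filters, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, emb_dim) sequence of pre-trained fasttext embeddings
        h = self.relu(self.conv(x.transpose(1, 2)))   # (batch, num_filters, seq_len - filter_size + 1)
        h = torch.max(h, dim=2).values                # max pooling over time
        return self.out(h)                            # class logits

model = TweetCNN()
logits = model(torch.randn(8, 30, 300))               # a batch of 8 tweets of length 30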
For the CNN and LogReg models, we upsample the training examples such that each class has as many instances as the largest class (Neutral). The final reported scores are averages over the 10 splits. Experimental Setup ::: Tweet Preprocessing Before embedding the tweets, we replace urls, retweet syntax (RT @user_name: ) and @mentions (@user_name) by placeholders. We lowercase all text and tokenize sentences using the StandfordNLP pipeline BIBREF31. If a tweet contains multiple sentences, these are concatenated. Finally, we remove all tokens that contain non-alphanumeric symbols (except for dashes and hashtags) and strip the hashtags from each token, in order to increase the number of words that are represented by a pre-trained word embedding. Experimental Setup ::: Evaluation Metrics We report performance as F1-scores, which is the harmonic mean between precision and recall. As the class distribution is highly skewed and we are mainly interested in accurately classifying the classes with low support (pro-Russian and pro-Ukrainian), we report macro-averages over the classes. In addition to F1-scores, we report the area under the precision-recall curve (AUC). We compute an AUC score for each class by converting the classification task into a one-vs-all classification task. Results The results of our classification experiments are presented in Table TABREF21. Figure FIGREF22 shows the per-class precision-recall curves for the LogReg and CNN models as well as the confusion matrices between classes. Results ::: Comparison Between Models We observe that the hashtag baseline performs poorly and does not improve over the random baseline. The CNN classifier outperforms the baselines as well as the LogReg model. It shows the highest improvement over the LogReg for the pro-Russian class. Looking at the confusion matrices, we observe that for the LogReg model, the fraction of True Positives is equal between the pro-Russian and the pro-Ukrainian class. The CNN model produces a higher amount of correct predictions for the pro-Ukrainian than for the pro-Russian class. The absolute number of pro-Russian True Positives is lower for the CNN, but so is in return the amount of misclassifications between the pro-Russian and pro-Ukrainian class. Results ::: Per-Class Performance With respect to the per class performance, we observe a similar trend across models, which is that the models perform best for the neutral class, whereas performance is lower for the pro-Ukrainian and pro-Russian classes. All models perform worst on the pro-Russian class, which might be due to the fact that it is the class with the fewest instances in the dataset. Considering these results, we conclude that the CNN is the best performing model and also the classifier that best serves our goals, as we want to produce accurate predictions for the pro-Russian and pro-Ukrainian class without confusing between them. Even though the CNN can improve over the other models, the classification performance for the pro-Russian and pro-Ukrainian class is rather low. One obvious reason for this might be the small amount of training data, in particular for the pro-Russian class. In the following, we briefly report a negative result on an attempt to combat the data sparseness with cross-lingual transfer. We then perform an error analysis on the CNN classifications to shed light on the difficulties of the task. 
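The upsampling described at the start of this section (every class is brought up to the size of the largest class by sampling with replacement) can be sketched as follows, assuming a hypothetical list of training examples with their labels.

import random
from collections import defaultdict

def upsample(examples, labels, seed=0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in zip(examples, labels):
        by_class[y].append(x)
    target = max(len(xs) for xs in by_class.values())       # size of the largest class (Neutral)
    out = []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]   # duplicated minority examples
        out.extend((x, y) for x in xs + extra)
    rng.shuffle(out)
    return out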
Data Augmentation Experiments using Cross-Lingual Transfer The annotations in the MH17 dataset are highly imbalanced, with as few as 512 annotated examples for the pro-Russian class. As the annotated examples were sampled from the dataset at random, we assume that there are only few tweets with pro-Russian stance in the dataset. This observation is in line with studies that showed that the amount of disinformation on Twitter is in fact small BIBREF6, BIBREF8. In order to find more pro-Russian training examples, we turn to a resource that we expect to contain large amounts of pro-Russian (dis)information. The Elections integrity dataset was released by Twitter in 2018 and contains the tweets and account information for 3,841 accounts that are believed to be Russian trolls financed by the Russian government. While most tweets posted after late 2014 are in English language and focus on topics around the US elections, the earlier tweets in the dataset are primarily in Russian language and focus on the Ukraine crisis BIBREF33. One feature of the dataset observed by BIBREF33 is that several hashtags show high peakedness BIBREF34, i.e. they are posted with high frequency but only during short intervals, while others are persistent during time. We find two hashtags in the Elections integrity dataset with high peakedness that were exclusively posted within 2 days after the MH17 crash and that seem to be pro-Russian in the context of responsibility for the MH17 crash: russian #КиевСкажиПравду (Kiew tell the truth) and russian #Киевсбилбоинг (Kiew made the plane go down). We collect all tweets with these two hashtags, resulting in 9,809 Russian tweets that we try to use as additional training data for the pro-Russian class in the MH17 dataset. We experiment with cross-lingual transfer by embedding tweets via aligned English and Russian word embeddings. However, so far results for the cross-lingual models do not improve over the CNN model trained on only English data. This might be due to the fact that the additional Russian tweets rather contain a general pro-Russian frame than specifically talking about the crash, but needs further investigation. Error Analysis In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). 
We can identify three main cases for which the model produces an error: the correct class can be directly inferred from the text content easily, even without background knowledge the correct class can be inferred from the text content, given that event-specific knowledge is provided the correct class can be inferred from the text content if the text is interpreted correctly For the pro-Russian False Positives, we find that 42% of the errors are category I and II errors, respectively, and 15% of category III. For the pro-Ukrainian False Positives, we find 48% category I errors, 33% category II errors and and 13% category III errors. Table TABREF28 presents examples for each of the error categories in both sets which we will discuss in the following. Error Analysis ::: Category I Errors Category I errors could easily be classified by humans following the annotation guidelines (see Section SECREF3). One difficulty can be seen in example f). Even though no background knowledge is needed to interpret the content, interpretation is difficult because of the convoluted syntax of the tweet. For the other examples it is unclear why the model would have difficulties with classifying them. Error Analysis ::: Category II Errors Category II errors can only be classified with event-specific background knowledge. Examples g), i) and k) relate to the theory that a Ukrainian SU25 fighter jet shot down the plane in air. Correct interpretation of these tweets depends on knowledge about the SU25 fighter jet. In order to correctly interpret example j) as pro-Russian, it has to be known that the bellingcat report is pro-Ukrainian. Example l) relates to the theory that the shoot down was a false flag operation run by Western countries and the bodies in the plane were already dead before the crash. In order to correctly interpret example m), the identity of Kolomoisky has to be known. He is an anti-separatist Ukrainian billionaire, hence his involvement points to the Ukrainian government being responsible for the crash. Error Analysis ::: Category III Errors Category III errors occur for examples that can only be classified by correctly interpreting the tweet authors' intention. Interpretation is difficult due to phenomena such as irony as in examples n) and o). While the irony is indicated in example n) through the use of the hashtag #LOL, there is no explicit indication in example o). Interpretation of example q) is conditioned on world knowledge as well as the understanding of the speakers beliefs. Example r) is pro-Russian as it questions the validity of the assumption AC360 is making, but we only know that because we know that the assumption is absurd. Example s) requires to evaluate that the speaker thinks people on site are trusted more than people at home. From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37. Integrating Automatic Predictions into the Retweet Network Finally, we apply the CNN classifier to label new edges in BIBREF4's retweet network, which is shown in Figure FIGREF35. The retweet network is a graph that contains users as nodes and an edge between two users if the users are retweeting each other. 
In order to track the flow of polarized information, BIBREF4 label an edge as polarized if at least one tweet contained in the edge was manually annotated as pro-Russian or pro-Ukrainian. While the network shows a clear polarization, only a small subset of the edges present in the network are labeled (see Table TABREF38). Automatic polarity prediction of tweets can help the analysis in two ways. Either, we can label a previously unlabeled edge, or we can verify/confirm the manual labeling of an edge, by labeling additional tweets that are comprised in the edge. Integrating Automatic Predictions into the Retweet Network ::: Predicting Polarized Edges In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class). Conclusion In this work, we investigated the usefulness of text classifiers to detect pro-Russian and pro-Ukrainian framing in tweets related to the MH17 crash, and to which extent classifier predictions can be relied on for producing high quality annotations. From our classification experiments, we conclude that the real-world applicability of text classifiers for labeling polarized tweets in a retweet network is restricted to pre-filtering tweets for manual annotation. However, if used as a filter, the classifier can significantly speed up the annotation process, making large-scale content analysis more feasible. Acknowledgements We thank the anonymous reviewers for their helpful comments. The research was carried out as part of the ‘Digital Disinformation’ project, which was directed by Rebecca Adler-Nissen and funded by the Carlsberg Foundation (project number CF16-0012).
A widely used method for classifying misleading content is to use distant annotations, for example classifying a tweet based on the domain of a URL shared by the tweet or a hashtag contained in the tweet; alternatively, Natural Language Processing (NLP) models can be used to automatically label text content
5f7850254b723adf891930c6faced1058b99bd57
5f7850254b723adf891930c6faced1058b99bd57_0
Q: What kind of features are used by the HMM models, and how interpretable are those? Text: Introduction Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models could be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque expert system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power. There are several aspects of what makes a model or algorithm understandable to humans. One aspect is model complexity or parsimony. Another aspect is the ability to trace back from a prediction or model component to particularly influential features in the data BIBREF0 BIBREF1 . This could be useful for understanding mistakes made by neural networks, which have human-level performance most of the time, but can perform very poorly on seemingly easy cases. For instance, convolutional networks can misclassify adversarial examples with very high confidence BIBREF2 , and made headlines in 2015 when the image tagging algorithm in Google Photos mislabeled African Americans as gorillas. It's reasonable to expect recurrent networks to fail in similar ways as well. It would thus be useful to have more visibility into where these sorts of errors come from, i.e. which groups of features contribute to such flawed predictions. Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ). We explore several methods for building interpretable models by combining LSTMs and HMMs. The existing body of literature mostly focuses on methods that specifically train the RNN to predict HMM states BIBREF5 or posteriors BIBREF6 , referred to as hybrid or tandem methods respectively. We first investigate an approach that does not require the RNN to be modified in order to make it understandable, as the interpretation happens after the fact. Here, we model the big picture of the state changes in the LSTM, by extracting the hidden states and approximating them with a continuous emission hidden Markov model (HMM). We then take the reverse approach where the HMM state probabilities are added to the output layer of the LSTM (see Figure 1 ). 
The LSTM model can then make use of the information from the HMM, and fill in the gaps when the HMM is not performing well, resulting in an LSTM with a smaller number of hidden state dimensions that could be interpreted individually (Figures 3 , 3 ). Methods We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data). LSTM models We use a character-level LSTM with 1 layer and no dropout, based on the Element-Research library. We train the LSTM for 10 epochs, starting with a learning rate of 1, where the learning rate is halved whenever $\exp (-l_t) > \exp (-l_{t-1}) + 1$ , where $l_t$ is the log likelihood score at epoch $t$ . The $L_2$ -norm of the parameter gradient vector is clipped at a threshold of 5. Hidden Markov models The HMM training procedure is as follows: Initialization of HMM hidden states: (Discrete HMM) Random multinomial draw for each time step (i.i.d. across time steps). (Continuous HMM) K-means clusters fit on LSTM states, to speed up convergence relative to random initialization. At each iteration: Sample states using Forward Filtering Backwards Sampling algorithm (FFBS, BIBREF7 ). Sample transition parameters from a Multinomial-Dirichlet posterior. Let $n_{ij}$ be the number of transitions from state $i$ to state $j$ . Then the posterior distribution of the $i$ -th row of transition matrix $T$ (corresponding to transitions from state $i$ ) is: $T_i \sim \text{Mult}(n_{ij} | T_i) \text{Dir}(T_i | \alpha )$ where $\alpha $ is the Dirichlet hyperparameter. (Continuous HMM) Sample multivariate normal emission parameters from Normal-Inverse-Wishart posterior for state $i$ : $ \mu _i, \Sigma _i \sim N(y|\mu _i, \Sigma _i) N(\mu _i |0, \Sigma _i) \text{IW}(\Sigma _i) $ (Discrete HMM) Sample the emission parameters from a Multinomial-Dirichlet posterior. Evaluation: We evaluate the methods on how well they predict the next observation in the validation set. For the HMM models, we do a forward pass on the validation set (no backward pass unlike the full FFBS), and compute the HMM state distribution vector $p_t$ for each time step $t$ . Then we compute the predictive likelihood for the next observation as follows: $ P(y_{t+1} | p_t) =\sum _{x_t=1}^n \sum _{x_{t+1}=1}^n p_{tx_t} \cdot T_{x_t, x_{t+1}} \cdot P(y_{t+1} | x_{t+1})$ where $n$ is the number of hidden states in the HMM. Hybrid models Our main hybrid model is put together sequentially, as shown in Figure 1 . We first run the discrete HMM on the data, outputting the hidden state distributions obtained by the HMM's forward pass, and then add this information to the architecture in parallel with a 1-layer LSTM. The linear layer between the LSTM and the prediction layer is augmented with an extra column for each HMM state. The LSTM component of this architecture can be smaller than a standalone LSTM, since it only needs to fill in the gaps in the HMM's predictions. The HMM is written in Python, and the rest of the architecture is in Torch. We also build a joint hybrid model, where the LSTM and HMM are simultaneously trained in Torch. We implemented an HMM Torch module, optimized using stochastic gradient descent rather than FFBS. Similarly to the sequential hybrid model, we concatenate the LSTM outputs with the HMM state probabilities. 
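As an illustration of the Gibbs step for the transition parameters described above, the sketch below (an assumption-laden reconstruction, not the authors' code) counts transitions n_ij in a sampled state sequence and draws each row of T from its Multinomial-Dirichlet posterior.

```python
import numpy as np

def sample_transition_matrix(states, n_states, alpha=1.0, seed=0):
    """states: sampled hidden states x_1..x_T as integers in [0, n_states)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_states, n_states))
    for prev, nxt in zip(states[:-1], states[1:]):
        counts[prev, nxt] += 1                      # n_ij: transitions from state i to state j
    # Posterior for row i is Dir(n_i + alpha)
    return np.stack([rng.dirichlet(counts[i] + alpha) for i in range(n_states)])

states = np.array([0, 1, 1, 2, 0, 2, 2, 1])
print(sample_transition_matrix(states, n_states=3))
```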
Experiments We test the models on several text data sets on the character level: the Penn Tree Bank (5M characters), and two data sets used by BIBREF4 , Tiny Shakespeare (1M characters) and Linux Kernel (5M characters). We chose $k=20$ for the continuous HMM based on a PCA analysis of the LSTM states, as the first 20 components captured almost all the variance. Table 1 shows the predictive log likelihood of the next text character for each method. On all text data sets, the hybrid algorithm performs a bit better than the standalone LSTM with the same LSTM state dimension. This effect gets smaller as we increase the LSTM size and the HMM makes less difference to the prediction (though it can still make a difference in terms of interpretability). The hybrid algorithm with 20 HMM states does better than the one with 10 HMM states. The joint hybrid algorithm outperforms the sequential hybrid on Shakespeare data, but does worse on PTB and Linux data, which suggests that the joint hybrid is more helpful for smaller data sets. The joint hybrid is an order of magnitude slower than the sequential hybrid, as the SGD-based HMM is slower to train than the FFBS-based HMM. We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data. Conclusion and future work Hybrid HMM-RNN approaches combine the interpretability of HMMs with the predictive power of RNNs. Sometimes, a small hybrid model can perform better than a standalone LSTM of the same size. We use visualizations to show how the LSTM and HMM components of the hybrid algorithm complement each other in terms of features learned in the data.
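An illustrative sketch of the PCA check used above to choose the number of continuous-HMM states; the LSTM state matrix here is a random stand-in and the 99% variance cutoff is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
lstm_states = rng.standard_normal((10000, 64))            # stand-in for (time steps, hidden dims)
cum_var = np.cumsum(PCA().fit(lstm_states).explained_variance_ratio_)
k = int(np.searchsorted(cum_var, 0.99)) + 1               # smallest k capturing ~99% of the variance
print("components needed for 99% of variance:", k)
```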
A continuous emission HMM uses the hidden states of a 2-layer LSTM as features, while a discrete emission HMM is trained directly on the data. The interpretability of the model is illustrated in Figure 2.
4d05a264b2353cff310edb480a917d686353b007
4d05a264b2353cff310edb480a917d686353b007_0
Q: What kind of information do the HMMs learn that the LSTMs don't? Text: Introduction Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models could be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque expert system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power. There are several aspects of what makes a model or algorithm understandable to humans. One aspect is model complexity or parsimony. Another aspect is the ability to trace back from a prediction or model component to particularly influential features in the data BIBREF0 BIBREF1 . This could be useful for understanding mistakes made by neural networks, which have human-level performance most of the time, but can perform very poorly on seemingly easy cases. For instance, convolutional networks can misclassify adversarial examples with very high confidence BIBREF2 , and made headlines in 2015 when the image tagging algorithm in Google Photos mislabeled African Americans as gorillas. It's reasonable to expect recurrent networks to fail in similar ways as well. It would thus be useful to have more visibility into where these sorts of errors come from, i.e. which groups of features contribute to such flawed predictions. Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ). We explore several methods for building interpretable models by combining LSTMs and HMMs. The existing body of literature mostly focuses on methods that specifically train the RNN to predict HMM states BIBREF5 or posteriors BIBREF6 , referred to as hybrid or tandem methods respectively. We first investigate an approach that does not require the RNN to be modified in order to make it understandable, as the interpretation happens after the fact. Here, we model the big picture of the state changes in the LSTM, by extracting the hidden states and approximating them with a continuous emission hidden Markov model (HMM). We then take the reverse approach where the HMM state probabilities are added to the output layer of the LSTM (see Figure 1 ). 
The LSTM model can then make use of the information from the HMM, and fill in the gaps when the HMM is not performing well, resulting in an LSTM with a smaller number of hidden state dimensions that could be interpreted individually (Figures 3 , 3 ). Methods We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data). LSTM models We use a character-level LSTM with 1 layer and no dropout, based on the Element-Research library. We train the LSTM for 10 epochs, starting with a learning rate of 1, where the learning rate is halved whenever $\exp (-l_t) > \exp (-l_{t-1}) + 1$ , where $l_t$ is the log likelihood score at epoch $t$ . The $L_2$ -norm of the parameter gradient vector is clipped at a threshold of 5. Hidden Markov models The HMM training procedure is as follows: Initialization of HMM hidden states: (Discrete HMM) Random multinomial draw for each time step (i.i.d. across time steps). (Continuous HMM) K-means clusters fit on LSTM states, to speed up convergence relative to random initialization. At each iteration: Sample states using Forward Filtering Backwards Sampling algorithm (FFBS, BIBREF7 ). Sample transition parameters from a Multinomial-Dirichlet posterior. Let $n_{ij}$ be the number of transitions from state $i$ to state $j$ . Then the posterior distribution of the $i$ -th row of transition matrix $T$ (corresponding to transitions from state $i$ ) is: $T_i \sim \text{Mult}(n_{ij} | T_i) \text{Dir}(T_i | \alpha )$ where $\alpha $ is the Dirichlet hyperparameter. (Continuous HMM) Sample multivariate normal emission parameters from Normal-Inverse-Wishart posterior for state $i$ : $ \mu _i, \Sigma _i \sim N(y|\mu _i, \Sigma _i) N(\mu _i |0, \Sigma _i) \text{IW}(\Sigma _i) $ (Discrete HMM) Sample the emission parameters from a Multinomial-Dirichlet posterior. Evaluation: We evaluate the methods on how well they predict the next observation in the validation set. For the HMM models, we do a forward pass on the validation set (no backward pass unlike the full FFBS), and compute the HMM state distribution vector $p_t$ for each time step $t$ . Then we compute the predictive likelihood for the next observation as follows: $ P(y_{t+1} | p_t) =\sum _{x_t=1}^n \sum _{x_{t+1}=1}^n p_{tx_t} \cdot T_{x_t, x_{t+1}} \cdot P(y_{t+1} | x_{t+1})$ where $n$ is the number of hidden states in the HMM. Hybrid models Our main hybrid model is put together sequentially, as shown in Figure 1 . We first run the discrete HMM on the data, outputting the hidden state distributions obtained by the HMM's forward pass, and then add this information to the architecture in parallel with a 1-layer LSTM. The linear layer between the LSTM and the prediction layer is augmented with an extra column for each HMM state. The LSTM component of this architecture can be smaller than a standalone LSTM, since it only needs to fill in the gaps in the HMM's predictions. The HMM is written in Python, and the rest of the architecture is in Torch. We also build a joint hybrid model, where the LSTM and HMM are simultaneously trained in Torch. We implemented an HMM Torch module, optimized using stochastic gradient descent rather than FFBS. Similarly to the sequential hybrid model, we concatenate the LSTM outputs with the HMM state probabilities. 
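The predictive likelihood defined in the evaluation step above reduces to two matrix-vector products; the sketch below (assumed shapes, toy parameters) mirrors P(y_{t+1} | p_t) = sum over x_t and x_{t+1} of p_t[x_t] * T[x_t, x_{t+1}] * P(y_{t+1} | x_{t+1}).

```python
import numpy as np

def next_obs_likelihood(p_t, T, E, y_next):
    """p_t: (n,) filtered state distribution; T: (n, n) transitions;
    E: (n, V) emission probabilities over the vocabulary; y_next: observed symbol index."""
    return float(p_t @ T @ E[:, y_next])

rng = np.random.default_rng(1)
n, V = 10, 50
T = rng.dirichlet(np.ones(n), size=n)       # each row sums to 1
E = rng.dirichlet(np.ones(V), size=n)       # each row sums to 1
p_t = rng.dirichlet(np.ones(n))
print(next_obs_likelihood(p_t, T, E, y_next=3))
```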
Experiments We test the models on several text data sets on the character level: the Penn Tree Bank (5M characters), and two data sets used by BIBREF4 , Tiny Shakespeare (1M characters) and Linux Kernel (5M characters). We chose $k=20$ for the continuous HMM based on a PCA analysis of the LSTM states, as the first 20 components captured almost all the variance. Table 1 shows the predictive log likelihood of the next text character for each method. On all text data sets, the hybrid algorithm performs a bit better than the standalone LSTM with the same LSTM state dimension. This effect gets smaller as we increase the LSTM size and the HMM makes less difference to the prediction (though it can still make a difference in terms of interpretability). The hybrid algorithm with 20 HMM states does better than the one with 10 HMM states. The joint hybrid algorithm outperforms the sequential hybrid on Shakespeare data, but does worse on PTB and Linux data, which suggests that the joint hybrid is more helpful for smaller data sets. The joint hybrid is an order of magnitude slower than the sequential hybrid, as the SGD-based HMM is slower to train than the FFBS-based HMM. We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data. Conclusion and future work Hybrid HMM-RNN approaches combine the interpretability of HMMs with the predictive power of RNNs. Sometimes, a small hybrid model can perform better than a standalone LSTM of the same size. We use visualizations to show how the LSTM and HMM components of the hybrid algorithm complement each other in terms of features learned in the data.
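A small sketch (random stand-in state vectors) of the color-coding step described above: k-means is fit on the LSTM state vectors and each character of the training text is tagged with its cluster id.

```python
import numpy as np
from sklearn.cluster import KMeans

text = "int main(void) { return 0; }"
rng = np.random.default_rng(0)
lstm_states = rng.standard_normal((len(text), 10))        # one state vector per character
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(lstm_states)
for ch, c in zip(text, clusters):
    print(repr(ch), "-> cluster", c)
```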
The HMM can identify punctuation or pick up on vowels.
7cdce4222cea6955b656c1a3df1129bb8119e2d0
7cdce4222cea6955b656c1a3df1129bb8119e2d0_0
Q: Which methods do the authors use to reach the conclusion that LSTMs and HMMs learn complementary information? Text: Introduction Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models could be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque expert system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power. There are several aspects of what makes a model or algorithm understandable to humans. One aspect is model complexity or parsimony. Another aspect is the ability to trace back from a prediction or model component to particularly influential features in the data BIBREF0 BIBREF1 . This could be useful for understanding mistakes made by neural networks, which have human-level performance most of the time, but can perform very poorly on seemingly easy cases. For instance, convolutional networks can misclassify adversarial examples with very high confidence BIBREF2 , and made headlines in 2015 when the image tagging algorithm in Google Photos mislabeled African Americans as gorillas. It's reasonable to expect recurrent networks to fail in similar ways as well. It would thus be useful to have more visibility into where these sorts of errors come from, i.e. which groups of features contribute to such flawed predictions. Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ). We explore several methods for building interpretable models by combining LSTMs and HMMs. The existing body of literature mostly focuses on methods that specifically train the RNN to predict HMM states BIBREF5 or posteriors BIBREF6 , referred to as hybrid or tandem methods respectively. We first investigate an approach that does not require the RNN to be modified in order to make it understandable, as the interpretation happens after the fact. Here, we model the big picture of the state changes in the LSTM, by extracting the hidden states and approximating them with a continuous emission hidden Markov model (HMM). We then take the reverse approach where the HMM state probabilities are added to the output layer of the LSTM (see Figure 1 ). 
The LSTM model can then make use of the information from the HMM, and fill in the gaps when the HMM is not performing well, resulting in an LSTM with a smaller number of hidden state dimensions that could be interpreted individually (Figures 3 , 3 ). Methods We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data). LSTM models We use a character-level LSTM with 1 layer and no dropout, based on the Element-Research library. We train the LSTM for 10 epochs, starting with a learning rate of 1, where the learning rate is halved whenever $\exp (-l_t) > \exp (-l_{t-1}) + 1$ , where $l_t$ is the log likelihood score at epoch $t$ . The $L_2$ -norm of the parameter gradient vector is clipped at a threshold of 5. Hidden Markov models The HMM training procedure is as follows: Initialization of HMM hidden states: (Discrete HMM) Random multinomial draw for each time step (i.i.d. across time steps). (Continuous HMM) K-means clusters fit on LSTM states, to speed up convergence relative to random initialization. At each iteration: Sample states using Forward Filtering Backwards Sampling algorithm (FFBS, BIBREF7 ). Sample transition parameters from a Multinomial-Dirichlet posterior. Let $n_{ij}$ be the number of transitions from state $i$ to state $j$ . Then the posterior distribution of the $i$ -th row of transition matrix $T$ (corresponding to transitions from state $i$ ) is: $T_i \sim \text{Mult}(n_{ij} | T_i) \text{Dir}(T_i | \alpha )$ where $\alpha $ is the Dirichlet hyperparameter. (Continuous HMM) Sample multivariate normal emission parameters from Normal-Inverse-Wishart posterior for state $i$ : $ \mu _i, \Sigma _i \sim N(y|\mu _i, \Sigma _i) N(\mu _i |0, \Sigma _i) \text{IW}(\Sigma _i) $ (Discrete HMM) Sample the emission parameters from a Multinomial-Dirichlet posterior. Evaluation: We evaluate the methods on how well they predict the next observation in the validation set. For the HMM models, we do a forward pass on the validation set (no backward pass unlike the full FFBS), and compute the HMM state distribution vector $p_t$ for each time step $t$ . Then we compute the predictive likelihood for the next observation as follows: $ P(y_{t+1} | p_t) =\sum _{x_t=1}^n \sum _{x_{t+1}=1}^n p_{tx_t} \cdot T_{x_t, x_{t+1}} \cdot P(y_{t+1} | x_{t+1})$ where $n$ is the number of hidden states in the HMM. Hybrid models Our main hybrid model is put together sequentially, as shown in Figure 1 . We first run the discrete HMM on the data, outputting the hidden state distributions obtained by the HMM's forward pass, and then add this information to the architecture in parallel with a 1-layer LSTM. The linear layer between the LSTM and the prediction layer is augmented with an extra column for each HMM state. The LSTM component of this architecture can be smaller than a standalone LSTM, since it only needs to fill in the gaps in the HMM's predictions. The HMM is written in Python, and the rest of the architecture is in Torch. We also build a joint hybrid model, where the LSTM and HMM are simultaneously trained in Torch. We implemented an HMM Torch module, optimized using stochastic gradient descent rather than FFBS. Similarly to the sequential hybrid model, we concatenate the LSTM outputs with the HMM state probabilities. 
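A PyTorch-flavored sketch of the hybrid output layer described above (the paper's implementation is in Lua Torch, so this is only illustrative, with hypothetical dimensions): the linear layer before the prediction is widened by one column per HMM state, and the HMM posteriors are concatenated to the LSTM output.

```python
import torch
import torch.nn as nn

class HybridOutput(nn.Module):
    def __init__(self, lstm_dim, n_hmm_states, vocab_size):
        super().__init__()
        self.proj = nn.Linear(lstm_dim + n_hmm_states, vocab_size)

    def forward(self, lstm_out, hmm_probs):
        # lstm_out: (batch, lstm_dim); hmm_probs: (batch, n_hmm_states) from the HMM forward pass
        return torch.log_softmax(self.proj(torch.cat([lstm_out, hmm_probs], dim=-1)), dim=-1)

layer = HybridOutput(lstm_dim=10, n_hmm_states=10, vocab_size=50)
out = layer(torch.randn(4, 10), torch.softmax(torch.randn(4, 10), dim=-1))
print(out.shape)                                           # torch.Size([4, 50])
```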
Experiments We test the models on several text data sets on the character level: the Penn Tree Bank (5M characters), and two data sets used by BIBREF4 , Tiny Shakespeare (1M characters) and Linux Kernel (5M characters). We chose $k=20$ for the continuous HMM based on a PCA analysis of the LSTM states, as the first 20 components captured almost all the variance. Table 1 shows the predictive log likelihood of the next text character for each method. On all text data sets, the hybrid algorithm performs a bit better than the standalone LSTM with the same LSTM state dimension. This effect gets smaller as we increase the LSTM size and the HMM makes less difference to the prediction (though it can still make a difference in terms of interpretability). The hybrid algorithm with 20 HMM states does better than the one with 10 HMM states. The joint hybrid algorithm outperforms the sequential hybrid on Shakespeare data, but does worse on PTB and Linux data, which suggests that the joint hybrid is more helpful for smaller data sets. The joint hybrid is an order of magnitude slower than the sequential hybrid, as the SGD-based HMM is slower to train than the FFBS-based HMM. We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data. Conclusion and future work Hybrid HMM-RNN approaches combine the interpretability of HMMs with the predictive power of RNNs. Sometimes, a small hybrid model can perform better than a standalone LSTM of the same size. We use visualizations to show how the LSTM and HMM components of the hybrid algorithm complement each other in terms of features learned in the data.
Decision trees to predict individual hidden state dimensions, k-means clustering applied to the LSTM state vectors, and color-coding of the training data with the resulting clusters
6ea63327ffbab2fc734dd5c2414e59d3acc56ea5
6ea63327ffbab2fc734dd5c2414e59d3acc56ea5_0
Q: How large is the gap in performance between the HMMs and the LSTMs? Text: Introduction Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models could be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque expert system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power. There are several aspects of what makes a model or algorithm understandable to humans. One aspect is model complexity or parsimony. Another aspect is the ability to trace back from a prediction or model component to particularly influential features in the data BIBREF0 BIBREF1 . This could be useful for understanding mistakes made by neural networks, which have human-level performance most of the time, but can perform very poorly on seemingly easy cases. For instance, convolutional networks can misclassify adversarial examples with very high confidence BIBREF2 , and made headlines in 2015 when the image tagging algorithm in Google Photos mislabeled African Americans as gorillas. It's reasonable to expect recurrent networks to fail in similar ways as well. It would thus be useful to have more visibility into where these sorts of errors come from, i.e. which groups of features contribute to such flawed predictions. Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ). We explore several methods for building interpretable models by combining LSTMs and HMMs. The existing body of literature mostly focuses on methods that specifically train the RNN to predict HMM states BIBREF5 or posteriors BIBREF6 , referred to as hybrid or tandem methods respectively. We first investigate an approach that does not require the RNN to be modified in order to make it understandable, as the interpretation happens after the fact. Here, we model the big picture of the state changes in the LSTM, by extracting the hidden states and approximating them with a continuous emission hidden Markov model (HMM). We then take the reverse approach where the HMM state probabilities are added to the output layer of the LSTM (see Figure 1 ). 
The LSTM model can then make use of the information from the HMM, and fill in the gaps when the HMM is not performing well, resulting in an LSTM with a smaller number of hidden state dimensions that could be interpreted individually (Figures 3 , 3 ). Methods We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data). LSTM models We use a character-level LSTM with 1 layer and no dropout, based on the Element-Research library. We train the LSTM for 10 epochs, starting with a learning rate of 1, where the learning rate is halved whenever $\exp (-l_t) > \exp (-l_{t-1}) + 1$ , where $l_t$ is the log likelihood score at epoch $t$ . The $L_2$ -norm of the parameter gradient vector is clipped at a threshold of 5. Hidden Markov models The HMM training procedure is as follows: Initialization of HMM hidden states: (Discrete HMM) Random multinomial draw for each time step (i.i.d. across time steps). (Continuous HMM) K-means clusters fit on LSTM states, to speed up convergence relative to random initialization. At each iteration: Sample states using Forward Filtering Backwards Sampling algorithm (FFBS, BIBREF7 ). Sample transition parameters from a Multinomial-Dirichlet posterior. Let $n_{ij}$ be the number of transitions from state $i$ to state $j$ . Then the posterior distribution of the $i$ -th row of transition matrix $T$ (corresponding to transitions from state $i$ ) is: $T_i \sim \text{Mult}(n_{ij} | T_i) \text{Dir}(T_i | \alpha )$ where $\alpha $ is the Dirichlet hyperparameter. (Continuous HMM) Sample multivariate normal emission parameters from Normal-Inverse-Wishart posterior for state $i$ : $ \mu _i, \Sigma _i \sim N(y|\mu _i, \Sigma _i) N(\mu _i |0, \Sigma _i) \text{IW}(\Sigma _i) $ (Discrete HMM) Sample the emission parameters from a Multinomial-Dirichlet posterior. Evaluation: We evaluate the methods on how well they predict the next observation in the validation set. For the HMM models, we do a forward pass on the validation set (no backward pass unlike the full FFBS), and compute the HMM state distribution vector $p_t$ for each time step $t$ . Then we compute the predictive likelihood for the next observation as follows: $ P(y_{t+1} | p_t) =\sum _{x_t=1}^n \sum _{x_{t+1}=1}^n p_{tx_t} \cdot T_{x_t, x_{t+1}} \cdot P(y_{t+1} | x_{t+1})$ where $n$ is the number of hidden states in the HMM. Hybrid models Our main hybrid model is put together sequentially, as shown in Figure 1 . We first run the discrete HMM on the data, outputting the hidden state distributions obtained by the HMM's forward pass, and then add this information to the architecture in parallel with a 1-layer LSTM. The linear layer between the LSTM and the prediction layer is augmented with an extra column for each HMM state. The LSTM component of this architecture can be smaller than a standalone LSTM, since it only needs to fill in the gaps in the HMM's predictions. The HMM is written in Python, and the rest of the architecture is in Torch. We also build a joint hybrid model, where the LSTM and HMM are simultaneously trained in Torch. We implemented an HMM Torch module, optimized using stochastic gradient descent rather than FFBS. Similarly to the sequential hybrid model, we concatenate the LSTM outputs with the HMM state probabilities. 
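A toy sketch of the learning-rate schedule described above: the rate is halved whenever exp(-l_t) > exp(-l_{t-1}) + 1, where l_t is the epoch-t log likelihood score. The loop and the log-likelihood values are made up for illustration.

```python
import math

def update_learning_rate(lr, loglik_curr, loglik_prev):
    if loglik_prev is not None and math.exp(-loglik_curr) > math.exp(-loglik_prev) + 1:
        return lr / 2.0
    return lr

lr, prev = 1.0, None
for epoch, loglik in enumerate([-1.9, -2.1, -2.0, -2.4], start=1):   # made-up scores
    lr = update_learning_rate(lr, loglik, prev)
    prev = loglik
    print("epoch", epoch, "lr", lr)
```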
Experiments We test the models on several text data sets on the character level: the Penn Tree Bank (5M characters), and two data sets used by BIBREF4 , Tiny Shakespeare (1M characters) and Linux Kernel (5M characters). We chose $k=20$ for the continuous HMM based on a PCA analysis of the LSTM states, as the first 20 components captured almost all the variance. Table 1 shows the predictive log likelihood of the next text character for each method. On all text data sets, the hybrid algorithm performs a bit better than the standalone LSTM with the same LSTM state dimension. This effect gets smaller as we increase the LSTM size and the HMM makes less difference to the prediction (though it can still make a difference in terms of interpretability). The hybrid algorithm with 20 HMM states does better than the one with 10 HMM states. The joint hybrid algorithm outperforms the sequential hybrid on Shakespeare data, but does worse on PTB and Linux data, which suggests that the joint hybrid is more helpful for smaller data sets. The joint hybrid is an order of magnitude slower than the sequential hybrid, as the SGD-based HMM is slower to train than the FFBS-based HMM. We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data. Conclusion and future work Hybrid HMM-RNN approaches combine the interpretability of HMMs with the predictive power of RNNs. Sometimes, a small hybrid model can perform better than a standalone LSTM of the same size. We use visualizations to show how the LSTM and HMM components of the hybrid algorithm complement each other in terms of features learned in the data.
With a similar number of parameters, the log likelihood is about 0.1 lower for LSTMs across datasets. When the number of parameters in LSTMs is increased, their log likelihood is up to 0.7 lower.
50690b72dc61748e0159739a9a0243814d37f360
50690b72dc61748e0159739a9a0243814d37f360_0
Q: Do they report results only on English data? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. 
Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . 
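An illustrative sketch of the TWEM encoder just described (random stand-in weights, not the trained model): each pretrained embedding passes through a per-word MLP with ReLU, the sequence is max- and average-pooled, and the two pooled vectors are concatenated into a 600-dimensional document representation.

```python
import numpy as np

def twem_encode(embeddings, W, b):
    """embeddings: (seq_len, 300) pretrained vectors; W: (300, 300); b: (300,)."""
    u = np.maximum(embeddings @ W + b, 0.0)        # transformed embedding space
    z_max = u.max(axis=0)                          # salient word features
    z_avg = u.mean(axis=0)                         # overall meaning of the post
    return np.concatenate([z_max, z_avg])          # 600-dimensional document representation

rng = np.random.default_rng(0)
emb = rng.standard_normal((12, 300))               # a 12-token tweet
W, b = 0.01 * rng.standard_normal((300, 300)), np.zeros(300)
print(twem_encode(emb, W, b).shape)                # (600,)
```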
We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? 
Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. 
We believe that keeping the case in tact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech.
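A rough sketch of the embedding analysis pipeline described above: rank words by the summed values of their projected embeddings, reduce the top 1000 with PCA to 75 dimensions, cluster with k-means, and compute a silhouette score. The stand-in embeddings and the cluster count of 20 are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
projected = rng.standard_normal((5000, 300))               # stand-in for MLP-projected word embeddings
top_idx = np.argsort(projected.sum(axis=1))[::-1][:1000]   # 1000 most salient words
reduced = PCA(n_components=75).fit_transform(projected[top_idx])
labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(reduced)
print("silhouette coefficient:", silhouette_score(reduced, labels))
```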
Yes
8266642303fbc6a1138b4e23ee1d859a6f584fbb
8266642303fbc6a1138b4e23ee1d859a6f584fbb_0
Q: Which publicly available datasets are used? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. 
Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . 
We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? 
Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. 
We believe that keeping the case intact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space, because it encourages salient words to increase their values in order to be selected. We projected the original pre-trained embeddings into the learned space using the time-distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K-means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights into how hate speech surfaced in the datasets. One clear grouping we found was the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints at the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters and found that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech-related words closer together and words from other clusters further apart, thus structuring the embedding space effectively for detecting hate speech.
BIBREF3, BIBREF4, BIBREF9
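The Word Embeddings, Pooling, and Output components described in the record above can be summarized in a short model sketch. This is a hedged reconstruction, not the authors' implementation: the framework (PyTorch), the lack of padding masking, the dropout placement, and the number of output classes are assumptions, while the 300-dimensional per-token MLP, max/average pooling, concatenation, and 50-node two-layer head follow the text.

import torch
import torch.nn as nn

class TWEM(nn.Module):
    # Sketch of the Transformed Word Embedding Model described above; layer sizes follow
    # the text, other details are assumptions.
    def __init__(self, pretrained_emb, num_classes, dropout=0.1):
        super().__init__()
        # pretrained_emb: (vocab_size, 300) tensor of GloVe vectors, fine-tuned during training.
        self.embedding = nn.Embedding.from_pretrained(pretrained_emb, freeze=False)
        self.token_mlp = nn.Sequential(nn.Linear(300, 300), nn.ReLU())   # per-token transform
        self.head = nn.Sequential(
            nn.Linear(600, 50), nn.ReLU(),    # concat(max-pool, avg-pool) -> first 50-node layer
            nn.Linear(50, 50), nn.ReLU(),     # second 50-node layer
            nn.Dropout(dropout),              # drop rate 0.1 near the output, as in the text
            nn.Linear(50, num_classes),       # softmax is applied inside the cross-entropy loss
        )

    def forward(self, token_ids):                        # token_ids: (batch, seq_len), padded to 50
        h = self.token_mlp(self.embedding(token_ids))    # (batch, seq_len, 300)
        pooled = torch.cat([h.max(dim=1).values, h.mean(dim=1)], dim=1)   # (batch, 600)
        return self.head(pooled)                         # unnormalized class scores

The record reports training with RMSprop at a learning rate of 0.001 and a batch size of 512; for example, torch.optim.RMSprop(model.parameters(), lr=1e-3) paired with nn.CrossEntropyLoss() would match that setup.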
3685bf2409b23c47bfd681989fb4a763bcab6be2
3685bf2409b23c47bfd681989fb4a763bcab6be2_0
Q: What embedding algorithm and dimension size are used? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. 
Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . 
We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? 
Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. 
We believe that keeping the case intact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space, because it encourages salient words to increase their values in order to be selected. We projected the original pre-trained embeddings into the learned space using the time-distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K-means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights into how hate speech surfaced in the datasets. One clear grouping we found was the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints at the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters and found that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech-related words closer together and words from other clusters further apart, thus structuring the embedding space effectively for detecting hate speech.
300 Dimensional Glove
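For the 300-dimensional GloVe Common Crawl (840B-token) vectors referenced in the record above, a minimal loading sketch is shown below. The file name follows the standard GloVe release; the vocab mapping, random initialization for out-of-vocabulary words, and the handling of multi-token keys in the 840B file are assumptions rather than details from the paper.

import numpy as np

def load_glove_matrix(glove_path, vocab, dim=300):
    # Build a (len(vocab), dim) matrix from glove.840B.300d.txt; words missing from GloVe
    # keep a small random vector so they can still be fine-tuned with the rest of the model.
    rng = np.random.default_rng(0)
    matrix = rng.normal(scale=0.1, size=(len(vocab), dim)).astype(np.float32)
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = " ".join(parts[:-dim]), parts[-dim:]   # 840B keys may contain spaces
            if word in vocab:
                matrix[vocab[word]] = np.asarray(values, dtype=np.float32)
    return matrix

# vocab is an assumed {token: row_index} mapping built from the tokenized tweets, e.g.
# embedding_matrix = load_glove_matrix("glove.840B.300d.txt", vocab)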
19225e460fff2ac3aebc7fe31fcb4648eda813fb
19225e460fff2ac3aebc7fe31fcb4648eda813fb_0
Q: What data are the embeddings trained on? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. 
Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . 
We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? 
Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. 
We believe that keeping the case intact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space, because it encourages salient words to increase their values in order to be selected. We projected the original pre-trained embeddings into the learned space using the time-distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K-means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights into how hate speech surfaced in the datasets. One clear grouping we found was the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints at the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters and found that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech-related words closer together and words from other clusters further apart, thus structuring the embedding space effectively for detecting hate speech.
Common Crawl
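The logistic regression baseline described in the experimental setup of the record above (character n-grams and word unigrams with TF*IDF weighting, scored with 10-fold cross-validated weighted F1) could look roughly like the sketch below with scikit-learn. The n-gram range, regularization defaults, and the variable names texts/labels are assumptions, not the authors' exact configuration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline, make_union

# texts: list of raw tweets; labels: corresponding class labels (assumed inputs).
baseline = make_pipeline(
    make_union(
        TfidfVectorizer(analyzer="char", ngram_range=(1, 4)),   # character n-grams (range assumed)
        TfidfVectorizer(analyzer="word", ngram_range=(1, 1)),   # word unigrams
    ),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(baseline, texts, labels, cv=10, scoring="f1_weighted")
print("10-fold weighted F1: %.3f" % scores.mean())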
f37026f518ab56c859f6b80b646d7f19a7b684fa
f37026f518ab56c859f6b80b646d7f19a7b684fa_0
Q: how much was the parameter difference between their model and previous methods? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. 
Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . 
We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? 
Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. 
We believe that keeping the case in tact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech.
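The embedding analysis above (select the 1000 most salient projected embeddings, reduce them with PCA to 75 dimensions, cluster with K-means, and compare silhouette scores against the original space) maps directly onto scikit-learn. In the sketch below the embedding matrices are placeholders, and the number of clusters is an assumption since the paper's value is elided here.

```python
# Sketch of the embedding-space analysis (matrices and cluster count are placeholders).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
vocab = [f"word_{i}" for i in range(5000)]       # placeholder vocabulary
original = rng.normal(size=(5000, 300))          # pre-trained embeddings (placeholder)
projected = rng.normal(size=(5000, 300))         # embeddings after the learned MLP (placeholder)

# 1) Rank words by the sum of their projected dimensions; keep the 1000 most salient.
top = np.argsort(-projected.sum(axis=1))[:1000]

# 2) Reduce the selected projected embeddings from 300 to 75 dimensions.
reduced = PCA(n_components=75, random_state=0).fit_transform(projected[top])

# 3) Cluster both spaces; k is an assumption since the paper's value is elided.
k = 10
labels_proj = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
labels_orig = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(original[top])

# 4) Silhouette comparison: a higher score for the projected space supports the
#    claim that training pushed hate-speech-related words together.
print("projected:", silhouette_score(reduced, labels_proj))
print("original :", silhouette_score(original[top], labels_orig))

# Inspect one cluster by listing some of its words.
print([vocab[i] for i, lab in zip(top, labels_proj) if lab == 0][:15])
```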
our model requires 100k parameters, while BIBREF8 requires 250k parameters
1231934db6adda87c1b15e571468b8e9d225d6fe
1231934db6adda87c1b15e571468b8e9d225d6fe_0
Q: how many parameters did their model use? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. 
Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . 
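One possible PyTorch reading of the TWEM architecture described above -- pre-trained embeddings, a per-token MLP with ReLU, max and average pooling, concatenation, a two-layer 50-node MLP, and a softmax output -- is sketched below. This is not the authors' released code; the vocabulary size, padding handling, and exact placement of dropout are assumptions.

```python
# One possible PyTorch reading of the TWEM architecture (a sketch, not the
# authors' code): embed -> per-token MLP -> max & avg pooling -> 2-layer MLP.
import torch
import torch.nn as nn

class TWEM(nn.Module):
    def __init__(self, pretrained: torch.Tensor, num_classes: int, hidden: int = 50):
        super().__init__()
        # Fine-tunable embeddings initialised from 300-d pre-trained vectors.
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=False)
        dim = pretrained.size(1)
        # 1-layer MLP with ReLU applied to every token embedding.
        self.transform = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        # 2-layer, 50-node MLP over the concatenated pooled representation.
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Dropout(0.1),                 # one reading of "dropout in the final layer"
            nn.Linear(hidden, num_classes),  # softmax is folded into the loss
        )

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer ids, padded to a fixed length.
        u = self.transform(self.embed(token_ids))   # (batch, seq, dim)
        z_max = u.max(dim=1).values                 # salient-word features
        z_avg = u.mean(dim=1)                       # overall sentence meaning
        return self.mlp(torch.cat([z_max, z_avg], dim=-1))

# Toy usage with random "pre-trained" vectors for a 1000-word vocabulary.
pretrained = torch.randn(1000, 300)
model = TWEM(pretrained, num_classes=2)
logits = model(torch.randint(0, 1000, (4, 50)))      # batch of 4 padded tweets
print(logits.shape)                                   # torch.Size([4, 2])
```

Training would minimise cross-entropy over these logits, with RMSprop as described in the experimental setup.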
We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? 
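The Approximate Randomization test used for the significance claims above repeatedly swaps, per example, which system produced which prediction and asks how often the shuffled score gap is at least as large as the observed one. A generic sketch follows; the number of trials and the use of weighted F1 as the test statistic are assumptions.

```python
# Approximate Randomization test for the difference in weighted F1 between
# two systems evaluated on the same test set (generic sketch).
import numpy as np
from sklearn.metrics import f1_score

def approximate_randomization(y_true, pred_a, pred_b, trials=10000, seed=0):
    rng = np.random.default_rng(seed)
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    observed = abs(f1_score(y_true, pred_a, average="weighted")
                   - f1_score(y_true, pred_b, average="weighted"))
    count = 0
    for _ in range(trials):
        # Randomly swap the two systems' predictions on each example.
        swap = rng.random(len(y_true)) < 0.5
        a = np.where(swap, pred_b, pred_a)
        b = np.where(swap, pred_a, pred_b)
        diff = abs(f1_score(y_true, a, average="weighted")
                   - f1_score(y_true, b, average="weighted"))
        if diff >= observed:
            count += 1
    # p-value: how often a random relabelling is at least as extreme as observed.
    return (count + 1) / (trials + 1)

# Toy usage with made-up predictions.
y = [0, 1, 1, 0, 1, 0, 1, 1]
ours = [0, 1, 1, 0, 1, 0, 1, 1]
theirs = [0, 0, 1, 0, 1, 1, 1, 0]
print(approximate_randomization(y, ours, theirs, trials=2000))
```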
Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. 
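On the hashtag failure case discussed above (#notsexist), one cheap way to surface subword evidence, short of full character convolutions, is to add character n-grams over hashtag tokens. The snippet below is an illustration of that idea, not something the paper implements.

```python
# Illustration only: character n-grams over a hashtag expose subword evidence
# such as "sex", "ist", and "sexist" inside "#notsexist".
def hashtag_char_ngrams(token, n_min=3, n_max=6):
    body = token.lstrip("#").lower()
    return [body[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(body) - n + 1)]

print(hashtag_char_ngrams("#notsexist"))
```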
We believe that keeping the case in tact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech.
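The projected space used in this analysis comes from pushing the pre-trained vectors through the trained per-token MLP. A self-contained sketch is given below, with a randomly initialised layer standing in for the trained transform and a placeholder vocabulary.

```python
# Sketch: obtain the projected embeddings and rank words by summed dimensions.
import torch
import torch.nn as nn

vocab = [f"word_{i}" for i in range(5000)]                 # placeholder vocabulary
pretrained = torch.randn(5000, 300)                        # placeholder GloVe matrix
transform = nn.Sequential(nn.Linear(300, 300), nn.ReLU())  # stands in for the trained MLP

with torch.no_grad():
    projected = transform(pretrained)                      # (vocab, 300) projected space
    salience = projected.sum(dim=1)                        # large values survive max pooling
    top = torch.argsort(salience, descending=True)[:1000]  # 1000 most salient words

print([vocab[int(i)] for i in top[:20]])
```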
Excluding the embedding weights, our model requires 100k parameters
81303f605da57ddd836b7c121490b0ebb47c60e7
81303f605da57ddd836b7c121490b0ebb47c60e7_0
Q: which datasets were used? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. 
Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. 
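Loading the 300-dimensional GloVe Common Crawl vectors into a fine-tunable embedding matrix is typically done along the following lines; the file path and the handling of out-of-vocabulary words are assumptions, not the authors' setup.

```python
# Sketch: build a fine-tunable embedding matrix from GloVe Common Crawl vectors.
import numpy as np

def load_glove(path, vocab, dim=300, seed=0):
    rng = np.random.default_rng(seed)
    # Unknown words get small random vectors so they can be tuned during training.
    matrix = rng.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    index = {w: i for i, w in enumerate(vocab)}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            # Skip malformed lines (the 840B file contains a few multi-word tokens).
            if word in index and len(values) == dim:
                matrix[index[word]] = np.asarray(values, dtype="float32")
    return matrix

vocab = ["<pad>", "women", "sexist", "gift", "wrapping"]    # placeholder vocabulary
# embeddings = load_glove("glove.840B.300d.txt", vocab)     # path is an assumption
# The resulting matrix can seed an embedding layer that is fine-tuned during training.
```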
We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. 
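The training and evaluation protocol just described (pad to 50 tokens, RMSprop with learning rate 0.001, batch size 512, 10-fold cross-validation scored with weighted F1) can be skeletonised as below; the model construction inside each fold is a placeholder.

```python
# Skeleton of the training/evaluation protocol; the model itself is a placeholder.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

MAX_LEN, BATCH_SIZE, LR = 50, 512, 1e-3

def pad(ids, max_len=MAX_LEN, pad_id=0):
    """Truncate or right-pad a list of token ids to a fixed length."""
    return (ids[:max_len] + [pad_id] * max_len)[:max_len]

def run_fold(train_idx, test_idx, X, y):
    # Placeholder: build the model, train with RMSprop (lr=LR, batch=BATCH_SIZE),
    # and predict on the held-out fold here.
    return np.asarray(y)[test_idx]          # dummy "predictions" for illustration

X = [pad(list(range(i + 1))) for i in range(100)]      # placeholder token ids
y = [i % 2 for i in range(100)]                         # placeholder labels

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    preds = run_fold(train_idx, test_idx, X, y)
    scores.append(f1_score(np.asarray(y)[test_idx], preds, average="weighted"))
print(f"weighted F1 over 10 folds: {np.mean(scores):.3f}")
```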
False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. We believe that keeping the case in tact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. 
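The vocabulary clustering used above to clarify the model's intuitions can also be probed one token at a time: listing the cosine nearest neighbours of a suspicious token (for example a user-name-like token such as katieandnikki) in the transformed space shows whether the model is keying on dataset idiosyncrasies. A toy sketch with placeholder vectors:

```python
# Toy sketch: cosine nearest neighbours of a token in the transformed space.
import numpy as np

vocab = ["katieandnikki", "mkr", "skank", "blonds", "dinner", "recipe"]   # toy vocabulary
projected = np.random.default_rng(0).normal(size=(len(vocab), 300))       # placeholder vectors

def nearest(word, k=3):
    unit = projected / np.linalg.norm(projected, axis=1, keepdims=True)
    sims = unit @ unit[vocab.index(word)]
    return [vocab[j] for j in np.argsort(-sims) if vocab[j] != word][:k]

print(nearest("katieandnikki"))
```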
Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech.
Sexist/Racist (SR) dataset, HATE dataset, HAR dataset
a3f108f60143d13fe38d911b1cc3b17bdffde3bd
a3f108f60143d13fe38d911b1cc3b17bdffde3bd_0
Q: what was their system's f1 performance? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. 
Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . 
We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? 
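For the in-depth precision and recall results referred to above, scikit-learn's classification report alongside the weighted F1 is the natural tool; the label coding and the predictions below are placeholders.

```python
# Sketch: per-class precision/recall plus the weighted F1 used for comparison,
# computed from one fold's predictions (placeholder arrays).
from sklearn.metrics import classification_report, f1_score

y_true = [0, 0, 1, 1, 2, 2, 2, 0]     # assumed coding of the SR labels
y_pred = [0, 1, 1, 1, 2, 2, 0, 0]

print(classification_report(y_true, y_pred,
                            target_names=["neither", "sexist", "racist"], digits=3))
print("weighted F1:", round(f1_score(y_true, y_pred, average="weighted"), 3))
```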
Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. 
We believe that keeping the case in tact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech.
The proposed model achieves weighted F1 scores of 0.86, 0.924, and 0.71 on the SR, HATE, and HAR datasets, respectively.
118ff1d7000ea0d12289d46430154cc15601fd8e
118ff1d7000ea0d12289d46430154cc15601fd8e_0
Q: what was the baseline? Text: Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. 
Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 . Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. Experimental Setup We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. 
We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters. Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization. 
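The logistic regression baseline described in the experimental setup above (word unigrams plus character n-grams with TF*IDF weighting) can be sketched with scikit-learn as follows; the character n-gram range and the regularization settings are assumptions, not the reported configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

# Word unigrams and character n-grams, both TF*IDF weighted.
features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 1))),
    ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),  # assumed range
])

baseline = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000)),  # assumed solver settings
])

# texts: list of tweets, labels: numeric classes
# baseline.fit(texts, labels); predictions = baseline.predict(test_texts)
```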
False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace: Replaces entities and user names with placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. We believe that keeping the case in tact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of. 
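The preprocessing variants compared in the supplement reduce to a few string operations; a rough sketch is given below. The ENT and USER placeholders follow the examples above, while the specific regular expressions for user mentions and URL-like entities are assumptions.

```python
import re

def token_replace(text: str) -> str:
    """Replace user mentions and URL-like entities with placeholders (assumed patterns)."""
    text = re.sub(r"@\w+", "USER", text)          # @AGuyNamed_Nick -> USER
    text = re.sub(r"https?://\S+", "ENT", text)   # links treated as entities
    return text

def variants(tokens: list) -> dict:
    """The four schemes compared: tokenize, lowercase, replace, replace + lowercase."""
    joined = " ".join(tokens)
    return {
        "tokenize": joined,
        "tokenize_lower": joined.lower(),
        "token_replace": token_replace(joined),
        # lowercase first so the placeholders themselves stay uppercase
        "token_replace_lower": token_replace(joined.lower()),
    }
```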
Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech.
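The embedding analysis pipeline described above (select the most salient projected embeddings, reduce with PCA to 75 dimensions, cluster with K-means, and score with silhouette analysis) can be sketched with scikit-learn. The number of clusters and the projection step, which would come from the trained MLP, are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_salient(projected: np.ndarray, words: list, k: int = 10, top_n: int = 1000):
    """projected: (vocab, 300) embeddings after the learned MLP transform."""
    salience = projected.sum(axis=1)                 # sum of dimensions per word
    top = np.argsort(-salience)[:top_n]              # most salient word embeddings
    reduced = PCA(n_components=75).fit_transform(projected[top])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(reduced)
    score = silhouette_score(reduced, labels)
    clusters = {c: [words[i] for i, l in zip(top, labels) if l == c] for c in range(k)}
    return clusters, score
```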
logistic regression
102a0439739428aac80ac11795e73ce751b93ea1
102a0439739428aac80ac11795e73ce751b93ea1_0
Q: What datasets were used? Text: Introduction Neural machine translation (NMT, § SECREF2 ; kalchbrenner13emnlp, sutskever14nips) is a variant of statistical machine translation (SMT; brown93cl), using neural networks. NMT has recently gained popularity due to its ability to model the translation process end-to-end using a single probabilistic model, and for its state-of-the-art performance on several language pairs BIBREF0 , BIBREF1 . One feature of NMT systems is that they treat each word in the vocabulary as a vector of continuous-valued numbers. This is in contrast to more traditional SMT methods such as phrase-based machine translation (PBMT; koehn03phrasebased), which represent translations as discrete pairs of word strings in the source and target languages. The use of continuous representations is a major advantage, allowing NMT to share statistical power between similar words (e.g. “dog” and “cat”) or contexts (e.g. “this is” and “that is”). However, this property also has a drawback in that NMT systems often mistranslate into words that seem natural in the context, but do not reflect the content of the source sentence. For example, Figure FIGREF2 is a sentence from our data where the NMT system mistakenly translated “Tunisia” into the word for “Norway.” This variety of error is particularly serious because the content words that are often mistranslated by NMT are also the words that play a key role in determining the whole meaning of the sentence. In contrast, PBMT and other traditional SMT methods tend to rarely make this kind of mistake. This is because they base their translations on discrete phrase mappings, which ensure that source words will be translated into a target word that has been observed as a translation at least once in the training data. In addition, because the discrete mappings are memorized explicitly, they can be learned efficiently from as little as a single instance (barring errors in word alignments). Thus we hypothesize that if we can incorporate a similar variety of information into NMT, this has the potential to alleviate problems with the previously mentioned fatal errors on low-frequency words. In this paper, we propose a simple, yet effective method to incorporate discrete, probabilistic lexicons as an additional information source in NMT (§ SECREF3 ). First we demonstrate how to transform lexical translation probabilities (§ SECREF7 ) into a predictive probability for the next word by utilizing attention vectors from attentional NMT models BIBREF2 . We then describe methods to incorporate this probability into NMT, either through linear interpolation with the NMT probabilities (§ UID10 ) or as the bias to the NMT predictive distribution (§ UID9 ). We construct these lexicon probabilities by using traditional word alignment methods on the training data (§ SECREF11 ), other external parallel data resources such as a handmade dictionary (§ SECREF13 ), or using a hybrid between the two (§ SECREF14 ). We perform experiments (§ SECREF5 ) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training. Neural Machine Translation The goal of machine translation is to translate a sequence of source words INLINEFORM0 into a sequence of target words INLINEFORM1 . These words belong to the source vocabulary INLINEFORM2 , and the target vocabulary INLINEFORM3 respectively. 
NMT performs this translation by calculating the conditional probability INLINEFORM4 of the INLINEFORM5 th target word INLINEFORM6 based on the source INLINEFORM7 and the preceding target words INLINEFORM8 . This is done by encoding the context INLINEFORM9 a fixed-width vector INLINEFORM10 , and calculating the probability as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are respectively weight matrix and bias vector parameters. The exact variety of the NMT model depends on how we calculate INLINEFORM0 used as input. While there are many methods to perform this modeling, we opt to use attentional models BIBREF2 , which focus on particular words in the source sentence when calculating the probability of INLINEFORM1 . These models represent the current state of the art in NMT, and are also convenient for use in our proposed method. Specifically, we use the method of luong15emnlp, which we describe briefly here and refer readers to the original paper for details. First, an encoder converts the source sentence INLINEFORM0 into a matrix INLINEFORM1 where each column represents a single word in the input sentence as a continuous vector. This representation is generated using a bidirectional encoder INLINEFORM2 Here the INLINEFORM0 function maps the words into a representation BIBREF3 , and INLINEFORM1 is a stacking long short term memory (LSTM) neural network BIBREF4 , BIBREF5 , BIBREF6 . Finally we concatenate the two vectors INLINEFORM2 and INLINEFORM3 into a bidirectional representation INLINEFORM4 . These vectors are further concatenated into the matrix INLINEFORM5 where the INLINEFORM6 th column corresponds to INLINEFORM7 . Next, we generate the output one word at a time while referencing this encoded input sentence and tracking progress with a decoder LSTM. The decoder's hidden state INLINEFORM0 is a fixed-length continuous vector representing the previous target words INLINEFORM1 , initialized as INLINEFORM2 . Based on this INLINEFORM3 , we calculate a similarity vector INLINEFORM4 , with each element equal to DISPLAYFORM0 INLINEFORM0 can be an arbitrary similarity function, which we set to the dot product, following luong15emnlp. We then normalize this into an attention vector, which weights the amount of focus that we put on each word in the source sentence DISPLAYFORM0 This attention vector is then used to weight the encoded representation INLINEFORM0 to create a context vector INLINEFORM1 for the current time step INLINEFORM2 Finally, we create INLINEFORM0 by concatenating the previous hidden state INLINEFORM1 with the context vector, and performing an affine transform INLINEFORM2 Once we have this representation of the current state, we can calculate INLINEFORM0 according to Equation ( EQREF3 ). The next word INLINEFORM1 is chosen according to this probability, and we update the hidden state by inputting the chosen word into the decoder LSTM DISPLAYFORM0 If we define all the parameters in this model as INLINEFORM0 , we can then train the model by minimizing the negative log-likelihood of the training data INLINEFORM1 Integrating Lexicons into NMT In § SECREF2 we described how traditional NMT models calculate the probability of the next target word INLINEFORM0 . Our goal in this paper is to improve the accuracy of this probability estimate by incorporating information from discrete probabilistic lexicons. We assume that we have a lexicon that, given a source word INLINEFORM1 , assigns a probability INLINEFORM2 to target word INLINEFORM3 . 
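A minimal sketch of one decoding step of the attentional model described above is given below: dot-product scores, a softmax attention vector, a context vector, and a softmax over the target vocabulary. The weight matrices are placeholders, and the tanh nonlinearity follows the usual formulation of this style of attention; the text above describes the step only as an affine transform.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_step(H, g_t, W_c, W_s, b_s):
    """One decoding step of the attentional model (sketch).
    H:   (d, n)  encoded source words, one column per source position
    g_t: (d,)    current decoder hidden state
    """
    scores = H.T @ g_t                                  # dot-product similarity per source word
    a_t = softmax(scores)                               # attention vector over source positions
    c_t = H @ a_t                                       # context vector
    eta_t = np.tanh(W_c @ np.concatenate([g_t, c_t]))   # affine transform over [state; context]
    p_nmt = softmax(W_s @ eta_t + b_s)                  # distribution over the target vocabulary
    return p_nmt, a_t
```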
For a source word INLINEFORM4 , this probability will generally be non-zero for a small number of translation candidates, and zero for the majority of words in INLINEFORM5 . In this section, we first describe how we incorporate these probabilities into NMT, and explain how we actually obtain the INLINEFORM6 probabilities in § SECREF4 . Converting Lexicon Probabilities into Conditioned Predictive Proabilities First, we need to convert lexical probabilities INLINEFORM0 for the individual words in the source sentence INLINEFORM1 to a form that can be used together with INLINEFORM2 . Given input sentence INLINEFORM3 , we can construct a matrix in which each column corresponds to a word in the input sentence, each row corresponds to a word in the INLINEFORM4 , and the entry corresponds to the appropriate lexical probability: INLINEFORM5 This matrix can be precomputed during the encoding stage because it only requires information about the source sentence INLINEFORM0 . Next we convert this matrix into a predictive probability over the next word: INLINEFORM0 . To do so we use the alignment probability INLINEFORM1 from Equation ( EQREF5 ) to weight each column of the INLINEFORM2 matrix: INLINEFORM3 This calculation is similar to the way how attentional models calculate the context vector INLINEFORM0 , but over a vector representing the probabilities of the target vocabulary, instead of the distributed representations of the source words. The process of involving INLINEFORM1 is important because at every time step INLINEFORM2 , the lexical probability INLINEFORM3 will be influenced by different source words. Combining Predictive Probabilities After calculating the lexicon predictive probability INLINEFORM0 , next we need to integrate this probability with the NMT model probability INLINEFORM1 . To do so, we examine two methods: (1) adding it as a bias, and (2) linear interpolation. In our first bias method, we use INLINEFORM0 to bias the probability distribution calculated by the vanilla NMT model. Specifically, we add a small constant INLINEFORM1 to INLINEFORM2 , take the logarithm, and add this adjusted log probability to the input of the softmax as follows: INLINEFORM3 We take the logarithm of INLINEFORM0 so that the values will still be in the probability domain after the softmax is calculated, and add the hyper-parameter INLINEFORM1 to prevent zero probabilities from becoming INLINEFORM2 after taking the log. When INLINEFORM3 is small, the model will be more heavily biased towards using the lexicon, and when INLINEFORM4 is larger the lexicon probabilities will be given less weight. We use INLINEFORM5 for this paper. We also attempt to incorporate the two probabilities through linear interpolation between the standard NMT probability model probability INLINEFORM0 and the lexicon probability INLINEFORM1 . We will call this the linear method, and define it as follows: INLINEFORM2 where INLINEFORM0 is an interpolation coefficient that is the result of the sigmoid function INLINEFORM1 . INLINEFORM2 is a learnable parameter, and the sigmoid function ensures that the final interpolation level falls between 0 and 1. We choose INLINEFORM3 ( INLINEFORM4 ) at the beginning of training. This notation is partly inspired by allamanis16icml and gu16acl who use linear interpolation to merge a standard attentional model with a “copy” operator that copies a source word as-is into the target sentence. 
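The conversion of lexicon probabilities into a predictive distribution and the two combination methods described above can be sketched as follows. The epsilon constant and the initial value of the interpolation parameter are placeholders, since their exact values are not recoverable from the text here.

```python
import numpy as np

def lexicon_predictive(L, a_t):
    """L: (|V_target|, n) matrix of lexical probabilities p_lex(y | x_j), one column per
    source word; a_t: (n,) attention vector. Returns the lexicon predictive distribution."""
    return L @ a_t

def combine_bias(logits, p_lex, eps=1e-3):
    """Bias method: add log(p_lex + eps) to the pre-softmax scores.
    The eps value here is a placeholder, not the paper's reported setting."""
    biased = logits + np.log(p_lex + eps)
    e = np.exp(biased - biased.max())
    return e / e.sum()

def combine_linear(p_nmt, p_lex, gamma_param=0.0):
    """Linear method: interpolate with a learnable scalar passed through a sigmoid.
    The initial value of gamma_param is a placeholder."""
    lam = 1.0 / (1.0 + np.exp(-gamma_param))
    return lam * p_lex + (1.0 - lam) * p_nmt
```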
The main difference is that they use this to copy words into the output while our method uses it to influence the probabilities of all target words. Constructing Lexicon Probabilities In the previous section, we have defined some ways to use predictive probabilities INLINEFORM0 based on word-to-word lexical probabilities INLINEFORM1 . Next, we define three ways to construct these lexical probabilities using automatically learned lexicons, handmade lexicons, or a combination of both. Automatically Learned Lexicons In traditional SMT systems, lexical translation probabilities are generally learned directly from parallel data in an unsupervised fashion using a model such as the IBM models BIBREF7 , BIBREF8 . These models can be used to estimate the alignments and lexical translation probabilities INLINEFORM0 between the tokens of the two languages using the expectation maximization (EM) algorithm. First in the expectation step, the algorithm estimates the expected count INLINEFORM0 . In the maximization step, lexical probabilities are calculated by dividing the expected count by all possible counts: INLINEFORM1 The IBM models vary in level of refinement, with Model 1 relying solely on these lexical probabilities, and latter IBM models (Models 2, 3, 4, 5) introducing more sophisticated models of fertility and relative alignment. Even though IBM models also occasionally have problems when dealing with the rare words (e.g. “garbage collecting” effects BIBREF9 ), traditional SMT systems generally achieve better translation accuracies of low-frequency words than NMT systems BIBREF6 , indicating that these problems are less prominent than they are in NMT. Note that in many cases, NMT limits the target vocabulary BIBREF10 for training speed or memory constraints, resulting in rare words not being covered by the NMT vocabulary INLINEFORM0 . Accordingly, we allocate the remaining probability assigned by the lexicon to the unknown word symbol INLINEFORM1 : DISPLAYFORM0 Manual Lexicons In addition, for many language pairs, broad-coverage handmade dictionaries exist, and it is desirable that we be able to use the information included in them as well. Unlike automatically learned lexicons, however, handmade dictionaries generally do not contain translation probabilities. To construct the probability INLINEFORM0 , we define the set of translations INLINEFORM1 existing in the dictionary for particular source word INLINEFORM2 , and assume a uniform distribution over these words: INLINEFORM3 Following Equation ( EQREF12 ), unknown source words will assign their probability mass to the INLINEFORM0 tag. Hybrid Lexicons Handmade lexicons have broad coverage of words but their probabilities might not be as accurate as the learned ones, particularly if the automatic lexicon is constructed on in-domain data. Thus, we also test a hybrid method where we use the handmade lexicons to complement the automatically learned lexicon. Specifically, inspired by phrase table fill-up used in PBMT systems BIBREF11 , we use the probability of the automatically learned lexicons INLINEFORM1 by default, and fall back to the handmade lexicons INLINEFORM2 only for uncovered words: DISPLAYFORM0 Experiment & Result In this section, we describe experiments we use to evaluate our proposed methods. Settings Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT BIBREF12 and BTEC BIBREF13 . 
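A rough sketch of the manual and hybrid lexicon construction described above is shown below, along with the reallocation of out-of-vocabulary target mass to the unknown symbol. The automatically learned lexicon is taken as a given dictionary of probabilities (for example, estimated from word alignments), and the unknown-symbol handling is simplified.

```python
def manual_lexicon(dictionary):
    """dictionary: source word -> set of target translations (no probabilities).
    Assigns a uniform distribution over the listed translations."""
    return {f: {e: 1.0 / len(es) for e in es} for f, es in dictionary.items() if es}

def hybrid_lexicon(auto, manual):
    """Use the learned lexicon when it covers a source word; fall back to the manual one."""
    lex = dict(manual)
    lex.update(auto)              # entries from the learned lexicon take precedence
    return lex

def restrict_to_vocab(lex, target_vocab, unk="<unk>"):
    """Reallocate probability mass of targets outside the NMT vocabulary to the unknown symbol."""
    out = {}
    for f, dist in lex.items():
        kept = {e: p for e, p in dist.items() if e in target_vocab}
        kept[unk] = kept.get(unk, 0.0) + (1.0 - sum(kept.values()))
        out[f] = kept
    return out
```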
KFTT is a collection of Wikipedia article about city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are depicted in Table TABREF19 . We tokenize English according to the Penn Treebank standard BIBREF14 and lowercase, and tokenize Japanese using KyTea BIBREF15 . We limit training sentence length up to 50 in both experiments and keep the test data at the original length. We replace words of frequency less than a threshold INLINEFORM0 in both languages with the INLINEFORM1 symbol and exclude them from our vocabulary. We choose INLINEFORM2 for BTEC and INLINEFORM3 for KFTT, resulting in INLINEFORM4 k, INLINEFORM5 k for BTEC and INLINEFORM6 k, INLINEFORM7 k for KFTT. NMT Systems: We build the described models using the Chainer toolkit. The depth of the stacking LSTM is INLINEFORM0 and hidden node size INLINEFORM1 . We concatenate the forward and backward encodings (resulting in a 1600 dimension vector) and then perform a linear transformation to 800 dimensions. We train the system using the Adam BIBREF16 optimization method with the default settings: INLINEFORM0 . Additionally, we add dropout BIBREF17 with drop rate INLINEFORM1 at the last layer of each stacking LSTM unit to prevent overfitting. We use a batch size of INLINEFORM2 and we run a total of INLINEFORM3 iterations for all data sets. All of the experiments are conducted on a single GeForce GTX TITAN X GPU with a 12 GB memory cache. At test time, we use beam search with beam size INLINEFORM0 . We follow luong15acl in replacing every unknown token at position INLINEFORM1 with the target token that maximizes the probability INLINEFORM2 . We choose source word INLINEFORM3 according to the highest alignment score in Equation ( EQREF5 ). This unknown word replacement is applied to both baseline and proposed systems. Finally, because NMT models tend to give higher probabilities to shorter sentences BIBREF18 , we discount the probability of INLINEFORM4 token by INLINEFORM5 to correct for this bias. Traditional SMT Systems: We also prepare two traditional SMT systems for comparison: a PBMT system BIBREF19 using Moses BIBREF20 , and a hierarchical phrase-based MT system BIBREF21 using Travatar BIBREF22 , Systems are built using the default settings, with models trained on the training data, and weights tuned on the development data. Lexicons: We use a total of 3 lexicons for the proposed method, and apply bias and linear method for all of them, totaling 6 experiments. The first lexicon (auto) is built on the training data using the automatically learned lexicon method of § SECREF11 separately for both the BTEC and KFTT experiments. Automatic alignment is performed using GIZA++ BIBREF8 . The second lexicon (man) is built using the popular English-Japanese dictionary Eijiro with the manual lexicon method of § SECREF13 . Eijiro contains 104K distinct word-to-word translation entries. The third lexicon (hyb) is built by combining the first and second lexicon with the hybrid method of § SECREF14 . Evaluation: We use standard single reference BLEU-4 BIBREF23 to evaluate the translation performance. Additionally, we also use NIST BIBREF24 , which is a measure that puts a particular focus on low-frequency word strings, and thus is sensitive to the low-frequency words we are focusing on in this paper. 
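The unknown-token replacement applied at test time (substitute each emitted unknown token with the best lexicon translation of the source word receiving the highest attention) can be sketched as follows; the fallback of copying the source word when the lexicon has no entry is an assumption, not a reported detail.

```python
def replace_unknowns(output_tokens, attention, source_tokens, lexicon, unk="<unk>"):
    """attention[i] is the attention vector over source positions at output step i;
    lexicon maps a source word to a dict of target-word probabilities."""
    fixed = []
    for i, tok in enumerate(output_tokens):
        if tok == unk:
            j = max(range(len(source_tokens)), key=lambda k: attention[i][k])
            candidates = lexicon.get(source_tokens[j], {})
            # pick the most probable translation; copy the source word if none is known
            tok = max(candidates, key=candidates.get) if candidates else source_tokens[j]
        fixed.append(tok)
    return fixed
```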
We measure the statistical significant differences between systems using paired bootstrap resampling BIBREF25 with 10,000 iterations and measure statistical significance at the INLINEFORM0 and INLINEFORM1 levels. Additionally, we also calculate the recall of rare words from the references. We define “rare words” as words that appear less than eight times in the target training corpus or references, and measure the percentage of time they are recovered by each translation system. Effect of Integrating Lexicons In this section, we first a detailed examination of the utility of the proposed bias method when used with the auto or hyb lexicons, which empirically gave the best results, and perform a comparison among the other lexicon integration methods in the following section. Table TABREF20 shows the results of these methods, along with the corresponding baselines. First, compared to the baseline attn, our bias method achieved consistently higher scores on both test sets. In particular, the gains on the more difficult KFTT set are large, up to 2.3 BLEU, 0.44 NIST, and 30% Recall, demonstrating the utility of the proposed method in the face of more diverse content and fewer high-frequency words. Compared to the traditional pbmt systems hiero, particularly on KFTT we can see that the proposed method allows the NMT system to exceed the traditional SMT methods in BLEU. This is despite the fact that we are not performing ensembling, which has proven to be essential to exceed traditional systems in several previous works BIBREF6 , BIBREF0 , BIBREF1 . Interestingly, despite gains in BLEU, the NMT methods still fall behind in NIST score on the KFTT data set, demonstrating that traditional SMT systems still tend to have a small advantage in translating lower-frequency words, despite the gains made by the proposed method. In Table TABREF27 , we show some illustrative examples where the proposed method (auto-bias) was able to obtain a correct translation while the normal attentional model was not. The first example is a mistake in translating “extramarital affairs” into the Japanese equivalent of “soccer,” entirely changing the main topic of the sentence. This is typical of the errors that we have observed NMT systems make (the mistake from Figure FIGREF2 is also from attn, and was fixed by our proposed method). The second example demonstrates how these mistakes can then affect the process of choosing the remaining words, propagating the error through the whole sentence. Next, we examine the effect of the proposed method on the training time for each neural MT method, drawing training curves for the KFTT data in Figure FIGREF26 . Here we can see that the proposed bias training methods achieve reasonable BLEU scores in the upper 10s even after the first iteration. In contrast, the baseline attn method has a BLEU score of around 5 after the first iteration, and takes significantly longer to approach values close to its maximal accuracy. This shows that by incorporating lexical probabilities, we can effectively bootstrap the learning of the NMT system, allowing it to approach an appropriate answer in a more timely fashion. It is also interesting to examine the alignment vectors produced by the baseline and proposed methods, a visualization of which we show in Figure FIGREF29 . For this sentence, the outputs of both methods were both identical and correct, but we can see that the proposed method (right) placed sharper attention on the actual source word corresponding to content words in the target sentence. 
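The rare-word recall measure introduced at the start of this passage can be computed with a short function; tokenization and tie handling are simplified, and the threshold of eight follows the definition above.

```python
from collections import Counter

def rare_word_recall(references, hypotheses, train_target, threshold=8):
    """Fraction of rare reference words (training frequency below the threshold)
    that appear in the corresponding system output."""
    freq = Counter(w for sent in train_target for w in sent)
    found, total = 0, 0
    for ref, hyp in zip(references, hypotheses):
        hyp_set = set(hyp)
        for w in ref:
            if freq[w] < threshold:
                total += 1
                found += w in hyp_set
    return found / total if total else 0.0
```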
This trend of peakier attention distributions in the proposed method held throughout the corpus, with the per-word entropy of the attention vectors being 3.23 bits for auto-bias, compared with 3.81 bits for attn, indicating that the auto-bias method places more certainty in its attention decisions. Comparison of Integration Methods Finally, we perform a full comparison between the various methods for integrating lexicons into the translation process, with results shown in Table TABREF31 . In general the bias method improves accuracy for the auto and hyb lexicon, but is less effective for the man lexicon. This is likely due to the fact that the manual lexicon, despite having broad coverage, did not sufficiently cover target-domain words (coverage of unique words in the source vocabulary was 35.3% and 9.7% for BTEC and KFTT respectively). Interestingly, the trend is reversed for the linear method, with it improving man systems, but causing decreases when using the auto and hyb lexicons. This indicates that the linear method is more suited for cases where the lexicon does not closely match the target domain, and plays a more complementary role. Compared to the log-linear modeling of bias, which strictly enforces constraints imposed by the lexicon distribution BIBREF27 , linear interpolation is intuitively more appropriate for integrating this type of complimentary information. On the other hand, the performance of linear interpolation was generally lower than that of the bias method. One potential reason for this is the fact that we use a constant interpolation coefficient that was set fixed in every context. gu16acl have recently developed methods to use the context information from the decoder to calculate the different interpolation coefficients for every decoding step, and it is possible that introducing these methods would improve our results. Additional Experiments To test whether the proposed method is useful on larger data sets, we also performed follow-up experiments on the larger Japanese-English ASPEC dataset BIBREF28 that consist of 2 million training examples, 63 million tokens, and 81,000 vocabulary size. We gained an improvement in BLEU score from 20.82 using the attn baseline to 22.66 using the auto-bias proposed method. This experiment shows that our method scales to larger datasets. Related Work From the beginning of work on NMT, unknown words that do not exist in the system vocabulary have been focused on as a weakness of these systems. Early methods to handle these unknown words replaced them with appropriate words in the target vocabulary BIBREF10 , BIBREF29 according to a lexicon similar to the one used in this work. In contrast to our work, these only handle unknown words and do not incorporate information from the lexicon in the learning procedure. There have also been other approaches that incorporate models that learn when to copy words as-is into the target language BIBREF30 , BIBREF31 , BIBREF32 . These models are similar to the linear approach of § UID10 , but are only applicable to words that can be copied as-is into the target language. In fact, these models can be thought of as a subclass of the proposed approach that use a lexicon that assigns a all its probability to target words that are the same as the source. On the other hand, while we are simply using a static interpolation coefficient INLINEFORM0 , these works generally have a more sophisticated method for choosing the interpolation between the standard and “copy” models. 
Incorporating these into our linear method is a promising avenue for future work. In addition mi16acl have also recently proposed a similar approach by limiting the number of vocabulary being predicted by each batch or sentence. This vocabulary is made by considering the original HMM alignments gathered from the training corpus. Basically, this method is a specific version of our bias method that gives some of the vocabulary a bias of negative infinity and all other vocabulary a uniform distribution. Our method improves over this by considering actual translation probabilities, and also considering the attention vector when deciding how to combine these probabilities. Finally, there have been a number of recent works that improve accuracy of low-frequency words using character-based translation models BIBREF33 , BIBREF34 , BIBREF35 . However, luong16acl have found that even when using character-based models, incorporating information about words allows for gains in translation accuracy, and it is likely that our lexicon-based method could result in improvements in these hybrid systems as well. Conclusion & Future Work In this paper, we have proposed a method to incorporate discrete probabilistic lexicons into NMT systems to solve the difficulties that NMT systems have demonstrated with low-frequency words. As a result, we achieved substantial increases in BLEU (2.0-2.3) and NIST (0.13-0.44) scores, and observed qualitative improvements in the translations of content words. For future work, we are interested in conducting the experiments on larger-scale translation tasks. We also plan to do subjective evaluation, as we expect that improvements in content word translation are critical to subjective impressions of translation results. Finally, we are also interested in improvements to the linear method where INLINEFORM0 is calculated based on the context, instead of using a fixed value. Acknowledgment We thank Makoto Morishita and Yusuke Oda for their help in this project. We also thank the faculty members of AHC lab for their supports and suggestions. This work was supported by grants from the Ministry of Education, Culture, Sport, Science, and Technology of Japan and in part by JSPS KAKENHI Grant Number 16H05873.
KFTT BIBREF12 and BTEC BIBREF13
d9c26c1bfb3830c9f3dbcccf4c8ecbcd3cb54404
d9c26c1bfb3830c9f3dbcccf4c8ecbcd3cb54404_0
Q: What language pairs did they experiment with? Text: Introduction Neural machine translation (NMT, § SECREF2 ; kalchbrenner13emnlp, sutskever14nips) is a variant of statistical machine translation (SMT; brown93cl), using neural networks. NMT has recently gained popularity due to its ability to model the translation process end-to-end using a single probabilistic model, and for its state-of-the-art performance on several language pairs BIBREF0 , BIBREF1 . One feature of NMT systems is that they treat each word in the vocabulary as a vector of continuous-valued numbers. This is in contrast to more traditional SMT methods such as phrase-based machine translation (PBMT; koehn03phrasebased), which represent translations as discrete pairs of word strings in the source and target languages. The use of continuous representations is a major advantage, allowing NMT to share statistical power between similar words (e.g. “dog” and “cat”) or contexts (e.g. “this is” and “that is”). However, this property also has a drawback in that NMT systems often mistranslate into words that seem natural in the context, but do not reflect the content of the source sentence. For example, Figure FIGREF2 is a sentence from our data where the NMT system mistakenly translated “Tunisia” into the word for “Norway.” This variety of error is particularly serious because the content words that are often mistranslated by NMT are also the words that play a key role in determining the whole meaning of the sentence. In contrast, PBMT and other traditional SMT methods tend to rarely make this kind of mistake. This is because they base their translations on discrete phrase mappings, which ensure that source words will be translated into a target word that has been observed as a translation at least once in the training data. In addition, because the discrete mappings are memorized explicitly, they can be learned efficiently from as little as a single instance (barring errors in word alignments). Thus we hypothesize that if we can incorporate a similar variety of information into NMT, this has the potential to alleviate problems with the previously mentioned fatal errors on low-frequency words. In this paper, we propose a simple, yet effective method to incorporate discrete, probabilistic lexicons as an additional information source in NMT (§ SECREF3 ). First we demonstrate how to transform lexical translation probabilities (§ SECREF7 ) into a predictive probability for the next word by utilizing attention vectors from attentional NMT models BIBREF2 . We then describe methods to incorporate this probability into NMT, either through linear interpolation with the NMT probabilities (§ UID10 ) or as the bias to the NMT predictive distribution (§ UID9 ). We construct these lexicon probabilities by using traditional word alignment methods on the training data (§ SECREF11 ), other external parallel data resources such as a handmade dictionary (§ SECREF13 ), or using a hybrid between the two (§ SECREF14 ). We perform experiments (§ SECREF5 ) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training. Neural Machine Translation The goal of machine translation is to translate a sequence of source words INLINEFORM0 into a sequence of target words INLINEFORM1 . These words belong to the source vocabulary INLINEFORM2 , and the target vocabulary INLINEFORM3 respectively. 
NMT performs this translation by calculating the conditional probability INLINEFORM4 of the INLINEFORM5 th target word INLINEFORM6 based on the source INLINEFORM7 and the preceding target words INLINEFORM8 . This is done by encoding the context INLINEFORM9 a fixed-width vector INLINEFORM10 , and calculating the probability as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are respectively weight matrix and bias vector parameters. The exact variety of the NMT model depends on how we calculate INLINEFORM0 used as input. While there are many methods to perform this modeling, we opt to use attentional models BIBREF2 , which focus on particular words in the source sentence when calculating the probability of INLINEFORM1 . These models represent the current state of the art in NMT, and are also convenient for use in our proposed method. Specifically, we use the method of luong15emnlp, which we describe briefly here and refer readers to the original paper for details. First, an encoder converts the source sentence INLINEFORM0 into a matrix INLINEFORM1 where each column represents a single word in the input sentence as a continuous vector. This representation is generated using a bidirectional encoder INLINEFORM2 Here the INLINEFORM0 function maps the words into a representation BIBREF3 , and INLINEFORM1 is a stacking long short term memory (LSTM) neural network BIBREF4 , BIBREF5 , BIBREF6 . Finally we concatenate the two vectors INLINEFORM2 and INLINEFORM3 into a bidirectional representation INLINEFORM4 . These vectors are further concatenated into the matrix INLINEFORM5 where the INLINEFORM6 th column corresponds to INLINEFORM7 . Next, we generate the output one word at a time while referencing this encoded input sentence and tracking progress with a decoder LSTM. The decoder's hidden state INLINEFORM0 is a fixed-length continuous vector representing the previous target words INLINEFORM1 , initialized as INLINEFORM2 . Based on this INLINEFORM3 , we calculate a similarity vector INLINEFORM4 , with each element equal to DISPLAYFORM0 INLINEFORM0 can be an arbitrary similarity function, which we set to the dot product, following luong15emnlp. We then normalize this into an attention vector, which weights the amount of focus that we put on each word in the source sentence DISPLAYFORM0 This attention vector is then used to weight the encoded representation INLINEFORM0 to create a context vector INLINEFORM1 for the current time step INLINEFORM2 Finally, we create INLINEFORM0 by concatenating the previous hidden state INLINEFORM1 with the context vector, and performing an affine transform INLINEFORM2 Once we have this representation of the current state, we can calculate INLINEFORM0 according to Equation ( EQREF3 ). The next word INLINEFORM1 is chosen according to this probability, and we update the hidden state by inputting the chosen word into the decoder LSTM DISPLAYFORM0 If we define all the parameters in this model as INLINEFORM0 , we can then train the model by minimizing the negative log-likelihood of the training data INLINEFORM1 Integrating Lexicons into NMT In § SECREF2 we described how traditional NMT models calculate the probability of the next target word INLINEFORM0 . Our goal in this paper is to improve the accuracy of this probability estimate by incorporating information from discrete probabilistic lexicons. We assume that we have a lexicon that, given a source word INLINEFORM1 , assigns a probability INLINEFORM2 to target word INLINEFORM3 . 
For a source word INLINEFORM4 , this probability will generally be non-zero for a small number of translation candidates, and zero for the majority of words in INLINEFORM5 . In this section, we first describe how we incorporate these probabilities into NMT, and explain how we actually obtain the INLINEFORM6 probabilities in § SECREF4 . Converting Lexicon Probabilities into Conditioned Predictive Proabilities First, we need to convert lexical probabilities INLINEFORM0 for the individual words in the source sentence INLINEFORM1 to a form that can be used together with INLINEFORM2 . Given input sentence INLINEFORM3 , we can construct a matrix in which each column corresponds to a word in the input sentence, each row corresponds to a word in the INLINEFORM4 , and the entry corresponds to the appropriate lexical probability: INLINEFORM5 This matrix can be precomputed during the encoding stage because it only requires information about the source sentence INLINEFORM0 . Next we convert this matrix into a predictive probability over the next word: INLINEFORM0 . To do so we use the alignment probability INLINEFORM1 from Equation ( EQREF5 ) to weight each column of the INLINEFORM2 matrix: INLINEFORM3 This calculation is similar to the way how attentional models calculate the context vector INLINEFORM0 , but over a vector representing the probabilities of the target vocabulary, instead of the distributed representations of the source words. The process of involving INLINEFORM1 is important because at every time step INLINEFORM2 , the lexical probability INLINEFORM3 will be influenced by different source words. Combining Predictive Probabilities After calculating the lexicon predictive probability INLINEFORM0 , next we need to integrate this probability with the NMT model probability INLINEFORM1 . To do so, we examine two methods: (1) adding it as a bias, and (2) linear interpolation. In our first bias method, we use INLINEFORM0 to bias the probability distribution calculated by the vanilla NMT model. Specifically, we add a small constant INLINEFORM1 to INLINEFORM2 , take the logarithm, and add this adjusted log probability to the input of the softmax as follows: INLINEFORM3 We take the logarithm of INLINEFORM0 so that the values will still be in the probability domain after the softmax is calculated, and add the hyper-parameter INLINEFORM1 to prevent zero probabilities from becoming INLINEFORM2 after taking the log. When INLINEFORM3 is small, the model will be more heavily biased towards using the lexicon, and when INLINEFORM4 is larger the lexicon probabilities will be given less weight. We use INLINEFORM5 for this paper. We also attempt to incorporate the two probabilities through linear interpolation between the standard NMT probability model probability INLINEFORM0 and the lexicon probability INLINEFORM1 . We will call this the linear method, and define it as follows: INLINEFORM2 where INLINEFORM0 is an interpolation coefficient that is the result of the sigmoid function INLINEFORM1 . INLINEFORM2 is a learnable parameter, and the sigmoid function ensures that the final interpolation level falls between 0 and 1. We choose INLINEFORM3 ( INLINEFORM4 ) at the beginning of training. This notation is partly inspired by allamanis16icml and gu16acl who use linear interpolation to merge a standard attentional model with a “copy” operator that copies a source word as-is into the target sentence. 
The main difference is that they use this to copy words into the output while our method uses it to influence the probabilities of all target words. Constructing Lexicon Probabilities In the previous section, we have defined some ways to use predictive probabilities INLINEFORM0 based on word-to-word lexical probabilities INLINEFORM1 . Next, we define three ways to construct these lexical probabilities using automatically learned lexicons, handmade lexicons, or a combination of both. Automatically Learned Lexicons In traditional SMT systems, lexical translation probabilities are generally learned directly from parallel data in an unsupervised fashion using a model such as the IBM models BIBREF7 , BIBREF8 . These models can be used to estimate the alignments and lexical translation probabilities INLINEFORM0 between the tokens of the two languages using the expectation maximization (EM) algorithm. First in the expectation step, the algorithm estimates the expected count INLINEFORM0 . In the maximization step, lexical probabilities are calculated by dividing the expected count by all possible counts: INLINEFORM1 The IBM models vary in level of refinement, with Model 1 relying solely on these lexical probabilities, and latter IBM models (Models 2, 3, 4, 5) introducing more sophisticated models of fertility and relative alignment. Even though IBM models also occasionally have problems when dealing with the rare words (e.g. “garbage collecting” effects BIBREF9 ), traditional SMT systems generally achieve better translation accuracies of low-frequency words than NMT systems BIBREF6 , indicating that these problems are less prominent than they are in NMT. Note that in many cases, NMT limits the target vocabulary BIBREF10 for training speed or memory constraints, resulting in rare words not being covered by the NMT vocabulary INLINEFORM0 . Accordingly, we allocate the remaining probability assigned by the lexicon to the unknown word symbol INLINEFORM1 : DISPLAYFORM0 Manual Lexicons In addition, for many language pairs, broad-coverage handmade dictionaries exist, and it is desirable that we be able to use the information included in them as well. Unlike automatically learned lexicons, however, handmade dictionaries generally do not contain translation probabilities. To construct the probability INLINEFORM0 , we define the set of translations INLINEFORM1 existing in the dictionary for particular source word INLINEFORM2 , and assume a uniform distribution over these words: INLINEFORM3 Following Equation ( EQREF12 ), unknown source words will assign their probability mass to the INLINEFORM0 tag. Hybrid Lexicons Handmade lexicons have broad coverage of words but their probabilities might not be as accurate as the learned ones, particularly if the automatic lexicon is constructed on in-domain data. Thus, we also test a hybrid method where we use the handmade lexicons to complement the automatically learned lexicon. Specifically, inspired by phrase table fill-up used in PBMT systems BIBREF11 , we use the probability of the automatically learned lexicons INLINEFORM1 by default, and fall back to the handmade lexicons INLINEFORM2 only for uncovered words: DISPLAYFORM0 Experiment & Result In this section, we describe experiments we use to evaluate our proposed methods. Settings Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT BIBREF12 and BTEC BIBREF13 . 
KFTT is a collection of Wikipedia article about city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are depicted in Table TABREF19 . We tokenize English according to the Penn Treebank standard BIBREF14 and lowercase, and tokenize Japanese using KyTea BIBREF15 . We limit training sentence length up to 50 in both experiments and keep the test data at the original length. We replace words of frequency less than a threshold INLINEFORM0 in both languages with the INLINEFORM1 symbol and exclude them from our vocabulary. We choose INLINEFORM2 for BTEC and INLINEFORM3 for KFTT, resulting in INLINEFORM4 k, INLINEFORM5 k for BTEC and INLINEFORM6 k, INLINEFORM7 k for KFTT. NMT Systems: We build the described models using the Chainer toolkit. The depth of the stacking LSTM is INLINEFORM0 and hidden node size INLINEFORM1 . We concatenate the forward and backward encodings (resulting in a 1600 dimension vector) and then perform a linear transformation to 800 dimensions. We train the system using the Adam BIBREF16 optimization method with the default settings: INLINEFORM0 . Additionally, we add dropout BIBREF17 with drop rate INLINEFORM1 at the last layer of each stacking LSTM unit to prevent overfitting. We use a batch size of INLINEFORM2 and we run a total of INLINEFORM3 iterations for all data sets. All of the experiments are conducted on a single GeForce GTX TITAN X GPU with a 12 GB memory cache. At test time, we use beam search with beam size INLINEFORM0 . We follow luong15acl in replacing every unknown token at position INLINEFORM1 with the target token that maximizes the probability INLINEFORM2 . We choose source word INLINEFORM3 according to the highest alignment score in Equation ( EQREF5 ). This unknown word replacement is applied to both baseline and proposed systems. Finally, because NMT models tend to give higher probabilities to shorter sentences BIBREF18 , we discount the probability of INLINEFORM4 token by INLINEFORM5 to correct for this bias. Traditional SMT Systems: We also prepare two traditional SMT systems for comparison: a PBMT system BIBREF19 using Moses BIBREF20 , and a hierarchical phrase-based MT system BIBREF21 using Travatar BIBREF22 , Systems are built using the default settings, with models trained on the training data, and weights tuned on the development data. Lexicons: We use a total of 3 lexicons for the proposed method, and apply bias and linear method for all of them, totaling 6 experiments. The first lexicon (auto) is built on the training data using the automatically learned lexicon method of § SECREF11 separately for both the BTEC and KFTT experiments. Automatic alignment is performed using GIZA++ BIBREF8 . The second lexicon (man) is built using the popular English-Japanese dictionary Eijiro with the manual lexicon method of § SECREF13 . Eijiro contains 104K distinct word-to-word translation entries. The third lexicon (hyb) is built by combining the first and second lexicon with the hybrid method of § SECREF14 . Evaluation: We use standard single reference BLEU-4 BIBREF23 to evaluate the translation performance. Additionally, we also use NIST BIBREF24 , which is a measure that puts a particular focus on low-frequency word strings, and thus is sensitive to the low-frequency words we are focusing on in this paper. 
We measure the statistical significant differences between systems using paired bootstrap resampling BIBREF25 with 10,000 iterations and measure statistical significance at the INLINEFORM0 and INLINEFORM1 levels. Additionally, we also calculate the recall of rare words from the references. We define “rare words” as words that appear less than eight times in the target training corpus or references, and measure the percentage of time they are recovered by each translation system. Effect of Integrating Lexicons In this section, we first a detailed examination of the utility of the proposed bias method when used with the auto or hyb lexicons, which empirically gave the best results, and perform a comparison among the other lexicon integration methods in the following section. Table TABREF20 shows the results of these methods, along with the corresponding baselines. First, compared to the baseline attn, our bias method achieved consistently higher scores on both test sets. In particular, the gains on the more difficult KFTT set are large, up to 2.3 BLEU, 0.44 NIST, and 30% Recall, demonstrating the utility of the proposed method in the face of more diverse content and fewer high-frequency words. Compared to the traditional pbmt systems hiero, particularly on KFTT we can see that the proposed method allows the NMT system to exceed the traditional SMT methods in BLEU. This is despite the fact that we are not performing ensembling, which has proven to be essential to exceed traditional systems in several previous works BIBREF6 , BIBREF0 , BIBREF1 . Interestingly, despite gains in BLEU, the NMT methods still fall behind in NIST score on the KFTT data set, demonstrating that traditional SMT systems still tend to have a small advantage in translating lower-frequency words, despite the gains made by the proposed method. In Table TABREF27 , we show some illustrative examples where the proposed method (auto-bias) was able to obtain a correct translation while the normal attentional model was not. The first example is a mistake in translating “extramarital affairs” into the Japanese equivalent of “soccer,” entirely changing the main topic of the sentence. This is typical of the errors that we have observed NMT systems make (the mistake from Figure FIGREF2 is also from attn, and was fixed by our proposed method). The second example demonstrates how these mistakes can then affect the process of choosing the remaining words, propagating the error through the whole sentence. Next, we examine the effect of the proposed method on the training time for each neural MT method, drawing training curves for the KFTT data in Figure FIGREF26 . Here we can see that the proposed bias training methods achieve reasonable BLEU scores in the upper 10s even after the first iteration. In contrast, the baseline attn method has a BLEU score of around 5 after the first iteration, and takes significantly longer to approach values close to its maximal accuracy. This shows that by incorporating lexical probabilities, we can effectively bootstrap the learning of the NMT system, allowing it to approach an appropriate answer in a more timely fashion. It is also interesting to examine the alignment vectors produced by the baseline and proposed methods, a visualization of which we show in Figure FIGREF29 . For this sentence, the outputs of both methods were both identical and correct, but we can see that the proposed method (right) placed sharper attention on the actual source word corresponding to content words in the target sentence. 
This trend of peakier attention distributions in the proposed method held throughout the corpus, with the per-word entropy of the attention vectors being 3.23 bits for auto-bias, compared with 3.81 bits for attn, indicating that the auto-bias method places more certainty in its attention decisions. Comparison of Integration Methods Finally, we perform a full comparison between the various methods for integrating lexicons into the translation process, with results shown in Table TABREF31 . In general the bias method improves accuracy for the auto and hyb lexicon, but is less effective for the man lexicon. This is likely due to the fact that the manual lexicon, despite having broad coverage, did not sufficiently cover target-domain words (coverage of unique words in the source vocabulary was 35.3% and 9.7% for BTEC and KFTT respectively). Interestingly, the trend is reversed for the linear method, with it improving man systems, but causing decreases when using the auto and hyb lexicons. This indicates that the linear method is more suited for cases where the lexicon does not closely match the target domain, and plays a more complementary role. Compared to the log-linear modeling of bias, which strictly enforces constraints imposed by the lexicon distribution BIBREF27 , linear interpolation is intuitively more appropriate for integrating this type of complimentary information. On the other hand, the performance of linear interpolation was generally lower than that of the bias method. One potential reason for this is the fact that we use a constant interpolation coefficient that was set fixed in every context. gu16acl have recently developed methods to use the context information from the decoder to calculate the different interpolation coefficients for every decoding step, and it is possible that introducing these methods would improve our results. Additional Experiments To test whether the proposed method is useful on larger data sets, we also performed follow-up experiments on the larger Japanese-English ASPEC dataset BIBREF28 that consist of 2 million training examples, 63 million tokens, and 81,000 vocabulary size. We gained an improvement in BLEU score from 20.82 using the attn baseline to 22.66 using the auto-bias proposed method. This experiment shows that our method scales to larger datasets. Related Work From the beginning of work on NMT, unknown words that do not exist in the system vocabulary have been focused on as a weakness of these systems. Early methods to handle these unknown words replaced them with appropriate words in the target vocabulary BIBREF10 , BIBREF29 according to a lexicon similar to the one used in this work. In contrast to our work, these only handle unknown words and do not incorporate information from the lexicon in the learning procedure. There have also been other approaches that incorporate models that learn when to copy words as-is into the target language BIBREF30 , BIBREF31 , BIBREF32 . These models are similar to the linear approach of § UID10 , but are only applicable to words that can be copied as-is into the target language. In fact, these models can be thought of as a subclass of the proposed approach that use a lexicon that assigns a all its probability to target words that are the same as the source. On the other hand, while we are simply using a static interpolation coefficient INLINEFORM0 , these works generally have a more sophisticated method for choosing the interpolation between the standard and “copy” models. 
Incorporating these into our linear method is a promising avenue for future work. In addition, mi16acl have also recently proposed a similar approach that limits the vocabulary predicted for each batch or sentence. This vocabulary is constructed by considering the original HMM alignments gathered from the training corpus. Essentially, this method is a specific version of our bias method that gives part of the vocabulary a bias of negative infinity and all other vocabulary items a uniform distribution. Our method improves over this by considering actual translation probabilities, and also considering the attention vector when deciding how to combine these probabilities. Finally, there have been a number of recent works that improve the accuracy of low-frequency words using character-based translation models BIBREF33 , BIBREF34 , BIBREF35 . However, luong16acl have found that even when using character-based models, incorporating information about words allows for gains in translation accuracy, and it is likely that our lexicon-based method could result in improvements in these hybrid systems as well. Conclusion & Future Work In this paper, we have proposed a method to incorporate discrete probabilistic lexicons into NMT systems to solve the difficulties that NMT systems have demonstrated with low-frequency words. As a result, we achieved substantial increases in BLEU (2.0-2.3) and NIST (0.13-0.44) scores, and observed qualitative improvements in the translations of content words. For future work, we are interested in conducting experiments on larger-scale translation tasks. We also plan to conduct a subjective evaluation, as we expect that improvements in content word translation are critical to subjective impressions of translation results. Finally, we are also interested in improvements to the linear method where INLINEFORM0 is calculated based on the context, instead of using a fixed value. Acknowledgment We thank Makoto Morishita and Yusuke Oda for their help in this project. We also thank the faculty members of the AHC lab for their support and suggestions. This work was supported by grants from the Ministry of Education, Culture, Sports, Science, and Technology of Japan and in part by JSPS KAKENHI Grant Number 16H05873.
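As a supplementary illustration of the significance testing used in the evaluation above, here is a minimal sketch of paired bootstrap resampling over a test set. It is a simplification, not the authors' implementation: the corpus-level scoring function is left abstract (e.g. corpus BLEU), and all names are assumptions.

```python
import random

def paired_bootstrap(sys_a, sys_b, refs, corpus_score, n_iters=10000, seed=0):
    """Paired bootstrap resampling: estimate how often system A beats system B
    when test sentences are resampled with replacement.

    sys_a, sys_b: lists of hypothesis sentences from the two systems
    refs:         list of reference sentences (aligned with the hypotheses)
    corpus_score: callable(hypotheses, references) -> float, e.g. corpus BLEU
    """
    rng = random.Random(seed)
    n = len(refs)
    wins_a = 0
    for _ in range(n_iters):
        idx = [rng.randrange(n) for _ in range(n)]  # resample sentence indices with replacement
        sample_a = [sys_a[i] for i in idx]
        sample_b = [sys_b[i] for i in idx]
        sample_r = [refs[i] for i in idx]
        if corpus_score(sample_a, sample_r) > corpus_score(sample_b, sample_r):
            wins_a += 1
    # Fraction of resamples in which A did not win; small values indicate significance.
    return 1.0 - wins_a / n_iters
```

Under this sketch, a difference would be reported as significant at a given level if the returned fraction falls below that level.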
English-Japanese
04f72eddb1fc73dd11135a80ca1cf31e9db75578
04f72eddb1fc73dd11135a80ca1cf31e9db75578_0
Q: How much more coverage is in the new dataset? Text: Introduction Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role. Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline. Background — QA-SRL ::: Specifications In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such question captures the corresponding semantic role with a natural easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4). Background — QA-SRL ::: Corpora The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. 
Each verb in the corpus was annotated by a single QA-generating worker and validated by two others. In a reserved part of the corpus (Dense), targeted for parser evaluation, verbs were densely validated with 5 workers, approving questions judged as valid by at least 4/5 validators. Notably, adding validators to the Dense annotation pipeline accounts mostly for precision errors, while role coverage solely relies upon the single generator's set of questions. As both 2015 and 2018 datasets use a single question generator, both struggle with maintaining coverage. Also noteworthy, is that while traditional SRL annotations contain a single authoritative and non-redundant annotation, the 2018 dataset provides the raw annotations of all annotators. These include many overlapping or noisy answers, without settling on consolidation procedures to provide a single gold reference. We found that these characteristics of the dataset impede its utility for future development of parsers. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Screening and Training Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Annotation We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Guidelines Refinements We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4). Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Data & Cost We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. QA generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to reported 2.3 valid roles with an approximated cost of 51¢ per predicate for Dense. Annotation and Evaluation Methods ::: Evaluation Metrics Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics. Unlabeled Argument Detection (UA) Inspired by the method presented in BIBREF5, arguments are matched using a span matching criterion of intersection over union $\ge 0.5$ . 
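A minimal sketch of the intersection-over-union criterion just described, for token-index spans, could look as follows; the span representation and names are illustrative assumptions.

```python
def span_iou(span_a, span_b):
    """Intersection-over-union of two token spans given as (start, end), end exclusive."""
    a_start, a_end = span_a
    b_start, b_end = span_b
    inter = max(0, min(a_end, b_end) - max(a_start, b_start))
    union = (a_end - a_start) + (b_end - b_start) - inter
    return inter / union if union > 0 else 0.0

def spans_match(span_a, span_b, threshold=0.5):
    """Two argument spans are considered aligned if their IOU reaches the threshold."""
    return span_iou(span_a, span_b) >= threshold

# e.g. spans_match((3, 7), (4, 8)) -> True, since IOU = 3/5
```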
To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above mentioned criterion. The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false-positives or false-negatives. Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in BIBREF5. There may be many correct questions for a role. For example, What was given to someone? and What has been given by someone? both refer to the same semantic role but diverge in grammatical tense, voice, and presence of a syntactical object or subject. Aiming to avoid judging non-equivalent roles as equivalent, we propose Strict-Match to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality extracted from the question. Final reported numbers on labelled argument detection rates are based on bipartite aligned arguments passing Strict-Match. We later manually estimate the rate of correct equivalences missed by this conservative method. As we will see, our evaluation heuristics, adapted from those in BIBREF5, significantly underestimate agreement between annotations, hence reflecting performance lower bounds. Devising more tight evaluation measures remains a challenge for future research. Annotation and Evaluation Methods ::: Evaluation Metrics ::: Evaluating Redundant Annotations We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or the parser in BIBREF5, which predicts argument spans independently of each other. To that end, we ignore predicted arguments that match ground-truth but are not selected by the bipartite matching due to redundancy. After connecting unmatched predicted arguments that overlap, we count one false positive for every connected component to avoid penalizing precision too harshly when predictions are redundant. Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA) To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency. Dataset Quality Analysis ::: Dataset Assessment and Comparison We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield. 
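To make the unlabeled-argument evaluation described earlier in this section more concrete, here is a hedged sketch that builds the bipartite matching over IOU-aligned spans with a maximum-cardinality assignment and derives precision, recall, and F1 for one predicate. It assumes the `span_iou` helper sketched above, uses SciPy's assignment solver as one possible way to obtain a maximal matching, and omits the redundancy adjustment described above; it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def unlabeled_argument_prf(pred_spans, gold_spans, threshold=0.5):
    """Precision/recall/F1 for unlabeled argument detection via maximal bipartite matching."""
    if not pred_spans or not gold_spans:
        tp = 0
    else:
        # 0/1 edge matrix: an edge exists when the IOU criterion is met.
        edges = np.array([[1 if span_iou(p, g) >= threshold else 0 for g in gold_spans]
                          for p in pred_spans])
        # Maximizing the number of selected 0/1 edges under a one-to-one assignment
        # yields a maximum-cardinality bipartite matching; matched pairs are true positives,
        # unmatched predictions/references become false positives/negatives via the counts below.
        rows, cols = linear_sum_assignment(edges, maximize=True)
        tp = int(edges[rows, cols].sum())
    precision = tp / len(pred_spans) if pred_spans else 0.0
    recall = tp / len(gold_spans) if gold_spans else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```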
Examining disagreements between our gold and Dense, we observe that our workers successfully produced more roles, both implied and explicit. To a lesser extent, they split more arguments into independent answers, as emphasized by our guidelines, an issue which was left under-specified in the previous annotation guidelines. Dataset Quality Analysis ::: Agreement with PropBank Data It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol. The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset. Baseline Parser Evaluation To illustrate the effectiveness of our new gold-standard, we use its Wikinews development set to evaluate the currently available parser from BIBREF5. For each predicate, the parser classifies every span for being an argument, independently of the other spans. Unlike many other SRL systems, this policy often produces outputs with redundant arguments (see appendix for examples). Results for 1200 predicates are reported in Table TABREF23, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage. As expected, the parser's recall against our gold is substantially lower than the 84.2 recall reported in BIBREF5 against Dense, due to the limited recall of Dense relative to our gold set. Baseline Parser Evaluation ::: Error Analysis We sample and evaluate 50 predicates to detect correct argument and paraphrase pairs that are skipped by the IOU and Strict-Match criteria. Based on this inspection, the parser completely misses 23% of the 154 roles present in the gold-data, out of which, 17% are implied. While the parser correctly predicts 82% of non-implied roles, it skips half of the implied ones. Conclusion We introduced a refined crowdsourcing pipeline and a corresponding evaluation methodology for QA-SRL. It enabled us to release a new gold standard for evaluations, notably of much higher coverage of core and implied roles than the previous Dense evaluation dataset. We believe that our annotation methodology and dataset would facilitate future research on natural semantic annotations and QA-SRL parsing. 
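As an illustration of the Strict-Match criterion used in the labeled evaluation above, a minimal sketch might compare the relevant question properties as a tuple. The field names and the step that extracts these properties from a question are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleQuestion:
    """Properties extracted from a QA-SRL question that Strict-Match compares."""
    wh: str        # WH-word slot
    subj: str      # subject placeholder slot ("" if absent)
    obj: str       # object placeholder slot ("" if absent)
    negated: bool
    passive: bool  # voice
    modal: str     # modality marker ("" if none)

def strict_match(q1: RoleQuestion, q2: RoleQuestion) -> bool:
    """Two questions denote the same role only if all compared properties agree."""
    return (
        (q1.wh, q1.subj, q1.obj, q1.negated, q1.passive, q1.modal)
        == (q2.wh, q2.subj, q2.obj, q2.negated, q2.passive, q2.modal)
    )

# As discussed above, paraphrases that differ in voice or in the SUBJ/OBJ slots will not
# match under this criterion, which is why the labeled scores are lower bounds.
```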
Supplemental Material ::: The Question Template For completeness, we include several examples with some questions restructured into its 7 template slots in Table TABREF26 Supplemental Material ::: Annotation Pipeline As described in section 3 The consolidator receives two sets of QA annotations and merges them according to the guidelines to produce an exhaustive and consistent QA set. See Table TABREF28 for examples. Supplemental Material ::: Redundant Parser Output As mentioned in the paper body, the Fitzgerald et al. parser generates redundant role questions and answers. The first two rows in Table TABREF30 illustrate different, partly redundant, argument spans for the same question. The next two rows illustrate two paraphrased questions for the same role. Generating such redundant output might complicate downstream use of the parser output as well as evaluation methodology.
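To complement the template examples referenced in the supplemental material above, here is a minimal sketch of how a 7-slot QA-SRL question might be represented and rendered. The slot names follow the description in the paper, while the example values and the rendering logic are illustrative assumptions.

```python
from typing import NamedTuple, Optional

class QASRLQuestion(NamedTuple):
    """The 7 template slots of a QA-SRL role question; optional slots may be None."""
    wh: str              # e.g. "who", "what", "where"
    aux: Optional[str]   # auxiliary, e.g. "did", "might"
    subj: Optional[str]  # subject placeholder, e.g. "someone"
    verb: str            # the (possibly inflected) target verb, e.g. "cut"
    obj: Optional[str]   # object placeholder, e.g. "something"
    prep: Optional[str]  # preposition, e.g. "with"
    obj2: Optional[str]  # second argument placeholder, e.g. "something"

    def render(self) -> str:
        words = [w for w in self if w]  # drop empty optional slots
        return " ".join(words).capitalize() + "?"

# e.g. QASRLQuestion("who", None, None, "cut", "something", None, None).render()
# -> "Who cut something?"
```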
278 more annotations
f74eaee72cbd727a6dffa1600cdf1208672d713e
f74eaee72cbd727a6dffa1600cdf1208672d713e_0
Q: How was coverage measured? Text: Introduction Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role. Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline. Background — QA-SRL ::: Specifications In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such question captures the corresponding semantic role with a natural easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4). Background — QA-SRL ::: Corpora The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. 
Each verb in the corpus was annotated by a single QA-generating worker and validated by two others. In a reserved part of the corpus (Dense), targeted for parser evaluation, verbs were densely validated with 5 workers, approving questions judged as valid by at least 4/5 validators. Notably, adding validators to the Dense annotation pipeline accounts mostly for precision errors, while role coverage solely relies upon the single generator's set of questions. As both 2015 and 2018 datasets use a single question generator, both struggle with maintaining coverage. Also noteworthy, is that while traditional SRL annotations contain a single authoritative and non-redundant annotation, the 2018 dataset provides the raw annotations of all annotators. These include many overlapping or noisy answers, without settling on consolidation procedures to provide a single gold reference. We found that these characteristics of the dataset impede its utility for future development of parsers. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Screening and Training Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Annotation We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Guidelines Refinements We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4). Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Data & Cost We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. QA generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to reported 2.3 valid roles with an approximated cost of 51¢ per predicate for Dense. Annotation and Evaluation Methods ::: Evaluation Metrics Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics. Unlabeled Argument Detection (UA) Inspired by the method presented in BIBREF5, arguments are matched using a span matching criterion of intersection over union $\ge 0.5$ . 
To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above mentioned criterion. The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false-positives or false-negatives. Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in BIBREF5. There may be many correct questions for a role. For example, What was given to someone? and What has been given by someone? both refer to the same semantic role but diverge in grammatical tense, voice, and presence of a syntactical object or subject. Aiming to avoid judging non-equivalent roles as equivalent, we propose Strict-Match to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality extracted from the question. Final reported numbers on labelled argument detection rates are based on bipartite aligned arguments passing Strict-Match. We later manually estimate the rate of correct equivalences missed by this conservative method. As we will see, our evaluation heuristics, adapted from those in BIBREF5, significantly underestimate agreement between annotations, hence reflecting performance lower bounds. Devising more tight evaluation measures remains a challenge for future research. Annotation and Evaluation Methods ::: Evaluation Metrics ::: Evaluating Redundant Annotations We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or the parser in BIBREF5, which predicts argument spans independently of each other. To that end, we ignore predicted arguments that match ground-truth but are not selected by the bipartite matching due to redundancy. After connecting unmatched predicted arguments that overlap, we count one false positive for every connected component to avoid penalizing precision too harshly when predictions are redundant. Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA) To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency. Dataset Quality Analysis ::: Dataset Assessment and Comparison We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield. 
Examining disagreements between our gold and Dense, we observe that our workers successfully produced more roles, both implied and explicit. To a lesser extent, they split more arguments into independent answers, as emphasized by our guidelines, an issue which was left under-specified in the previous annotation guidelines. Dataset Quality Analysis ::: Agreement with PropBank Data It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol. The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset. Baseline Parser Evaluation To illustrate the effectiveness of our new gold-standard, we use its Wikinews development set to evaluate the currently available parser from BIBREF5. For each predicate, the parser classifies every span for being an argument, independently of the other spans. Unlike many other SRL systems, this policy often produces outputs with redundant arguments (see appendix for examples). Results for 1200 predicates are reported in Table TABREF23, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage. As expected, the parser's recall against our gold is substantially lower than the 84.2 recall reported in BIBREF5 against Dense, due to the limited recall of Dense relative to our gold set. Baseline Parser Evaluation ::: Error Analysis We sample and evaluate 50 predicates to detect correct argument and paraphrase pairs that are skipped by the IOU and Strict-Match criteria. Based on this inspection, the parser completely misses 23% of the 154 roles present in the gold-data, out of which, 17% are implied. While the parser correctly predicts 82% of non-implied roles, it skips half of the implied ones. Conclusion We introduced a refined crowdsourcing pipeline and a corresponding evaluation methodology for QA-SRL. It enabled us to release a new gold standard for evaluations, notably of much higher coverage of core and implied roles than the previous Dense evaluation dataset. We believe that our annotation methodology and dataset would facilitate future research on natural semantic annotations and QA-SRL parsing. 
Supplemental Material ::: The Question Template For completeness, we include several examples with some questions restructured into its 7 template slots in Table TABREF26 Supplemental Material ::: Annotation Pipeline As described in section 3 The consolidator receives two sets of QA annotations and merges them according to the guidelines to produce an exhaustive and consistent QA set. See Table TABREF28 for examples. Supplemental Material ::: Redundant Parser Output As mentioned in the paper body, the Fitzgerald et al. parser generates redundant role questions and answers. The first two rows in Table TABREF30 illustrate different, partly redundant, argument spans for the same question. The next two rows illustrate two paraphrased questions for the same role. Generating such redundant output might complicate downstream use of the parser output as well as evaluation methodology.
QA pairs per predicate
068dbcc117c93fa84c002d3424bafb071575f431
068dbcc117c93fa84c002d3424bafb071575f431_0
Q: How was quality measured? Text: Introduction Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role. Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline. Background — QA-SRL ::: Specifications In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such question captures the corresponding semantic role with a natural easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4). Background — QA-SRL ::: Corpora The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. 
Each verb in the corpus was annotated by a single QA-generating worker and validated by two others. In a reserved part of the corpus (Dense), targeted for parser evaluation, verbs were densely validated with 5 workers, approving questions judged as valid by at least 4/5 validators. Notably, adding validators to the Dense annotation pipeline accounts mostly for precision errors, while role coverage solely relies upon the single generator's set of questions. As both 2015 and 2018 datasets use a single question generator, both struggle with maintaining coverage. Also noteworthy, is that while traditional SRL annotations contain a single authoritative and non-redundant annotation, the 2018 dataset provides the raw annotations of all annotators. These include many overlapping or noisy answers, without settling on consolidation procedures to provide a single gold reference. We found that these characteristics of the dataset impede its utility for future development of parsers. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Screening and Training Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Annotation We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Guidelines Refinements We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4). Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Data & Cost We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. QA generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to reported 2.3 valid roles with an approximated cost of 51¢ per predicate for Dense. Annotation and Evaluation Methods ::: Evaluation Metrics Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics. Unlabeled Argument Detection (UA) Inspired by the method presented in BIBREF5, arguments are matched using a span matching criterion of intersection over union $\ge 0.5$ . 
To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above mentioned criterion. The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false-positives or false-negatives. Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in BIBREF5. There may be many correct questions for a role. For example, What was given to someone? and What has been given by someone? both refer to the same semantic role but diverge in grammatical tense, voice, and presence of a syntactical object or subject. Aiming to avoid judging non-equivalent roles as equivalent, we propose Strict-Match to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality extracted from the question. Final reported numbers on labelled argument detection rates are based on bipartite aligned arguments passing Strict-Match. We later manually estimate the rate of correct equivalences missed by this conservative method. As we will see, our evaluation heuristics, adapted from those in BIBREF5, significantly underestimate agreement between annotations, hence reflecting performance lower bounds. Devising more tight evaluation measures remains a challenge for future research. Annotation and Evaluation Methods ::: Evaluation Metrics ::: Evaluating Redundant Annotations We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or the parser in BIBREF5, which predicts argument spans independently of each other. To that end, we ignore predicted arguments that match ground-truth but are not selected by the bipartite matching due to redundancy. After connecting unmatched predicted arguments that overlap, we count one false positive for every connected component to avoid penalizing precision too harshly when predictions are redundant. Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA) To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency. Dataset Quality Analysis ::: Dataset Assessment and Comparison We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield. 
Examining disagreements between our gold and Dense, we observe that our workers successfully produced more roles, both implied and explicit. To a lesser extent, they split more arguments into independent answers, as emphasized by our guidelines, an issue which was left under-specified in the previous annotation guidelines. Dataset Quality Analysis ::: Agreement with PropBank Data It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol. The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset. Baseline Parser Evaluation To illustrate the effectiveness of our new gold-standard, we use its Wikinews development set to evaluate the currently available parser from BIBREF5. For each predicate, the parser classifies every span for being an argument, independently of the other spans. Unlike many other SRL systems, this policy often produces outputs with redundant arguments (see appendix for examples). Results for 1200 predicates are reported in Table TABREF23, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage. As expected, the parser's recall against our gold is substantially lower than the 84.2 recall reported in BIBREF5 against Dense, due to the limited recall of Dense relative to our gold set. Baseline Parser Evaluation ::: Error Analysis We sample and evaluate 50 predicates to detect correct argument and paraphrase pairs that are skipped by the IOU and Strict-Match criteria. Based on this inspection, the parser completely misses 23% of the 154 roles present in the gold-data, out of which, 17% are implied. While the parser correctly predicts 82% of non-implied roles, it skips half of the implied ones. Conclusion We introduced a refined crowdsourcing pipeline and a corresponding evaluation methodology for QA-SRL. It enabled us to release a new gold standard for evaluations, notably of much higher coverage of core and implied roles than the previous Dense evaluation dataset. We believe that our annotation methodology and dataset would facilitate future research on natural semantic annotations and QA-SRL parsing. 
Supplemental Material ::: The Question Template For completeness, we include several examples with some questions restructured into its 7 template slots in Table TABREF26 Supplemental Material ::: Annotation Pipeline As described in section 3 The consolidator receives two sets of QA annotations and merges them according to the guidelines to produce an exhaustive and consistent QA set. See Table TABREF28 for examples. Supplemental Material ::: Redundant Parser Output As mentioned in the paper body, the Fitzgerald et al. parser generates redundant role questions and answers. The first two rows in Table TABREF30 illustrate different, partly redundant, argument spans for the same question. The next two rows illustrate two paraphrased questions for the same role. Generating such redundant output might complicate downstream use of the parser output as well as evaluation methodology.
Inter-annotator agreement, comparison against expert annotation, agreement with PropBank Data annotations.
96526a14820b7debfd6f7c5beeade0a854b93d1a
96526a14820b7debfd6f7c5beeade0a854b93d1a_0
Q: How was the corpus obtained? Text: Introduction Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role. Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline. Background — QA-SRL ::: Specifications In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such question captures the corresponding semantic role with a natural easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4). Background — QA-SRL ::: Corpora The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. 
Each verb in the corpus was annotated by a single QA-generating worker and validated by two others. In a reserved part of the corpus (Dense), targeted for parser evaluation, verbs were densely validated with 5 workers, approving questions judged as valid by at least 4/5 validators. Notably, adding validators to the Dense annotation pipeline accounts mostly for precision errors, while role coverage solely relies upon the single generator's set of questions. As both 2015 and 2018 datasets use a single question generator, both struggle with maintaining coverage. Also noteworthy, is that while traditional SRL annotations contain a single authoritative and non-redundant annotation, the 2018 dataset provides the raw annotations of all annotators. These include many overlapping or noisy answers, without settling on consolidation procedures to provide a single gold reference. We found that these characteristics of the dataset impede its utility for future development of parsers. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Screening and Training Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Annotation We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Guidelines Refinements We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4). Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Data & Cost We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. QA generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to reported 2.3 valid roles with an approximated cost of 51¢ per predicate for Dense. Annotation and Evaluation Methods ::: Evaluation Metrics Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics. Unlabeled Argument Detection (UA) Inspired by the method presented in BIBREF5, arguments are matched using a span matching criterion of intersection over union $\ge 0.5$ . 
To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above mentioned criterion. The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false-positives or false-negatives. Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in BIBREF5. There may be many correct questions for a role. For example, What was given to someone? and What has been given by someone? both refer to the same semantic role but diverge in grammatical tense, voice, and presence of a syntactical object or subject. Aiming to avoid judging non-equivalent roles as equivalent, we propose Strict-Match to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality extracted from the question. Final reported numbers on labelled argument detection rates are based on bipartite aligned arguments passing Strict-Match. We later manually estimate the rate of correct equivalences missed by this conservative method. As we will see, our evaluation heuristics, adapted from those in BIBREF5, significantly underestimate agreement between annotations, hence reflecting performance lower bounds. Devising more tight evaluation measures remains a challenge for future research. Annotation and Evaluation Methods ::: Evaluation Metrics ::: Evaluating Redundant Annotations We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or the parser in BIBREF5, which predicts argument spans independently of each other. To that end, we ignore predicted arguments that match ground-truth but are not selected by the bipartite matching due to redundancy. After connecting unmatched predicted arguments that overlap, we count one false positive for every connected component to avoid penalizing precision too harshly when predictions are redundant. Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA) To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency. Dataset Quality Analysis ::: Dataset Assessment and Comparison We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield. 
Examining disagreements between our gold and Dense, we observe that our workers successfully produced more roles, both implied and explicit. To a lesser extent, they also split more arguments into independent answers, as emphasized by our guidelines, an issue that was left under-specified in the previous annotation guidelines.

Dataset Quality Analysis ::: Agreement with PropBank Data

It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from the Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol. The measured precision with respect to PropBank is low for adjuncts because our annotators captured many correct arguments not covered in PropBank. To examine this, we analyzed 100 false-positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis, we estimate our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with the 2015 dataset, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, with 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset.

Baseline Parser Evaluation

To illustrate the effectiveness of our new gold standard, we use its Wikinews development set to evaluate the currently available parser from BIBREF5. For each predicate, the parser classifies every span as being an argument or not, independently of the other spans. Unlike many other SRL systems, this policy often produces outputs with redundant arguments (see the appendix for examples). Results for 1200 predicates are reported in Table TABREF23, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage. As expected, the parser's recall against our gold is substantially lower than the 84.2 recall reported in BIBREF5 against Dense, due to the limited recall of Dense relative to our gold set.

Baseline Parser Evaluation ::: Error Analysis

We sample and evaluate 50 predicates to detect correct argument and paraphrase pairs that are skipped by the IOU and Strict-Match criteria. Based on this inspection, the parser completely misses 23% of the 154 roles present in the gold data, of which 17% are implied. While the parser correctly predicts 82% of non-implied roles, it skips half of the implied ones.

Conclusion

We introduced a refined crowdsourcing pipeline and a corresponding evaluation methodology for QA-SRL. It enabled us to release a new gold standard for evaluations, with notably higher coverage of core and implied roles than the previous Dense evaluation dataset. We believe that our annotation methodology and dataset will facilitate future research on natural semantic annotations and QA-SRL parsing.
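As a small clarification of the reporting choice, the sketch below macro-averages precision and recall over predicates; the per-predicate count triples and their ordering are illustrative assumptions, not the paper's data format:

```python
# Sketch: macro-averaged precision/recall over predicates, treating PropBank as the reference.
# Each entry holds illustrative per-predicate counts: (true_positives, num_predicted, num_reference).
def macro_precision_recall(per_predicate_counts):
    """Average per-predicate precision and recall rather than pooling counts (micro-averaging)."""
    if not per_predicate_counts:
        return 0.0, 0.0
    precisions, recalls = [], []
    for tp, n_predicted, n_reference in per_predicate_counts:
        precisions.append(tp / n_predicted if n_predicted else 0.0)
        recalls.append(tp / n_reference if n_reference else 0.0)
    n = len(per_predicate_counts)
    return sum(precisions) / n, sum(recalls) / n

# Two predicates: each predicate contributes equally, regardless of how many roles it has.
print(macro_precision_recall([(2, 3, 2), (1, 1, 4)]))  # approximately (0.83, 0.62)
```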
Supplemental Material ::: The Question Template

For completeness, we include in Table TABREF26 several examples with questions restructured into their 7 template slots.

Supplemental Material ::: Annotation Pipeline

As described in Section 3, the consolidator receives two sets of QA annotations and merges them according to the guidelines to produce an exhaustive and consistent QA set. See Table TABREF28 for examples.

Supplemental Material ::: Redundant Parser Output

As mentioned in the paper body, the Fitzgerald et al. parser generates redundant role questions and answers. The first two rows in Table TABREF30 illustrate different, partly redundant, argument spans for the same question. The next two rows illustrate two paraphrased questions for the same role. Generating such redundant output might complicate downstream use of the parser output as well as the evaluation methodology.
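For illustration, the sketch below shows one possible way to represent a 7-slot role question and the Strict-Match equivalence from Section 3; the exact slot names and the way negation, voice, and modality are stored are assumptions for exposition, not the paper's specification:

```python
# Sketch: a QA-SRL role question as a 7-slot template plus the Strict-Match equivalence test.
# Slot names (wh, aux, subj, verb, obj, prep, obj2) and the derived negation/voice/modality
# attributes are illustrative assumptions; the released resources may encode these differently.
from dataclasses import dataclass

@dataclass
class RoleQuestion:
    wh: str
    aux: str
    subj: str
    verb: str
    obj: str
    prep: str
    obj2: str
    negated: bool = False
    passive_voice: bool = False
    modal: str = ""  # e.g. "might", "should"; empty when no modal is present

    def text(self) -> str:
        """Render the question by joining the non-empty slots."""
        slots = [self.wh, self.aux, self.subj, self.verb, self.obj, self.prep, self.obj2]
        return " ".join(s for s in slots if s) + "?"

def strict_match(q1: RoleQuestion, q2: RoleQuestion) -> bool:
    """Equivalence on WH, SUBJ, OBJ plus negation, voice, and modality."""
    return (q1.wh == q2.wh and q1.subj == q2.subj and q1.obj == q2.obj
            and q1.negated == q2.negated
            and q1.passive_voice == q2.passive_voice
            and q1.modal == q2.modal)

q1 = RoleQuestion(wh="Who", aux="", subj="", verb="cut", obj="something", prep="", obj2="")
q2 = RoleQuestion(wh="Who", aux="has", subj="", verb="cut", obj="something", prep="", obj2="")
q3 = RoleQuestion(wh="Who", aux="didn't", subj="", verb="cut", obj="something", prep="",
                  obj2="", negated=True)
print(q1.text())             # Who cut something?
print(strict_match(q1, q2))  # True: differs only in tense/auxiliary, which Strict-Match ignores
print(strict_match(q1, q3))  # False: negation differs
```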
trained annotators BIBREF4, crowdsourcing BIBREF5
32ba4d2d15194e889cbc9aa1d21ff1aa6fa27679
32ba4d2d15194e889cbc9aa1d21ff1aa6fa27679_0
Q: How are workers trained?

Text: Introduction

Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role.

Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortcoming of the large-scale 2018 dataset is its lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers.

To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowdsourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation, similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and by comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline.

Background — QA-SRL ::: Specifications

In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see the appendix for examples). Such a question captures the corresponding semantic role with a natural, easily understood expression. The set of all non-overlapping answers for the question is then considered the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4).

Background — QA-SRL ::: Corpora

The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers who completed a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd workers had no prior training, quality was established using an additional validation step, in which workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators.
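As an illustration of the scheme, a role can be represented as a question paired with its set of non-overlapping answer spans; the container and the overlap check below are assumptions for exposition, not part of the released resources:

```python
# Sketch: a QA-SRL role as a question with its set of non-overlapping answer spans.
from dataclasses import dataclass
from typing import List, Tuple

Span = Tuple[int, int]  # (start, end) token offsets, end-exclusive -- an assumed encoding

@dataclass
class Role:
    question: str        # e.g. "Who cut something?"
    answers: List[Span]  # the arguments associated with this role

def non_overlapping(spans: List[Span]) -> bool:
    """QA-SRL takes the set of all non-overlapping answers as the role's arguments."""
    ordered = sorted(spans)
    return all(prev[1] <= cur[0] for prev, cur in zip(ordered, ordered[1:]))

role = Role(question="Who cut something?", answers=[(0, 2), (5, 7)])
print(non_overlapping(role.answers))  # True: the two argument spans do not overlap
```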
extensive personal feedback
78c010db6413202b4063dc3fb6e3cc59ec16e7e3
78c010db6413202b4063dc3fb6e3cc59ec16e7e3_0
Q: What is different in the improved annotation protocol?
a trained worker consolidates existing annotations
a69af5937cab861977989efd72ad1677484b5c8c
a69af5937cab861977989efd72ad1677484b5c8c_0
Q: How was the previous dataset annotated?
the annotation machinery of BIBREF5
8847f2c676193189a0f9c0fe3b86b05b5657b76a
8847f2c676193189a0f9c0fe3b86b05b5657b76a_0
Q: How big is the dataset? Text: Introduction Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role. Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline. Background — QA-SRL ::: Specifications In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such question captures the corresponding semantic role with a natural easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4). Background — QA-SRL ::: Corpora The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. 
Each verb in the corpus was annotated by a single QA-generating worker and validated by two others. In a reserved part of the corpus (Dense), targeted for parser evaluation, verbs were densely validated with 5 workers, approving questions judged as valid by at least 4/5 validators. Notably, adding validators to the Dense annotation pipeline accounts mostly for precision errors, while role coverage solely relies upon the single generator's set of questions. As both 2015 and 2018 datasets use a single question generator, both struggle with maintaining coverage. Also noteworthy, is that while traditional SRL annotations contain a single authoritative and non-redundant annotation, the 2018 dataset provides the raw annotations of all annotators. These include many overlapping or noisy answers, without settling on consolidation procedures to provide a single gold reference. We found that these characteristics of the dataset impede its utility for future development of parsers. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Screening and Training Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Annotation We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix. Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Guidelines Refinements We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4). Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Data & Cost We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. QA generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to reported 2.3 valid roles with an approximated cost of 51¢ per predicate for Dense. Annotation and Evaluation Methods ::: Evaluation Metrics Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics. Unlabeled Argument Detection (UA) Inspired by the method presented in BIBREF5, arguments are matched using a span matching criterion of intersection over union $\ge 0.5$ . 
To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above mentioned criterion. The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false-positives or false-negatives. Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in BIBREF5. There may be many correct questions for a role. For example, What was given to someone? and What has been given by someone? both refer to the same semantic role but diverge in grammatical tense, voice, and presence of a syntactical object or subject. Aiming to avoid judging non-equivalent roles as equivalent, we propose Strict-Match to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality extracted from the question. Final reported numbers on labelled argument detection rates are based on bipartite aligned arguments passing Strict-Match. We later manually estimate the rate of correct equivalences missed by this conservative method. As we will see, our evaluation heuristics, adapted from those in BIBREF5, significantly underestimate agreement between annotations, hence reflecting performance lower bounds. Devising more tight evaluation measures remains a challenge for future research. Annotation and Evaluation Methods ::: Evaluation Metrics ::: Evaluating Redundant Annotations We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or the parser in BIBREF5, which predicts argument spans independently of each other. To that end, we ignore predicted arguments that match ground-truth but are not selected by the bipartite matching due to redundancy. After connecting unmatched predicted arguments that overlap, we count one false positive for every connected component to avoid penalizing precision too harshly when predictions are redundant. Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA) To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency. Dataset Quality Analysis ::: Dataset Assessment and Comparison We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield. 
Examining disagreements between our gold and Dense, we observe that our workers successfully produced more roles, both implied and explicit. To a lesser extent, they split more arguments into independent answers, as emphasized by our guidelines, an issue which was left under-specified in the previous annotation guidelines. Dataset Quality Analysis ::: Agreement with PropBank Data It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol. The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset. Baseline Parser Evaluation To illustrate the effectiveness of our new gold-standard, we use its Wikinews development set to evaluate the currently available parser from BIBREF5. For each predicate, the parser classifies every span for being an argument, independently of the other spans. Unlike many other SRL systems, this policy often produces outputs with redundant arguments (see appendix for examples). Results for 1200 predicates are reported in Table TABREF23, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage. As expected, the parser's recall against our gold is substantially lower than the 84.2 recall reported in BIBREF5 against Dense, due to the limited recall of Dense relative to our gold set. Baseline Parser Evaluation ::: Error Analysis We sample and evaluate 50 predicates to detect correct argument and paraphrase pairs that are skipped by the IOU and Strict-Match criteria. Based on this inspection, the parser completely misses 23% of the 154 roles present in the gold-data, out of which, 17% are implied. While the parser correctly predicts 82% of non-implied roles, it skips half of the implied ones. Conclusion We introduced a refined crowdsourcing pipeline and a corresponding evaluation methodology for QA-SRL. It enabled us to release a new gold standard for evaluations, notably of much higher coverage of core and implied roles than the previous Dense evaluation dataset. We believe that our annotation methodology and dataset would facilitate future research on natural semantic annotations and QA-SRL parsing. 
Supplemental Material ::: The Question Template For completeness, we include several examples with some questions restructured into their 7 template slots in Table TABREF26. Supplemental Material ::: Annotation Pipeline As described in Section 3, the consolidator receives two sets of QA annotations and merges them according to the guidelines to produce an exhaustive and consistent QA set. See Table TABREF28 for examples. Supplemental Material ::: Redundant Parser Output As mentioned in the paper body, the FitzGerald et al. parser generates redundant role questions and answers. The first two rows in Table TABREF30 illustrate different, partly redundant, argument spans for the same question. The next two rows illustrate two paraphrased questions for the same role. Generating such redundant output might complicate downstream use of the parser output as well as the evaluation methodology.
1593 annotations
05196588320dfb0b9d9be7d64864c43968d329bc
05196588320dfb0b9d9be7d64864c43968d329bc_0
Q: Do the other multilingual baselines make use of the same amount of training data? Text: Introduction Transfer learning has been shown to work well in Computer Vision where pre-trained components from a model trained on ImageNet BIBREF0 are used to initialize models for other tasks BIBREF1 . In most cases, the other tasks are related to and share architectural components with the ImageNet task, enabling the use of such pre-trained models for feature extraction. With this transfer capability, improvements have been obtained on other image classification datasets, and on other tasks such as object detection, action recognition, image segmentation, etc BIBREF2 . Analogously, we propose a method to transfer a pre-trained component - the multilingual encoder from an NMT system - to other NLP tasks. In NLP, initializing word embeddings with pre-trained word representations obtained from Word2Vec BIBREF3 or GloVe BIBREF4 has become a common way of transferring information from large unlabeled data to downstream tasks. Recent work has further shown that we can improve over this approach significantly by considering representations in context, i.e. modeled depending on the sentences that contain them, either by taking the outputs of an encoder in MT BIBREF5 or by obtaining representations from the internal states of a bi-directional Language Model (LM) BIBREF6 . There has also been successful recent work in transferring sentence representations from resource-rich tasks to improve resource-poor tasks BIBREF7 , however, most of the above transfer learning examples have focused on transferring knowledge across tasks for a single language, in English. Cross-lingual or multilingual NLP, the task of transferring knowledge from one language to another, serves as a good test bed for evaluating various transfer learning approaches. For cross-lingual NLP, the most widely studied approach is to use multilingual embeddings as features in neural network models. However, research has shown that representations learned in context are more effective BIBREF5 , BIBREF6 ; therefore, we aim at doing better than just using multilingual embeddings in the cross-lingual tasks. Recent progress in multilingual NMT provides a compelling opportunity for obtaining contextualized multilingual representations, as multilingual NMT systems are capable of generalizing to an unseen language direction, i.e. zero-shot translation. There is also evidence that the encoder of a multilingual NMT system learns language agnostic, universal interlingua representations, which can be further exploited BIBREF8 . In this paper, we focus on using the representations obtained from a multilingual NMT system to enable cross-lingual transfer learning on downstream NLP tasks. Our contributions are three-fold: Proposed Method We propose an Encoder-Classifier model, where the Encoder, leveraging the representations learned by a multilingual NMT model, converts an input sequence ${\mathbf {x}}$ into a set of vectors C, and the Classifier predicts a class label $y$ given the encoding of the input sequence, C. Multilingual Representations Using NMT Although there has been a large body of work in building multilingual NMT models which can translate between multiple languages at the same time BIBREF29 , BIBREF30 , BIBREF31 , BIBREF8 , zero-shot capabilities of such multilingual representations have only been tested for MT BIBREF8 . 
We propose a simple yet effective solution - reuse the encoder of a multilingual NMT model to initialize the encoder for other NLP tasks. To be able to achieve promising zero-shot classification performance, we consider two factors: (1) The ability to encode multiple source languages with the same encoder and (2) The ability to learn language agnostic representations of the source sequence. Based on the literature, both requirements can be satisfied by training a multilingual NMT model having a shared encoder BIBREF32 , BIBREF8 , and a separate decoder and attention mechanism for each target language BIBREF30 . After training such a multilingual NMT model, the decoder and the corresponding attention mechanisms (which are target-language specific) are discarded, while the multilingual encoder is used to initialize the encoder of our proposed Encoder-Classifier model. Multilingual Encoder-Classifier In order to leverage pre-trained multilingual representations introduced in Section "Analyses" , our encoder strictly follows the structure of a regular Recurrent Neural Network (RNN) based NMT encoder BIBREF33 with a stacked layout BIBREF34 . Given an input sequence ${\mathbf {x}} = (x_{1}, x_{2}, \ldots , x_{T_x})$ of length $T_x$ , our encoder contextualizes or encodes the input sequence into a set of vectors C, by first applying a bi-directional RNN BIBREF35 , followed by a stack of uni-directional RNNs. The hidden states of the final layer RNN, $h_i^l$ , form the set C $~=\lbrace h_i^l \rbrace _{i=1}^{T_x}$ of context vectors which will be used by the classifier, where $l$ denotes the number of RNN layers in the stacked encoder. The task of the classifier is to predict a class label $y$ given the context set C. To ease this classification task given a variable length input set C, a common approach in the literature is to extract a single sentence vector $\mathbf {q}$ by making use of pooling over time BIBREF36 . Further, to increase the modeling capacity, the pooling operation can be parameterized using pre- and post-pooling networks. Formally, given the context set C, we extract a sentence vector $\mathbf {q}$ in three steps, using three networks, (1) pre-pooling feed-forward network $f_{pre}$ , (2) pooling network $f_{pool}$ and (3) post-pooling feed-forward network $f_{post}$ , $ \mathbf {q} = f_{post}( f_{pool} ( f_{pre} (\textbf {C}) ) ). $ Finally, given the sentence vector $\mathbf {q}$ , a class label $y$ is predicted by employing a softmax function. Corpora We evaluate the proposed method on three common NLP tasks: Amazon Reviews, SST and SNLI. We utilize parallel data to train our multilingual NMT system, as detailed below. For the MT task, we use the WMT 2014 En $\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. We use this sub-word vocabulary for all of our experiments below. The Amazon reviews dataset BIBREF39 is a multilingual sentiment classification dataset, providing data for four languages - English (En), French (Fr), German (De), and Japanese. We use the English and French datasets in our experiments. 
The dataset contains 6,000 documents in the train and test portions for each language. Each review consists of a category label, a title, a review, and a star rating (5-point scale). We only use the review text in our experiments. Following BIBREF39 , we mapped the reviews with lower scores (1 and 2) to negative examples and the reviews with higher scores (4 and 5) to positive examples, thereby turning it into a binary classification problem. Reviews with score 3 are dropped. We split the training dataset into 10% for development and the rest for training, and we truncate each example and keep the first 200 words in the review. Note that, since the data for each language was obtained by crawling different product pages, the data is not aligned across languages. The sentiment classification task proposed in BIBREF9 is also a binary classification problem where each sentence and phrase is associated with either a positive or a negative sentiment. We ignore phrase-level annotations and sentence-level neutral examples in our experiments. The dataset contains 6920, 872, and 1821 examples for training, development and testing, respectively. Since SST does not provide a multilingual test set, we used the public translation engine Google Translate to translate the SST test set to French. Previous work by BIBREF40 has shown that replacing the human translated test set with a synthetic set (obtained by using Google Translate) produces only a small difference of around 1% absolute accuracy on their human-translated French SNLI test set. Therefore, the performance measured on our `pseudo' French SST test set is expected to be a good indicator of zero-shot performance. Natural language inference is a task that aims to determine whether a natural language hypothesis $\mathbf {h}$ can justifiably be inferred from a natural language premise $\mathbf {p}$ . SNLI BIBREF10 is one of the largest datasets for a natural language inference task in English and contains multiple sentence pairs with a sentence-level entailment label. Each pair of sentences can have one of three labels - entailment, contradiction, and neutral, which are annotated by multiple humans. The dataset contains 550K training, 10K validation, and 10K testing examples. To enable research on multilingual SNLI, BIBREF40 chose a subset of the SNLI test set (1332 sentences) and professionally translated it into four major languages - Arabic, French, Russian, and Spanish. We use the French test set for evaluation in Section "Zero-Shot Classification Results" and "Analyses" . Model and Training Details Here, we first describe the model and training details of the base multilingual NMT model whose encoder is reused in all other tasks. Then we provide details about the task-specific classifiers. For each task, we provide the specifics of $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets that build the task-specific classifier. All the models in our experiments are trained using Adam optimizer BIBREF42 with label smoothing BIBREF43 and unless otherwise stated below, layer normalization BIBREF44 is applied to all LSTM gates and feed-forward layer inputs. We apply L2 regularization to the model weights and dropout to layer activations and sub-word embeddings. Hyper-parameters, such as mixing ratio $\lambda $ of L2 regularization, dropout rates, label smoothing uncertainty, batch sizes, learning rate of optimizers and initialization ranges of weights are tuned on the development sets provided for each task separately. 
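As a concrete reference for the classifier component defined earlier ($ \mathbf {q} = f_{post}( f_{pool} ( f_{pre} (\textbf {C}) ) ) $ followed by a softmax), the sketch below shows one possible PyTorch rendering with the single-sentence layer sizes used in this work (512 to 128 to 32). The ReLU nonlinearities and the final linear projection before the softmax are illustrative assumptions, not details taken from the paper; only the head is shown, and in the experiments the context vectors C come from the pre-trained multilingual NMT encoder.

```python
import torch
import torch.nn as nn

class PooledClassifier(nn.Module):
    """Sketch of the classifier head: pre-pool FFN, max-pool over time,
    post-pool FFN, then a softmax over class labels."""
    def __init__(self, enc_dim=512, pre_dim=128, post_dim=32, n_classes=2):
        super().__init__()
        self.f_pre = nn.Linear(enc_dim, pre_dim)    # pre-pooling network
        self.f_post = nn.Linear(pre_dim, post_dim)  # post-pooling network
        self.out = nn.Linear(post_dim, n_classes)   # assumed output projection

    def forward(self, C):
        # C: encoder context vectors of shape (batch, T_x, enc_dim)
        h = torch.relu(self.f_pre(C))
        q, _ = h.max(dim=1)              # f_pool: max-pooling over time
        q = torch.relu(self.f_post(q))
        return torch.log_softmax(self.out(q), dim=-1)

# Example: classify a batch of 8 sequences of 20 encoder states each.
logits = PooledClassifier()(torch.randn(8, 20, 512))
print(logits.shape)  # torch.Size([8, 2])
```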
Our multilingual NMT model consists of a shared multilingual encoder and two decoders, one for English and the other for French. The multilingual encoder uses one bi-directional LSTM, followed by three stacked layers of uni-directional LSTMs in the encoder. Each decoder consists of four stacked LSTM layers, with the first LSTM layers intertwined with additive attention networks BIBREF33 to learn a source-target alignment function. All the uni-directional LSTMs are equipped with residual connections BIBREF45 to ease the optimization, both in the encoder and the decoders. LSTM hidden units and the shared source-target embedding dimensions are set to 512. Similar to BIBREF30 , the multilingual NMT model is trained in a multi-task learning setup, where each decoder is augmented with a task-specific loss, minimizing the negative conditional log-likelihood of the target sequence given the source sequence. During training, mini-batches of En $\rightarrow $ Fr and Fr $\rightarrow $ En examples are interleaved. We picked the best model based on the best average development set BLEU score on both of the language pairs. The Encoder-Classifier model here uses the encoder defined previously. With regard to the classifier, the pre- and post-pooling networks ( $f_{pre}$ , $f_{post}$ ) are both one-layer feed-forward networks that cast the dimension size from 512 to 128 and from 128 to 32, respectively. We used the max-pooling operator for the $f_{pool}$ network to pool the activation over time. We extended the proposed Encoder-Classifier model to a multi-source model BIBREF46 since SNLI is an inference task about relations between two input sentences, “premise” and “hypothesis”. For the two sources, we use two separate encoders, which are initialized with the same pre-trained multilingual NMT encoder, to obtain their representations. Following our notation, the encoder outputs are processed using $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets, again with two separate network blocks. Specifically, $f_{pre}$ consists of a co-attention layer BIBREF47 followed by a two-layer feed-forward neural network with residual connections. We use max pooling over time for $f_{pool}$ and again a two-layer feed-forward neural network with residual connections as $f_{post}$ . After processing the two sentence encodings using the two network blocks, we obtain two vectors representing the premise $\mathbf {h}_{premise}$ and the hypothesis $\mathbf {h}_{hypothesis}$ . Following BIBREF48 , we compute two types of relational vectors, $\mathbf {h}_{-} = |\mathbf {h}_{premise} - \mathbf {h}_{hypothesis}|$ and $\mathbf {h}_{\times } = \mathbf {h}_{premise} \odot \mathbf {h}_{hypothesis}$ , where $\odot $ denotes the element-wise multiplication between two vectors. The final relation vector is obtained by concatenating $\mathbf {h}_{-}$ and $\mathbf {h}_{\times }$ . For both the “premise” and “hypothesis” feed-forward networks we used 512 hidden dimensions. For the Amazon Reviews, SST and SNLI tasks, we picked the best model based on the highest development set accuracy. Transfer Learning Results In this section, we report our results for the three tasks - Amazon Reviews (English and French), SST, and SNLI. For each task, we first build a baseline system using the proposed Encoder-Classifier architecture described in Section "Proposed Method" , where the encoder is initialized randomly. Next, we experiment with using the pre-trained multilingual NMT encoder to initialize the system as described in Section "Analyses" .
Finally, we perform an experiment where we freeze the encoder after initialization and only update the classifier component of the system. Table 1 summarizes the accuracy of our proposed system for these three different approaches and the state-of-the-art results on all the tasks. The first row in the table shows the baseline accuracy of our system for all four datasets. The second row shows the result from initializing with a pre-trained multilingual NMT encoder. It can be seen that this provides a significant improvement in accuracy, an average of 4.63%, across all the tasks. This illustrates that the multilingual NMT encoder has successfully learned transferable contextualized representations that are leveraged by the classifier component of our proposed system. These results are in line with the results in BIBREF5 where the authors used the representations from the top NMT encoder layer as an additional input to the task-specific system. However, in our setup we reused all of the layers of the encoder as a single pre-trained component in the task-specific system. The third row shows the results from freezing the pre-trained encoder after initialization and only training the classifier component. For the Amazon English and French tasks, freezing the encoder after initialization significantly improves the performance further. We hypothesize that since the Amazon dataset is a document level classification task, the long input sequences are very different from the short sequences consumed by the NMT system and hence freezing the encoder seems to have a positive effect. This hypothesis is also supported by the SNLI and SST results, which contain sentence-level input sequences, where we did not find any significant difference between freezing and not freezing the encoder. Zero-Shot Classification Results In this section, we explore the zero-shot classification task in French for our systems. We assume that we do not have any French training data for all the three tasks and test how well our proposed method can generalize to the unseen French language without any further training. Specifically, we reuse the three proposed systems from Table 1 after being trained only on the English classification task and test the systems on data from an unseen language (e.g. French). A reasonable upper bound to which zero-shot performance should be compared to is bridging - translating a French test text to English and then applying the English classifier on the translated text. If we assume the translation to be perfect, we should expect this approach to perform as well as the English classifier. The Amazon Reviews and SNLI tasks have a French test set available, and we evaluate the performance of the bridged and zero-shot systems on each French set. However, the SST dataset does not have a French test set, hence the `pseudo French' test set described in Section UID14 is used to evaluate the zero-shot performance. We use the English accuracy scores from the SST column in Table 1 as a high-quality proxy for the SST bridged system. We do this since translating the `pseudo French' back to English will result in two distinct translation steps and hence more errors. Table 2 summarizes all of our zero-shot results for French classification on the three tasks. It can be seen that just by using the pre-trained NMT encoder, the zero-shot performance increases drastically from almost random to within 10% of the bridged system. Freezing the encoder further pushes this performance closer to the bridged system. 
On the Amazon Review task, our zero-shot system is within 2% of the best bridged system. On the SST task, our zero-shot system obtains an accuracy of 83.14% which is within 1.5% of the bridged equivalent (in this case the English system). Finally, on SNLI, we compare our best zero-shot system with bilingual and multilingual embedding based methods evaluated on the same French test set in BIBREF40 . As illustrated in Table 3 , our best zero-shot system obtains the highest accuracy of 73.88%. INVERT BIBREF23 uses inverted indexing over a parallel corpus to obtain crosslingual word representations. BiCVM BIBREF25 learns bilingual compositional representations from sentence-aligned parallel corpora. In RANDOM BIBREF24 , bilingual embeddings are trained on top of parallel sentences with randomly shuffled tokens using skip-gram with negative sampling, and RATIO is similar to RANDOM with the one difference being that the tokens in the parallel sentences are not randomly shuffled. Our system significantly outperforms all methods listed in the second column by 10.66% to 15.24% and demonstrates the effectiveness of our proposed approach. Analyses In this section, we try to analyze why our simple Encoder-Classifier system is effective at zero-shot classification. We perform a series of experiments to better understand this phenomenon. In particular, we study (1) the effect of shared sub-word vocabulary, (2) the amount of multilingual training data to measure the influence of multilinguality, (3) encoder/classifier capacity to measure the influence of representation power, and (4) model behavior on different training phases to assess the relation between generalization performance on English and zero-shot performance on French. Conclusion In this paper, we have demonstrated a simple yet effective approach to perform cross-lingual transfer learning using representations from a multilingual NMT model. Our proposed approach of reusing the encoder from a multilingual NMT system as a pre-trained component provides significant improvements on three downstream tasks. Further, our approach enables us to perform surprisingly competitive zero-shot classification on an unseen language and outperforms cross-lingual embedding base methods. Finally, we end with a series of analyses which shed light on the factors that contribute to the zero-shot phenomenon. We hope that these results showcase the efficacy of multilingual NMT to learn transferable contextualized representations for many downstream tasks.
Unanswerable
e930f153c89dfe9cff75b7b15e45cd3d700f4c72
e930f153c89dfe9cff75b7b15e45cd3d700f4c72_0
Q: How big is the impact of training data size on the performance of the multilingual encoder? Text: Introduction Transfer learning has been shown to work well in Computer Vision where pre-trained components from a model trained on ImageNet BIBREF0 are used to initialize models for other tasks BIBREF1 . In most cases, the other tasks are related to and share architectural components with the ImageNet task, enabling the use of such pre-trained models for feature extraction. With this transfer capability, improvements have been obtained on other image classification datasets, and on other tasks such as object detection, action recognition, image segmentation, etc BIBREF2 . Analogously, we propose a method to transfer a pre-trained component - the multilingual encoder from an NMT system - to other NLP tasks. In NLP, initializing word embeddings with pre-trained word representations obtained from Word2Vec BIBREF3 or GloVe BIBREF4 has become a common way of transferring information from large unlabeled data to downstream tasks. Recent work has further shown that we can improve over this approach significantly by considering representations in context, i.e. modeled depending on the sentences that contain them, either by taking the outputs of an encoder in MT BIBREF5 or by obtaining representations from the internal states of a bi-directional Language Model (LM) BIBREF6 . There has also been successful recent work in transferring sentence representations from resource-rich tasks to improve resource-poor tasks BIBREF7 , however, most of the above transfer learning examples have focused on transferring knowledge across tasks for a single language, in English. Cross-lingual or multilingual NLP, the task of transferring knowledge from one language to another, serves as a good test bed for evaluating various transfer learning approaches. For cross-lingual NLP, the most widely studied approach is to use multilingual embeddings as features in neural network models. However, research has shown that representations learned in context are more effective BIBREF5 , BIBREF6 ; therefore, we aim at doing better than just using multilingual embeddings in the cross-lingual tasks. Recent progress in multilingual NMT provides a compelling opportunity for obtaining contextualized multilingual representations, as multilingual NMT systems are capable of generalizing to an unseen language direction, i.e. zero-shot translation. There is also evidence that the encoder of a multilingual NMT system learns language agnostic, universal interlingua representations, which can be further exploited BIBREF8 . In this paper, we focus on using the representations obtained from a multilingual NMT system to enable cross-lingual transfer learning on downstream NLP tasks. Our contributions are three-fold: Proposed Method We propose an Encoder-Classifier model, where the Encoder, leveraging the representations learned by a multilingual NMT model, converts an input sequence ${\mathbf {x}}$ into a set of vectors C, and the Classifier predicts a class label $y$ given the encoding of the input sequence, C. Multilingual Representations Using NMT Although there has been a large body of work in building multilingual NMT models which can translate between multiple languages at the same time BIBREF29 , BIBREF30 , BIBREF31 , BIBREF8 , zero-shot capabilities of such multilingual representations have only been tested for MT BIBREF8 . 
We propose a simple yet effective solution - reuse the encoder of a multilingual NMT model to initialize the encoder for other NLP tasks. To be able to achieve promising zero-shot classification performance, we consider two factors: (1) The ability to encode multiple source languages with the same encoder and (2) The ability to learn language agnostic representations of the source sequence. Based on the literature, both requirements can be satisfied by training a multilingual NMT model having a shared encoder BIBREF32 , BIBREF8 , and a separate decoder and attention mechanism for each target language BIBREF30 . After training such a multilingual NMT model, the decoder and the corresponding attention mechanisms (which are target-language specific) are discarded, while the multilingual encoder is used to initialize the encoder of our proposed Encoder-Classifier model. Multilingual Encoder-Classifier In order to leverage pre-trained multilingual representations introduced in Section "Analyses" , our encoder strictly follows the structure of a regular Recurrent Neural Network (RNN) based NMT encoder BIBREF33 with a stacked layout BIBREF34 . Given an input sequence ${\mathbf {x}} = (x_{1}, x_{2}, \ldots , x_{T_x})$ of length $T_x$ , our encoder contextualizes or encodes the input sequence into a set of vectors C, by first applying a bi-directional RNN BIBREF35 , followed by a stack of uni-directional RNNs. The hidden states of the final layer RNN, $h_i^l$ , form the set C $~=\lbrace h_i^l \rbrace _{i=1}^{T_x}$ of context vectors which will be used by the classifier, where $l$ denotes the number of RNN layers in the stacked encoder. The task of the classifier is to predict a class label $y$ given the context set C. To ease this classification task given a variable length input set C, a common approach in the literature is to extract a single sentence vector $\mathbf {q}$ by making use of pooling over time BIBREF36 . Further, to increase the modeling capacity, the pooling operation can be parameterized using pre- and post-pooling networks. Formally, given the context set C, we extract a sentence vector $\mathbf {q}$ in three steps, using three networks, (1) pre-pooling feed-forward network $f_{pre}$ , (2) pooling network $f_{pool}$ and (3) post-pooling feed-forward network $f_{post}$ , $ \mathbf {q} = f_{post}( f_{pool} ( f_{pre} (\textbf {C}) ) ). $ Finally, given the sentence vector $\mathbf {q}$ , a class label $y$ is predicted by employing a softmax function. Corpora We evaluate the proposed method on three common NLP tasks: Amazon Reviews, SST and SNLI. We utilize parallel data to train our multilingual NMT system, as detailed below. For the MT task, we use the WMT 2014 En $\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. We use this sub-word vocabulary for all of our experiments below. The Amazon reviews dataset BIBREF39 is a multilingual sentiment classification dataset, providing data for four languages - English (En), French (Fr), German (De), and Japanese. We use the English and French datasets in our experiments. 
The dataset contains 6,000 documents in the train and test portions for each language. Each review consists of a category label, a title, a review, and a star rating (5-point scale). We only use the review text in our experiments. Following BIBREF39 , we mapped the reviews with lower scores (1 and 2) to negative examples and the reviews with higher scores (4 and 5) to positive examples, thereby turning it into a binary classification problem. Reviews with score 3 are dropped. We split the training dataset into 10% for development and the rest for training, and we truncate each example and keep the first 200 words in the review. Note that, since the data for each language was obtained by crawling different product pages, the data is not aligned across languages. The sentiment classification task proposed in BIBREF9 is also a binary classification problem where each sentence and phrase is associated with either a positive or a negative sentiment. We ignore phrase-level annotations and sentence-level neutral examples in our experiments. The dataset contains 6920, 872, and 1821 examples for training, development and testing, respectively. Since SST does not provide a multilingual test set, we used the public translation engine Google Translate to translate the SST test set to French. Previous work by BIBREF40 has shown that replacing the human translated test set with a synthetic set (obtained by using Google Translate) produces only a small difference of around 1% absolute accuracy on their human-translated French SNLI test set. Therefore, the performance measured on our `pseudo' French SST test set is expected to be a good indicator of zero-shot performance. Natural language inference is a task that aims to determine whether a natural language hypothesis $\mathbf {h}$ can justifiably be inferred from a natural language premise $\mathbf {p}$ . SNLI BIBREF10 is one of the largest datasets for a natural language inference task in English and contains multiple sentence pairs with a sentence-level entailment label. Each pair of sentences can have one of three labels - entailment, contradiction, and neutral, which are annotated by multiple humans. The dataset contains 550K training, 10K validation, and 10K testing examples. To enable research on multilingual SNLI, BIBREF40 chose a subset of the SNLI test set (1332 sentences) and professionally translated it into four major languages - Arabic, French, Russian, and Spanish. We use the French test set for evaluation in Section "Zero-Shot Classification Results" and "Analyses" . Model and Training Details Here, we first describe the model and training details of the base multilingual NMT model whose encoder is reused in all other tasks. Then we provide details about the task-specific classifiers. For each task, we provide the specifics of $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets that build the task-specific classifier. All the models in our experiments are trained using Adam optimizer BIBREF42 with label smoothing BIBREF43 and unless otherwise stated below, layer normalization BIBREF44 is applied to all LSTM gates and feed-forward layer inputs. We apply L2 regularization to the model weights and dropout to layer activations and sub-word embeddings. Hyper-parameters, such as mixing ratio $\lambda $ of L2 regularization, dropout rates, label smoothing uncertainty, batch sizes, learning rate of optimizers and initialization ranges of weights are tuned on the development sets provided for each task separately. 
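The Amazon review preprocessing described earlier in this passage amounts to a few deterministic steps; the sketch below illustrates them under assumed field names (`review_text`, `stars`), since the raw data format is not specified here, and the random shuffle before the development split is likewise an assumption.

```python
import random

def preprocess_amazon(reviews, max_words=200, dev_fraction=0.1, seed=0):
    """Map 5-point star ratings to binary labels, drop neutral reviews,
    truncate to the first max_words words, and split off a dev set."""
    examples = []
    for r in reviews:                        # r: {"review_text": str, "stars": int}
        if r["stars"] == 3:                  # neutral reviews are dropped
            continue
        label = 1 if r["stars"] >= 4 else 0  # 4-5 -> positive, 1-2 -> negative
        text = " ".join(r["review_text"].split()[:max_words])
        examples.append((text, label))
    random.Random(seed).shuffle(examples)    # assumed: random dev split
    n_dev = int(len(examples) * dev_fraction)
    return examples[n_dev:], examples[:n_dev]  # train, dev

train, dev = preprocess_amazon([
    {"review_text": "Great battery life and a sharp screen.", "stars": 5},
    {"review_text": "Broke after two days, very disappointing.", "stars": 1},
    {"review_text": "It is acceptable, nothing more.", "stars": 3},
])
```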
Our multilingual NMT model consists of a shared multilingual encoder and two decoders, one for English and the other for French. The multilingual encoder uses one bi-directional LSTM, followed by three stacked layers of uni-directional LSTMs in the encoder. Each decoder consists of four stacked LSTM layers, with the first LSTM layers intertwined with additive attention networks BIBREF33 to learn a source-target alignment function. All the uni-directional LSTMs are equipped with residual connections BIBREF45 to ease the optimization, both in the encoder and the decoders. LSTM hidden units and the shared source-target embedding dimensions are set to 512. Similar to BIBREF30 , the multilingual NMT model is trained in a multi-task learning setup, where each decoder is augmented with a task-specific loss, minimizing the negative conditional log-likelihood of the target sequence given the source sequence. During training, mini-batches of En $\rightarrow $ Fr and Fr $\rightarrow $ En examples are interleaved. We picked the best model based on the best average development set BLEU score on both of the language pairs. The Encoder-Classifier model here uses the encoder defined previously. With regard to the classifier, the pre- and post-pooling networks ( $f_{pre}$ , $f_{post}$ ) are both one-layer feed-forward networks that cast the dimension size from 512 to 128 and from 128 to 32, respectively. We used the max-pooling operator for the $f_{pool}$ network to pool the activation over time. We extended the proposed Encoder-Classifier model to a multi-source model BIBREF46 since SNLI is an inference task about relations between two input sentences, “premise” and “hypothesis”. For the two sources, we use two separate encoders, which are initialized with the same pre-trained multilingual NMT encoder, to obtain their representations. Following our notation, the encoder outputs are processed using $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets, again with two separate network blocks. Specifically, $f_{pre}$ consists of a co-attention layer BIBREF47 followed by a two-layer feed-forward neural network with residual connections. We use max pooling over time for $f_{pool}$ and again a two-layer feed-forward neural network with residual connections as $f_{post}$ . After processing the two sentence encodings using the two network blocks, we obtain two vectors representing the premise $\mathbf {h}_{premise}$ and the hypothesis $\mathbf {h}_{hypothesis}$ . Following BIBREF48 , we compute two types of relational vectors, $\mathbf {h}_{-} = |\mathbf {h}_{premise} - \mathbf {h}_{hypothesis}|$ and $\mathbf {h}_{\times } = \mathbf {h}_{premise} \odot \mathbf {h}_{hypothesis}$ , where $\odot $ denotes the element-wise multiplication between two vectors. The final relation vector is obtained by concatenating $\mathbf {h}_{-}$ and $\mathbf {h}_{\times }$ . For both the “premise” and “hypothesis” feed-forward networks we used 512 hidden dimensions. For the Amazon Reviews, SST and SNLI tasks, we picked the best model based on the highest development set accuracy. Transfer Learning Results In this section, we report our results for the three tasks - Amazon Reviews (English and French), SST, and SNLI. For each task, we first build a baseline system using the proposed Encoder-Classifier architecture described in Section "Proposed Method" , where the encoder is initialized randomly. Next, we experiment with using the pre-trained multilingual NMT encoder to initialize the system as described in Section "Analyses" .
Finally, we perform an experiment where we freeze the encoder after initialization and only update the classifier component of the system. Table 1 summarizes the accuracy of our proposed system for these three different approaches and the state-of-the-art results on all the tasks. The first row in the table shows the baseline accuracy of our system for all four datasets. The second row shows the result from initializing with a pre-trained multilingual NMT encoder. It can be seen that this provides a significant improvement in accuracy, an average of 4.63%, across all the tasks. This illustrates that the multilingual NMT encoder has successfully learned transferable contextualized representations that are leveraged by the classifier component of our proposed system. These results are in line with the results in BIBREF5 where the authors used the representations from the top NMT encoder layer as an additional input to the task-specific system. However, in our setup we reused all of the layers of the encoder as a single pre-trained component in the task-specific system. The third row shows the results from freezing the pre-trained encoder after initialization and only training the classifier component. For the Amazon English and French tasks, freezing the encoder after initialization significantly improves the performance further. We hypothesize that since the Amazon dataset is a document level classification task, the long input sequences are very different from the short sequences consumed by the NMT system and hence freezing the encoder seems to have a positive effect. This hypothesis is also supported by the SNLI and SST results, which contain sentence-level input sequences, where we did not find any significant difference between freezing and not freezing the encoder. Zero-Shot Classification Results In this section, we explore the zero-shot classification task in French for our systems. We assume that we do not have any French training data for all the three tasks and test how well our proposed method can generalize to the unseen French language without any further training. Specifically, we reuse the three proposed systems from Table 1 after being trained only on the English classification task and test the systems on data from an unseen language (e.g. French). A reasonable upper bound to which zero-shot performance should be compared to is bridging - translating a French test text to English and then applying the English classifier on the translated text. If we assume the translation to be perfect, we should expect this approach to perform as well as the English classifier. The Amazon Reviews and SNLI tasks have a French test set available, and we evaluate the performance of the bridged and zero-shot systems on each French set. However, the SST dataset does not have a French test set, hence the `pseudo French' test set described in Section UID14 is used to evaluate the zero-shot performance. We use the English accuracy scores from the SST column in Table 1 as a high-quality proxy for the SST bridged system. We do this since translating the `pseudo French' back to English will result in two distinct translation steps and hence more errors. Table 2 summarizes all of our zero-shot results for French classification on the three tasks. It can be seen that just by using the pre-trained NMT encoder, the zero-shot performance increases drastically from almost random to within 10% of the bridged system. Freezing the encoder further pushes this performance closer to the bridged system. 
On the Amazon Review task, our zero-shot system is within 2% of the best bridged system. On the SST task, our zero-shot system obtains an accuracy of 83.14% which is within 1.5% of the bridged equivalent (in this case the English system). Finally, on SNLI, we compare our best zero-shot system with bilingual and multilingual embedding based methods evaluated on the same French test set in BIBREF40 . As illustrated in Table 3 , our best zero-shot system obtains the highest accuracy of 73.88%. INVERT BIBREF23 uses inverted indexing over a parallel corpus to obtain crosslingual word representations. BiCVM BIBREF25 learns bilingual compositional representations from sentence-aligned parallel corpora. In RANDOM BIBREF24 , bilingual embeddings are trained on top of parallel sentences with randomly shuffled tokens using skip-gram with negative sampling, and RATIO is similar to RANDOM with the one difference being that the tokens in the parallel sentences are not randomly shuffled. Our system significantly outperforms all methods listed in the second column by 10.66% to 15.24% and demonstrates the effectiveness of our proposed approach. Analyses In this section, we try to analyze why our simple Encoder-Classifier system is effective at zero-shot classification. We perform a series of experiments to better understand this phenomenon. In particular, we study (1) the effect of shared sub-word vocabulary, (2) the amount of multilingual training data to measure the influence of multilinguality, (3) encoder/classifier capacity to measure the influence of representation power, and (4) model behavior on different training phases to assess the relation between generalization performance on English and zero-shot performance on French. Conclusion In this paper, we have demonstrated a simple yet effective approach to perform cross-lingual transfer learning using representations from a multilingual NMT model. Our proposed approach of reusing the encoder from a multilingual NMT system as a pre-trained component provides significant improvements on three downstream tasks. Further, our approach enables us to perform surprisingly competitive zero-shot classification on an unseen language and outperforms cross-lingual embedding base methods. Finally, we end with a series of analyses which shed light on the factors that contribute to the zero-shot phenomenon. We hope that these results showcase the efficacy of multilingual NMT to learn transferable contextualized representations for many downstream tasks.
Unanswerable
545ff2f76913866304bfacdb4cc10d31dbbd2f37
545ff2f76913866304bfacdb4cc10d31dbbd2f37_0
Q: What data were they used to train the multilingual encoder? Text: Introduction Transfer learning has been shown to work well in Computer Vision where pre-trained components from a model trained on ImageNet BIBREF0 are used to initialize models for other tasks BIBREF1 . In most cases, the other tasks are related to and share architectural components with the ImageNet task, enabling the use of such pre-trained models for feature extraction. With this transfer capability, improvements have been obtained on other image classification datasets, and on other tasks such as object detection, action recognition, image segmentation, etc BIBREF2 . Analogously, we propose a method to transfer a pre-trained component - the multilingual encoder from an NMT system - to other NLP tasks. In NLP, initializing word embeddings with pre-trained word representations obtained from Word2Vec BIBREF3 or GloVe BIBREF4 has become a common way of transferring information from large unlabeled data to downstream tasks. Recent work has further shown that we can improve over this approach significantly by considering representations in context, i.e. modeled depending on the sentences that contain them, either by taking the outputs of an encoder in MT BIBREF5 or by obtaining representations from the internal states of a bi-directional Language Model (LM) BIBREF6 . There has also been successful recent work in transferring sentence representations from resource-rich tasks to improve resource-poor tasks BIBREF7 , however, most of the above transfer learning examples have focused on transferring knowledge across tasks for a single language, in English. Cross-lingual or multilingual NLP, the task of transferring knowledge from one language to another, serves as a good test bed for evaluating various transfer learning approaches. For cross-lingual NLP, the most widely studied approach is to use multilingual embeddings as features in neural network models. However, research has shown that representations learned in context are more effective BIBREF5 , BIBREF6 ; therefore, we aim at doing better than just using multilingual embeddings in the cross-lingual tasks. Recent progress in multilingual NMT provides a compelling opportunity for obtaining contextualized multilingual representations, as multilingual NMT systems are capable of generalizing to an unseen language direction, i.e. zero-shot translation. There is also evidence that the encoder of a multilingual NMT system learns language agnostic, universal interlingua representations, which can be further exploited BIBREF8 . In this paper, we focus on using the representations obtained from a multilingual NMT system to enable cross-lingual transfer learning on downstream NLP tasks. Our contributions are three-fold: Proposed Method We propose an Encoder-Classifier model, where the Encoder, leveraging the representations learned by a multilingual NMT model, converts an input sequence ${\mathbf {x}}$ into a set of vectors C, and the Classifier predicts a class label $y$ given the encoding of the input sequence, C. Multilingual Representations Using NMT Although there has been a large body of work in building multilingual NMT models which can translate between multiple languages at the same time BIBREF29 , BIBREF30 , BIBREF31 , BIBREF8 , zero-shot capabilities of such multilingual representations have only been tested for MT BIBREF8 . We propose a simple yet effective solution - reuse the encoder of a multilingual NMT model to initialize the encoder for other NLP tasks. 
To be able to achieve promising zero-shot classification performance, we consider two factors: (1) The ability to encode multiple source languages with the same encoder and (2) The ability to learn language agnostic representations of the source sequence. Based on the literature, both requirements can be satisfied by training a multilingual NMT model having a shared encoder BIBREF32 , BIBREF8 , and a separate decoder and attention mechanism for each target language BIBREF30 . After training such a multilingual NMT model, the decoder and the corresponding attention mechanisms (which are target-language specific) are discarded, while the multilingual encoder is used to initialize the encoder of our proposed Encoder-Classifier model. Multilingual Encoder-Classifier In order to leverage pre-trained multilingual representations introduced in Section "Analyses" , our encoder strictly follows the structure of a regular Recurrent Neural Network (RNN) based NMT encoder BIBREF33 with a stacked layout BIBREF34 . Given an input sequence ${\mathbf {x}} = (x_{1}, x_{2}, \ldots , x_{T_x})$ of length $T_x$ , our encoder contextualizes or encodes the input sequence into a set of vectors C, by first applying a bi-directional RNN BIBREF35 , followed by a stack of uni-directional RNNs. The hidden states of the final layer RNN, $h_i^l$ , form the set C $~=\lbrace h_i^l \rbrace _{i=1}^{T_x}$ of context vectors which will be used by the classifier, where $l$ denotes the number of RNN layers in the stacked encoder. The task of the classifier is to predict a class label $y$ given the context set C. To ease this classification task given a variable length input set C, a common approach in the literature is to extract a single sentence vector $\mathbf {q}$ by making use of pooling over time BIBREF36 . Further, to increase the modeling capacity, the pooling operation can be parameterized using pre- and post-pooling networks. Formally, given the context set C, we extract a sentence vector $\mathbf {q}$ in three steps, using three networks, (1) pre-pooling feed-forward network $f_{pre}$ , (2) pooling network $f_{pool}$ and (3) post-pooling feed-forward network $f_{post}$ , $ \mathbf {q} = f_{post}( f_{pool} ( f_{pre} (\textbf {C}) ) ). $ Finally, given the sentence vector $\mathbf {q}$ , a class label $y$ is predicted by employing a softmax function. Corpora We evaluate the proposed method on three common NLP tasks: Amazon Reviews, SST and SNLI. We utilize parallel data to train our multilingual NMT system, as detailed below. For the MT task, we use the WMT 2014 En $\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. We use this sub-word vocabulary for all of our experiments below. The Amazon reviews dataset BIBREF39 is a multilingual sentiment classification dataset, providing data for four languages - English (En), French (Fr), German (De), and Japanese. We use the English and French datasets in our experiments. The dataset contains 6,000 documents in the train and test portions for each language. 
Each review consists of a category label, a title, a review, and a star rating (5-point scale). We only use the review text in our experiments. Following BIBREF39 , we mapped the reviews with lower scores (1 and 2) to negative examples and the reviews with higher scores (4 and 5) to positive examples, thereby turning it into a binary classification problem. Reviews with score 3 are dropped. We split the training dataset into 10% for development and the rest for training, and we truncate each example and keep the first 200 words in the review. Note that, since the data for each language was obtained by crawling different product pages, the data is not aligned across languages. The sentiment classification task proposed in BIBREF9 is also a binary classification problem where each sentence and phrase is associated with either a positive or a negative sentiment. We ignore phrase-level annotations and sentence-level neutral examples in our experiments. The dataset contains 6920, 872, and 1821 examples for training, development and testing, respectively. Since SST does not provide a multilingual test set, we used the public translation engine Google Translate to translate the SST test set to French. Previous work by BIBREF40 has shown that replacing the human translated test set with a synthetic set (obtained by using Google Translate) produces only a small difference of around 1% absolute accuracy on their human-translated French SNLI test set. Therefore, the performance measured on our `pseudo' French SST test set is expected to be a good indicator of zero-shot performance. Natural language inference is a task that aims to determine whether a natural language hypothesis $\mathbf {h}$ can justifiably be inferred from a natural language premise $\mathbf {p}$ . SNLI BIBREF10 is one of the largest datasets for a natural language inference task in English and contains multiple sentence pairs with a sentence-level entailment label. Each pair of sentences can have one of three labels - entailment, contradiction, and neutral, which are annotated by multiple humans. The dataset contains 550K training, 10K validation, and 10K testing examples. To enable research on multilingual SNLI, BIBREF40 chose a subset of the SNLI test set (1332 sentences) and professionally translated it into four major languages - Arabic, French, Russian, and Spanish. We use the French test set for evaluation in Section "Zero-Shot Classification Results" and "Analyses" . Model and Training Details Here, we first describe the model and training details of the base multilingual NMT model whose encoder is reused in all other tasks. Then we provide details about the task-specific classifiers. For each task, we provide the specifics of $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets that build the task-specific classifier. All the models in our experiments are trained using Adam optimizer BIBREF42 with label smoothing BIBREF43 and unless otherwise stated below, layer normalization BIBREF44 is applied to all LSTM gates and feed-forward layer inputs. We apply L2 regularization to the model weights and dropout to layer activations and sub-word embeddings. Hyper-parameters, such as mixing ratio $\lambda $ of L2 regularization, dropout rates, label smoothing uncertainty, batch sizes, learning rate of optimizers and initialization ranges of weights are tuned on the development sets provided for each task separately. Our multilingual NMT model consists of a shared multilingual encoder and two decoders, one for English and the other for French. 
The multilingual encoder uses one bi-directional LSTM, followed by three stacked layers of uni-directional LSTMs in the encoder. Each decoder consists of four stacked LSTM layers, with the first LSTM layers intertwined with additive attention networks BIBREF33 to learn a source-target alignment function. All the uni-directional LSTMs are equipped with residual connections BIBREF45 to ease the optimization, both in the encoder and the decoders. LSTM hidden units and the shared source-target embedding dimensions are set to 512. Similar to BIBREF30 , the multilingual NMT model is trained in a multi-task learning setup, where each decoder is augmented with a task-specific loss, minimizing the negative conditional log-likelihood of the target sequence given the source sequence. During training, mini-batches of En $\rightarrow $ Fr and Fr $\rightarrow $ En examples are interleaved. We picked the best model based on the best average development set BLEU score on both of the language pairs. The Encoder-Classifier model here uses the encoder defined previously. With regard to the classifier, the pre- and post-pooling networks ( $f_{pre}$ , $f_{post}$ ) are both one-layer feed-forward networks that cast the dimension size from 512 to 128 and from 128 to 32, respectively. We used the max-pooling operator for the $f_{pool}$ network to pool the activation over time. We extended the proposed Encoder-Classifier model to a multi-source model BIBREF46 since SNLI is an inference task about relations between two input sentences, “premise” and “hypothesis”. For the two sources, we use two separate encoders, which are initialized with the same pre-trained multilingual NMT encoder, to obtain their representations. Following our notation, the encoder outputs are processed using $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets, again with two separate network blocks. Specifically, $f_{pre}$ consists of a co-attention layer BIBREF47 followed by a two-layer feed-forward neural network with residual connections. We use max pooling over time for $f_{pool}$ and again a two-layer feed-forward neural network with residual connections as $f_{post}$ . After processing the two sentence encodings using the two network blocks, we obtain two vectors representing the premise $\mathbf {h}_{premise}$ and the hypothesis $\mathbf {h}_{hypothesis}$ . Following BIBREF48 , we compute two types of relational vectors, $\mathbf {h}_{-} = |\mathbf {h}_{premise} - \mathbf {h}_{hypothesis}|$ and $\mathbf {h}_{\times } = \mathbf {h}_{premise} \odot \mathbf {h}_{hypothesis}$ , where $\odot $ denotes the element-wise multiplication between two vectors. The final relation vector is obtained by concatenating $\mathbf {h}_{-}$ and $\mathbf {h}_{\times }$ . For both the “premise” and “hypothesis” feed-forward networks we used 512 hidden dimensions. For the Amazon Reviews, SST and SNLI tasks, we picked the best model based on the highest development set accuracy. Transfer Learning Results In this section, we report our results for the three tasks - Amazon Reviews (English and French), SST, and SNLI. For each task, we first build a baseline system using the proposed Encoder-Classifier architecture described in Section "Proposed Method" , where the encoder is initialized randomly. Next, we experiment with using the pre-trained multilingual NMT encoder to initialize the system as described in Section "Analyses" . Finally, we perform an experiment where we freeze the encoder after initialization and only update the classifier component of the system.
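Before turning to the results, the relational feature construction used for SNLI above can be written compactly. The following is a small PyTorch sketch, assuming the pooled premise and hypothesis vectors have already been produced by the two encoder-classifier branches; it illustrates the construction rather than reproducing the authors' implementation.

```python
import torch

def snli_relation_vector(h_premise, h_hypothesis):
    """Relation features from the two sentence encodings: absolute
    difference (h_-) and element-wise product (h_x), concatenated."""
    h_diff = torch.abs(h_premise - h_hypothesis)
    h_prod = h_premise * h_hypothesis
    return torch.cat([h_diff, h_prod], dim=-1)

# Example with batch size 8 and 512-dimensional sentence vectors.
rel = snli_relation_vector(torch.randn(8, 512), torch.randn(8, 512))
print(rel.shape)  # torch.Size([8, 1024])
```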
Table 1 summarizes the accuracy of our proposed system for these three different approaches and the state-of-the-art results on all the tasks. The first row in the table shows the baseline accuracy of our system for all four datasets. The second row shows the result from initializing with a pre-trained multilingual NMT encoder. It can be seen that this provides a significant improvement in accuracy, an average of 4.63%, across all the tasks. This illustrates that the multilingual NMT encoder has successfully learned transferable contextualized representations that are leveraged by the classifier component of our proposed system. These results are in line with the results in BIBREF5 where the authors used the representations from the top NMT encoder layer as an additional input to the task-specific system. However, in our setup we reused all of the layers of the encoder as a single pre-trained component in the task-specific system. The third row shows the results from freezing the pre-trained encoder after initialization and only training the classifier component. For the Amazon English and French tasks, freezing the encoder after initialization significantly improves the performance further. We hypothesize that since the Amazon dataset is a document level classification task, the long input sequences are very different from the short sequences consumed by the NMT system and hence freezing the encoder seems to have a positive effect. This hypothesis is also supported by the SNLI and SST results, which contain sentence-level input sequences, where we did not find any significant difference between freezing and not freezing the encoder. Zero-Shot Classification Results In this section, we explore the zero-shot classification task in French for our systems. We assume that we do not have any French training data for all the three tasks and test how well our proposed method can generalize to the unseen French language without any further training. Specifically, we reuse the three proposed systems from Table 1 after being trained only on the English classification task and test the systems on data from an unseen language (e.g. French). A reasonable upper bound to which zero-shot performance should be compared to is bridging - translating a French test text to English and then applying the English classifier on the translated text. If we assume the translation to be perfect, we should expect this approach to perform as well as the English classifier. The Amazon Reviews and SNLI tasks have a French test set available, and we evaluate the performance of the bridged and zero-shot systems on each French set. However, the SST dataset does not have a French test set, hence the `pseudo French' test set described in Section UID14 is used to evaluate the zero-shot performance. We use the English accuracy scores from the SST column in Table 1 as a high-quality proxy for the SST bridged system. We do this since translating the `pseudo French' back to English will result in two distinct translation steps and hence more errors. Table 2 summarizes all of our zero-shot results for French classification on the three tasks. It can be seen that just by using the pre-trained NMT encoder, the zero-shot performance increases drastically from almost random to within 10% of the bridged system. Freezing the encoder further pushes this performance closer to the bridged system. On the Amazon Review task, our zero-shot system is within 2% of the best bridged system. 
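As a brief aside, the frozen-encoder configuration referred to in these comparisons amounts to the following minimal sketch; the module definitions and checkpoint path below are placeholders, not the actual system.

```python
import torch
import torch.nn as nn

# Placeholders standing in for the pre-trained NMT encoder and the
# task-specific classifier; real layers and shapes differ.
nmt_encoder = nn.LSTM(input_size=512, hidden_size=512, batch_first=True)
classifier = nn.Linear(512, 2)

# 1) Initialize from a pre-trained checkpoint (path is hypothetical).
# nmt_encoder.load_state_dict(torch.load("multilingual_encoder.pt"))

# 2) Freeze the encoder: exclude its parameters from gradient updates.
for p in nmt_encoder.parameters():
    p.requires_grad = False

# 3) Only the classifier parameters are handed to the optimizer.
optimizer = torch.optim.Adam(
    [p for p in classifier.parameters() if p.requires_grad], lr=1e-4)
```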
On the SST task, our zero-shot system obtains an accuracy of 83.14% which is within 1.5% of the bridged equivalent (in this case the English system). Finally, on SNLI, we compare our best zero-shot system with bilingual and multilingual embedding based methods evaluated on the same French test set in BIBREF40 . As illustrated in Table 3 , our best zero-shot system obtains the highest accuracy of 73.88%. INVERT BIBREF23 uses inverted indexing over a parallel corpus to obtain crosslingual word representations. BiCVM BIBREF25 learns bilingual compositional representations from sentence-aligned parallel corpora. In RANDOM BIBREF24 , bilingual embeddings are trained on top of parallel sentences with randomly shuffled tokens using skip-gram with negative sampling, and RATIO is similar to RANDOM with the one difference being that the tokens in the parallel sentences are not randomly shuffled. Our system significantly outperforms all methods listed in the second column by 10.66% to 15.24% and demonstrates the effectiveness of our proposed approach. Analyses In this section, we try to analyze why our simple Encoder-Classifier system is effective at zero-shot classification. We perform a series of experiments to better understand this phenomenon. In particular, we study (1) the effect of shared sub-word vocabulary, (2) the amount of multilingual training data to measure the influence of multilinguality, (3) encoder/classifier capacity to measure the influence of representation power, and (4) model behavior on different training phases to assess the relation between generalization performance on English and zero-shot performance on French. Conclusion In this paper, we have demonstrated a simple yet effective approach to perform cross-lingual transfer learning using representations from a multilingual NMT model. Our proposed approach of reusing the encoder from a multilingual NMT system as a pre-trained component provides significant improvements on three downstream tasks. Further, our approach enables us to perform surprisingly competitive zero-shot classification on an unseen language and outperforms cross-lingual embedding base methods. Finally, we end with a series of analyses which shed light on the factors that contribute to the zero-shot phenomenon. We hope that these results showcase the efficacy of multilingual NMT to learn transferable contextualized representations for many downstream tasks.
WMT 2014 En-Fr parallel corpus
cf93a209c8001ffb4ef505d306b6ced5936c6b63
cf93a209c8001ffb4ef505d306b6ced5936c6b63_0
Q: From when are many VQA datasets collected? Text: Introduction In open-ended visual question answering (VQA) an algorithm must produce answers to arbitrary text-based questions about images BIBREF0 , BIBREF1 . VQA is an exciting computer vision problem that requires a system to be capable of many tasks. Truly solving VQA would be a milestone in artificial intelligence, and would significantly advance human computer interaction. However, VQA datasets must test a wide range of abilities for progress to be adequately measured. VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0 . Including DAQUAR, six major VQA datasets have been released, and algorithms have rapidly improved. On the most popular dataset, `The VQA Dataset' BIBREF1 , the best algorithms are now approaching 70% accuracy BIBREF2 (human performance is 83%). While these results are promising, there are critical problems with existing datasets in terms of multiple kinds of biases. Moreover, because existing datasets do not group instances into meaningful categories, it is not easy to compare the abilities of individual algorithms. For example, one method may excel at color questions compared to answering questions requiring spatial reasoning. Because color questions are far more common in the dataset, an algorithm that performs well at spatial reasoning will not be appropriately rewarded for that feat due to the evaluation metrics that are used. Contributions: Our paper has four major contributions aimed at better analyzing and comparing VQA algorithms: 1) We create a new VQA benchmark dataset where questions are divided into 12 different categories based on the task they solve; 2) We propose two new evaluation metrics that compensate for forms of dataset bias; 3) We balance the number of yes/no object presence detection questions to assess whether a balanced distribution can help algorithms learn better; and 4) We introduce absurd questions that force an algorithm to determine if a question is valid for a given image. We then use the new dataset to re-train and evaluate both baseline and state-of-the-art VQA algorithms. We found that our proposed approach enables more nuanced comparisons of VQA algorithms, and helps us understand the benefits of specific techniques better. In addition, it also allowed us to answer several key questions about VQA algorithms, such as, `Is the generalization capacity of the algorithms hindered by the bias in the dataset?', `Does the use of spatial attention help answer specific question-types?', `How successful are the VQA algorithms in answering less-common questions?', and 'Can the VQA algorithms differentiate between real and absurd questions?' Prior Natural Image VQA Datasets Six datasets for VQA with natural images have been released between 2014–2016: DAQUAR BIBREF0 , COCO-QA BIBREF3 , FM-IQA BIBREF4 , The VQA Dataset BIBREF1 , Visual7W BIBREF5 , and Visual Genome BIBREF6 . FM-IQA needs human judges and has not been widely used, so we do not discuss it further. Table 1 shows statistics for the other datasets. Following others BIBREF7 , BIBREF8 , BIBREF9 , we refer to the portion of The VQA Dataset containing natural images as COCO-VQA. Detailed dataset reviews can be found in BIBREF10 and BIBREF11 . All of the aforementioned VQA datasets are biased. DAQUAR and COCO-QA are small and have a limited variety of question-types. Visual Genome, Visual7W, and COCO-VQA are larger, but they suffer from several biases. 
Bias takes the form of both the kinds of questions asked and the answers that people give for them. For COCO-VQA, a system trained using only question features achieves 50% accuracy BIBREF7 . This suggests that some questions have predictable answers. Without a more nuanced analysis, it is challenging to determine what kinds of questions are more dependent on the image. For datasets made using Mechanical Turk, annotators often ask object recognition questions, e.g., `What is in the image?' or `Is there an elephant in the image?'. Note that in the latter example, annotators rarely ask that kind of question unless the object is in the image. On COCO-VQA, 79% of questions beginning with `Is there a' will have `yes' as their ground truth answer. In 2017, the VQA 2.0 BIBREF12 dataset was introduced. In VQA 2.0, the same question is asked for two different images and annotators are instructed to give opposite answers, which helped reduce language bias. However, in addition to language bias, these datasets are also biased in their distribution of different types of questions and the distribution of answers within each question-type. Existing VQA datasets use performance metrics that treat each test instance with equal value (e.g., simple accuracy). While some do compute additional statistics for basic question-types, overall performance is not computed from these sub-scores BIBREF1 , BIBREF3 . This exacerbates the issues with the bias because the question-types that are more likely to be biased are also more common. Questions beginning with `Why' and `Where' are rarely asked by annotators compared to those beginning with `Is' and 'Are'. For example, on COCO-VQA, improving accuracy on `Is/Are' questions by 15% will increase overall accuracy by over 5%, but answering all `Why/Where' questions correctly will increase accuracy by only 4.1% BIBREF10 . Due to the inability of the existing evaluation metrics to properly address these biases, algorithms trained on these datasets learn to exploit these biases, resulting in systems that work poorly when deployed in the real-world. For related reasons, major benchmarks released in the last decade do not use simple accuracy for evaluating image recognition and related computer vision tasks, but instead use metrics such as mean-per-class accuracy that compensates for unbalanced categories. For example, on Caltech-101 BIBREF13 , even with balanced training data, simple accuracy fails to address the fact that some categories were much easier to classify than others (e.g., faces and planes were easy and also had the largest number of test images). Mean per-class accuracy compensates for this by requiring a system to do well on each category, even when the amount of test instances in categories vary considerably. Existing benchmarks do not require reporting accuracies across different question-types. Even when they are reported, the question-types can be too coarse to be useful, e.g., `yes/no', `number' and `other' in COCO-VQA. To improve the analysis of the VQA algorithms, we categorize the questions into meaningful types, calculate the sub-scores, and incorporate them in our evaluation metrics. Synthetic Datasets that Fight Bias Previous works have studied bias in VQA and proposed countermeasures. In BIBREF14 , the Yin and Yang dataset was created to study the effect of having an equal number of binary (yes/no) questions about cartoon images. They found that answering questions from a balanced dataset was harder. 
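To make the earlier point about per-type weighting concrete, here is a small worked example contrasting simple accuracy with a mean-per-type score; the counts and per-type accuracies are invented purely for illustration.

```python
# Toy illustration: two question-types with very different sizes.
n_is_are, n_why_where = 9000, 1000
acc_is_are, acc_why_where = 0.90, 0.20

simple = (n_is_are * acc_is_are + n_why_where * acc_why_where) / (n_is_are + n_why_where)
mean_per_type = (acc_is_are + acc_why_where) / 2

print(simple)         # 0.83 -> dominated by the common 'is/are' questions
print(mean_per_type)  # 0.55 -> exposes the weak 'why/where' performance
```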
This work is significant, but it was limited to yes/no questions and their approach using cartoon imagery cannot be directly extended to real-world images. One of the goals of this paper is to determine what kinds of questions an algorithm can answer easily. In BIBREF15 , the SHAPES dataset was proposed, which has similar objectives. SHAPES is a small dataset, consisting of 64 images that are composed by arranging colored geometric shapes in different spatial orientations. Each image has the same 244 yes/no questions, resulting in 15,616 questions. Although SHAPES serves as an important adjunct evaluation, it alone cannot suffice for testing a VQA algorithm. The major limitation of SHAPES is that all of its images are of 2D shapes, which are not representative of real-world imagery. Along similar lines, Compositional Language and Elementary Visual Reasoning (CLEVR) BIBREF16 also proposes use of 3D rendered geometric objects to study reasoning capacities of a model. CLEVR is larger than SHAPES and makes use of 3D rendered geometric objects. In addition to shape and color, it adds material property to the objects. CLEVR has five types of questions: attribute query, attribute comparison, integer comparison, counting, and existence. Both SHAPES and CLEVR were specifically tailored for compositional language approaches BIBREF15 and downplay the importance of visual reasoning. For instance, the CLEVR question, `What size is the cylinder that is left of the brown metal thing that is left of the big sphere?' requires demanding language reasoning capabilities, but only limited visual understanding is needed to parse simple geometric objects. Unlike these three synthetic datasets, our dataset contains natural images and questions. To improve algorithm analysis and comparison, our dataset has more (12) explicitly defined question-types and new evaluation metrics. TDIUC for Nuanced VQA Analysis In the past two years, multiple publicly released datasets have spurred the VQA research. However, due to the biases and issues with evaluation metrics, interpreting and comparing the performance of VQA systems can be opaque. We propose a new benchmark dataset that explicitly assigns questions into 12 distinct categories. This enables measuring performance within each category and understand which kind of questions are easy or hard for today's best systems. Additionally, we use evaluation metrics that further compensate for the biases. We call the dataset the Task Driven Image Understanding Challenge (TDIUC). The overall statistics and example images of this dataset are shown in Table 1 and Fig. 2 respectively. TDIUC has 12 question-types that were chosen to represent both classical computer vision tasks and novel high-level vision tasks which require varying degrees of image understanding and reasoning. The question-types are: The number of each question-type in TDIUC is given in Table 2 . The questions come from three sources. First, we imported a subset of questions from COCO-VQA and Visual Genome. Second, we created algorithms that generated questions from COCO's semantic segmentation annotations BIBREF17 , and Visual Genome's objects and attributes annotations BIBREF6 . Third, we used human annotators for certain question-types. In the following sections, we briefly describe each of these methods. Importing Questions from Existing Datasets We imported questions from COCO-VQA and Visual Genome belonging to all question-types except `object utilities and affordances'. 
We did this by using a large number of templates and regular expressions. For Visual Genome, we imported questions that had one-word answers. For COCO-VQA, we imported questions with one- or two-word answers and in which five or more annotators agreed. For color questions, a question would be imported if it contained the word `color' and the answer was a commonly used color. Questions were classified as activity or sport recognition questions if the answer was one of nine common sports or one of fifteen common activities and the question contained common verbs describing actions or sports, e.g., playing, throwing, etc. For counting, the question had to begin with `How many' and the answer had to be a small countable integer (1-16). The other categories were determined using regular expressions. For example, a question of the form `Are <people> feeling <emotion>?' was classified as sentiment understanding, and `What is to the right of/left of/behind the <object>?' was classified as positional reasoning. Similarly, `What <OBJECT CATEGORY> is in the image?' and similar templates were used to populate subordinate object recognition questions. This method was used for questions about the season and weather as well, e.g., `What season is this?', `Is this rainy/sunny/cloudy?', or `What is the weather like?' were imported to scene classification. Generating Questions using Image Annotations Images in the COCO dataset and Visual Genome both have individual regions with semantic knowledge attached to them. We exploit this information to generate new questions using question templates. To introduce variety, we define multiple templates for each question-type and use the annotations to populate them. For example, for counting we use 8 templates, e.g., `How many <objects> are there?', `How many <objects> are in the photo?', etc. Since COCO and Visual Genome use different annotation formats, we discuss them separately. Sport recognition, counting, subordinate object recognition, object presence, scene understanding, positional reasoning, and absurd questions were created from COCO, similar to the scheme used in BIBREF18. For counting, we count the number of object instances in an image annotation. To minimize ambiguity, this was only done if objects covered an area of at least 2,000 pixels. For subordinate object recognition, we create questions that require identifying an object's subordinate-level classification based on its larger semantic category. To do this, we use COCO supercategories, which are semantic concepts encompassing several objects under a common theme, e.g., the supercategory `furniture' contains chair, couch, etc. If the image contains only one type of furniture, then a question similar to `What kind of furniture is in the picture?' is generated, because the answer is not ambiguous. Using similar heuristics, we create questions about identifying food, electronic appliances, kitchen appliances, animals, and vehicles. For object presence questions, we find images with objects that have an area larger than 2,000 pixels and produce a question similar to `Is there a <object> in the picture?' These questions will have `yes' as an answer. To create negative questions, we ask questions about COCO objects that are not present in an image. To make this harder, we prioritize the creation of questions referring to absent objects that belong to the same supercategory as objects that are present in the image.
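A hedged sketch of this object-presence generation scheme follows: positive questions for sufficiently large annotated objects, and negative questions that prefer absent objects from the same supercategory as a present one. The annotation format, the tiny supercategory map, and the sampling details are simplified assumptions for illustration.

```python
import random

SUPERCATEGORY = {"car": "vehicle", "truck": "vehicle", "bus": "vehicle",
                 "couch": "furniture", "chair": "furniture", "tv": "electronic"}
ALL_OBJECTS = set(SUPERCATEGORY)

def object_presence_questions(annotations, min_area=2000, rng=random.Random(0)):
    """`annotations` maps object name -> list of instance areas for one image."""
    present = {o for o, areas in annotations.items() if max(areas) >= min_area}
    questions = [(f"Is there a {o} in the picture?", "yes") for o in present]

    absent = ALL_OBJECTS - set(annotations)
    # Prefer hard negatives: absent objects sharing a supercategory with a present one.
    present_supers = {SUPERCATEGORY[o] for o in present if o in SUPERCATEGORY}
    hard = [o for o in absent if SUPERCATEGORY.get(o) in present_supers]
    negatives = hard if hard else sorted(absent)
    if negatives:
        o = rng.choice(negatives)
        questions.append((f"Is there a {o} in the picture?", "no"))
    return questions

# Toy usage: a large car and a small chair are annotated.
print(object_presence_questions({"car": [5200], "chair": [900]}))
```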
A street scene is more likely to contain trucks and cars than it is to contain couches and televisions. Therefore, it is more difficult to answer `Is there a truck?' in a street scene than it is to answer `Is there a couch?' For sport recognition questions, we detect the presence of specific sports equipment in the annotations and ask questions about the type of sport being played. Images must only contain sports equipment for one particular sport. A similar approach was used to create scene understanding questions. For example, if a toilet and a sink are present in annotations, the room is a bathroom and an appropriate scene recognition question can be created. Additionally, we use the supercategories `indoor' and `outdoor' to ask questions about where a photo was taken. For creating positional reasoning questions, we use the relative locations of bounding boxes to create questions similar to `What is to the left/right of <object>?' This can be ambiguous due to overlapping objects, so we employ the following heuristics to eliminate ambiguity: 1) The vertical separation between the two bounding boxes should be within a small threshold; 2) The objects should not overlap by more than the half the length of its counterpart; and 3) The objects should not be horizontally separated by more than a distance threshold, determined by subjectively judging optimal separation to reduce ambiguity. We tried to generate above/below questions, but the results were unreliable. Absurd questions test the ability of an algorithm to judge when a question is not answerable based on the image's content. To make these, we make a list of the objects that are absent from a given image, and then we find questions from rest of TDIUC that ask about these absent objects, with the exception of yes/no and counting questions. This includes questions imported from COCO-VQA, auto-generated questions, and manually created questions. We make a list of all possible questions that would be `absurd' for each image and we uniformly sample three questions per image. In effect, we will have same question repeated multiple times throughout the dataset, where it can either be a genuine question or a nonsensical question. The algorithm must answer `Does Not Apply' if the question is absurd. Visual Genome's annotations contain region descriptions, relationship graphs, and object boundaries. However, the annotations can be both non-exhaustive and duplicated, which makes using them to automatically make QA pairs difficult. We only use Visual Genome to make color and positional reasoning questions. The methods we used are similar to those used with COCO, but additional precautions were needed due to quirks in their annotations. Additional details are provided in the Appendix. Manual Annotation Creating sentiment understanding and object utility/affordance questions cannot be readily done using templates, so we used manual annotation to create these. Twelve volunteer annotators were trained to generate these questions, and they used a web-based annotation tool that we developed. They were shown random images from COCO and Visual Genome and could also upload images. Post Processing Post processing was performed on questions from all sources. All numbers were converted to text, e.g., 2 became two. All answers were converted to lowercase, and trailing punctuation was stripped. Duplicate questions for the same image were removed. All questions had to have answers that appeared at least twice. 
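A minimal sketch of these post-processing steps (spelling out numerals, lowercasing, stripping trailing punctuation, de-duplicating questions per image, and keeping only answers seen at least twice), assuming a simple (image_id, question, answer) record format; the digit-to-word map is truncated for brevity.

```python
from collections import Counter

NUM_WORDS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
             "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def normalize_answer(ans):
    ans = ans.strip().lower().rstrip(".!?")
    return NUM_WORDS.get(ans, ans)

def post_process(qas):
    """`qas`: list of (image_id, question, answer) triples."""
    cleaned, seen = [], set()
    for image_id, question, answer in qas:
        key = (image_id, question.strip().lower())
        if key in seen:                       # drop duplicate questions per image
            continue
        seen.add(key)
        cleaned.append((image_id, question, normalize_answer(answer)))

    counts = Counter(a for _, _, a in cleaned)
    return [qa for qa in cleaned if counts[qa[2]] >= 2]  # answers must appear twice

print(post_process([("img1", "How many dogs?", "2"),
                    ("img2", "How many cats?", "two"),
                    ("img1", "How many dogs?", "2"),
                    ("img3", "What is this?", "Aardvark.")]))
```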
The dataset was split into train and test splits with 70% for train and 30% for test. Proposed Evaluation Metric One of the main goals of VQA research is to build computer vision systems capable of many tasks, instead of only having expertise at one specific task (e.g., object recognition). For this reason, some have argued that VQA is a kind of Visual Turing Test BIBREF0 . However, if simple accuracy is used for evaluating performance, then it is hard to know if a system succeeds at this goal because some question-types have far more questions than others. In VQA, skewed distributions of question-types are to be expected. If each test question is treated equally, then it is difficult to assess performance on rarer question-types and to compensate for bias. We propose multiple measures to compensate for bias and skewed distributions. To compensate for the skewed question-type distribution, we compute accuracy for each of the 12 question-types separately. However, it is also important to have a final unified accuracy metric. Our overall metrics are the arithmetic and harmonic means across all per question-type accuracies, referred to as arithmetic mean-per-type (Arithmetic MPT) accuracy and harmonic mean-per-type accuracy (Harmonic MPT). Unlike the Arithmetic MPT, Harmonic MPT measures the ability of a system to have high scores across all question-types and is skewed towards lowest performing categories. We also use normalized metrics that compensate for bias in the form of imbalance in the distribution of answers within each question-type, e.g., the most repeated answer `two' covers over 35% of all the counting-type questions. To do this, we compute the accuracy for each unique answer separately within a question-type and then average them together for the question-type. To compute overall performance, we compute the arithmetic normalized mean per-type (N-MPT) and harmonic N-MPT scores. A large discrepancy between unnormalized and normalized scores suggests an algorithm is not generalizing to rarer answers. Algorithms for VQA While there are alternative formulations (e.g., BIBREF4 , BIBREF19 ), the majority of VQA systems formulate it as a classification problem in which the system is given an image and a question, with the answers as categories. BIBREF1 , BIBREF3 , BIBREF2 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF9 , BIBREF27 , BIBREF28 , BIBREF8 , BIBREF19 , BIBREF29 . Almost all systems use CNN features to represent the image and either a recurrent neural network (RNN) or a bag-of-words model for the question. We briefly review some of these systems, focusing on the models we compare in experiments. For a more comprehensive review, see BIBREF10 and BIBREF11 . Two simple VQA baselines are linear or multi-layer perceptron (MLP) classifiers that take as input the question and image embeddings concatenated to each other BIBREF1 , BIBREF7 , BIBREF8 , where the image features come from the last hidden layer of a CNN. These simple approaches often work well and can be competitive with complex attentive models BIBREF7 , BIBREF8 . Spatial attention has been heavily investigated in VQA models BIBREF2 , BIBREF20 , BIBREF28 , BIBREF30 , BIBREF27 , BIBREF24 , BIBREF21 . These systems weigh the visual features based on their relevance to the question, instead of using global features, e.g., from the last hidden layer of a CNN. For example, to answer `What color is the bear?' 
they aim to emphasize the visual features around the bear and suppress other features. The MCB system BIBREF2 won the CVPR-2016 VQA Workshop Challenge. In addition to using spatial attention, it implicitly computes the outer product between the image and question features to ensure that all of their elements interact. Explicitly computing the outer product would be slow and extremely high dimensional, so it is done using an efficient approximation. It uses a long short-term memory (LSTM) network to embed the question. The neural module network (NMN) is an especially interesting compositional approach to VQA BIBREF15, BIBREF31. The main idea is to compose a series of discrete modules (sub-networks) that can be executed collectively to answer a given question. To achieve this, they use a variety of modules, e.g., the find(x) module outputs a heat map for detecting $x$. To arrange the modules, the question is first parsed into a concise expression (called an S-expression), e.g., `What is to the right of the car?' is parsed into (what car);(what right);(what (and car right)). Using these expressions, modules are composed into a sequence to answer the query. The multi-step recurrent answering units (RAU) model for VQA is another state-of-the-art method BIBREF32. Each inference step in RAU consists of a complete answering block that takes in an image, a question, and the output from the previous LSTM step. Each of these blocks is part of a larger LSTM network that progressively reasons about the question. Experiments We trained multiple baseline models as well as state-of-the-art VQA methods on TDIUC. The methods we use are QUES, IMG, Q+I, MLP, MCB, MCB-A, NMN, and RAU. For image features, ResNet-152 BIBREF33 with $448 \times 448$ images was used for all models. QUES and IMG provide information about biases in the dataset. QUES, Q+I, and MLP all use 4800-dimensional skip-thought vectors BIBREF34 to embed the question, as was done in BIBREF7. For image features, these all use the `pool5' layer of ResNet-152 normalized to unit length. MLP is a 4-layer net with a softmax output layer. The 3 ReLU hidden layers have 6000, 4000, and 2000 units, respectively. During training, dropout (0.3) was used for the hidden layers. For MCB, MCB-A, NMN and RAU, we used publicly available code to train them on TDIUC. The experimental setup and hyperparameters were kept unchanged from the default choices in the code, except for upgrading NMN and RAU's visual representation to both use ResNet-152. Results on TDIUC for these models are given in Table 3. Accuracy scores are given for each of the 12 question-types in Table 3, and scores that are normalized using mean-per-unique-answer are given in appendix Table 5. Easy Question-Types for Today's Methods By inspecting Table 3, we can see that some question-types are comparatively easy ($>90$%) under MPT: scene recognition, sport recognition, and object presence. High accuracy is also achieved on absurd, which we discuss in greater detail in Sec. "Effects of Including Absurd Questions". Subordinate object recognition is moderately high ($>80$%), despite having a large number of unique answers. Accuracy on counting is low across all methods, despite a large amount of training data. For the remaining question-types, more analysis is needed to pinpoint whether the weaker performance is due to lower amounts of training data, bias, or limitations of the models. We next investigate how much of the good performance is due to bias in the answer distribution, which N-MPT compensates for.
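Since the proposed metrics drive the analysis that follows, here is a minimal sketch of Arithmetic/Harmonic MPT and their normalized (N-MPT) variants, which first average accuracy over unique answers within each question-type; the per-question record format is an assumption.

```python
from collections import defaultdict
from statistics import mean, harmonic_mean

def mpt_scores(records):
    """`records`: iterable of (question_type, ground_truth_answer, is_correct)."""
    per_type = defaultdict(list)
    per_type_answer = defaultdict(lambda: defaultdict(list))
    for qtype, answer, ok in records:
        per_type[qtype].append(ok)
        per_type_answer[qtype][answer].append(ok)

    # Plain per-type accuracy vs. per-type accuracy normalized over unique answers.
    type_acc = {t: mean(v) for t, v in per_type.items()}
    norm_type_acc = {t: mean(mean(v) for v in answers.values())
                     for t, answers in per_type_answer.items()}

    return {"Arithmetic MPT": mean(type_acc.values()),
            "Harmonic MPT": harmonic_mean(type_acc.values()),
            "Arithmetic N-MPT": mean(norm_type_acc.values()),
            "Harmonic N-MPT": harmonic_mean(norm_type_acc.values())}

# Toy usage with invented records.
toy = [("counting", "two", 1), ("counting", "two", 1), ("counting", "five", 0),
       ("color", "red", 1), ("color", "blue", 0)]
print(mpt_scores(toy))
```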
Effects of the Proposed Accuracy Metrics One of our major aims was to compensate for the fact that algorithms can achieve high scores by simply learning to answer more populated and easier question-types. For existing datasets, earlier work has shown that simple baseline methods routinely exceed more complex methods using simple accuracy BIBREF7, BIBREF8, BIBREF19. On TDIUC, MLP surpasses MCB and NMN in terms of simple accuracy, but a closer inspection reveals that MLP's score is highly determined by performance on categories with a large number of examples, such as `absurd' and `object presence.' Using MPT, we find that both NMN and MCB outperform MLP. Inspecting normalized scores for each question-type (Appendix Table 5) shows even more pronounced differences, which is also reflected in the arithmetic N-MPT score presented in Table 3. This indicates that MLP is prone to overfitting. Similar observations can be made for MCB-A compared to RAU, where RAU outperforms MCB-A using simple accuracy, but scores lower on all the metrics designed to compensate for the skewed answer distribution and bias. Comparing the unnormalized and normalized metrics can help us determine the generalization capacity of the VQA algorithms for a given question-type. A large difference in these scores suggests that an algorithm is relying on the skewed answer distribution to obtain high scores. We found that for MCB-A, the accuracy on subordinate object recognition drops from 85.54% (unnormalized) to 23.22% (normalized), and for scene recognition it drops from 93.06% (unnormalized) to 38.53% (normalized). Both these categories have a heavily skewed answer distribution; the top-25 answers in subordinate object recognition and the top-5 answers in scene recognition cover over 80% of all questions in their respective question-types. This shows that question-types that appear to be easy may be so simply because the algorithms learn the answer statistics. A truly easy question-type will have similar performance for both unnormalized and normalized metrics. For example, sport recognition shows only a 17.39% drop compared to a 30.21% drop for counting, despite counting having the same number of unique answers and far more training data. By comparing the relative drop in performance between the normalized and unnormalized metrics, we can also compare the generalization capability of the algorithms, e.g., for subordinate object recognition, RAU has a higher unnormalized score (86.11%) than MCB-A (85.54%). However, for normalized scores, MCB-A has significantly higher performance (23.22%) than RAU (21.67%). This shows that RAU may be more dependent on the answer distribution. Similar observations can be made for MLP compared to MCB. Can Algorithms Predict Rare Answers? In the previous section, we saw that the VQA models struggle to correctly predict rarer answers. Are the less repeated questions actually harder to answer, or are the algorithms simply biased toward more frequent answers? To study this, we created a subset of TDIUC that only consists of questions whose answers are repeated less than 1000 times. We call this dataset TDIUC-Tail, which has 46,590 train and 22,065 test questions. Then, we trained MCB on: 1) the full TDIUC dataset; and 2) TDIUC-Tail. Both versions were evaluated on the validation split of TDIUC-Tail. We found that MCB trained only on TDIUC-Tail outperformed MCB trained on all of TDIUC across all question-types (details are in appendix Tables 6 and 7).
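A small sketch of the TDIUC-Tail construction just described, keeping only questions whose answer occurs fewer than 1,000 times; the QA record format is an assumption.

```python
from collections import Counter

def build_tail_subset(train_qas, test_qas, max_count=1000):
    """Each QA is assumed to be a dict with at least an 'answer' field."""
    answer_counts = Counter(qa["answer"] for qa in train_qas)
    rare = {a for a, c in answer_counts.items() if c < max_count}
    keep = lambda qas: [qa for qa in qas if qa["answer"] in rare]
    return keep(train_qas), keep(test_qas)

# Toy usage: 'two' is a common answer, 'unicycle' is rare.
train = [{"answer": "two"}] * 1500 + [{"answer": "unicycle"}] * 3
test = [{"answer": "two"}, {"answer": "unicycle"}]
tail_train, tail_test = build_tail_subset(train, test)
print(len(tail_train), len(tail_test))   # 3 1
```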
This shows that MCB is capable of learning to correctly predict rarer answers, but it is simply biased towards predicting more common answers to maximize overall accuracy. Using normalized accuracy disincentivizes the VQA algorithms' reliance on the answer statistics, and for deploying a VQA system it may be useful to optimize directly for N-MPT. Effects of Including Absurd Questions Absurd questions force a VQA system to look at the image to answer the question. In TDIUC, these questions are sampled from the rest of the dataset, and they have a high prior probability of being answered `Does not apply.' This is corroborated by the QUES model, which achieves a high accuracy on absurd; however, when the same questions are genuine for an image, it achieves only 6.77% accuracy on them. Good absurd performance is achieved by sacrificing performance on other categories. A robust VQA system should be able to detect absurd questions without then failing on others. By examining the accuracy on real questions that are identical to absurd questions, we can quantify an algorithm's ability to differentiate the absurd questions from the real ones. We found that simpler models had much lower accuracy on these questions (QUES: 6.77%, Q+I: 34%) compared to more complex models (MCB: 62.44%, MCB-A: 68.83%). To further study this, we trained two VQA systems, Q+I and MCB, both with and without absurd. The results are presented in Table 3. For Q+I trained without absurd questions, accuracies for other categories increase considerably compared to Q+I trained with full TDIUC, especially for question-types that are used to sample absurd questions, e.g., activity recognition (24% when trained with absurd and 48% without). Arithmetic MPT accuracy for the Q+I model that is trained without absurd (57.03%) is also substantially greater than MPT for the model trained with absurd (51.45% for all categories except absurd). This suggests that Q+I is not properly discriminating between absurd and real questions and is biased towards mis-identifying genuine questions as being absurd. In contrast, MCB, a more capable model, produces worse results for absurd, but the version trained without absurd shows much smaller differences than Q+I, which shows that MCB is more capable of identifying absurd questions. Effects of Balancing Object Presence In Sec. "Can Algorithms Predict Rare Answers?", we saw that a skewed answer distribution can impact generalization. This effect is strong even for simple questions and affects even the most sophisticated algorithms. Consider MCB-A trained on both COCO-VQA and Visual Genome, i.e., the configuration that won the CVPR-2016 VQA Workshop Challenge. When it is evaluated on object presence questions from TDIUC, which contain 50% `yes' and 50% `no' questions, it correctly predicts `yes' answers with 86.3% accuracy, but only 11.2% for questions with `no' as an answer. However, after training it on TDIUC, MCB-A is able to achieve 95.02% for `yes' and 92.26% for `no.' MCB-A performed poorly by learning the biases in the COCO-VQA dataset, but it is capable of performing well when the dataset is unbiased. Similar observations about balancing yes/no questions were made in BIBREF14. Datasets could balance simple categories like object presence, but extending the same idea to all other categories is a challenging task and undermines the natural statistics of the real world. Adopting mean-per-class and normalized accuracy metrics can help compensate for this problem.
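The absurd-vs-genuine comparison used above can be sketched as follows: restrict evaluation to real questions whose exact text also appears elsewhere as an absurd question, and measure accuracy on those genuine instances. The prediction record format below is an assumption.

```python
def genuine_accuracy_on_absurd_lookalikes(predictions):
    """`predictions`: list of dicts with 'question', 'is_absurd', 'correct' fields."""
    absurd_texts = {p["question"] for p in predictions if p["is_absurd"]}
    genuine = [p for p in predictions
               if not p["is_absurd"] and p["question"] in absurd_texts]
    return sum(p["correct"] for p in genuine) / max(len(genuine), 1)

# Toy usage: the same question string appears once as absurd and once as genuine.
preds = [{"question": "What color is the car?", "is_absurd": True,  "correct": 1},
         {"question": "What color is the car?", "is_absurd": False, "correct": 0}]
print(genuine_accuracy_on_absurd_lookalikes(preds))   # 0.0
```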
Advantages of Attentive Models By breaking questions into types, we can assess which types benefit the most from attention. We do this by comparing the MCB model with and without attention, i.e., MCB and MCB-A. As seen in Table 3 , attention helped improve results on several question categories. The most pronounced increases are for color recognition, attribute recognition, absurd, and counting. All of these question-types require the algorithm to detect specified object(s) (or lack thereof) to be answered correctly. MCB-A computes attention using local features from different spatial locations, instead of global image features. This aids in localizing individual objects. The attention mechanism learns the relative importance of these features. RAU also utilizes spatial attention and shows similar increments. Compositional and Modular Approaches NMN, and, to a lesser extent, RAU propose compositional approaches for VQA. For COCO-VQA, NMN has performed worse than some MLP models BIBREF7 using simple accuracy. We hoped that it would achieve better performance than other models for questions that require logically analyzing an image in a step-by-step manner, e.g., positional reasoning. However, while NMN did perform better than MLP using MPT and N-MPT metric, we did not see any substantial benefits in specific question-types. This may be because NMN is limited by the quality of the `S-expression' parser, which produces incorrect or misleading parses in many cases. For example, `What color is the jacket of the man on the far left?' is parsed as (color jacket);(color leave);(color (and jacket leave)). This expression not only fails to parse `the man', which is a crucial element needed to correctly answer the question, but also wrongly interprets `left' as past tense of leave. RAU performs inference over multiple hops, and because each hop contains a complete VQA system, it can learn to solve different tasks in each step. Since it is trained end-to-end, it does not need to rely on rigid question parses. It showed very good performance in detecting absurd questions and also performed well on other categories. Conclusion We introduced TDIUC, a VQA dataset that consists of 12 explicitly defined question-types, including absurd questions, and we used it to perform a rigorous analysis of recent VQA algorithms. We proposed new evaluation metrics to compensate for biases in VQA datasets. Results show that the absurd questions and the new evaluation metrics enable a deeper understanding of VQA algorithm behavior. Additional Details About TDIUC In this section, we will provide additional details about the TDIUC dataset creation and additional statistics that were omitted from the main paper due to inadequate space. Questions using Visual Genome Annotations As mentioned in the main text, Visual Genome's annotations are both non-exhaustive and duplicated. This makes using them to automatically make question-answer (QA) pairs difficult. Due to these issues, we only used them to make two types of questions: Color Attributes and Positional Reasoning. Moreover, a number of restrictions needed to be placed, which are outlined below. For making Color Attribute questions, we make use of the attributes metadata in the Visual Genome annotations to populate the template `What color is the <object>?' However, Visual Genome metadata can contain several color attributes for the same object as well as different names for the same object. 
Since the annotators type the name of the object manually rather than choosing from a predetermined set of objects, the same object can be referred to by different names, e.g., `xbox controller,' `game controller,' `joystick,' and `controller' can all refer to the same object in an image. The object name is sometimes also accompanied by its color, e.g., `white horse' instead of `horse', which makes asking the Color Attribute question `What color is the white horse?' pointless. One potential solution is to use the WordNet `synset' which accompanies every object annotation in the Visual Genome annotations. Synsets are used to group different variations of common object names under a single WordNet noun. However, we found that the synset matching was erroneous in numerous instances, where the object category was misrepresented by the given synset. For example, a `controller' is matched with the synset `accountant' even when the `controller' refers to a game controller. Similarly, a `cd' is matched with the synset of `cadmium.' To avoid these problems, we imposed a set of stringent requirements before making questions: The chosen object should only have a single attribute that belongs to a set of commonly used colors. The chosen object name or synset must be one of the 91 common objects in the MS-COCO annotations. There must be only one instance of the chosen object. Using these criteria, we found that we could safely ask a question of the form `What color is the <object>?'. Similarly, for making Positional Reasoning questions, we used the relationships metadata in the Visual Genome annotations. The relationships metadata connects two objects by a relationship phrase. Many of these relationships describe the positions of the two objects, e.g., A is `on right' of B, where `on right' is an example relationship clause from Visual Genome, with the object A as the subject and the object B as the object. This can be used to generate Positional Reasoning questions. Again, we take several measures to avoid ambiguity. First, we only use objects that appear once in the image, because `What is to the left of A' can be ambiguous if there are two instances of the object A. However, since Visual Genome annotations are non-exhaustive, there may still (rarely) be more than one instance of object A that was not annotated. To disambiguate such cases, we use the attributes metadata to further specify the object wherever possible, e.g., instead of asking `What is to the right of the bus?', we ask `What is to the right of the green bus?' Due to these stringent criteria, we could only create a small number of questions using Visual Genome annotations compared to other sources. The number of questions produced via each source is shown in Table 4. Answer Distribution Figure 3 shows the answer distribution for the different question-types. We can see that some categories, such as counting, scene recognition, and sentiment understanding, have a very large share of questions represented by only a few top answers. In such cases, the performance of a VQA algorithm can be inflated unless the evaluation metric compensates for this bias. In other cases, such as positional reasoning and object utility and affordances, the answers are much more varied, with the top-50 answers covering less than 60% of all answers. We have a completely balanced answer distribution for object presence questions, where exactly 50% of the questions are answered `yes' and the remaining 50% are answered `no'.
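Returning to the Visual Genome restrictions listed above, they can be sketched as a simple filter over a per-image object list: an object yields a color question only if it has exactly one attribute, that attribute is a common color, its name maps to a COCO category, and it appears exactly once in the image. All data structures here are simplified assumptions about the annotation format, and the object/color sets are small illustrative subsets.

```python
COMMON_COLORS = {"red", "blue", "green", "black", "white", "brown",
                 "yellow", "orange", "gray", "pink", "purple"}
COCO_OBJECTS = {"horse", "dog", "car", "bus", "chair", "tv"}  # subset for illustration

def color_questions(objects):
    """`objects`: list of dicts with 'name' and 'attributes' for one image."""
    name_counts = {}
    for obj in objects:
        name_counts[obj["name"]] = name_counts.get(obj["name"], 0) + 1

    questions = []
    for obj in objects:
        attrs = obj["attributes"]
        if (len(attrs) == 1 and attrs[0] in COMMON_COLORS
                and obj["name"] in COCO_OBJECTS
                and name_counts[obj["name"]] == 1):
            questions.append((f"What color is the {obj['name']}?", attrs[0]))
    return questions

# Toy usage: only the horse passes all the restrictions.
print(color_questions([{"name": "horse", "attributes": ["brown"]},
                       {"name": "controller", "attributes": ["white"]}]))
```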
For other categories, we have tried to design our question generation algorithms so that a single answer does not have a significant majority within a question type. For example, while scene understanding has top-4 answers covering over 85% of all the questions, there are roughly as many `no' questions (most common answer) as there are `yes' questions (second most-common answer). Similar distributions can be seen for counting, where `two' (most-common answer) is repeated almost as many times as `one' (second most-common answer). By having at least the top-2 answers split almost equally, we remove the incentive for an algorithm to perform well using simple mode guessing, even when using the simple accuracy metric. Train and Test Split In the paper, we mentioned that we split the entire collection into 70% train and 30% test/validation. To do this, we not only need to have a roughly equal distribution of question types and answers, but also need to make sure that the multiple questions for same image do not end up in two different splits, i.e., the same image cannot occur in both the train and the test partitions. So, we took following measures to split the questions into train-test splits. First, we split all the images into three separate clusters. Manually uploaded images, which includes all the images manually uploaded by our volunteer annotators. Images from the COCO dataset, including all the images for questions generated from COCO annotations and those imported from COCO-VQA dataset. In addition, a large number of Visual Genome questions also refer to COCO images. So, some questions that are generated and imported from Visual Genome are also included in this cluster. Images exclusively in the Visual Genome dataset, which includes images for a part of the questions imported from Visual Genome and those generated using that dataset. We follow simple rules to split each of these clusters of images into either belonging to the train or test splits. All the questions belonging to images coming from the `train2014' split of COCO images are assigned to the train split and all the questions belonging to images from the `val2014' split are assigned to test split. For manual and Visual Genome images, we randomly split 70% of images to train and rest to test. Additional Experimental Results In this section, we present additional experimental results that were omitted from the main paper due to inadequate space. First, the detailed normalized scores for each of the question-types is presented in Table 3 . To compute these scores, the accuracy for each unique answer is calculated separately within a question-type and averaged. Second, we present the results from the experiment in section "Can Algorithms Predict Rare Answers?" in table 6 (Unnormalized) and table 7 (Normalized). The results are evaluated on TDIUC-Tail, which is a subset of TDIUC that only consists of questions that have answers repeated less than 1000 times (uncommon answers). Note that the TDIUC-Tail excludes the absurd and the object presence question-types, as they do not contain any questions with uncommon answers. The algorithms are identical in both Table 6 and 7 and are named as follows:
late 2014